The Interaction Hour is a monthly production of Georgia Tech's School of Interactive Computing, where we investigate the impacts of computation on life’s big issues like health care, national security, ethics, education, and more. In keeping with our theme -- interaction -- we want to hear what you, the audience, want to discuss. We are a podcast for the people, by the people.
Over the past half a decade or so, the field of deep learning has exploded. Artificial intelligence and machine learning are experiencing their moments in the spotlight – both in research corners and in popular culture. As with anything given so much attention and hype, it’s difficult to separate the fact from the fiction. Users want to understand AI – how does it work? What is it capable of? Why does it come to one decision over another? A lot of effort has gone into opening the proverbial black box to better understand AI. Not as much time has been spent on the other half of the equation: humans. Today, we’ll speak to one Georgia Tech researcher whose team will present work on this very topic at the upcoming ACM Conference on Human Factors in Computing Systems. What role do humans play in the infrastructure of AI? How can a better understanding of this role lead to better design? For a successful human-AI relationship, do we need to gain trust in our technological counterparts – or do we already trust them too much? Featuring: Georgia Tech Ph.D. student Upol Ehsan
34 min 26 sec
After more than 15 years as a researcher, educator, and administrator at Georgia Tech, School of Interactive Computing Chair (and Interaction Hour host) Ayanna Howard is moving on to The Ohio State University. There, she will become the university's dean of engineering, a role she says will help her to continue her mission of improving equity and inclusivity for the fields of engineering and computer science. In this episode, we take a look back with Dr. Howard over her time in Atlanta, what the future holds for her, and what we can do to continue to make the world a better place through our research and academics.
22 min 47 sec
For nearly a year, we’ve seen the COVID-19 numbers. It’s been an ever-growing climb on news channels, positive cases and daily death totals, fatality percentages, and more. The constant flow of information has been as overwhelming as it has heartbreaking. But what does all of this information mean, and what do we do with it? How do we know what the numbers are telling us? Clio Andris, whose work on a visualization tool released last summer has been met with much media attention and public use, joins the Interaction Hour to discuss.
20 min 23 sec
The Covid-19 pandemic has necessitated a new approach to education, forcing some students to mix in-person learning with remote and others to learn only from the confines of their own homes. The pandemic has brought to the fore new challenges and potential solutions to address the pressing needs of students and educators, and it could expedite a transformation in how we think about education in the long term. Today, we’re joined by David Joyner, the executive director of online education in Georgia Tech’s College of Computing. He’ll help us understand the pressing needs during the pandemic, the long-term benefits of online education, and how we might bridge the gap between the best of both worlds.
39 min 59 sec
One of the biggest challenges to achieving equity, diversity, and social impact in computing is how we engage with traditionally underrepresented populations in the field. Associate Professor Betsy DiSalvo is the principal investigator on the DataWorks project, a program that has brought employment and engagement to non-data scientists. In this episode, we’ll discuss the accomplishments of DataWorks, explore how it engages those without a background in computational thinking, and how it improves our pursuit of equity in computing.
19 min 34 sec
It is a uniquely challenging time in human history. We are facing a deadly global pandemic that has killed more than half a million people and caused a tidal effect across all levels of our society – from health care to the economy to education to our social lives, and much, much more. In our own country, we are facing civil unrest as we reckon with the impacts of centuries of oppression and disproportionate treatment and opportunity of the Black community. And we must meet these pressing needs in the midst of a technological revolution that, if not handled properly, will continue to compound our past failures and threaten to leave us behind, unable to keep up. In times like these, it is important to remember what – or more accurately, whom – is at the center of all we do in Georgia Tech’s School of Interactive Computing.
32 min 32 sec
45 min 11 sec
Are humans too willing to transfer trust to AI systems that may or may not have earned it yet? What factors lead to that trust? What’s the threshold for how trustworthy a system, like autonomous vehicles, must be before we deploy worldwide, and how do we get there?
25 min 23 sec
In a previous episode of the Interaction Hour, we discussed one potential space that could benefit from virtual reality. A group that included one of our faculty, Neha Kumar, was using the technology in the educational space, working with local teachers to develop virtual lessons that showed improved engagement and performance. Today, we return to the topic. Virtual and augmented reality continue to be among the most promising technologies, but what they are, what they will become, and where we will benefit is still up for debate. Even more pressing are the potential pitfalls – like privacy – which, without proper vigilance, could be exploited in much the same ways as social media.
31 min 32 sec
Think about the most recent news headline you read. Was it completely objective, void of any presupposition of truth or language that may lead readers down one particular path of understanding? Or did it, more likely, contain subtle cues about how the message was being framed, casting doubt on its veracity or reliability? Every day, we are inundated with these types of texts that, on the surface, proclaim to be arbiters of truth but, due to simple word choice and message framing, can bias their consumers. Luckily, new tools are being developed to help us become more critical recipients of media. In this podcast, we chat with Diyi Yang about how artificial intelligence can help us identify this subjective bias in text – and how AI itself can reflect our own preexisting biases.
20 min 32 sec
Machine learning. It’s a term often used, but not always understood in the world of technology. Every day, new innovations, products and capabilities are introduced and adopted by people all over the world, but there’s a bit of a disconnect between researcher and consumer. How is a system trained? Why does it make certain decisions under certain conditions? What kind of reasoning goes into its decision making, and how can we trust that its choice is informed, objective and, ultimately, correct?
26 min 35 sec
Online communities like Reddit or Twitter act like town halls, where opinions are shared and everyone, in theory, has a voice. Only, it doesn’t always work like that. What was once optimistically viewed as a solution to public discourse, offering promises of open and logical discussions where anyone with a keyboard and an internet connection could speak their piece, has instead become a bit of a Wild West. Message boards have degraded into sources of harassment, misinformation, radicalization, and more. The question is: How can you moderate, while also maintaining the promise of free speech? How can you avoid discouraging posters whose content was moderated or removed, while encouraging them to remain a part of public discourse?
20 min 59 sec
We are living in a data-centric world. You might not realize it, but data influences nearly every important decision you’ll make on a daily basis. Consider your daily commute from Marietta to Midtown Atlanta. You leave at a particular time or take a particular route based on your understanding of the traffic data. You choose a particular restaurant after work or select a hair stylist based on Yelp reviews. You vote and influence the entire direction of your local, national, and global community based on your understanding of political trends or voting records. There’s so much data that an entire field – data analytics – exists to make sense of it all. But what about people like you and me? How can we, as non-data analysts, take advantage of all of this information to make decisions or come to better-informed conclusions?
20 min 14 sec
Consider for a moment the story of a veteran who has returned home from a tour of duty in a combat zone in Iraq. The physical toll of war has long since worn off, but the traumatic events they witnessed or in which they participated have left mental scars that can never fully disappear. They visit mental health therapists specializing in post-traumatic stress disorder, but feel like they aren’t getting better. The anguish of reliving the experiences makes it difficult to perform the exercises their therapist has recommended, and the therapist has no clear sign of whether or not their patient is being forthcoming in each visit. This is an imagined but common scenario for American veterans, who come home by the thousands with high rates of mental illness. Today, host Dr. Ayanna Howard is joined by Rosa Arriaga, a senior research scientist in the School of Interactive Computing, whose new grant from the National Science Foundation aims to take this challenge head-on. What are the challenges to effective care of patients facing PTSD or other chronic illnesses? Can usable computational tools be the key to improving the effectiveness and efficiency of treatment? Why is it important that we in the computing community continue to think about how our technologies work for people in the real world?
19 min 11 sec
In recent years, as computing has become central to most fields of study, so too have the education and research being performed in Georgia Tech’s College of Computing. One person who has been here through it all is Charles Isbell, the new John P. Imlay Jr. Dean of Computing. We chat with Dean Isbell about the importance of maintaining an interdisciplinary approach to research, the potential challenges facing computer science education and computing as a whole in the coming years, and why equity is the tie that binds all we do toward a fruitful future of computing.
25 min 5 sec
When Zvi Galil, the outgoing John P. Imlay Jr. Dean of Computing, came to Georgia Tech in 2010, there was no such thing as OMSCS. True online degree programs were still a dream, AI teaching assistants unnecessary, and the College of Computing, while excellent, in many ways mirrored its peers in higher education. Over nearly a decade that he has helmed the College, however, it has experienced dramatic growth both in size and reputation. Due in large part to the Online Master of Science in Computer Science program that Dean Galil spearheaded, computing is now Georgia Tech’s largest major and is also a consensus top-8 computer science program nationally. As he prepares for the final month of his deanship at the College of Computing, we’ll chat today with Dean Galil about what brought him to Georgia Tech, his mission and how he fulfilled it, and, of course, the world-renowned online degree program for which he will be most remembered.
18 min 47 sec
In the late 1990s, the United States saw a sharp increase in the number of opioid overdose deaths – rising by nearly 600 percent between 1999 and 2017, according to data provided by the CDC. It has, appropriately, been labeled an epidemic, and in 2018 the country’s life expectancy dropped for the third consecutive year, reflecting the ongoing drug crisis and rising suicide rates. As researchers and clinicians continue to examine the quality of different approaches to treatment, many seeking recovery have taken matters into their own hands. Our guest, School of Interactive Computing Ph.D. student Stevie Chancellor, will present a paper on this subject next week at the ACM Conference on Human Factors in Computing Systems in Glasgow, Scotland. What exactly do these addiction support communities entail? What alternative strategies are people pursuing in recovery, and why? How can we ensure that clinicians are well-informed about the types of self-treatments being used outside of their care?
18 min 8 sec
In the late 1990s, Professor Gregory Abowd of Georgia Tech’s School of Interactive Computing developed a tool to allow people to collect and reflect upon memories over a long period of time. Motivated by his father’s collection of 30 years worth of videos, Gregory wanted to create something that assisted in annotating and searching videos to create short memories. Around 2002, he began using this for his own family memories and made a discovery while watching one of the videos. His oldest son, who was then 5 years old and already diagnosed with autism, demonstrated stark differences in behavior and communication between videos at 18 months and others at 26 months. Amazed by what he saw in the videos, Gregory began to consider other more serious applications of this memory-capturing tool. In the coming years, it would become a key research initiative for Gregory and others at Georgia Tech.
20 min 35 sec
Over the years, virtual reality has become a mythical new medium with promises of immersive gaming and enriched experiences. Novels and movies like Ready Player One have teased the potential – and raised the expectations. In many ways, though, the technology is a largely untapped resource for reasons varying from the usability of the equipment to the premium cost. In this episode, however, we’ll hear from former Georgia Tech student Aditya Vishwanath and current Georgia Tech assistant professor Neha Kumar, who are examining the potential for virtual reality in education and instruction. What are the affordances of the technology inside of a classroom, and how can issues of cost and access be overcome to ensure it is a truly democratized medium? Note: Inspirit is now piloting a virtual reality curriculum on fostering a civic mindset by teaching ethics and design thinking as an integrated component of a 4-year undergraduate liberal arts degree program. This project is being conducted in partnership with Krea University in Sri City, India, and will be one of the first of many VR higher-ed curricula on social justice themes at scale. Aditya and Neha, along with colleagues Sally Creel and Tamara Pearson, will speak on this topic at SXSW EDU at 5 p.m. March 4 in Room 11AB of the Austin Convention Center in Austin, Texas.
27 min 27 sec
School of Interactive Computing Ph.D. student Kalesha Bullard does research into helping AI gain basic building blocks for how to learn complex tasks. In one example, she describes the goal of packing a lunch box. What are the things that a robot must know in order to complete that task? The size and shape of fruits or beverages? The height or circumference of each object? The depth or surface area of the lunch box itself? Taking inspiration from human learners, including her own time as a teacher and student, Bullard offers some input into how these tasks can be achieved.
28 min 5 sec
In most online learning, instructors face challenges in achieving levels of effectiveness and retention similar to their on-campus offerings. With so many students to account for and the inability to meet in person, it’s important to find ways to supplement the interaction between teacher and student. As an instructor for a course in Georgia Tech’s Online Master of Science in Computer Science program, School of Interactive Computing Professor Ashok Goel introduced the world to Jill Watson, a virtual teaching assistant who was so good in her first semester on the job that even students thought she was human. Can AIs like Jill really improve course effectiveness and satisfaction? Will they be used to augment the productivity of human assistants, not replace them? And can this method, which has proven successful in an academic setting, be used as a foundation upon which other sectors of the workforce can build?
27 min 11 sec
When it comes to artificial intelligence and automation, there are two common opposing schools of thought: One says that AI is on its way to solve all our problems, work for humans and allow us to perform at peak capacity in our jobs. Another says that it’s on its way to take those jobs from us entirely and leave a substantial part of the population behind. The truth probably lies somewhere between those extremes. Georgia Tech Associate Professor Mark Riedl joins the podcast to help separate fact from fear. Read more at www.ic.gatech.edu/podcasts and follow us on Facebook and Twitter @ICatGT.
23 min 54 sec
Years ago, mothers used to place their hands on their children's foreheads to determine if they had a fever. Thermometers now provide more precise measurements and, thus, more appropriate health care. Like the thermometer, can social media do the same for mental illness? And what do we risk by opening our social channels to algorithmic observation? Dr. Munmun De Choudhury has spent years investigating what our social media can say about our mental health.
14 min 26 sec
What does a prohibition-era speakeasy have in common with modern-day cybersecurity? How can ancient biblical tales inform our development of such systems? To finally convince mainstream society to adopt good security behaviors in the future, is it imperative that we look, instead, to our past? School of Interactive Computing Assistant Professor Sauvik Das thinks so.
20 min 8 sec
The emergence of artificial intelligence in society has elicited visceral reactions from people the world over, many of whom, thanks to portrayals in popular culture, can’t quite decide whether they believe we are building the future – or destroying it. Are we actually dealing with “killer robots?” Why has the public perception become so polarizing? Can we trust algorithms to make appropriate and trustworthy decisions, or do we risk too much by turning power over to the robots? Professor Ron Arkin, an expert in robotics and roboethics, joins the podcast to discuss.
18 min 47 sec
On March 18, 2018, a self-driving vehicle in Tempe, Arizona, was involved in a fatal crash that resulted in the death of a pedestrian crossing the street at night. As a result, tests on self-driving cars by the company were suspended in four major cities and the inevitable questions arose: Should human “drivers” be responsible for their autonomous hosts? How do we train self-driving cars to perform risk analysis in real time? Ultimately, are travelers safer with autonomous vehicles on the road?
15 min 53 sec
In the School of Interactive Computing, we are all about – you guessed it – interaction. From investigating the ways in which computing impacts humans on a daily basis -- health care, ethics, national security, and more -- to the way we as a school interact with industry and academic community through our research, we believe we are better for our ability to collaborate and find solutions for life’s big issues. Hear from School Chair Ayanna Howard as she details our goals for this podcast and how you, the listener, can get involved.
4 min 20 sec