The Future is Spoken

Shyamala Prayaga

The Future is Spoken is a voice tech podcast covering all aspects of voice tech, from conversational strategy and design, through to interfaces, analytics, ethics, privacy, and career planning. The Future is Spoken is the podcast for the Digital Assistant Academy. The Academy is an online learning centre offering courses in voice tech. The first course, Voice Interaction Design, launches in October 2020. The Academy was founded by Shyamala Prayaga, and presenters and speakers include reputable and successful individuals in the voice technology and voice assistant sectors.

All Episodes

Hello and welcome to The Future is Spoken, produced by the Digital Assistant Academy. In today's episode, Shyamala Prayaga speaks with Maikel van der Wouden about sonic branding in voice design.

Maikel van der Wouden, a highly passionate audiophile and design engineer, has extensive experience developing voice branding strategies and audio assets for prestigious clients, including Fortune 500 companies, across various industries.

Tune in now!

Conversation Highlights:
[00:00:46] Maikel's background as an audio engineer working on sonic branding
[00:03:08] Sonic branding connects users to the brand
[00:07:25] What is the relationship between sonic branding and voice?
[00:19:26] What are the different elements of sonic branding?
[00:30:15] What is the return on investment for companies that invest in sonic branding?
[00:32:31] Who is involved in sonic brand design?

Learn more about Maikel
https://maikelvanderwouden.com/
https://www.linkedin.com/in/maikel-van-der-wouden-4a5822112/

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Jun 8

42 min 20 sec

Hello and welcome to The Future is Spoken, produced by the Digital Assistant Academy. In today's episode, Shyamala Prayaga speaks with Julie Daniel Davis about voice-first use cases in education.

A former accountant turned educator, Julie Daniel Davis enjoys thinking about innovative ways to enhance education and then making them happen. Fifteen years in the K-12 setting, most recently as Director of Instructional Technology and Innovation, led to her becoming an educational consultant, an advocate for voice technology use in education, and an adjunct EdTech professor at UT Chattanooga. Julie strives to meet the individual needs of students and teachers and take them forward in their growth as lifelong learners in the use of EdTech. Somewhat of a futurist, she is never content with the now and is constantly striving to better the world she lives in. She is seen as a passionate person with the skills to help others catch the joy, whatever that joy might be. She is an instructional technology professional development leader and speaker, author, Amazon Alexa Champion, and Bixby Premier Developer. She believes voice technology can enhance education in mighty ways. She helps educators use technology resources to enhance critical thinking, creativity, communication, and collaboration, all skills students need to become influencers of their world. In 2020, Julie became a CoSN Certified Education Technology Leader (CETL). Julie is the founder and host of the Voice in Education podcast, a finalist in the 2020 Project Voice Awards. She has been on the planning/steering committee for EdCamp GigCity since 2015, and has served as co-moderator of the Twitter chat #TnEdChat, where educators from Tennessee and beyond discuss educational technology issues. She is recognized as a leader in the Canvas LMS community, where she serves as a course facilitator for the Canvas Certified Educator program. EdTech Magazine named Julie's blog one of the top K-12 IT blogs for 2015, and in 2021 she was recognized by Amazon Alexa in their "Women in Voice" series.

Tune in now!

Conversation Highlights:
[00:06:51] Use cases for voice in education
[00:11:05] How to solve discoverability issues for parents
[00:15:36] How schools can adapt to voice
[00:21:09] Teaching kids digital citizenship with voice technology
[00:25:55] Will technical issues like limited language and accent understanding create problems for kids and their confidence?
[00:30:49] How to solve equity issues with voice, especially in the classroom
[00:02:33] Julie's Voice in Education podcast

Learn more about Julie
https://www.linkedin.com/in/juliedavisedu/
https://www.juliedavisedu.com/podcast
https://twitter.com/voiceedu1

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

May 28

55 min 54 sec

Hello and welcome to The Future is Spoken, produced by the Digital Assistant Academy. In today's episode, Shyamala Prayaga speaks with Rana Gujral about voice-first use cases for FinTech.

Rana Gujral is an entrepreneur, speaker, investor, and CEO of Behavioral Signals, an enterprise software company that excels at distinguishing behavioral signals in speech data with its proprietary deep learning technology. As a thought leader in the AI and technology space, he often delivers keynote sessions and joins panel discussions at industry events such as the World Government Summit, VOICE Summit, The Next Web Conference, and the Blockchain Economic Forum. His bylines are featured in publications such as Hacker Noon, Voicebot.ai, and Speech Technology Magazine, and he is a contributing columnist at TechCrunch and Forbes. He has been recognized as 'Entrepreneur of the Month' by CIO Magazine, awarded 'US-China Pioneer' by IEIE, and listed as a Top 10 Entrepreneur to Follow in 2017 by Huffington Post and an AI Entrepreneur to Watch by Inc. In 2020 he won "Contributor of the Year: Chatbots" in Hacker Noon's Noonies Awards.

Tune in now!

Conversation Highlights:
[00:01:47] Rana's journey in the voice tech world
[00:09:35] Why is the financial sector leveraging voice AI, and what benefits does it reap?
[00:13:30] How to design scalable solutions
[00:26:01] Will voice AI synergize with brain-machine interfaces for FinTech in the future?
[00:37:00] How to design bias-free voice experiences

Learn more about Behavioral Signals
https://behavioralsignals.com/

Learn more about Rana
LinkedIn: https://www.linkedin.com/in/ranagujral/
Twitter: https://twitter.com/RanaGujral

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

May 18

46 min 33 sec

Hello and welcome to The Future is Spoken, produced by the Digital Assistant Academy. In today's episode, Shyamala Prayaga speaks with Bret Kinsella about the current and future trends of conversational AI.

Bret is the founder, CEO, and research director of Voicebot.ai. He was named Commentator of the Year by the Alexa Conference in 2019 and is widely cited in media and academic research as an authority on voice assistants and AI. He is also the host of the Voicebot Podcast and editor of the Voice Insider newsletter.

Tune in now!

Conversation Highlights:
[00:01:30] Origin story of Voicebot.ai
[00:05:08] How does Voicebot.ai conduct research? Who are its target users?
[00:11:02] Voice as marketing: the nitty-gritty and expectations
[00:16:18] Why is there a rise in human-like synthetic voices?
[00:20:29] Trends to make voice AI accessible and inclusive
[00:27:49] Voice prosthesis for people who cannot speak
[00:33:21] How many assistants do we need?
[00:37:18] Is arbitration the future?

Learn more about Voicebot.ai
https://voicebot.ai/

Learn more about Bret
LinkedIn: https://www.linkedin.com/in/bretkinsella/
Twitter: https://twitter.com/bretkinsella

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

May 11

50 min 12 sec

Hello and welcome to The Future is Spoken, produced by the Digital Assistant Academy. In today's episode, Shyamala Prayaga speaks with Nicholas Sawka and Benjamin Falvo from Voice Spark about voice evangelism and testing.

Nicholas was recently brought on at Wanderword as their Chief Evangelist. Nick will represent Wanderword in the US in sales, networking, gathering product testers, and improving the use and onboarding of Fabula, Poptale, and entertainment production. Nick found his passion for Alexa back in 2015 when she was first released and was part of the device's initial beta program. In 2017 he made the jump to development and has been involved in creating over 400 Alexa Skills; he also has experience creating Google Actions. In addition to his love for voice-first tech, he is an experienced Chief of Operations with a demonstrated history of working in the military industry, and holds a Bachelor's degree in Business Management from the University of Phoenix.

Benjamin Falvo is currently making things for Howl. Benjamin is an intrapreneurial and strategic CIO/CTO with 20 years of diversified leadership experience across agency, startup, corporate, and nonprofit environments, including numerous high-profile global projects for Fortune 500 clients. He leverages a unique blend of skills at the intersection of technical leadership, creative direction, product management, and operations management, with a history of ideating, designing, building, and deploying highly engaging, user-friendly digital products and experiences across web, mobile, and event platforms. He is a weekly co-host of the VoiceSpark Live podcast on thought leadership in AI and voice-first.

Tune in now!

Conversation Highlights:
[00:03:07] Nick's journey to voice tech
[00:04:59] Ben's journey to voice tech
[00:09:07] Evolution of Voice Spark
[00:15:00] Voice Spark's testing criteria for voice applications
[00:21:28] Why is voice evangelism essential?

Guest requests for Voice Spark Live:
https://voicespark.live/contact/

Submit your work/skill for review:
https://voicespark.live/submit-skill-action-review/

Grading scale for Skills and Actions:
https://voicespark.live/voice-spark-scoring-review-breakdown/

Learn more about Nick
https://www.linkedin.com/in/nicholas-e-sawka-41aa3b79/

Learn more about Ben
https://www.linkedin.com/in/benjaminfalvo/

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

May 4

50 min 13 sec

Empathy is the first step in designing any product, because it is the skill that allows us to understand and share the feelings of others. Through empathy, we can put ourselves in other people's shoes and connect with how they might be feeling about their problem, circumstance, or situation. When we design products with empathy in mind, we make them usable and helpful.

In today's episode, Shyamala Prayaga speaks with Romita Bulchandani about designing for empathy in conversational AI.

Several years ago, Romita Bulchandani left her corporate dream job to live her dream life. She is now living her purpose through transformational coaching while leaving glitter trails along the way. Romita works with clients from all across the globe. As a certified coach, Romita also leans on 15+ years of diverse leadership experience from Fortune 50 companies like The Walt Disney Company and Marriott International. Romita is driven by her real-world experiences and her passion for living authentically. You can think of her as a business magician, life coach, spiritual mentor, strategic planner, and creative problem solver. Romita will find your glitter and help you bring it to the surface. To learn more about Romita's story, you may connect with her on LinkedIn.

Starting with their own experiences, they end up discussing how to design for empathy in conversational AI.

Tune in now!

Conversation Highlights:
[00:06:40] Romita's story from Disney about empathy
[00:10:31] Empathy is putting yourself in the user's shoes
[00:16:38] How does empathy help a brand?
[00:20:51] How to design for empathy
[00:24:55] Empathy is everyone's responsibility

Learn more about Romita and her work
LinkedIn: https://www.linkedin.com/in/romita-bulchandani/
Instagram: https://www.instagram.com/glitterthesoul/
https://www.glitterforthesoul.com/

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Apr 28

48 min 17 sec

The Future is Spoken presents Ryan Elza as this week's guest. Ryan Elza is Vice President of Innovation and Technology at Volunteers of America National Services. Before that, Ryan was Social Entrepreneur in Residence for Social Connectedness at AARP Foundation, where he led the Foundation's social isolation and digital inclusion work. He has an extensive background in the social determinants of health, design thinking, and civic engagement, and is a recognized subject matter expert on social isolation and voice-first design for older adults. He has been at the frontier of developing voice-first solutions for low-income older adults. Ryan is a trained anthropologist, holds a master's degree in technology entrepreneurship from the University of Maryland, and is an avid mentor to students and startups at various stages of development.

Prior to joining AARP Foundation, Ryan served as the Program Management Specialist and adjunct professor for the national award-winning Honors College Entrepreneurship and Innovation Program (EIP) at the University of Maryland. During his tenure with EIP, Ryan developed and launched several new entrepreneurial programs and initiatives, including the EIP Terp Tank Startup Competition and the Global Entrepreneurship Semester Program. Previously, he worked at The Advisory Board Company on the performance technology team, helping health systems implement transformational solutions. Mr. Elza was a social entrepreneur by practice during his undergraduate career at the University of Maryland, where he founded and scaled the UMD chapter of Health Leads, a non-profit that trained and placed students in primary care clinics as family advocates helping low-income families find resources such as food, housing, and job training. During his tenure, he worked with hospital administrators, primary care physicians, university administrators, and student volunteers to implement a closed-loop screening and referral system for the social determinants of health.

Starting with their own experiences, they end up discussing voice design strategy for the elderly population.

Tune in now!

Conversation Highlights:
[00:02:21] Ryan's journey to social entrepreneurship
[00:06:01] Barriers when designing voice experiences for the elderly population
[00:11:57] Use cases for voice in the elderly population
[00:16:14] Bot personality: do older adults understand they are interacting with an AI?
[00:19:39] Solving for discoverability through education
[00:23:48] How to design for age-related issues or disabilities
[00:33:05] Designing for social isolation and mental health
[00:40:09] Designing for memory-related issues
[00:42:56] Ryan's suggestions for getting started

Learn more about AARP Foundation
http://www.fondationarp.org/

Learn more about Ryan
LinkedIn - https://www.linkedin.com/in/ryan-elza/

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Apr 19

45 min 26 sec

The Future is Spoken presents Matthew Hammersley as this week's guest. Matthew Hammersley is the founder and CEO of Novel Effect, which uses voice recognition combined with sound effects to enhance storytime for parents, teachers, and children. Matt was previously a patent attorney for various companies. During his daughter's baby shower, the idea of bringing new technology like voice recognition to the timeless tradition of reading aloud to children, and making storytelling more engaging, came to his mind. That is when he quit his job to found Novel Effect. Matt has a chemical engineering background from Clemson University.

Starting with their own experiences, they end up discussing designing voice experiences for children.

Tune in now!

Conversation Highlights:
[00:01:36] Matt's journey from chemical engineering to cryptography, and from patent attorney to founder of Novel Effect
[00:04:45] The birth story of Novel Effect and how it all started
[00:09:40] Kids have more patience than adults when things go wrong
[00:12:45] Whose responsibility is it to safeguard a child's protection?
[00:18:03] Handling privacy for kids
[00:27:05] Targeting the right market
[00:35:25] Designing custom responses for kids
[00:41:05] Suggestions for aspiring conversation designers

Learn more about Novel Effect on Twitter and the web: @novel_effect Novel Effect
Download Novel Effect in the App Store: Novel Effect for iOS
Download Novel Effect on Android: Novel Effect Beta for Android
Follow Novel Effect on Facebook: Novel Effect on Facebook

Learn more about Matthew at
LinkedIn - https://www.linkedin.com/in/matthammersley/

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Apr 14

43 min 46 sec

The Future is Spoken presents Susan and Scot Westwater as this week's guests. Susan and Scot Westwater are the husband-and-wife co-founders of Pragmatic Digital, where they advise the world's most innovative brands that want to capitalize on the incredible opportunity voice represents. Through their consultancy, Susan and Scot help clients solve their marketing and customer experience problems, using customer-centric approaches to plan and create useful and usable voice experiences for their audiences. They have presented and authored several talks, workshops, articles, and ebooks focused on the role voice technology plays in marketing and business strategy. Together, they authored the book Voice Strategy: Creating Useful and Usable Voice Experiences and are working on a second book, Voice Marketing, to be released in Spring 2022. Susan and Scot are also co-founders of Voice Masters, an online education program designed to teach innovative business teams about voice and voice strategy, as well as ambassadors for the Open Voice Network and instructors for the Marketing AI Institute's AI Academy. They both were recognized in Voicebot's Top 68 Leaders in Voice 2020, and Scot was recently included in SoundHound's "Top 40 Voice AI Influencers to Follow on Twitter."

Starting with their own experiences, they end up discussing voice strategy for everyone.

Tune in now!

Conversation Highlights:
[00:02:28] How Susan's content background and Scot's product design background helped them start Pragmatic Digital, a voice strategy company
[00:05:05] What does voice strategy mean, and why is it important?
[00:07:17] Why it is important to balance user needs and business needs to design better voice strategies
[00:11:44] Voice strategy involves thinking about the use case itself, where voice fits into the overall digital strategy, how to convince stakeholders, and the ROI of the investment
[00:16:03] Don't rush; adopt a crawl, walk, run approach to designing voice products
[00:22:45] Testing is iterative
[00:23:23] Data is king when designing strategies
[00:33:49] Build vs. buy: which strategy is better?
[00:42:42] Marketing is not a four-letter word; involve them in your process early on
[00:53:50] Suggestions for aspiring conversation designers

Susan and Scot's book Voice Strategy: Creating Useful and Usable Voice Experiences - www.voicestrategybook.com
Voice Masters - www.voicemasters.ai
Open Voice Network (voice assistance worthy of user trust) - https://openvoicenetwork.org/

Other books:
Designing Voice User Interfaces: Principles of Conversational Experiences - Cathy Pearl
How to Make Sense of Any Mess - Abby Covert
The Content Strategy Toolkit - Meghan Casey
Audio Branding: Using Sound to Build Your Brand - Laurence Minsky and Colleen Fahey

Learn more about Scot at LinkedIn
Learn more about Susan at LinkedIn

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Apr 5

59 min 45 sec

The Future is Spoken presents Daniel Suissa and Ilana Meir as this week's guests. Ilana Meir is a conversation designer at Facebook Reality Labs, where she works with a cross-functional team on AR/VR technologies, namely Oculus and Portal. Ilana approaches her work with a cultural lens: first seeking to understand the cultural underpinnings of behaviour in a space, and then how a new product will affect them. For her contributions to the design field and the voice community, Speech Technology magazine named Ilana a "Speech Technology Luminary".

Daniel Suissa is a software engineer at Facebook Reality Labs, working on knowledge bases for conversational AIs with a focus on media-related interactions. He previously helped build the personal finance manager Exeq (acquired by RetailWorx), helping young folks understand their spending and connect with their favourite local merchants. Daniel comes from a background in data representation and understanding from the IDF, where he led a team of analysts that advised military leadership on resource management and strategy.

Starting with their own experiences, they end up discussing collaboration in conversation design.

Tune in now!

Conversation Highlights:
[00:01:39] Ilana's journey towards conversation design
[00:03:01] Daniel's journey into conversation design
[00:05:34] How does the interaction between designers and developers look?
[00:10:11] Learn from Daniel's experience how developers can adapt to a user-centered approach
[00:11:33] How is designing for scannability different from listenability?
[00:13:31] Information architecture in conversation design is key
[00:19:53] What is fun about working as a designer with a group of developers?
[00:21:11] Best practices for getting into the conversation design space

Learn more about Ilana at LinkedIn
Learn more about Daniel at LinkedIn

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Mar 31

35 min 18 sec

The Future is Spoken presents Jonathan Bloom as this week's guest. Jonathan is a Senior Conversation Designer at Google, focusing on the Google Assistant. Jon was previously the UX Research Lead for Jibo, Inc., creators of the social robot of the same name. Jon was also Senior Voice User Interface Manager for Nuance Communications, where he sat on Nuance's Innovation Steering Committee. Over his 20-year career, Jon has designed graphic, speech, and multimodal user interfaces for robots, IVRs, dictation software, cars, and mobile applications. Jon holds a Ph.D. in Cognitive Psychology from the New School for Social Research.

Starting with their own experiences, they end up discussing standardizing voice experiences.

Tune in now!

Conversation Highlights:
[00:03:39] Jonathan's journey from Nuance to Google
[00:10:45] Multimodal design and how it helps
[00:13:03] Why do people expect emotionally intelligent digital assistants?
[00:17:52] Does anthropomorphism lead people to expect more?
[00:22:37] People are expecting more from their assistants; where do we draw the balance?
[00:31:40] How much is too much personality?
[00:45:17] Script writing keeps Jonathan inspired
[00:48:00] Jonathan encourages new conversation designers to read Cathy Pearl's Designing Voice User Interfaces

Learn more about Jon at
LinkedIn
Twitter

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Mar 24

50 min 38 sec

The Future is Spoken presents Jon Stine as this week's guest. Jon Stine is the Executive Director of The Open Voice Network (OVN), a non-profit global association dedicated to bringing the benefits of standards to the world of artificial intelligence-enabled voice assistance. The OVN is a Directed Fund of The Linux Foundation. He brings to this role more than 30 years of global leadership in the commerce and technology industries.

Jon's retail industry knowledge was first shaped in the womenswear apparel business, where he headed sales of a well-known national brand to leading US department and specialty stores. In 2000, he joined the Intel Corporation to create and head its first global outreach to the retail and consumer goods industry. In the years that followed, he was a co-founder of the Metro Group Future Store Initiative in Germany and of the Pan-Pearl River Delta Initiative that first brought digital transparency to the China-to-US supply chain. He joined Cisco Systems' retail-CPG consulting team in late 2006 and later headed Cisco's North America consulting practice for retail-CPG. In 2014, he returned to Intel as the Global Enterprise Sales General Manager for the retail, hospitality, and consumer goods industries. He stepped away from Intel in 2019 to build The Open Voice Network. Through the years, Jon has worked directly with customers across the Americas, Western and Central Europe, the Middle East, India, China, and Japan, as well as with delivery partners in hardware, software, services, and consulting. Jon resides in Portland, Oregon, USA.

Starting with their own experiences, they end up discussing standardizing voice experiences.

Tune in now!

Conversation Highlights:
[00:04:05] The inception story of the Open Voice Network: listen to how a group met over coffee and ended up starting the Open Voice Network
[00:06:39] Voice is at the same stage user experience was before 2006: Jon and around 180 volunteers are working together to bring voice standards to life
[00:07:19] The value of standards
[00:12:05] How do we make voice trustworthy?
[00:13:29] Voice standards need regulation, like accessibility, to empower users
[00:16:06] What is being done to make voice more ethical?
[00:17:32] Privacy and security: how do we protect them, and what values might we promote?
[00:27:12] Voice is a crowded space; will users select a voice assistant based on standards?
[00:33:43] Designers and developers can all add value to the Open Voice Network

Learn more about Jon at
LinkedIn
Open Voice Network

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Mar 16

38 min 32 sec

The Future is Spoken presents Jeff Adams as this week's guest. Jeff has been leading prominent speech and language technology research for more than 20 years. Until 2009, he worked at Nuance/Dragon, where he was responsible for developing and improving speech recognition for Nuance's "Dragon" dictation software. He presided over many of the critical improvements in the 1990s and 2000s that brought this technology into the mainstream and enabled widespread consumer adoption.

After leaving Nuance, Jeff joined Yap, a company specializing in voicemail-to-text transcription. He assembled a strong team of 12 speech scientists who, within two brief years, were able to beat all competitors on an unbiased test set. They also matched the performance of a competitor who used (off-shore) human transcription. Yap's success caught the interest of Amazon, who wanted to jump-start their new speech and language research lab. Upon acquisition, Jeff led efforts to build one of the industry-leading speech and language groups. His Amazon team developed products such as the Echo, Dash, and Fire TV. Jeff left Amazon in 2014 to found Cobalt Speech and Language.

Starting with their own experiences, they end up discussing crafting natural conversations for bots.

Tune in now!

Conversation Highlights:

[00:28] The journey to voice
● Jeff has been working on speech technology for almost 26 years, starting with a small speech company in Boston.
● He ended up working at Amazon on Alexa before it was launched.
● Cobalt works with companies that are looking for speech-related technologies; it licenses technology and also customizes it.

[03:58] Can anyone design natural conversations?
● Jeff explains that designing natural conversations is an art. One way to approach it is to treat the system as a human.
● He also talks about designing a system that can cater to everyone's needs.
● Giving users what they are looking for is what matters!
● Jeff touches on building a system that responds appropriately to all the different ways people ask for something.

[15:10] Creating a bot despite the lack of resources
● Jeff divulges different ways of creating a natural voice application despite a lack of resources.
● A slow launch with a lot of beta testing is the key.

[18:41] NLP vs. NLU
● Jeff explains the difference between Natural Language Processing and Natural Language Understanding: NLP is a broad umbrella term referring to any computerized processing of human language, while NLU is a subset.
● What are the uses of NLU?
● Spoken Language Understanding is Automatic Speech Recognition (ASR) + NLU.
● Jeff also touches on ensuring a system understands what the user says.

[34:04] The secret sauce for making ASR systems work better
● Jeff elaborates on the different approaches for making ASR systems work together.
● How can we design a speech recognition system that understands users in their natural environment?

[41:41] Best practices for designing a speech system

[44:15] Must listen
● Jeff's advice for someone trying to get into speech recognition.

Learn more about Jeff at
LinkedIn
Or email him at INFO@COBALTSPEECH.COM

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Mar 8

42 min 51 sec

The Future is Spoken presents Elaine Lee as this week's guest. Elaine Lee is a designer specializing in artificial intelligence and machine learning, with a focus on ethical AI. She is currently a Principal Product Designer on the AI team at Twilio, and previously led the design of eBay's AI-powered shopping assistant on Facebook Messenger and Google Assistant.

Starting with their own experiences, they end up discussing building a trustworthy voice assistant.

Tune in now!

Conversation Highlights:
[00:01:32] Elaine's journey from psychology to product design
[00:02:37] Elaine's experience building conversational bots at eBay
[00:05:20] The Twilio Autopilot platform: a developer-centric tool for creating conversational AI bots for various text- and voice-based channels
[00:08:17] Tips for building a natural language system
[00:10:56] Differences designing for chatbots vs. voicebots
[00:18:19] Best practices for building trust with voice assistants and conversational AI bots
[00:23:52] Disambiguation strategies to build trust
[00:32:22] Setting expectations to build trust
[00:41:54] Elaine's advice for aspiring conversation and VUI designers

Learn more about Elaine at LinkedIn

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Mar 1

45 min 34 sec

The Future is Spoken presents Rupal Patel as this week's guest. Rupal is the founder and CEO of VocaliD, a voice AI company that creates unique synthetic voices. Unlike conventional methods, VocaliD's award-winning technology generates high-quality, natural-sounding voices within hours, not months. The company leverages cutting-edge machine learning techniques, proprietary voice blending algorithms, and its crowdsourced Voicebank dataset to enable brands and individuals to be heard in a voice that is uniquely theirs. VocaliD is a spin-out from her research lab at Northeastern University, where she is a tenured professor in the Department of Communication Sciences and Disorders and the Khoury College of Computer Sciences.

Starting with their own experiences, they end up discussing synthetic voices.

Tune in now!

Conversation Highlights:

[00:23] The journey to synthetic voices
● Rupal works on making customized synthetic voices for individuals as well as for companies. She started with the mission of creating voices for people who couldn't speak.
● She also explains how the world of voice is touching the sky right now.

[03:32] Identifying the problem
● Rupal explains the reason behind creating VocaliD and the problems she identified while researching people with speech impairments.
● People with limited speech capabilities can still control the prosody of their voice.
● What does it take to create a natural-sounding voice?

[11:47] Tuning the prompt to your needs
● Rupal speaks about the different ways to tune a prompt for pitch, speed, or even tone. End-to-end synthesis methodologies allow speech to be controlled differently.
● They have also started implementing a new method for making changes at the word level. She is also excited about some of the style modifications.

[20:09] The importance of a natural-sounding voice
● She elaborates that almost every way we consume information is through our ears. Because of so much audible capability, you need to have a natural voice.

[22:18] What secret skills do you need to enter the text-to-speech world?
● She touches on the skills you need to enter the world of speech and design natural-sounding voices.
● Linguistics is becoming the heart of voice.

[26:11] Research is the most crucial aspect of everything
● Rupal explains that apart from running experiments on building voices and making them sound more natural, they are also running listening perception experiments to understand how consumers' preferences for voice differ.
● She also touches on how they ensure that quality remains up to par, and on the operating system's role in amplifying quality.

[40:36] How is VocaliD different from others?
● VocaliD is focused on customized voice, as opposed to the fixed voice libraries that other companies offer.
● Machine learning can get you 90% of the way, but you need an understanding of speech to cover that last mile.

[46:40] Must listen
● Rupal's advice for someone trying to get into the world of voice.

Special reminder:
Celebrate the diversity of human voices! Will you share your voice? Join others from around the world in sharing the gift of voice. Register today.

Learn more about Rupal at
Vocalid.ai
LinkedIn
Vocalid.ai/voicebank

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Feb 22

47 min 28 sec

The Future is Spoken presents Marco Pasqua as this week's guest. Marco is Co-founder of LIKE Ventures and an award-winning speaker and entrepreneur. Starting with their own experiences, they end up discussing accessibility and inclusion in voice design. Tune in now!

Conversation Highlights:
[00:02:11] The 2008 layoff that led to a career in accessibility
Learn about Marco's journey from the video game industry to evangelizing accessibility. During the recession, losing his job opened new avenues for Marco as an accessibility expert, public speaker, and entrepreneur.
[00:06:06] LIKE Ventures and Marco's journey towards building an accessible future
Marco talks about LIKE Ventures' all-virtual accessibility conference scheduled for October 14th, and discusses ways to make online events and conferences accessible for everyone.
[00:13:02] Accessibility and meaningful access are two different things
Marco explains that accessibility is not about making different products for different people, but about making one product that everyone can use. His example of the difference between accessibility and meaningful access:
"For example, for a building to have a ramp outside a door and then say, oh, people can get inside the front door. Right. But then ask yourself, where is that ramp located? Is it actually at the front door, or is it at the back of the building near a trash can or near the dumpster? So you're saying, well, by the way, we have accessibility, but it means that you have to come through a different entrance, but that's OK, right? Because you're just like everybody else. No. Meaningful access is when every single person who's expected to use a product or service can use it in exactly the way it's intended, without having to feel like they have to adapt themselves to meet the function of the product, but rather that the product is already thinking about how everyone, whether you're nine or 90, can use it."
[00:18:22] Accessibility by design
Marco explains that accessibility is always an afterthought for many, and that accessibility has become more of a money-minting machine for the government. He describes the lack of accessibility awareness among many companies and individuals.
[00:23:04] Making voice applications more accessible
How knowing users' personal circumstances and preferences can make voice interactions more accessible. How dialogue and conversation design can show empathy and guide users with disabilities.
[00:45:11] Must Listen
How voice technology is helping Marco prepare for the arrival of his baby.

Learn more about Marco at
●      LinkedIn
●      Twitter

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Feb 18

59 min 34 sec

The Future is Spoken presents Maaike Coppens as this week's guest. Maaike is an international conversation design expert, speaker, and co-author of the Voice UX Workbook. With a background in linguistics and UX design, Maaike found conversation design the perfect blend. Over the last couple of years, Maaike has worked with award-winning agencies, large enterprises, and innovative companies worldwide. Eager to be part of a team with an opinionated take on conversation design, Maaike recently joined Greenshoot Labs - a chatbot and applied AI agency in the UK - as their Head of Conversational UX. Tune in now!

Conversation Highlights:
[02:54] The journey to conversational UX
● Maaike explains that we are constantly flipping the coin the wrong way when designing experiences.
● She took an academic path in linguistics and then got deeper into UX. When conversation design became more widespread, it was an ideal mix for her.
[05:35] Linguistics as a career?
● Maaike shares her perspective on choosing linguistics as a career.
● It's great to see how conversation design has helped her think differently about human-to-human conversations.
[08:44] The most important traits a conversational AI must have
● Maaike explains that being flexible and being inclusive are essential traits. She also feels that relatability is a necessary quality for a conversational AI or voice assistant.
● Accessibility and inclusivity are related, but they are not mutually exclusive.
[21:34] Enabling the disabled with voice
● She elaborates that when you are designing voice assistants, empathy is critical. It is necessary to gather insights and research before doing any empathy exercises.
● Empathy becomes very important when backed up by user research.
● She still sees plenty of difficulty around language proficiency. She says that instead of chasing the next big thing, we should perfect what is already available.
[35:05] The reality of voice assistants
● She feels that if you have to adapt your human behaviour to technology, it is a sign that the technology is already broken.
● The industry is more reactive than proactive in designing inclusive solutions. When Amazon launched smart speakers, they never thought about disabled people.
[42:30] How can we design an inclusive solution?
● Maaike divulges what designers can do to design an inclusive solution, and urges them to get out of their bubble.
● She notes that listening is the one vital skill you need as a conversation designer.
[47:02] No AI without IA (Information Architecture)
● Maaike elaborates that information architecture is a significant part of conversation design. It is about mapping the information available to you.
● Why do we forget the basics of UX?
● Bringing information architecture, research, and some UX processes into conversation design can significantly change the experience.
[54:50] Must Listen
● Maaike's piece of advice for someone trying to get into the world of voice.

Learn more about Maaike at
●      LinkedIn
●      Twitter
●      GreenShoot Labs

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast. Follow Shyamala Prayaga at @sprayaga

Feb 15

54 min 13 sec

The Future is Spoken presents Greg Bennett as this week's guest. Greg is Conversation Design Principal at Salesforce, leading the company's first dedicated Conversation Design practice since its inception. As a linguist, Greg focuses his work on empowering businesses to create chatbots that feel natural and helpful, build user trust, and meet customer expectations for conversational behavior. Greg works with Salesforce's product teams, customers, and partners to tailor their conversation designs for cross-cultural differences across channels and user populations, as well as to effectively express personality or conversational style. Starting with their own experiences, they end up discussing designing conversational bots for the enterprise. Tune in now!

Conversation Highlights:
[00:32] From linguistics to conversational AI
Greg's academic background is in linguistics; he studied it for his undergraduate and graduate work. Linguistics is the science of how language works in practice. He also talks about the design principles they teach bots in order to accomplish a particular outcome. Linguistics is gaining prominence alongside conversation design.
[03:38] Greg's work at Salesforce
Greg talks about Salesforce's approach to developing the Einstein Bots platform, and explains how they are ensuring the growth of this technology. Einstein is an NLU technology that trains chatbots to create a learning model, which helps chatbots built with Salesforce understand customer interactions in a chat window. Are there any plans for voice enablement? Greg also elaborates on enterprise use cases for Einstein Bots.
[12:26] Benefits of using enterprise bots
Greg speaks about how bots fit into the benefits of a company. Conversational bots give you a better way to show customers what your institution is all about. He sheds light on the need to make sure they reflect your brand.
[16:34] Creating your bot
Greg touches on the customizability of Einstein Bots and talks about their response rate. He discusses the Intro Template Bot, which gives Salesforce administrators the ability to launch a bot from templates. How will customers know how to design their bots? Greg also elaborates on the building blocks of designing a conversational bot. From an overall implementation perspective, the components depend on what your end goal is. Greg explains the best practices to consider when designing a conversational AI.
[37:43] Choosing the right options
Greg talks about deciding what to choose and what not to. As a conversation designer, he is biased toward the importance of data in choosing the right options. He also touches on how to end the conversation and answers questions like "Should you always end a conversation with a question?" and "What about ending the conversation?" Everything has variations, so as a conversation designer you can use that to your advantage to create an unforgettable experience.
[46:08] Sticking with your branding traits
For Greg, it's about creating a list of what fits within the brand. He explains creating a repository of all the different ways he could acknowledge the receipt of information. He also shares his experience of working on Cortana and compares it with developing an enterprise bot.
[50:16] Measuring the user's trust
Greg talks about the different ways to measure the user's conversation with the bot.

Feb 8

56 min 58 sec

"The Future is Spoken" presents Dr Maria Aretoulaki as this week's guest. Maria is a voice-first veteran, having been designing Voice User Interfaces (VUIs) since 1996, long before voice assistants, and even before telephone self-service and speech IVRs were mainstream. She got into voice design through a Post-Doc in Spoken Dialogue Management for speech recognition applications, after earning a PhD and an MSc in NLP and Machine Learning and, earlier, a BA (Hons) in Linguistics & English. She has held senior VUI / voice design and technical project management positions in both academia and industry in the UK and Germany. In 2008, she founded her own VUI and conversational design consultancy, DialogCONNECTION Limited. Since then, Maria has provided her VUI design and speech recognition expertise to organizations in Europe, the US, and Asia, including Apple, Samsung, Vodafone, Sky, TalkTalk, Emirates NBD, the NHS, and the European Commission. Her voice designs, call flows, dialogue scripts, and tuning recommendations have saved her clients up to $10 million annually. Originally from Greece, Maria has spent the last 30 years between the UK and Germany, and has provided multilingual and culturally appropriate voice design services for English, German, French, Italian, Spanish, and Greek. Tune in now!

Conversation Highlights:
[00:25] From Linguistics to NLP to VUI Design
● Maria originally studied Linguistics and English Literature in Greece, so she naturally wanted to go on to study and work in the UK.
● She was looking for sponsorship for her Masters when she bumped into the field of Machine Translation and NLP.
● During her PhD studies in 1993, she discovered the world of artificial neural networks, got fascinated by their potential, and decided to apply them to automatic text summarization.
● It was through a Post-Doc in Spoken Dialogue Management for speech IVRs that she got into the world of voice and speech recognition back in 1996. From then on, she has been a VUI Designer!
[07:20] Siri comes into play!
● Maria explains how the iPhone was like having a full computer in our pockets, and how Siri was the beginning of a new era, making speech recognition and voice mainstream.
● She feels very proud of the voice field, which she considers like her "baby" growing up to be an adult!
[09:40] Explainable VUI
● Maria coined the term "Explainable VUI" amid the myriad of voice applications and voice assistant skills / actions / capsules designed by programmers or marketing people.
● "Explainable VUI" means designing a human-computer interface bearing in mind both the complexities and imperfections of human language and the limitations of the technology (ASR / speech recognition / NLP).
● A lot of her work with companies and organizations creating VUI designs from scratch, or reviewing existing ones, involves carefully crafting system prompts.
● She stresses the importance of knowing how the underlying technology works.
[52:14] Must Listen
●      Maria's advice for aspiring conversation designers and people new to this field on how to start, learn, get ahead, and flourish.

Learn more about Maria here:
●      Company website, DialogCONNECTION Ltd: http://dialogconnection.com/
●     LinkedIn: https://www.linkedin.com/in/aretoulaki-
●     Twitter: https://twitter.com/dialogconnectio and https://twitter.com/ar3toul4ki
●     Blog: https://aretoulaki.wordpress.com/

Feb 1

50 min 24 sec

The Future is Spoken presents Obaid Ahmed as this week's guest. Obaid is founder and CEO of Botmock, the leading chatbot design collaboration platform. Before Botmock, Obaid co-founded a design consultancy firm, OAK, where he led the technology and design teams to deliver over 400 projects. Today, Obaid concentrates on building on the success of Botmock to make it a real driver in building the next generation of conversational apps. Starting with their own experiences, they discuss how startups can build their own VUI, and share some powerful tips for aspiring conversation designers. Tune in now!

Conversation Highlights:
[00:16] What is Botmock?
● Obaid is a software developer by training and spent some time working at Blackberry and IBM. He then ran a consulting agency where they built all sorts of mobile and web software for many different companies.
● Botmock is a design and prototyping tool. Essentially, it lets teams who build conversational experiences go from an idea, through planning, to developer handover: understand exactly what will be built and how the experience will eventually unfold, and test it before entering the development phase.
● Obaid unfolds whether Botmock is a designer-centric or a developer-centric tool.
[05:31] Conversation design is a team sport
● Obaid explains how startups building conversation design practices should go about assembling a conversation design team.
● He elaborates on the importance of research. Many teams they work with in enterprise settings have people dedicated specifically to training-data research.
● He also explains how they identify different utterances. The best approach is to look at what users are already saying in existing data sets.
● Successful bots bring users slowly into their world and teach them how to interact further.
[14:36] What about the startups?
● Obaid speaks about how startups can build voice automation despite lacking data.
● He recommends creating some prototypes. Don't worry about the depth of those prototypes, but make some samples. Come up with use cases that you think your customers are looking to tackle.
● He also sheds light on the need to build early and test as soon as possible, and explains how teams can do in-depth analysis.
[26:15] Tackling the problems
● Obaid touches on how the conversation designer gets to the data in order to synthesize it.
● One of the things that is different and hard from a design perspective, especially in conversational design, is that chatbots are usually already rolled out into production before real data arrives, so there is very little in-depth analysis teams can do at the design stage.
● What kind of skill set should the writer have?
[31:56] The importance of linguists and psychologists in conversation design
● Obaid elaborates that language plays a significant role in making something easy. He also explains how crucial these roles are in conversation design, especially linguists.
● Emoji in voice conversations? (Text + pitch = satisfied conversation)
[37:46] It's all about text-to-speech tuning!
● Obaid says they are working on some more advanced customization features as well, but it varies from engine to engine, with a lot of variation, and soon they will probably be able to give designers more control.
● He also touches on what "engine to engine" actually means.
[40:07] Who is responsible for tuning the prompts?
● Obaid also explains whether designers and writers should know about the technology in order to drive successful communication and collaboration.
[43:49] Must Listen

Jan 25

45 min 8 sec

The Future is Spoken presents Jeff Blankenburg as this week's guest. Jeff spent the early part of his career in digital advertising, building websites for Victoria's Secret, Abercrombie & Fitch, and Ford Motor Company, among others. He also spent 8 years at Microsoft, primarily as an evangelist for any new technology he could get his hands on. Today, he works on the Amazon Alexa team helping developers make Alexa even smarter. Jeff has also spoken at conferences all over the world, including London, Munich, Sydney, Tokyo, and New York, covering topics ranging from software development technologies to soft-skill techniques. Starting with their own experiences, they end up answering questions like "How do we build the future of voice?" Tune in now to find your answers!

Conversation Highlights:
[00:23] From psychology to Alexa evangelist
●       Jeff explains how he left school with a degree in psychology, and thought that was the path he was going to go down before he realized software was really where he wanted to be.
●       He also speaks about the idea of a voice-enabled assistant he had come up with at Microsoft. When he saw the actual product at Amazon, it fascinated him.
[03:23] Ambient computing is the future!
●       His fascination with ambient computing comes through clearly.
●       They discuss how this computing power can help us get everything done right.
[06:08] Should voice enable more people?
●       Jeff stresses how the technology can help people with disabilities live better lives.
[08:38] What is the 'one breath test'?
●       He also touches on some of the limitations of voice assistants, and how summarizing content can solve many problems.
[12:03] How about contextual design?
●       Jeff thinks that instead of developing something new and trying to make it work, one needs to be able to take a step back and ask: what are the paths I expect my users to take through this experience? Then make sure those paths are supported in a nice, structured way.
●       Conversational design is a great way to think about this. What are the main things your users are going to do? This lets you define the core experiences pretty easily.
[14:24] Ensuring everything has been tested out!
[16:49] Beta testing is the key to the future of voice
●       Being able to define not only a starting position but also an outcome, and then validating that against your skill, is a really valuable tool for making these tests part of your user design.
●       Jeff touches on how these tests are a crucial way to determine what the final product will look like.
[18:39] Localizing Alexa
●       Jeff says that to localize Alexa they definitely need someone who is fluent in the local language, so they are solving this problem in a couple of different ways.
●       He also explains an intriguing way of being unique that compels users to listen.
[22:56] Where is the gap?
[27:23] Can conversation design become a medium through which people enable discoverability?
[28:31] Dealing with the unexpected
●       Jeff stresses the ways in which Alexa is designed to deal with unexpected things.

Jan 18

50 min 22 sec

The Future is Spoken presents Celene Osiecka as this week's guest. Celene is a conversational designer. She has been designing conversational interfaces using emerging technologies like chatbots, AI, natural language, speech recognition, and machine learning for the last fifteen years, delivering over 500 conversational interface deployments in the financial, telecommunications, travel, retail, and education industries. With a background in psychology and HCI, she currently leads a global team of conversational designers that seeks to design innovative and ground-breaking conversational interfaces. Starting with their own experiences, they end up discussing the importance of testing and the need to make assistants as human as possible. Tune in now!

Conversation Highlights:
[00:25] Celene's journey to becoming a conversation designer
●       Celene explains how she got into conversation design when people didn't even know what chatbots were.
●       She started out doing everything; then, as the company grew, she scaled up and began to specialize instead of looking at everything.
[03:00] Designing a digital world
●       Celene touches on the key differences and commonalities she saw while working across different platforms. She further explains her transition into voice-based design and how she missed a lot of the things she used to have earlier.
●       A whole new realm of challenges came up when she moved from digital to voice-based design.
[06:30] What is the future of voice design?
●       She introduces us to the importance of IVR in designing the future of voice. Celene also highlights the limitations of voice and explains how voice can never own the entire path.
●       "We can't force people into a role that they don't want to be in, depending on what their situation is."
[12:10] How is automation helping companies during COVID-19?
●       Celene speaks about the need for automation in the present world. She also touches on how different companies are benefiting from bots and voice technologies.
[18:08] How can we design a solution that enables people to use more virtual assistant services?
[21:42] What are the similarities when designing a bot?
[24:03] What about testing?
●       Celene elaborates on the differences she faced while testing bots for different companies. She also details the rigorous testing process at financial companies and their hesitancy to go live.
●       She also speaks about how frequently they test their technology, and her experiences with different testing methods.
[31:53] Making the assistants as human as possible!
●       Celene shares her experience of covering most of the utterances to make assistants as natural as possible.
●       She also elaborates on the importance of data and how it helps build better assistants, while speaking about some rare scenarios where no data is available.

Learn more about Celene at
●       [24]7.ai
●       LinkedIn

If you enjoyed this episode of The Future is Spoken Podcast, then make sure to subscribe to our podcast.

Jan 11

38 min 38 sec

The Future is Spoken presents Diana Deibel as this week's guest. Diana is an experienced VUI designer and the current Design Director at Grand Studio, a Chicago-based product design and strategy consultancy, where she leads teams in conversation and product design. She's the co-founder of both the VUI Design Slack channel and the Chicago chapter of the Ubiquitous Voice Society, as well as a frequent speaker at conferences and colleges across the country, including SXSW, SpeechTEK, VOICE, Northwestern University, and Columbia. Her book with fellow conversation designer Rebecca Evanhoe, Conversations with Things, will be out on the Rosenfeld Media roster in spring 2021. In addition to writing and designing for bots, Diana is a produced playwright and screenwriter who has written and produced for a variety of networks and creatives, including Animal Planet and Blue Man Group. She is the co-creator of the web series The Underlings and the in-development pilots Shytown and Automates. Starting with their own experiences, they end up discussing technology that not only empowers people but also defines new standards for voice assistants. Tune in now!

Conversation Highlights:
[00:07] Diana's fascinating journey to becoming a VUI designer
●       Diana explains how she got into VUI design while sharing her experiences as a health writer.
●       She is also co-authoring a book with Rebecca Evanhoe, Conversations with Things, which takes a practical approach to VUI.
●       She has a scriptwriting background, both from training and from life and work experience.
[05:04] Can conversational interfaces benefit industries?
●       Diana approaches this question in an interesting manner. Instead of favoring one side, she lays out both, explaining how it depends on the use case.
[11:20] Empowering people with technology!
●       She discusses the role of conversation design in empowering people with physical or emotional disabilities, and touches on technology's role in helping people out of addiction.
●       Diana elaborates on the responsibility of modern-day assistants in supporting people's medication and helping them in many different ways.
[19:02] The matter of privacy!
●       Since we have assistants almost everywhere, questions of privacy arise, and some people are skeptical about these devices. Diana lays down some powerful points for using them with better ethical solutions in place.
●       She also explains that it is the responsibility of the whole team to come up with strategies to ensure users' privacy, while speaking about the importance of targeting the needs of all users.
[30:51] Is the system persona a game changer?
●       Diana describes the system persona as the most fun part of designing any kind of conversation platform, and touches on the importance of an appropriate persona.
[35:00] How can defining standards help drive the conversation?
[37:19] How important is context when it comes to designing conversations?
●       Diana elaborates on the need for context, and underscores the importance of multimodality in improving conversations.

Learn more about Diana at
●       LinkedIn

Jan 4

42 min 58 sec

The Future is Spoken presents Dr. Teri Fisher as this week's guest. Dr. Fisher, dubbed "The Voice Doctor," is an award-winning TEDx and keynote speaker, physician, podcaster, author, educator, and leading authority on all things voice technology. A doctor by day and voice enthusiast by night is how Dr. Fisher describes himself. After using Amazon's Alexa and exploring the world of voice technology, he realized this new way of interacting with devices would change everything.

Dr. Fisher asserts voice technology will be the new operating system for our lives. Using your voice is natural; it is something everyone does. There is no learning curve with voice technology as there is with computers and smartphones. He believes voice technology will be a game-changer and allow people to better multitask. How? It can be explained with the acronym VOICE: versatile, omnipresent, innate, contextual, and efficient.

In the healthcare field, voice technology is being used by people to ask health questions without having to sit down and search. The technology also opens doors to hands-free assistance in first aid situations, where questions can be asked and answered without taking the person away from the problem at hand. Voice devices could also help guide people through their at-home postoperative care, assist people with chronic ailments, and remind people of medical appointments.

Voice technology could also be used as a diagnostic tool. Researchers are working on ways for voice devices to provide an initial diagnosis of COVID-19 by listening to coughs and analyzing the sounds. Additionally, it could be used to diagnose depression, coronary artery disease, and dementia. In a hospital setting, says Dr. Fisher, voice-enabled devices could be added to hospital rooms to make a patient's stay easier. They could also be used to augment nursing tasks and reduce delays. Voice technology could change recordkeeping, with healthcare workers' comments being automatically transcribed into patients' electronic files.

These changes are being investigated, he says, but they have to be weighed against privacy regulations. Implementing large-scale change will take time, as the healthcare field is generally cautious about change. Ultimately, believes Dr. Fisher, voice will become another vital sign, just like pulse rate and blood pressure. He also touches on voice technology being used in operating rooms and for medical training, as well as the possibility of voice-enabled devices taking patient histories and other information before a medical appointment.

Find Dr Fisher on LinkedIn.

Dec 2020

45 min 4 sec

The Future is Spoken presents Roger Kibbe as this week's guest. Roger is currently a Senior Developer Evangelist for Viv Labs, the platform behind Samsung's Bixby 2 voice engine. He works with executives, designers, and developers on voice and conversational AI strategy and execution.

Roger says he loves technology but has some gripes with how we interact with it. Technology can empower and enable us, but he acknowledges it can also be a time sink. You pull out your phone to look for something, get distracted by social media, and forget what you were originally looking for. His fascination with voice-enabled technology started after using the Echo Dot. Considering the possibilities that voice enablement afforded, he believed the technology was ground-breaking in that it allowed people to get specific things done and then "get out of the way." The technology became a tool and less of a distraction. The "time sink" factor was eliminated.

In working with Samsung and the AI-driven Bixby voice engine, Roger says, they are developing new products and new ways of interacting. That raises interesting questions. How do you interact with a voice-enabled device that has a screen? How do you interact with one without a screen? Being multimodal is a big part of developing new products.

Roger stresses that technology needs to enable inclusiveness. Voice-enabled devices were, at first, accidentally embraced by the deaf community. That wasn't planned or intended, but it unlocked inclusiveness for a group of people. Voice-enabled devices can also make technology inclusive for people who cannot read, bridging the gap and allowing them to use new technology. This inclusiveness is something that should be mandated, not added later as an afterthought. Roger also touches on the thorny issue of privacy as it relates to voice-enabled devices, and how that might be problematic as the technology continues to develop.

Find Roger on LinkedIn.

Dec 2020

36 min 25 sec

The Future is Spoken presents Renée Cummings as this week’s guest. Among many roles, Renée is a criminologist, criminal psychologist, AI ethicist, data activist, urban technologist, and international consultant. Renée specializes in therapeutic jurisprudence, urban AI, ethical AI adoption, and diversity, equity, and inclusion in AI. She is also the Data Activist in Residence at the University of Virginia. In this episode, Renée examines the importance of understanding and applying ethics in voice design. She discusses how her diverse background led her to working in AI: “I started to look at the risk assessment tools that were being used in the criminal justice system, and how these algorithmic decision-making systems are really misbehaving when it came to the administration of justice.” Renée explains that new and emerging technologies should be given “robust ethical guardrails” to prevent the potential harm they could cause. For Renée, ethical design starts with a fundamental understanding of what good design is, as well as considering whether that design is good for all communities. Diversity, equity, and inclusion should be an integral part of the design process. “Voice has the ability to motivate, to inspire, but it also has the ability to harm and to disenfranchise—and you have got to understand that as a designer of voice technology, you have a role.” Some questions Renée asks: “Is that technology accessible to someone with a voice impediment, or if someone is in a particular state of trauma; if someone has been harmed and they bring more emotion to their voice, or if someone has a different life experience?” Voice technology can also be problematic if misused in hiring. This year in particular, employers have relied more on technology in the recruitment process. Hiring tools that are not designed to be culturally diverse or appropriate run a significant risk of discriminating against some candidates.
Individuals with heavy accents, for instance, may be denied employment opportunities. To create more ethical, human-centred AI, it’s crucial that “all voices are amplified, heard, respected, appreciated, celebrated.” Renée also discusses considerations when designing for sensitive conversations. Due to differing cultural backgrounds and experiences, what may be a sensitive conversation for one individual may not be for another. Even variations in pronunciation and enunciation can change the understood meaning behind a word. To ensure that no harm is done through words, designers must “bring that level of diversity to that whole concept of what is sensitive conversation.” Renée wants students who are learning design for voice interfaces or chatbots to focus on ethics and to understand that everything created will be part of a long-lasting, socio-technical “legacy.” “As designers, we've got to be bold enough as individuals working in new and emerging technologies, working in artificial intelligence: bold enough to do the right thing, bold enough to understand it is our own responsibility to educate ourselves on things like ethics.” Renée further discusses the importance of understanding and applying ethics to voice technology in this powerful and insightful podcast episode! Find Renée on LinkedIn.

Dec 2020

33 min 10 sec

The Future is Spoken presents Phillip Hunter as this week’s guest. Phillip is an expert in strategy, design, and AI-powered optimization. He is the founder of CCAI (Conversational Collaborative AI) Services and is part of the team behind Alexa. In this episode, Phillip discusses voice analytics and performance optimization techniques. Before anything is coded for conversational AI systems, voice analytics work needs to occur. “Many of these systems are built to achieve something specific,” Phillip explains, and analytics, such as user engagement and monthly usage, help measure their success in achieving those goals. A common issue identified through analytics relates to recognition errors. These errors can result in users getting confused or stuck while using an application. Even when a product is well designed and thought out, Phillip notes that there will always be unpredictable issues. Phillip talks about the goal of reaching a resolution for users. He gives the example of a customer who calls a bank’s call centre to confirm that a specific deposit has occurred. This kind of request is relatively straightforward to automate, but it requires the AI to gather information from the user and access a back-end information system to see what is happening in a specific account. In an ideal case, the request can be fulfilled independently, using only AI. Phillip explains the importance of anticipating issues by gathering and studying data before an application goes into production. As soon as a product is live, there is the added pressure of tight deadlines and upset users to consider. Once an application is in production, the next step is to validate whether the system’s performance is on or off target in achieving set goals. If the numbers do not match the target goals (which is common), a diagnosis is needed to determine the cause. A solution can then be proposed and implemented.
After the issue has been “fixed” with the solution, metrics are examined again to ensure that it is working. A lot of teams are tempted to focus on the symptom and assume it’s a “recognition event” issue, such as a miscommunication between the user and the AI during one part of a conversation. However, optimization involves taking a measured and rigorous approach when attempting to fix problems. Phillip explains that symptoms should be evaluated holistically: there are many potential causes behind a problem, and the user’s overall emotional “journey” in an application shouldn’t be overlooked. “The biggest thing I want to encourage people to do is to take that holistic look and really analyze the different things, and ask a lot of ‘what if’ questions.” Phillip hopes that those entering his field consider potential ways that a platform could work, rather than simply how it works now. The study of human verbal communication, in addition to the study of technology itself, is valuable; Phillip envisions a future where more complex and ambiguous tasks will be carried out by AI. Phillip further explores the importance of voice analytics and optimization strategies in this insightful podcast episode! Find Phillip on LinkedIn.
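The validate, diagnose, fix, and re-measure cycle Phillip describes can be sketched as a small loop. This is a hypothetical illustration only: the function names, the toy completion-rate metric, and the thresholds are all invented for the sketch, not drawn from Phillip's actual tooling.

```python
def optimize(measure, diagnose, apply_fix, target, max_rounds=3):
    """Measure a metric; while it misses the target,
    diagnose a cause, apply a fix, and measure again."""
    for _ in range(max_rounds):
        score = measure()
        if score >= target:
            return score          # performance is on target
        cause = diagnose(score)   # look holistically, not just at symptoms
        apply_fix(cause)
    return measure()

# Toy example: each "fix" nudges a task-completion rate upward.
state = {"completion": 0.70}
score = optimize(
    measure=lambda: state["completion"],
    diagnose=lambda s: "recognition_error" if s < 0.8 else "dialog_flow",
    apply_fix=lambda cause: state.update(completion=state["completion"] + 0.1),
    target=0.85,
)
print(round(score, 2))
```

The point of the loop mirrors Phillip's advice: the fix is never assumed to have worked; the metric is checked again before declaring success.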

Dec 2020

1 hr 6 min

The Future is Spoken presents Hans van Dam as this week’s guest. Hans is the co-founder and CEO of the Conversation Design Institute, Amsterdam, which trains and certifies conversation designers from around the world. In this episode, Hans discusses the importance of writing for listenability. For Hans, writing for listenability means designing everything voice-first, even if the writing may only be applied to a chatbot. Hans notes that if designers can find a way to write for voice, the most complicated interface, then they can write for anything. Although Hans started out his career as a copywriter, he shifted his focus to conversation design when chatbots came out. His company was contacted by Google, which was seeking training for its conversation designers to improve the adoption of Google Assistant. “They all said the same thing: This is so new and so different from anything that’s out there, you’re never going to be able to teach anyone this.” Today, the Conversation Design Institute is thriving, and the need for training is on the rise. The institute recognizes and provides certification for three different roles: AI trainers, conversational designers, and conversational copywriters. The fundamental technique for structuring a message in voice involves acknowledging the user’s request (saying “sure” or “got it,” etc.), confirming the request through repetition, and ending with a simple prompt to indicate that it’s the user’s turn to perform an action. Hans explains that both user and bot needs must be mapped out when designing conversations. While a bot will require a list of rules and tasks to provide information, users also have emotional needs to consider. Companies will often prioritize a bot’s needs, but empathetic exchanges are ultimately what keep humans engaged and motivated in a conversation. This imbalance in meeting needs is often a reflection of the team behind the product.
Hans uses the example of a company with ten engineers but only one writer to illustrate what that imbalance may look like. The Conversation Design Institute works to correct this problem by giving everyone “a fair seat at the table.” “The psychology of language and the technology always work together. And a lot of companies fail on their projects because they don’t understand that balance,” Hans posits. Psychological techniques can be applied to everything from delivering bad news to promoting a desired action. Hans gives the example of a voice assistant discussing shoes that a user is interested in purchasing. In addition to stating hard facts about the shoes, such as the price, the voice assistant may mention that these shoes are popular with a lot of park-goers in the city. Mentioning popularity implies that there is “social proof” of desirability attached to the shoes. To promote a certain behaviour, designers also need to build dialogues that consider a user’s ability and motivation to perform an action. If both of these factors can be increased with the help of a voice assistant, then the last step is the prompt: the part of the conversation that triggers a desired action. Better, easier prompts tend to result in better outcomes. Hans envisions a multimodal future, where voice will be a significant part of AI interfaces. Although conversation data control (or privacy concerns) is a hurdle that could slow progress, Hans believes that conversation design will continue to grow as a field. “Just like you cannot imagine a company without engineers today, you will not be able to imagine a company without conversation designers five years from now.”
Find Hans on LinkedIn
https://conversationdesigninstitute.com
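The acknowledge, confirm, prompt structure Hans outlines can be sketched as a simple message template. This is an illustrative sketch only; the function name and the wording of each part are invented for the example, not Conversation Design Institute material.

```python
def bot_turn(request: str, answer: str, prompt: str) -> str:
    """Build one bot message in three parts, per the structure
    described above:
    1. acknowledge the request ("Sure."),
    2. confirm it by repeating it back,
    3. end with a prompt that hands the turn to the user."""
    acknowledgement = "Sure."
    confirmation = f"You asked about {request}: {answer}"
    return f"{acknowledgement} {confirmation} {prompt}"

message = bot_turn(
    "your account balance",
    "it's $240.",
    "Is there anything else I can help with?",
)
print(message)
```

The final prompt matters most in voice: without a visible cue like a text box, it is the only signal that it is now the user's turn to speak.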

Nov 2020

34 min 36 sec

The Future is Spoken presents Ananya Sharan as this week’s guest. Ananya is a search and voice expert working as the product manager for Pandora’s Voice Mode, a mobile-only voice assistant that allows users to easily discover and listen to new music. In this episode, Ananya explores the rewards and challenges of working with voice assistant technology. As the largest audio-streaming platform in the US, Pandora creates personalized recommendations for users, whether they are looking for podcasts or music. Ananya describes Voice Mode as an employee-driven initiative. “We really wanted to build something in-house and showcase our data science, our deep knowledge about the listener, and all of the intelligence we have in our recommendation ensembles to bring that natural conversational way of consuming music and podcasts and put it right on your phone. So we really were thinking with our customer in mind.” Voice Mode, which is a mobile-only feature available on Android and iOS, differs from other voice assistants in its ability to work in ambiguous and hands-free scenarios. Unlike connected devices, which may require a specific song request, Pandora’s creation caters to personalized recommendations, depending on a user’s mood or activity. Ananya uses cooking as an example: “When you say, ‘Hey Pandora, play me something for cooking,’ we know what you like to listen to … so we play something like that.” Ananya emphasizes the importance of advocating for users’ needs. In addition to internally advocating for innovative solutions, a challenge for Voice Mode’s team was getting leaders on board with a milestone-driven schedule, rather than being pinned down to a standard timeline-driven schedule.
Due to the need to train AI models in language and accent recognition, among other hurdles, “it's hard to predict with absolute certainty when the product is going to be ready or when it's going to achieve, let's say, 90% accuracy.” Ananya explains that inaccuracies can translate into a lot of frustration for a real-world user; they may not even return to a product after one bad experience. Another challenge relates to accents and pronunciations. Many early AI models only learned American accents. However, Ananya believes that voice technologies have begun to catch up as global usage of these products grows. Ananya even sees an improvement in Voice Mode’s accent recognition since it first launched. “The only way it can work is by getting more people to use it and training the models using a variety of accents.” Ananya’s advice for others in voice is to “focus on real users and real use cases, and assume that there's just multiple ways to ask for the same thing.” She also suggests looking into new contexts where users might benefit from voice technology. Ananya shares more about her work with Pandora and voice technology in this insightful podcast episode!
Find Ananya on LinkedIn
Useful resources:
Conversation Design from Google: https://designguidelines.withgoogle.com/conversation/conversation-design/learn-about-conversation.html#learn-about-conversation-the-cooperative-principle
Voice Summit Playlist from 2018: https://www.youtube.com/playlist?list=PLn51IO3rbkV1E1a6WjgvFtW3VaOCRxzov
Voice industry news and reports: https://voicebot.ai

Nov 2020

58 min 5 sec

In this episode of The Future is Spoken podcast, Deborah Dahl, a natural language understanding expert, explores natural language understanding and its role in the world of voice technology. What is natural language understanding? Natural language understanding (or natural language interpretation) is a subtopic of natural language processing in artificial intelligence (AI) that deals with machine reading comprehension, and it is considered an AI-hard problem. There’s a lot going on behind the scenes when humans have conversations. Each conversation humans have is unique; the precise flow of words has never taken place before and will never take place again. Conversations and language are also open-ended. This is why, for AI, natural language understanding is a real challenge and is classed as a ‘hard’ AI problem. To come even close to conversing the way humans do, AI will ultimately have to learn to process things that have never been said before and may never be said again. This is what fascinates Deborah. She has a Ph.D. in linguistics, and with her long interest in computers, she is passionate about computational linguistics. Natural language understanding is a hot topic for conversational designers because they have to know what it can and cannot do when they create a voice bot or interface, along with the current state of the technology. Deborah gives some fascinating examples of the challenges conversational designers and natural language understanding experts face. For example, it’s important for conversational designers to understand the concept of what AI experts call slots and form filling. Explains Deborah: “If you ask a smart speaker to set an alarm for five, it implicitly has an understanding of five. But it's missing a slot. It's missing the AM/PM slot.
So it needs to follow up with the user and make sure it pins that down: which time the user is thinking about.” Another issue is the way it’s easy for humans to introduce other topics into a conversation. For AI, though, incorporating a new topic is difficult, if not impossible. In AI, a topic is called a domain, and entering a new domain causes confusion. For example, if you are talking about golf with a friend and then discuss golf balls, their colors, and cost, this is easy for humans but difficult for AI. Next, Deborah explains that multi-intent utterances are difficult for natural language understanding. She says: “If you had a human personal assistant, you might say something like: ‘Can you find out if there's a nearby Thai restaurant and, if there is, make reservations for four at eight o'clock?’ Here, you have what we call two intents - to find a restaurant and make reservations. For a lot of technical reasons, a task like this is really hard (for AI).” She observes that many of the things we can accomplish today with AI are due not to new science but to faster computers. “It’s a synergistic cycle - the computers get better and then the technology catches up, and then the computers get better and faster, and so it goes,” she explains. In addition, speech recognition is progressing at a steady pace. In the 90s, speech recognition was so poor that a lot of natural language design processes were aimed at correcting speech processing errors. Today, speech recognition is much better. In this episode, Deborah also touches on the emotional angle of having a voice interface friend that doesn't irritate us and just listens to us. She notes that humans love to have artificial friends, and demand for them is increasing, particularly given that loneliness is endemic. Find Deborah Dahl on LinkedIn.
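The slot-filling idea Deborah describes can be sketched in a few lines. This is a hypothetical illustration (the intent name, slot names, and prompt wording are invented, not taken from any particular NLU framework): an intent declares its required slots, and the assistant follows up on whichever slot the user has not yet filled.

```python
# Each intent declares the slots it needs before it can be fulfilled.
REQUIRED_SLOTS = {"set_alarm": ["hour", "meridiem"]}  # meridiem = AM/PM

def missing_slots(intent, filled):
    """Return the required slots the user has not yet provided."""
    return [s for s in REQUIRED_SLOTS[intent] if s not in filled]

def next_prompt(intent, filled):
    """Fulfill the request if complete; otherwise prompt for a gap."""
    gaps = missing_slots(intent, filled)
    if not gaps:
        return f"OK, alarm set for {filled['hour']} {filled['meridiem']}."
    if "meridiem" in gaps:
        return f"Sure. Is that {filled.get('hour', '?')} AM or PM?"
    return "For what time?"

# "Set an alarm for five" fills the hour slot but not AM/PM,
# so the assistant follows up, just as in Deborah's example:
print(next_prompt("set_alarm", {"hour": 5}))
```

Real dialog systems layer recognition, confirmation, and error handling on top of this, but the core loop is the same: compare filled slots against required slots and prompt for the difference.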

Nov 2020

48 min 57 sec

Your voice interface creation will only be successful if it possesses human traits such as empathy and imperfection. Senior conversational designer Jason Gilbert digs deep into imbuing voice interfaces with human-like conversational abilities in this fascinating episode of The Future is Spoken. Jason is a senior AI conversational designer based in Israel. He is also the creator of AnnA, the conversational bot.
The Importance of Empathy
Jason explores so much in this episode, including the importance of imbuing voice interfaces with empathy. He discusses the creation of AnnA, conversational dynamics between humans, and how designers study these dynamics to build human-like interfaces. Jason also talks about his work on the creation of a virtual Albert Einstein. Originally, Jason wanted to be a filmmaker and screenwriter, and studied filmmaking at Temple University College of Liberal Arts. Around five years ago he entered conversational design by accident when applying for an interactive screenwriter role. The job title conversational designer didn’t even exist back then. “I just stumbled into it,” says Jason. “All of a sudden I found myself designing dialogues for interactive characters.” One of Jason’s first roles in voice tech was designing chatbot Miss Piggy for Facebook Messenger. “That experience of working on an entertainment project proved to me that this was a new art form,” explains Jason. “Conversation design is a new medium of art that encompasses within it filmmaking, theater, UX design, design, writing, directing, literature, therapy – so many other disciplines come together into one in order to craft a good personality and a good conversation.”
Conversational Design is an Art Form
“I realized this is a whole new art form, and I realized that I could really . . . it's the Wild West, it's an open frontier, and there are not that many people doing it.
And so I luckily fell into it, and now I'm doing what I love.” This experience and Jason’s passion for conversational design eventually led to the creation of AnnA, a companionship bot. AnnA evolved after Jason’s mentor, Yaki Dunietz, CEO of CoCoHub, took it on himself in the late 1990s to crack the Turing Test. That fascination led Yaki to create an intelligent entity, or intelligent machine, that could pass the test. “Yaki started a project called Virtual Alan Turing, because Alan Turing was his idol. I don't know how much you know about Alan Turing, but Alan Turing is a fascinating, fascinating individual. I mean, beyond the fact that he's considered the father of modern-day AI, the guy was a genius,” enthuses Jason. In 2016, Jason asked Yaki about the possibility of turning Alan into a companionship bot. The idea arose because Jason’s parents were moving to a retirement community in Florida, and Jason had read a lot about loneliness among the aging population and how it contributes to a decline in health. Yaki agreed, and all Jason had to do was update the Alan bot to fit more modern times and give it a slightly different voice. He and Yaki also decided to make it transgender.
Find Jason on LinkedIn
AnnA, the companion bot
An interview with i24News about AnnA’s song
Jason’s article on bots and overcoming loneliness
The Turing Test
Technological Singularity

Nov 2020

48 min 8 sec

This week’s guest on The Future is Spoken is Brielle Nickoloff. Brielle is a conversational designer based in Washington, DC. A passion for language and the patterns of language use led Brielle to a career in voice tech. She has always been curious about what people say and why they choose the words they do. In college, Brielle took an elective in conversation analysis, in which students would analyze conversations and word choices. “We are such natural conversationalists, it’s second nature to us to speak. There are such interesting ways that we communicate - sometimes we don’t even think about it,” she observes. Brielle notes that in some settings the context is so specific that humans don’t even have to use language to communicate. She cites the example of buying something in a store: most of us will place something at the checkout, and there may be few, if any, words used. “We are just naturally aware of how to communicate,” she explains, adding that the college course was instrumental in piquing her interest in linguistics. Not long afterwards, she wanted to explore more. Brielle explains that everyone has their own idiolect, meaning each individual has their own unique language use patterns. So while it would be impossible to create a voice tech interface for each individual, it is possible to design an interface for a group with similarities. Voice tech is one of the few new technologies that can take us away from a screen-based world. Voice gives us different options when it comes to figuring out the best way to communicate with others, or with artificial intelligence. Examples include communicating with a vehicle while driving, entering a home carrying groceries and asking to switch the lights on, or running on a treadmill and requesting a new song or different podcast episode. Brielle says that the starting point for interface design is usually to consider a user’s physical environment and what their request may be.
“When you start looking at things this way, really amazing opportunities for use cases start to emerge,” says Brielle. The next step is determining whether the voice interface needs to give a confirmation of a request. For example, a request to switch on the lights requires a response from the interface; part of the reason is that if, say, a bulb isn’t working, a human can determine where the problem may lie. Brielle explores other examples of use cases for voice-first, our emotional responses to this technology, and lots more in this exciting episode! Find Brielle on LinkedIn.

Nov 2020

39 min 18 sec

The Future is Spoken is excited to present Dr. Joan Palmiter Bajorek as this week’s guest. As many listeners will know, Joan is the founder and CEO of Women in Voice, an international organization she created to empower women and minority genders in voice technology. Joan is a data-driven and human-centred linguist and researcher based in Seattle, USA. She brings an impressively diverse range of skills and abilities to her work. Joan has experience with leadership, management, technical, development, and marketing roles. She has conducted research and voice/multimodal design work for more than seven years, practised visual graphic design for nine years, and has analyzed languages for almost two decades. These credentials make Joan the ideal guest to explore the power of conversational design. Joan believes the future of voice tech will be multimodal. Voice tech will be a space where we engage with technology not just through our voices, but through other senses too, such as touch. She gives the example of ordering a pizza, which can be more complex than it sounds, given the range of menu options. A user might speak to a chatbot when placing an order while selecting toppings from their smartphone screen. This makes complete sense, given the complexity of some voice tech conversations and the current limitations of voice interfaces and chatbots. In this episode, Joan gives a broad exploration of conversational design, how conversations are developed in different industries, and her motivation for creating Women in Voice. This is a fascinating episode exploring so much in the voice tech world! Enjoy!
Find Joan on Twitter and LinkedIn.
Women in Voice

Oct 2020

42 min 32 sec

The Future is Spoken presents Brooke Hawkins as a guest in this episode - Designing for Voices in Conversational Design. Brooke is a conversational designer based in Detroit, Michigan. She defines conversational design as any back and forth between a human and an interface. For a designer, it can involve any visual or sonic decisions that are made, such as the sound of crickets in the background. Brooke extends this definition to the telephone menus we often encounter when we call banks or government agencies. Without a doubt, it’s an exciting time to enter the voice design industry. “The field is being shaped as it’s growing, and a lot of the technology is new and exciting,” says Brooke. “For example, we haven’t had something like smart speakers before, where you can search the internet for anything you can possibly imagine.” These speakers can provide a lot of other services, such as turning on the lights in your home. “The decisions that people are making right now in conversational design are really important, not only in terms of shaping our relationships with smart speakers, but our relationships with one another,” she notes. People entering the field can expect to be involved in critical conversations about what the future of these devices may look like. In addition, there are many ethical and humane questions that need to be asked and answered. “If you’re designing for voice, you need to ensure that you’re creating products that are helpful to people, and not harming people. And if you care about ethics and designing products that make people's lives better, these questions will come up for you every single minute of the day,” observes Brooke.
Brooke recommends these resources:
The Algorithm, by Karen Hao
The VUX World Podcast
Voicebot
Bradley Metrock’s new newsletter, This Week In Voice
Find Brooke on her website, LinkedIn, and Twitter.

Oct 2020

38 min 12 sec

The Future is Spoken is pleased to have Sina Kahen as a guest in our latest episode - Voice Strategies. Sina is a voice strategist based in London, U.K., and he shares a huge amount of information with listeners in this episode. Sina works in the medical technology world and also owns his own company, Vaice. The company name is a made-up word, combining the acronym AI and the word voice. In this episode, Sina discusses the importance of strategy when businesses develop a voice app or voice interface. He explains that a strategic approach is the bridge between AI and the needs of businesses wanting to use voice technology as part of a customer journey. Vaice always takes a strategic approach with clients, and aims to help brands understand what’s possible with voice and move forward with voice interface development. When Vaice begins working with a new client, its first step is a discussion about why the client is considering voice technology. Sina recommends that voice tech consulting firms begin at a high level to determine if voice is even a fit for a company. “We focus on getting to know a brand before voice is even discussed as a possible solution. Once you determine that voice is in fact a fit and there is a need for it, you dig into the weeds and begin working on the strategy, on the how. Brands are being exposed to so many technologies today. So our starting point is looking at a brand’s customer journey and helping them understand what voice might be able to do for them,” he explains. One of the key ways of determining whether voice is a good fit for a client is analyzing the amount of involvement a consumer will have with a service or product.
“It's our job to look at the level of involvement a customer might have with a product or service to determine if voice technology is a good fit,” says Sina. For example, a service such as insurance is a high-involvement purchase, because a consumer usually needs a lot of information and has questions before buying. They will want to speak to a human, not a voice interface. If a voice interface is involved with this purchase, it will only be at the very beginning, in a superficial way, such as directing a consumer to make a call. In this episode, Sina discusses the challenges voice tech pros face, including the fact that companies believe voice tech can do more than it can in its current state. He explores the need for the voice industry to focus on utility and customer convenience, and the importance of adding personality to voice content.
Sina recommends the following books for individuals entering the voice tech industry:
Wired for Speech, by Clifford Nass and Scott Brave
The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, by Kevin Kelly
Find Sina on LinkedIn and Instagram.

Oct 2020

32 min 16 sec

Welcome to Episode 3 of The Future is Spoken! Before we jump into this episode, a quick note to say you can get 15 percent off course fees at The Digital Assistant Academy before Oct. 16, 2020. Register for the Voice Interaction Design course and use the code DALAUNCH to obtain your discount. In Conversational Interfaces, we speak with Keri Roberts, owner of Branding Connection. Based in New Jersey, Keri helps brands and businesses discover what they are great at and amplify it. She is a content marketer, and this includes a lot in the audio space, such as chatbots, voice interaction, and podcasts.
What makes a company great?
Keri loves talking about what makes companies unique, how they amplify that, and how to bring that into your conversation, your content, and your audio. For her, it has streamlined into one approach: whether someone's interacting with you on a chatbot, a voice skill, a podcast, or a blog, they have the same feeling about you and your brand throughout.
What is a conversational interface?
Keri says that a good definition of a conversational interface is an AI conversation with a human element. She explains: “Whether it's a voice or a chatbot generally, it's really about the conversation that that AI is having, that a computer system, if you want to think of it that way, is having with an individual.” A chatbot or any other interface we engage with is obviously not an actual human being, but when we interact with it, we want it to feel a little bit human, and that's similar with a voice skill interaction as well. “So the way I look at it is: how do we interact with an AI, with a computer, in a way that has somewhat of a human quality?” she adds.
What makes a good interface?
Keri also explores what makes a good conversational interface. In her view, it’s one that really signifies the brand of the company that created it.
The conversational interface should be really unique to the company, so that when we are interacting with it, we feel like we are interacting with the personification of somebody who might work there. The other piece that makes a conversational interface really good is efficiency. In other words, it has to be helpful to the user who is interacting, and it has to give them what they need and want in that moment.
Inclusion and representation in voice
In addition, Keri is big on inclusion and community, making sure all voices are heard and everyone is thought about, and she works to ensure that everyone is included in her work. Keri explains: “When we're talking about conversational interfaces, we want to think about who is interacting with them: people of different cultures, different backgrounds, different languages, different genders, and of course different disabilities.” In this episode, Keri also talks about the importance of brands being clear about their identity before working on a conversational interface, and how to work with other members of a conversational design team.
Show notes
Find Keri Roberts:
LinkedIn: https://www.linkedin.com/in/kerinroberts/
Twitter: https://twitter.com/kerinroberts
Podcast: http://thebrandingconnection.com/podcastshow/
The Digital Assistant Academy
The Academy’s podcast: https://www.thefutureisspoken.com/

Oct 2020

31 min 6 sec

In today’s episode, Rebecca Evanhoe, a conversation designer and strategist, discusses pathways into voice interaction design, also called conversational design. Based in New York, Rebecca’s own journey into conversational design is a fascinating story. It starts with a chemistry degree, followed by a master of fine arts in fiction writing. These diverse fields of study illustrate the spectrum of learning one can bring to conversational design. An entrant to voice design doesn't need only a technical or coding background, although that can help, too!

Rebecca says that regardless of work experience or academic background, an individual entering the field of conversation design must be curious about everything. "People who get into voice design are really drawn to the right-brain-left-brain combination that the work requires. You can see that reflected in my background, with science and writing degrees," says Rebecca. You also need to be really interested in teaching yourself and learning new things, and be comfortable working independently or in a team environment. Rebecca shares a lot more about the backgrounds and work experience that are welcome in the growing voice design industry.

The Future is Spoken is the podcast of The Digital Assistant Academy. The Academy offers its first course, Voice Interaction Design, in October 2020. Register before Oct. 16 for a 15 percent discount using the code DALAUNCH.

Rebecca also shares many insights into entering voice interaction design, and talks about the qualities needed to succeed in the field. She’s interviewed by host Sheelagh Caygill.

Show notes

Rebecca's Twitter handle: @revanhoe
Diana Deibel's Twitter: @dianadoesthis
Women in Voice: https://womeninvoice.org
Rosenfeld Media's launch page for Rebecca and Diana's upcoming book: https://rosenfeldmedia.com/books/conversations-with-things/

Sep 2020

42 min 6 sec

Hello, and welcome to the very first episode of The Future is Spoken, a new podcast show produced by the Digital Assistant Academy. This show accompanies The Digital Assistant Academy’s course, entitled Voice Interaction Design. You don’t need to be enrolled in the course to get something from this podcast. Each episode will feature exclusive information found only in the course content. By listening to this podcast, you will come away with valuable details about the world of voice assistants and the opportunities arising in the voice assistant industry.

Of course, we encourage you to sign up for this ground-breaking learning opportunity so that you can become a Certified Voice Interaction Designer. If you register before the end of September 2020, you will be eligible for a 15 percent discount on course fees. Enter the code DALAUNCH for your discount. We’ll include these details in the show notes.

As we mentioned in the trailer episode, The Digital Assistant Academy was founded by Shyamala Prayaga, a voice technology thought leader with more than 20 years’ experience in the industry. Shyamala has designed for mobile, web, desktop, and voice-based interfaces. Voice-based interfaces are the technology of the future, and the Digital Assistant Academy’s voice interaction design course will make you job-ready for new and exciting opportunities!

But first, The Future is Spoken host Sheelagh Caygill talks to Shyamala, founder of the Digital Assistant Academy. Based in Michigan, Shyamala’s work has been presented nationally and internationally. She is a well-known industry expert and speaker with two patents. Shyamala’s well-respected publications are referenced in academic research projects.

Sep 2020

36 min 16 sec

Hello, and welcome to the trailer episode of The Future is Spoken, a new podcast show produced by the Digital Assistant Academy. The Future is Spoken will accompany The Digital Assistant Academy’s very first course, entitled Voice Interaction Design, set to launch in October 2020.

This ground-breaking learning opportunity is a self-paced course with access to ongoing support. It allows you to become a Certified Voice Interaction Designer. Demand for voice technologies has never been greater, and there is a real need for qualified voice interaction designers who can address every aspect of the voice technology design process.

The Digital Assistant Academy was founded by Shyamala Prayaga, a voice technology thought leader with more than 20 years’ experience in the industry. Shyamala has designed for mobile, web, desktop, and voice-based interfaces. By taking the Voice Interaction Design course, you will access all of Shyamala’s knowledge and experience as a thought leader and influencer in the world of voice technology. And you will learn voice interaction design from active industry leaders and practitioners!

Voice-based interfaces are here to stay, and the Digital Assistant Academy’s voice interaction design course will make you job-ready for this new and exciting future! The course has been created to give learners the best opportunities for success. It has nine key learning objectives for you as a student; you can find the full list on this trailer episode’s page at digitalassistant.academy.
Some of the learning objectives are:

Gaining a strong understanding of the fundamentals of voice interfaces
Learning the techniques of voice and conversational design
Understanding the power of conversation and applying those techniques in voice design
Understanding the value of ethics and privacy in the voice design process
And, finally, applying all your learning in capstone projects

You can find the complete module list on this episode’s page at digitalassistant.academy. Throughout the course, you will also have access to expert interviews, along with insights from industry-leading practitioners.

The course not only equips you with skills and experience, it positions you for success with support to help you find work in the world of voice technology. We assist with portfolio creation, presentation skills, and career coaching.

If your imagination has been captured by this episode and you want to explore possibilities, go to digitalassistant.academy and discover more about this rapidly growing sector. And if this is something you’ve wanted to pursue but struggled to find the right training, we’re here to make your career dreams a reality. The Digital Assistant Academy’s Voice Interaction Design course will launch in October this year. Pre-register before Wednesday, September 30, 2020 for a 15 percent discount by using the code DALAUNCH at digitalassistant.academy.

Sep 2020

4 min 25 sec