DOI: 10.1145/3597638.3608425
Research article · Open access

Exploring Community-Driven Descriptions for Making Livestreams Accessible

Published: 22 October 2023

Abstract

People watch livestreams to connect with others and learn about their hobbies. Livestreams feature multiple visual streams including the main video, webcams, on-screen overlays, and chat, all of which are inaccessible to livestream viewers with visual impairments. While prior work explores creating audio descriptions for recorded videos, live videos present new challenges: authoring descriptions in real-time, describing domain-specific content, and prioritizing which complex visual information to describe. We explore inviting livestream community members who are domain experts to provide live descriptions. We first conducted a study with 18 sighted livestream community members authoring descriptions for livestreams using three different description methods: live descriptions using text, live descriptions using speech, and asynchronous descriptions using text. We then conducted a study with 9 livestream community members with visual impairments, who shared their current strategies and challenges for watching livestreams and provided feedback on the community-written descriptions. We conclude with implications for improving the accessibility of livestreams.




Published In

ASSETS '23: Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility
October 2023
1,163 pages
ISBN: 9798400702204
DOI: 10.1145/3597638
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. Accessibility
  2. Audio Descriptions
  3. Blind and Low Vision
  4. Live Video Streaming
  5. Livestreaming
  6. Visual Impairments

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ASSETS '23

Acceptance Rates

ASSETS '23 paper acceptance rate: 55 of 182 submissions (30%)
Overall acceptance rate: 436 of 1,556 submissions (28%)

Article Metrics

  • Downloads (last 12 months): 312
  • Downloads (last 6 weeks): 44
Reflects downloads up to 24 Oct 2024.

Cited By

View all
  • (2024) Exploring The Affordances of Game-Aware Streaming to Support Blind and Low Vision Viewers: A Design Probe Study. Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, 1–13. https://doi.org/10.1145/3663548.3675665. Online publication date: 27-Oct-2024.
  • (2024) "I Wish You Could Make the Camera Stand Still": Envisioning Media Accessibility Interventions with People with Aphasia. Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility, 1–17. https://doi.org/10.1145/3663548.3675598. Online publication date: 27-Oct-2024.
  • (2024) Making Short-Form Videos Accessible with Hierarchical Video Summaries. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–17. https://doi.org/10.1145/3613904.3642839. Online publication date: 11-May-2024.
  • (2024) "It's Kind of Context Dependent": Understanding Blind and Low Vision People's Video Accessibility Preferences Across Viewing Scenarios. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–20. https://doi.org/10.1145/3613904.3642238. Online publication date: 11-May-2024.
  • (2024) Unspoken Sound: Identifying Trends in Non-Speech Audio Captioning on YouTube. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–19. https://doi.org/10.1145/3613904.3642162. Online publication date: 11-May-2024.
