
ID.8: Co-Creating Visual Stories with Generative AI

Published: 02 August 2024

Abstract

Storytelling is an integral part of human culture and significantly impacts cognitive and socio-emotional development and connection. Despite the importance of interactive visual storytelling, the process of creating such content requires specialized skills and is labor-intensive. This article introduces ID.8, an open-source system designed for the co-creation of visual stories with generative AI. We focus on enabling an inclusive storytelling experience by simplifying the content creation process and allowing for customization. Our user evaluation confirms a generally positive user experience in domains such as enjoyment and exploration while highlighting areas for improvement, particularly in immersiveness, alignment, and partnership between the user and the AI system. Overall, our findings indicate promising possibilities for empowering people to create visual stories with generative AI. This work contributes a novel content authoring system, ID.8, and insights into the challenges and potential of using generative AI for multimedia content creation.
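The abstract does not spell out ID.8's internal architecture, but as a rough, hypothetical illustration of what a story-to-visuals generative pipeline involves, the minimal sketch below uses the open-source diffusers library with a Stable Diffusion checkpoint. The function names, prompts, and canned storyline are illustrative assumptions, not the paper's actual implementation; in ID.8 the storyline step would be delegated to a large language model rather than stubbed out.

```python
# Minimal sketch of a story-to-visuals pipeline in the spirit of ID.8.
# NOT the paper's implementation: the model choice, helper names, and
# canned storyline below are assumptions for illustration only.
from diffusers import StableDiffusionPipeline


def generate_storyline(premise: str) -> list[str]:
    # Placeholder for the LLM step that expands a premise into scene
    # descriptions; canned text keeps the sketch self-contained.
    return [
        f"{premise}: a young fox leaves a forest village at dawn, storybook style",
        f"{premise}: the fox crosses a river on a fallen log in the rain, storybook style",
        f"{premise}: the fox arrives at a lantern-lit town at night, storybook style",
    ]


def render_scenes(scene_prompts: list[str]):
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    # One image per scene; a real co-creative system would also handle
    # style consistency, character persistence, and user edits in the loop.
    return [pipe(prompt).images[0] for prompt in scene_prompts]


if __name__ == "__main__":
    scenes = generate_storyline("The fox's journey")
    for i, image in enumerate(render_scenes(scenes)):
        image.save(f"scene_{i:02d}.png")
```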


Cited By

  • (2024) Multimodal Outputs for the Workplace From Generative AI. In Computational Practices and Applications for Digital Art and Crafting, 198–225. DOI: 10.4018/979-8-3693-2927-6.ch008. Online publication date: 17-Jul-2024.


Published In

ACM Transactions on Interactive Intelligent Systems, Volume 14, Issue 3
September 2024
384 pages
EISSN: 2160-6463
DOI: 10.1145/3613608

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 02 August 2024
Online AM: 15 June 2024
Accepted: 22 May 2024
Revised: 26 April 2024
Received: 15 December 2023
Published in TIIS Volume 14, Issue 3


Author Tags

  1. Storytelling
  2. generative AI
  3. creativity

Qualifiers

  • Research-article

Funding Sources

  • Malone Center for Engineering in Healthcare at the Johns Hopkins University

