DOI: 10.1145/3613904.3642525

Conveying Emotions through Shape-changing to Children with and without Visual Impairment

Published: 11 May 2024

Abstract

Shape-changing skin is an exciting modality due to its accessible and engaging nature. Its softness and flexibility make it adaptable to different interactive devices that children with and without visual impairments can share. Although its potential as an emotionally expressive medium has been shown for sighted adults, its potential as an inclusive modality remains unexplored. This work explores shape-emotional mappings in children with and without visual impairment. We conducted a user study with 50 children (26 with visual impairment) to investigate their emotional associations with five skin shapes and two movement conditions. Results show that shape-emotional mappings depend on visual abilities. Our study raises awareness of the influence of visual experience on tactile vocabulary and emotional mapping among sighted, low-vision, and blind children. We conclude by discussing the causal associations between tactile stimuli and emotions and suggest inclusive design recommendations for shape-changing devices.

1 Introduction

We are witnessing an increasing interest in shape-changing as an affective interaction modality, leading to novel forms of engagement and emotional expression [16, 29, 30, 37, 50]. Shape-changing is an exciting modality in terms of accessibility due to its haptic and nonverbal nature. It has the potential to be used in multiple application areas ranging from games and storytelling to emotional regulation and therapeutics, and by a broad range of users. Particularly, shape-changing devices could reduce barriers to inclusion in mixed-visual ability contexts where technology can support shared experiences between people with and without visual impairments (e.g., storytelling, games, immersive environments).
Figure 1: Tactile stimuli used in the experiment, including Goosebumps, Spikes, Concaves, Wrinkles, and Tentacles, with two levels of movement: static and dynamic.
Prior work has shown that shape-changing broadens the communication capabilities of computers and can be used as an emotional expression modality [21, 29, 32, 33, 38, 60]. Results reveal that shape-changing stimuli are mapped to specific emotions based on shape and movement frequency. However, the potential of using shape-changing for accessible interaction remains unexplored. Research is limited to young adults, and there are no design guidelines or empirical evidence on how shape changes map to emotions when considering people with visual impairment. Moreover, even less attention has been paid to children with and without visual impairment (VI). If a certain shape-changing stimulus is used by children with and without visual impairments, would they interpret the stimulus differently?
In this paper, we investigate the expressive capabilities of shape changes and how children map tactile stimuli to inform future shape-changing devices that communicate nonvisual emotional reactions to children in mixed-visual ability groups. We aim to answer three main research questions: (1) Which emotions do children map to each shape-changing stimulus? (2) What are the differences and similarities in shape mapping between children with different visual abilities? and (3) What are children’s rationale for mapping tactile stimuli to emotions?
To answer these questions, we asked 50 children (26 with VI and 24 sighted) to associate a specific emotion with each tactile stimulus. Our tactile stimuli vary in shape and movement. The final set comprises five shapes (goosebumps, spikes, wrinkles, tentacles, and concaves), each with two levels of movement (dynamic, _D, or static, _S), as shown in Figure 1. Our approach to designing the shapes was inspired by biological systems [29, 30] that alter skin to express emotional states: tentacles (expressing liveliness), goosebumps (excitement or fear), wrinkles (nervousness), concaves (sadness), and spikes (representing anger).
Results show that children with and without visual impairment were able to perceive distinct emotions through touch; however, they had different emotional-tactile mappings. Their associations were congruent only in attributing fear and anger to the Spikes_D tactile stimulus. The other stimuli had different mappings according to each child’s visual acuity. Specifically, Tentacles were associated with happiness, in the dynamic version for children with visual impairment and the static version for sighted children; children with VI mapped static Wrinkles to happiness, while sighted children associated them with calmness. Moreover, children with VI did not perceive sadness with any shape stimulus, while sighted children mapped it to Concaves_S. Additionally, results showed a difference in shape valence between children with low vision and children with blindness when perceiving Spikes: the emotional-tactile associations made by children with low vision were similar to those of sighted children and different from those of children with blindness. The emotional-mapping strategies children used also suggest differences dependent on the level of visual impairment.
The key contributions of this paper are (1) an empirical evaluation of emotional mapping of shape-changing stimuli by children with and without visual impairment, which extends previous evaluations with sighted adult populations, (2) a discussion of the differences and similarities between children with different visual abilities, and (3) design guidelines that allow designers to select shape-changing stimuli to trigger specific emotional percepts. These contributions are relevant to accessibility researchers and designers of technologies for groups of children with mixed visual abilities, particularly in the fields of haptic and multisensory feedback. They open new research avenues for using shape changes as an emotion expression modality.

2 Related Work

We discuss related work along three key topics: first, we analyze tactile-emotional mappings, particularly within the human-computer and human-robot interaction fields. Second, we describe previous studies that investigate how visual ability affects tactile exploration. Third, we outline how visual ability influences emotion recognition and expression and how assistive systems display emotional information.

2.1 Touch and Emotional Expression

Touch is the first human sense: we start feeling touch inside our mother’s uterus [22, 25]. In social interactions, we use touch to express our emotions towards others [62]. People display different emotions through the way they touch (e.g., pat, push, scratch) [25], the gesture’s pressure, the body location of contact [26], and their culture [13, 14]. Our responses to touch are hard to ignore, so touch plays a crucial role in our interpretation of others’ intentions [25] and in our well-being [1, 22, 70]. For example, affective touch, such as holding hands, hugging, or patting on the back, can support emotional and intellectual development [22] and can lower our anxiety and stress, helping our emotional self-regulation [22].
Although touch can smoothly express different emotions in social interactions, tactile information is less explored than visual and audio information [26, 27]. In human-computer interaction, various forms and materials can convey emotional communication [33, 34, 38, 71], using different tactile information such as vibration [45, 59, 69], temperature [66], pressure [56], 3D objects [43, 44], and shape-changing [21, 29, 32]. For example, short and rapid vibrations relate to higher arousal levels [45, 59, 69]; warm stimuli represent more pleasant emotions than cold ones [66]; a pressurized squeeze relates to unpleasant and aroused emotions, as opposed to finger touch, which conveys pleasant and relaxed emotions [56]; and sharper objects are associated with high arousal levels, while rounded shapes elicit positive valence [21, 43, 44].
A robot’s skin is also an interactive surface that can act as a tactile sensing platform to perceive human emotions [15, 62], as an expressive medium that conveys different emotions [29, 30, 31, 32], or as a comforting agent to study the influence of touch and hugging in human-robot interactions [1, 28, 64, 68, 70]. Soft materials have potential for robot skin: like human skin, they have no predefined format and can be adapted to different robots and different shape-change forms [30]. These materials can cover the sensors, creating more robust and less expensive robots while increasing the expressiveness, pleasantness, and sensitivity of their contact zones [62].
Adding shape-changing communication to a social robot’s skin can enrich the design space of the robot’s expressive spectrum, silently informing the interaction through visual and tactile/haptic feedback [30]. Results with adults showed that a shape-changing robot with spikes and goosebumps could express emotions, where texture-shape relates to valence and shape-changing frequency influences arousal [29]. Spikes mapped to anger, while goosebumps mapped to pleasure and excitement [29]. Other shapes can convey other emotions: a wrinkle-inspired shape represents relaxed and nervous emotions under resting and vacuumed states; tentacle-like shapes can convey liveliness; adhesive suction cups express attachment; and skin pores indicate interaction exposure [30].
The power of touch in human social interaction suggests that it can be a socially expressive medium when interacting with technology, namely robots. However, tactile information in robotics is mainly limited to how people manipulate and touch the robot [1, 63], not how its skin elicits emotions. The present work expands this line of research by investigating how new shape-changing skins evoke emotions and how children perceive them.

2.2 Tactile Exploration & Visual Disability

Sighted people and people with blindness can equally identify materials and their features using only touch [4, 65]; however, sighted people improve their performance whenever they can also see the material [4]. Age also influences users’ material-mapping skills, regardless of visual capacity: accuracy is 33% at age five, reaching 69% at thirteen and 86.5% in adults [65].
The user’s visual ability influences tactile exploration and mapping strategies from birth [4]. Children with blindness progressively learn to process haptic information directly, increasing their haptic-mapping skills with training and age [65]. Age-matched sighted children have similar skills; however, they convert haptic information into a visual image and semantic information, even when they do not see the objects [65]. Another factor that increases children’s haptic-mapping skills, regardless of visual ability, is context: in a storytelling activity, children who knew the story context, title, and characters during book exploration recognized more haptic elements in the storybook [65].
The capacity of sighted children and children with blindness to translate haptic information into semantic and object features suggests that shape-changing materials have the potential to map other concepts. However, it remains unexplored how other ideas, such as emotions, can be expressed using shape-changing materials, and whether emotional-mapping performance and strategies are similar among children with blindness, children with low vision, and sighted children.

2.3 Emotional Recognition & Visual Disability

Emotion recognition in human interactions combines visual and auditory signals [17, 55]. When there is a sensory impairment, the remaining senses can partly compensate for the impaired one [17, 19, 36, 55]. To enrich their emotion recognition ability, people with visual impairment can use speech information (e.g., tone, tempo, volume, moments of silence, and voice energy) and body touch whenever they are familiar and close enough to touch their partner [55]. These compensatory strategies may allow them to recognize the emotional state of a conversational partner, but they may not be enough to capture the subtle emotional changes of facial expressions (e.g., a smile) [55]. Additionally, people with visual impairment may have difficulty perceiving and expressing emotions due to their visual acuity [41]. Children with blindness, especially, must rely entirely on other senses to recognize others’ emotions, without a reference to their sighted partner’s emotional behaviors, while children with low vision can use their existing visual capacity to perceive and mimic others’ emotions, which may help them develop their social-emotional competencies [41].
Previous research on human-computer interaction explored different modalities to develop social-emotional functions of people with visual impairment. For example, a vibrotactile belt informed the participant with VI of the affective state of their conversational partners based on facial expressions [6, 8, 9]. Another study suggested an actuator-mediated "body touch" platform to mimic emotions or intentions from conversation partners without actual body contact [55].
Additionally, sound can elicit emotions in an accessible but more intrusive way. Audio beacons can enact digital emoji emotions [11], and audio movie descriptions positively enhance presence and emotion elicitation [23, 24]. Although music is universally used to elicit emotions [18, 39, 67], recent studies showed that the emotional variables (valence, arousal, intensity, and preference) associated with music differ significantly between adults with and without VI [52]. For sadness, adults with VI reported lower valence, higher arousal, higher intensity, and higher preference than the sighted group; sighted adults associated higher valence, arousal, and intensity with happiness compared to their peers. For anger and fear, only arousal and intensity differed significantly between groups, with lower values in the group with VI.
In storytelling activities, multi-sensory platforms have explored how children with visual impairment, individually [12] and in mixed-visual-ability groups [2, 31], perceive and express emotions using tangible, audio, and light modalities. Those platforms used story audio, tangibles (e.g., a calm sheep; Rose, a happy robot; and a spiky, angry robot), and emotional character sounds (e.g., laughing, crying). Children with and without visual impairment could play, perceive, and adapt the story to the emotions of the different characters.
Various assistive systems and modalities help people with and without visual impairment perceive emotions in interactions; however, how children can use them remains unexplored. Our study presents a first step in exploring touch as a non-intrusive emotional medium for children with and without visual impairment.

3 User Study

We investigate how shape-changing can elicit emotions in children with and without visual impairment. Participants touched ten different tactile stimuli (five different shapes, each under one of two possible movement conditions: static and dynamic), as shown in Fig. 2. We asked participants to label each tactile stimulus with one of five possible emotions (anger, calm, happy, sad, and fear) or none. The emotions covered all four quadrants of Russell’s circumplex model [58].
Figure 2: a) A child interacting with shape-changing skin inside the pink box during the study; b) captures the internal view of touching the spikes skin; c) a collection of shape-changing skin samples that change users’ haptic experiences through pneumatic control.

3.1 Design of shapes and emotional expressions

We used ten tactile stimuli based on previous work [29, 30], as shown in Fig. 1. The stimuli were bio-inspired and designed to express internal states: tentacles (expressing liveliness), goosebumps (excitement or fear), wrinkles (nervousness), concaves (sadness), and spikes (representing anger). Each stimulus had one of the five shape units, i.e., tentacles, goosebumps, wrinkles, concaves, and spikes, and could be expressed in static or dynamic mode, resulting in ten stimuli: Tentacles_S, Tentacles_D, Goosebumps_S, Goosebumps_D, Wrinkles_S, Wrinkles_D, Concaves_S, Concaves_D, Spikes_S, and Spikes_D (where _S means static and _D means dynamic). Shape units were fabricated with soft silicone using 3D-printed molds and can be pneumatically controlled to change their surface properties. As the manufacturing process can change the physical characteristics of the assembled shapes, we replicated the previous fabrication process [29, 30]; the authors shared their preexisting 3D models with us. We made three different shape skins: Spikes, Goosebumps, and Concaves shared the same skin as the previous study [29], while the Tentacles and Wrinkles shape units used two different skins [30]. The shared skin had small hard tips: Spikes had them inflated, while in Goosebumps and Concaves the tips were vacuumed underneath the skin. We used a FlowIO [61] device, with a maximum PWM value of 255 for the inflation and vacuum motors, to control the dynamic shape movements. The device allowed control of five channels to perform inflation and vacuum. The shape-changing frequency measures the number of shape rises per minute (rpm). The design of the shape movements and their corresponding emotional mappings was based on an iterative process with one teenager with blindness, two educators from inclusive schools, and four psychologists. Through this process, we converged on the following movement properties for each shape unit: Tentacles, Wrinkles, and Spikes in dynamic mode had 20 rpm; Goosebumps and Concaves had 40 rpm. For Tentacles, Wrinkles, and Spikes, the movement repeatedly went from neutral to inflation, followed by a vacuum. For Goosebumps, the repeated behavior went from neutral to inflation; for Concaves, from neutral to vacuum.
In the supplementary files, we include videos of each tactile stimulus [48].
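As an illustration of the actuation scheme described above, the sketch below reproduces the timing logic (20 rpm inflate/vacuum cycles for Tentacles, Wrinkles, and Spikes; 40 rpm inflate-only for Goosebumps; 40 rpm vacuum-only for Concaves). The FlowIOStub class and its method names are hypothetical placeholders, not the actual FlowIO API; only the cycle timing follows the description above.

```python
import time

class FlowIOStub:
    """Hypothetical stand-in for the FlowIO pneumatic controller.
    The real FlowIO API differs; only the timing below follows the paper."""
    def inflate(self, channel, pwm=255): pass   # start inflation on a channel
    def vacuum(self, channel, pwm=255): pass    # start vacuum on a channel
    def neutral(self, channel): pass            # return the channel to rest

def run_stimulus(dev, channel, rpm, phases, duration_s=25.0):
    """Drive one dynamic shape for a 25 s exploration.

    rpm    -- shape rises per minute (20 for Tentacles/Wrinkles/Spikes,
              40 for Goosebumps and Concaves)
    phases -- actuation phases per cycle, e.g. ["inflate", "vacuum"]
    """
    cycle = 60.0 / rpm                    # seconds per rise, e.g. 3 s at 20 rpm
    step = cycle / (len(phases) + 1)      # split each cycle over phases + rest
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        for phase in phases:
            getattr(dev, phase)(channel, pwm=255)
            time.sleep(step)
        dev.neutral(channel)              # back to neutral between rises
        time.sleep(step)

dev = FlowIOStub()
run_stimulus(dev, channel=0, rpm=20, phases=["inflate", "vacuum"])  # Spikes_D
run_stimulus(dev, channel=1, rpm=40, phases=["inflate"])            # Goosebumps_D
run_stimulus(dev, channel=2, rpm=40, phases=["vacuum"])             # Concaves_D
```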

3.2 Design of emotional storytelling activity

Our participants were children in an age range at which they are still learning to identify emotions [20]. To help them in the mapping process, we adopted a strategy suggested by the schools’ psychologists and educators: providing context for each tactile stimulus-mapping task. This context-driven activity helps participants relate to the feelings and physical sensations. It facilitates the mapping process while keeping children engaged in the stimulus-mapping activity [65].
Four school psychologists and researchers co-created the stories, inspired by children’s books and school activities [7, 40, 46, 49, 53, 57]. We built ten characters, and each character had five possible one-sentence stories, mapping to one of the five emotions: anger, calm, happy, sad, or fear. One example of a character is Peter, who is in his living room and could be: 1) sad because he misses his grandmother, who lives far away; 2) happy watching television; 3) angry because his mother ordered him to clean his bedroom; 4) calm because he was listening to relaxing music; 5) scared because his parents’ friends were having a party and he is afraid of talking to people he does not know well; or 6) none of those. All the characters and stories/emotions were presented randomly to each child. All stories are available in the supplementary files [48].

3.3 User Study Design

The study had three independent variables in a mixed design. Shape unit and movement were combined as within-subjects factors into the ten tactile stimuli that each child interacted with: Tentacles_D and Tentacles_S, Wrinkles_D and Wrinkles_S, Goosebumps_D and Goosebumps_S, Concaves_D and Concaves_S, and Spikes_D and Spikes_S. To evaluate and compare the tactile-emotional mapping of children with and without VI under identical stimuli, the shape skins were hidden inside a box (Figure 3b), testing haptic-only stimuli and avoiding interference from visual cues. All children touched each of the ten tactile stimuli for 25 seconds without seeing them, and the order of the stimuli was randomly assigned. The third independent variable was children’s visual acuity, with two between-subjects levels: sighted (S) and visually impaired (VI).

3.3.1 Hypotheses.

In this study, we tested the following hypotheses.
First, we wanted to explore the differences in emotional perceptions using tactile information and whether the emotion-shape mapping differed between children with different visual abilities.
H1: Children consistently map each tactile stimulus to a specific emotion. This hypothesis is based on a previous user study demonstrating a consistent mapping between tactile stimuli and emotions in adults [29].
H2: Children with and without visual impairment have different emotional-tactile mappings. This hypothesis is based on the fact that adults with and without VI map emotions differently in other feedback modalities (e.g., music [52]) and use different haptic exploration strategies [4].
Second, we wanted to know whether and to what extent a shape change correlates with valence and arousal dimensions. We had the following hypotheses:
H3: The emotions mapped to shape units have different valence levels, assuming the following order from highest to lowest valence: Tentacles, Goosebumps, Wrinkles, Concaves, and Spikes. This is based on the natural metaphor of each shape in our design [30]. Moreover, our exploration extends the emotional mapping of goosebumps and spikes shown in a prior study [29], while the tentacles were designed to express liveliness [47]. Finally, the emotional mapping of the different shapes (goosebumps, spikes, tentacles, concaves, and wrinkles) emerged from collaborative insights with our co-designers. Notably, the co-designers, including a teenager with blindness, two educators from inclusive schools, and four psychologists, played a pivotal role in suggesting and refining the emotional nuances associated with these shapes. This collective input supported our proposed hypothesis: tentacles may simulate happiness and liveliness with subtle movements; goosebumps are usually mapped to both excitement and fear; wrinkles simulate human forehead wrinkles when frowning, thus expressing confusion and nervousness; concaves may relate to a lowered head and express sadness; finally, spikes are linked to anger and defensiveness in their natural form.
H4: Dynamic stimuli convey higher arousal than static stimuli. Based on previous studies of emotional expression through vibration [29, 45], a higher frequency usually links to a higher arousal level.

3.3.2 Participants.

We recruited 50 children, 22 girls and 27 boys, aged between 6 and 14 (M=9.7, SD=2.3), from 4 public schools. Of the 50 children, 26 had visual impairment (10 with blindness, 16 with low vision), and 24 were sighted (N = 50, N_S = 24, N_VI = 26, N_B = 10, N_LV = 16). In the remainder of the paper, children are identified by the letter "C", a two-digit number, and a code representing their visual acuity: LV, B, or S for low vision, blind, and sighted, respectively. The children’s teachers informed the research team of their visual acuity based on professional diagnoses categorized into 4 visual acuity levels [51]: low (C29LV, C40LV, C41LV), medium (C2LV, C6LV, C13LV, C15LV, C20LV), high (C1LV, C11LV, C17LV, C21LV, C24LV, C28LV, C32LV, C50LV), and blind (C5B, C9B, C18B, C25B, C31B, C36B, C38B, C43B, C44B, C47B). The ethics committee of our institution approved the research protocol, and the children’s legal guardians signed consent forms. All children agreed to participate and could quit at any time.

3.3.3 Procedure.

Figure 3: User study setup: (a) In a practice session, children touched the shape units on the table to familiarize themselves with the shapes and be comfortable touching them; (b) During the experiment, children touched the shapes inside a pink box and mapped them to an emotion; (c) Dynamic shapes use a FlowIO device to actuate and control them.
The individual sessions took around 25 minutes and were conducted in schoolrooms with two researchers present. The researchers set up the conditions and guided the session. On a table, there was a pink box with a slot, with the shape skins inside, as illustrated in Figure 3. Children sat at the table in front of the box with the slot facing them. Hidden behind the box were the shape units, the FlowIO device, and a computer to control and actuate the shapes. The actuator remained consistently active, producing a uniform sound, and the children wore headphones while exploring the shapes to minimize the impact of external sounds on their emotion selection. The researchers took 2 minutes to explain to the children how the session would run and gave them two shape units (different from the shapes used in the experiment); for 3 minutes, the children could manipulate and familiarize themselves with the shapes. During the experiment, children had to put their hands in the box for 25 seconds and feel the stimulus. Stimuli could be repeated on demand. While they experienced each tactile stimulus, we asked children to imagine the shaped surface as the skin of a story character and to identify the character’s emotional state; to ease the identification process, we presented them with a character and a context with five possible endings (one per emotion). The ten stimuli (5 shape units x 2 movement levels) were presented randomly. Children spent circa 20 minutes classifying all ten stimuli.
In the initial phase of each shape exploration, both the story character and the order of the story endings were randomly selected. During the first step, one researcher read aloud the chosen character and context. Subsequently, in the third step, the researcher read the five possible story endings, corresponding to the five different emotions, in the pre-selected order specific to each child and shape.
For each tactile stimulus, we followed a five-step procedure (lasting 2 minutes on average): 1. The researcher read aloud the selected fictional character and context and described the emotions the character could have, taking around 15 seconds. 2. The child put their hand inside the box to learn how the character felt through touch. 3. While the child explored the hidden shape for 25 seconds, the researcher read the five possible story endings, corresponding to the five different emotions, in the pre-selected order specific to each child and shape. 4. The child selected the character’s emotion and described their reasoning for the emotional mapping, which took around 45 seconds. 5. Idle time between stimuli reduced tiredness and let children relax their hands. Characters, emotions, and story endings were presented randomly for each stimulus.
To illustrate: when Child 1 (C1) tested the first shape, the procedure commenced with a scenario such as "Camilla at home." The researcher then articulated the emotions in random order (e.g., anger, happy, fear, calm, sad) along with the story endings while the child received the stimulus. In the exploration of the second shape, the same child encountered a different narrative, for instance, "Arthur on a beach," and a different emotional sequence (e.g., sad, anger, happy, calm, fear) unfolded with the corresponding story endings during the third step. All the characters and possible endings were read by the same researcher and were not prerecorded. The stories can be found in the supplementary material.
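As a sketch of the randomization just described, the snippet below draws a random stimulus order per child and, for each trial, a random character and ending order. Names such as schedule_session and the abbreviated character list are illustrative, not taken from the study materials.

```python
import random

EMOTIONS = ["anger", "calm", "happy", "sad", "fear"]
CHARACTERS = ["Camilla at home", "Arthur on a beach"]  # 10 characters in the study
STIMULI = [f"{shape}_{mode}"
           for shape in ("Tentacles", "Wrinkles", "Spikes", "Goosebumps", "Concaves")
           for mode in ("S", "D")]

def schedule_session(rng):
    """Random stimulus order; per stimulus, a random character and ending order."""
    return [{"stimulus": s,
             "character": rng.choice(CHARACTERS),
             "ending_order": rng.sample(EMOTIONS, k=len(EMOTIONS))}
            for s in rng.sample(STIMULI, k=len(STIMULI))]

trials = schedule_session(random.Random(42))  # one reproducible per-child schedule
```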
At the end of the session (lasting 2 minutes on average), we also asked children two open questions: 1) If they had the chance to use the shape feedback, how would they use it? and 2) What are the potential uses of shapes in robots?

3.3.4 Measures.

Our user study aims to understand children’s emotional perception of shape-changing stimuli. Collected measures focus on two categories: 1) tactile perception and emotional mapping, and 2) mapping strategies.
Children associated emotions with each tactile stimulus via self-reported emotion identification (anger, calm, happy, sad, fear, or none). They could identify up to two emotions per stimulus. To test our hypotheses, we also associated valence (V) and arousal (A) values on a 3-level scale (low, medium, high) with each chosen emotion, based on Russell’s circumplex model [58]: anger (V=LN, A=HP), calm (V=MP, A=MN), happy (V=HP, A=LP), sad (V=HN, A=HN), and fear (V=MN, A=HN). LP, MP, and HP stand for low, medium, and high positive; LN, MN, and HN stand for low, medium, and high negative. Although we acknowledge that emotions are individually subjective and that there is an ongoing debate on arousal and valence as emotion dimensions [5, 42, 54], for our statistical analysis we coded the valence and arousal levels as follows: HN=-3, MN=-2, LN=-1, LP=1, MP=2, and HP=3.
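The coding scheme above can be transcribed directly; the following minimal snippet encodes the emotion-to-(valence, arousal) mapping and the numeric levels used in our analysis, exactly as listed in this section.

```python
# Numeric codes from Section 3.3.4: HN=-3, MN=-2, LN=-1, LP=1, MP=2, HP=3.
LEVELS = {"HN": -3, "MN": -2, "LN": -1, "LP": 1, "MP": 2, "HP": 3}

# (valence, arousal) level per emotion, following Russell's circumplex as used here.
EMOTION_VA = {
    "anger": ("LN", "HP"),
    "calm":  ("MP", "MN"),
    "happy": ("HP", "LP"),
    "sad":   ("HN", "HN"),
    "fear":  ("MN", "HN"),
}

def code(emotion):
    """Return the numeric (valence, arousal) pair for a chosen emotion."""
    v, a = EMOTION_VA[emotion]
    return LEVELS[v], LEVELS[a]

assert code("fear") == (-2, -3)  # mirrors the coding listed in the text
```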
Additionally, we asked children to describe their association strategies after each shape-emotion mapping, and at the end of the session, they described the shapes’ potential applications. Using a deductive thematic analysis, two researchers coded and aligned the open-question notes into tactile-emotion association strategies and potential future applications.

4 Findings

The following section describes children’s associations of tactile stimuli with specific emotions and their mapping onto the valence and arousal dimensions. Additionally, we address our hypotheses. Finally, we present findings on children’s rationale for their stimulus-emotion associations.

4.1 Mapping Tactile Stimuli to Emotions

Of the 500 stimulus evaluations, 97% received an emotional label. In thirteen cases, children selected the no-emotion option (NoEmotion=13, N_VI=5, N_S=6). A second emotional label occurred in only four instances (SecondEmotion=4), and only one child, C32LV, explicitly said they feared the shapes after experiencing the Spikes_D stimulus.

Sighted children consistently associate each tactile stimulus with an emotion. Children with VI are less consistent.

Overall, results showed a relation between tactile stimuli and perceived emotions. Applying a chi-square test for association, we found a statistically significant weak association between the tactile stimuli and the chosen emotions (\(\chi^2(45) = 85.156, p < .001, V = .184\)). We computed separate chi-square tests and Cramér’s V for the emotion-tactile mapping of children with and without visual impairment. We found a significant moderate association between the tactile stimuli and the emotions chosen by sighted children (\(\chi^2(45) = 94.246, p < .001, V = .280\)). However, no significant association was found for children with visual impairment (\(\chi^2(45) = 57.512, p = .108, V = .207\)). Lastly, we checked the association between shapes and emotions according to children’s gender. No significant association was found for boys (\(\chi^2(45) = 60.999, p = .056, V = .208\)) or for girls (\(\chi^2(45) = 60.568, p = .060, V = .233\)).
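As an illustration, this analysis can be reproduced with SciPy, given a stimuli-by-emotions contingency table of response counts; the table below is a random placeholder, not the study data, and the helper name cramers_v is our own.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table):
    """Cramér's V effect size for a chi-square test of association."""
    chi2, _, _, _ = chi2_contingency(table)
    n = table.sum()
    k = min(table.shape) - 1        # smaller table dimension minus one
    return np.sqrt(chi2 / (n * k))

# Placeholder 10 stimuli x 6 response options (5 emotions + none);
# replace with the observed counts per group.
rng = np.random.default_rng(0)
table = rng.integers(1, 9, size=(10, 6))
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}, V = {cramers_v(table):.3f}")
```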

Emotion associations significantly differed from random selection in five stimuli.

We also analyzed whether the chosen emotions for each tactile stimulus had unequal proportions, using a chi-square goodness-of-fit test. Considering the previous results, which suggest different mappings between children with and without visual impairment, we conducted the statistical analysis for each level of visual acuity. Table 1 presents the statistical tests for each tactile stimulus according to the children’s visual acuity. The results show that the attribution of emotions was only significantly different from the equal proportions (i.e., random selection) for five out of the ten tactile stimuli with moderate effect sizes (partial support for H1): Tentacles_D for VI, Tentacles_S for S, Wrinkles_S for both VA levels, Spikes_D for both VA levels, and Concaves_S for S.
Table 1:
Tactile Stimuli | Sighted (S) | Visually Impaired (VI)
Tentacles_D | χ²(4) = 6.000, p = .199, V = .250 | χ²(5) = 14.556, p = .012, V = .348 *
Tentacles_S | χ²(4) = 11.833, p = .019, V = .351 * | χ²(4) = 2.846, p = .584, V = .172
Wrinkles_D | χ²(5) = 5.000, p = .416, V = .204 | χ²(5) = 7.444, p = .190, V = .239
Wrinkles_S | χ²(4) = 27.667, p < .001, V = .480 * | χ²(4) = 10.538, p = .032, V = .318 *
Spikes_D | χ²(4) = 14.000, p = .016, V = .382 * | χ²(4) = 14.556, p = .012, V = .374 *
Spikes_S | χ²(4) = 9.750, p = .045, V = .319 * | χ²(4) = 3.615, p = .461, V = .186
Goosebumps_D | χ²(4) = 2.500, p = .776, V = .161 | χ²(4) = 10.111, p = .072, V = .312
Goosebumps_S | χ²(4) = 9.500, p = .091, V = .315 | χ²(4) = 1.296, p = .730, V = .112
Concaves_D | χ²(4) = 7.667, p = .105, V = .283 | χ²(4) = 6.308, p = .177, V = .246
Concaves_S | χ²(4) = 10.583, p = .032, V = .332 * | χ²(4) = 5.846, p = .321, V = .237
Table 1: Results of the chi-square goodness-of-fit tests and Cramer’s V for effect sizes, for each tactile stimulus according to each level of visual acuity (VA), where VI stands for children with visual impairment, and S stands for sighted.
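Each cell of Table 1 corresponds to a one-sample chi-square test of the emotion counts for one stimulus in one group against equal proportions. A minimal SciPy sketch (with placeholder counts) follows; for a one-way table, Cramér’s V reduces to sqrt(chi2 / (n(k - 1))).

```python
import numpy as np
from scipy.stats import chisquare

def gof_with_effect_size(counts):
    """Chi-square goodness-of-fit against equal proportions, with the
    one-way Cramér's V: V = sqrt(chi2 / (n * (k - 1)))."""
    counts = np.asarray(counts)
    chi2, p = chisquare(counts)          # uniform expected frequencies by default
    n, k = counts.sum(), counts.size
    return chi2, p, np.sqrt(chi2 / (n * (k - 1)))

# Placeholder emotion counts for one stimulus in one group (k = 5 emotions).
chi2, p, v = gof_with_effect_size([9, 2, 3, 1, 5])
print(f"chi2(4) = {chi2:.3f}, p = {p:.3f}, V = {v:.3f}")
```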
Only fear and anger had congruent attribution, to the Spikes_D tactile stimulus; the differences generally support our hypothesis that children with and without visual impairment have different emotional-tactile mappings (H2 supported). Figure 4 shows a heatmap of the chosen emotions for each tactile stimulus. Looking specifically at the tactile stimuli with clustered distributions, based on the previous statistical analysis, we deduce the following mappings. Sighted children mostly chose happy (N = 11) for Tentacles_S, calm (N = 15) for Wrinkles_S, fear (N = 9) and anger (N = 7) for Spikes_D, anger (N = 9) and happy (N = 8) for Spikes_S, calm (N = 10) for Concaves_S, and sad (N = 10) for Concaves_D. Children with visual impairment mostly chose happy (N = 11) for Tentacles_D, happy (N = 10) for Wrinkles_S, and fear (N = 8) and anger (N = 7) for Spikes_D.
Figure 4: Heatmap representation of the number of times each emotion was selected per each tactile stimulus. For all children on the left, only for sighted children in the center, and only for children with visual impairment on the right.

4.1.1 Order effects.

Although we counterbalanced the order of the storytelling characters, the shapes, and the emotions read aloud by the researcher, we tested for statistical associations between these and the emotions children chose. No significant association was found between the storytelling characters and children’s chosen emotions (\(\chi^2(45) = 44.280, p = .502, V = .133\)). The association between the order of the shapes and their emotional mappings was also not significant (\(\chi^2(45) = 37.834, p = .767, V = .123\)). However, we found a significant, strong association between the order of the emotions and children’s choices (\(\chi^2(25) = 527.103, p < .001, V = .459\)): while the researcher read the emotion options aloud, children tended to choose the last and first options.

4.2 Mapping Tactile Stimuli to Valence and Arousal

After converting the chosen emotions into valence and arousal values, as described in Section 3.3.4, we analyzed each of these continuous variables with a three-way mixed ANOVA. We tested for main effects and interaction effects between our independent variables: shape unit and movement as the two within-subjects factors, and children’s visual acuity as the between-subjects factor. Each shape had different valence and arousal levels, and the movement condition had different effects on children with and without VI.
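The paper reports a three-way mixed ANOVA. As one illustrative way to fit such a design in Python, the sketch below uses a linear mixed-effects model with a random intercept per child, which accounts for the repeated within-subjects factors; the long-format file and column names are assumptions, not the study’s actual analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long format: one row per child x stimulus, with columns
# child, shape (5 levels), movement (static/dynamic), acuity (S/VI),
# and the numeric valence code from Section 3.3.4 (arousal analogous).
df = pd.read_csv("responses_long.csv")   # hypothetical file name

# A random intercept per child accounts for the repeated (within-subjects)
# factors shape and movement; acuity varies between subjects.
model = smf.mixedlm("valence ~ C(shape) * C(movement) * C(acuity)",
                    data=df, groups=df["child"])
print(model.fit().summary())
```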

For valence levels (Figure 5), we found a statistically significant two-way interaction effect between shape movement and visual acuity ( \(F(1,36)=6.764, p=.013, \eta ^2_p=.158\) ), a simple main effect of movement ( \(F(1,36)=8.679, p=.006, \eta ^2_p=.194\) ), and a simple main effect of shape ( \(F(4,144)=7.724, p\lt.001, \eta ^2_p=.177\) ). The emotions attributed to dynamic shapes generally had lower valence (M = −.065, SE = .168) than those attributed to static shapes (M = .489, SE = .170). This difference was stronger for sighted children, who on average chose emotions with low negative valence for dynamic shapes (M = −.356, SE = .244) and low positive valence for static shapes (M = .689, SE = .247). Children with visual impairment attributed similar low positive valence levels to both dynamic (M = .225, SE = .231) and static shapes (M = .290, SE = .234). Lastly, the valence attributed to each shape, sorted from lowest to highest, was: Spikes (M = −.625, SE = .236), Goosebumps (M = −.277, SE = .210), Concaves (M = .542, SE = .250), Wrinkles (M = .587, SE = .275), and Tentacles (M = .833, SE = .255). This result partially supports H3, as the obtained valence levels were sorted as expected, except for the Goosebumps shape unit, which we expected to have higher valence based on prior work with sighted adults [29].
Figure 5: Valence and arousal levels split by visual acuity, movement, and shape unit, ranging from high negative (HN) through low negative (LN), neutral (NT), and low positive (LP) to high positive (HP).

Regarding arousal levels (Figure 5), we found a statistically significant simple main effect of shape ( \(F(4,144)=8.335, p\lt.001, \eta ^2_p=.188\) ) and a three-way interaction effect between visual acuity, shape, and movement ( \(F(4,144)=4.388, p=.002, \eta ^2_p=.109\) ). The arousal attributed to each shape unit, sorted from highest to lowest, was: Spikes (M = 1.707, SE = .222), Goosebumps (M = 1.344, SE = .245), Tentacles (M = .753, SE = .250), Concaves (M = .486, SE = .236), and Wrinkles (M = .037, SE = .195). The three-way interaction effect revealed that children with and without visual impairment attributed emotions with different levels of arousal for three out of the five shapes depending on whether these were dynamic or static. Specifically, the emotions attributed to Tentacles by children with VI had higher arousal when the shape was dynamic (M = 1.600, SE = .399) than when it was static (M = .300, SE = .435), while sighted children attributed emotions with higher arousal to Tentacles_S (M = 1.000, SE = .458) than to Tentacles_D (M = .111, SE = .420). For Wrinkles, the opposite effect appeared: children with VI attributed higher arousal when the shape was static (M = .850, SE = .401) than when it was dynamic (M = −.200, SE = .453), while sighted children associated higher arousal with Wrinkles_D (M = .556, SE = .477) than with Wrinkles_S (M = −1.056, SE = .422). Lastly, the emotions attributed to Concaves by sighted children had similar arousal levels whether the shape was dynamic (M = .278, SE = .472) or static (M = .167, SE = .483), while children with VI attributed emotions with higher arousal when it was dynamic (M = 1.350, SE = .447) than when it was static (M = .150, SE = .458). Although we did not find a significant simple main effect of movement on arousal ( \(F(1,36)=1.735, p=.196, \eta ^2_p=.046\) ; H4 not supported), the three-way interaction effect reveals that movement was perceived differently by sighted children and children with VI for more than half of the stimuli, and that there is an interaction between shape unit and movement level in emotion perception.
Figure 6 plots the joint probability distributions for the different shapes and visual acuity groups, using kernel density estimation (KDE). The marginal distributions along the two axes depict the arousal and valence estimates of the expressed emotion. The plots illustrate that children with and without visual impairment have different emotional-tactile mappings (H3), especially for Concaves and Wrinkles. Spikes was the most effective stimulus, conveying negative valence and high arousal for all children. Contrary to our hypothesis, Goosebumps had an emotional distribution similar to Spikes, with negative valence. Tentacles were perceived as conveying high-valence emotions. Wrinkles related to low-arousal, high-valence emotions for sighted children but not for children with VI. Concaves had high variability for both groups.
In addition, we noticed a difference between children with low vision and children with blindness when perceiving the Spikes stimuli. In Figure 7, we zoom in on the emotional distribution, split by visual acuity (sighted, low vision, and blind) and movement level. Children with low vision perceived the Spikes stimuli as high arousal and negative valence, similar to sighted children. In contrast, children with blindness had a more uniform distribution of perceived emotional valence and arousal. Notably, this difference was only visible for the Spikes stimuli.
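Plots in the style of Figures 6 and 7 can be generated with seaborn’s KDE joint plots. The sketch below assumes the same hypothetical long-format data as above and jitters the discrete valence/arousal codes slightly so the density estimate is well defined.

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("responses_long.csv")        # hypothetical long-format data
rng = np.random.default_rng(0)

# Valence/arousal are discrete codes (-3..3); add small jitter so the
# kernel density estimate is not degenerate.
for col in ("valence", "arousal"):
    df[col] = df[col] + rng.normal(0, 0.15, size=len(df))

spikes = df[df["shape"] == "Spikes"]
g = sns.jointplot(data=spikes, x="valence", y="arousal",
                  kind="kde", hue="acuity", fill=True)
g.set_axis_labels("valence (HN .. HP)", "arousal (HN .. HP)")
plt.show()
```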
Figure 6: Kernel Density Estimation (KDEs) for each shape plotted onto a 2D emotion plane, also separated by visual acuity group. The marginal distributions estimate valence and arousal values for the expressed shape. (HN: high negative; NT: neutral; HP: high positive)
Figure 7: Kernel Density Estimation (KDEs) for spikes shape on an emotion plane, separated by the dynamic level of the shape movement as well as the visual acuity of the children (sighted, low vision, and blind).

4.3 Mapping Strategies

In this section, we present the qualitative results of children’s reported strategies for associating tactile stimuli with emotions and their ideas about the shapes’ potential applications. We categorized their strategies into two broad themes: 1) tactile-stimuli-feature-driven, linked to tactile features (roughness, movement, shape, frequency) and mapped through sensations or concepts; and 2) story-context-driven, in which participants leveraged the story context and familiar situations to associate stimuli with emotions.

Children (N=20) could explicitly express their rationale for tactile-emotion associations for at least one shape. They explicitly expressed their rationale for 38.00% of the 500 mappings (VI=37.30%, S=38.33%). These explicit associations were more frequently related to tactile stimuli features (TS_Driven=77.77%, VI=71.13%, S=84.78%) than to the story context (Story_Driven=20.11%, VI=25.77%, S=14.13%). In the remaining cases, children drew on both strategies, tactile and story (Both_Driven=2.12%).
Tactile-stimuli-driven associations (TS_N=147, VI=69, S=78) were explicitly associated with form (Form_N=73, VI=35, S=38) and movement (Move_N=25, VI=13, S=12). Tentacles were associated with hair and grass: "little hairs, reminds us of ups and downs when we are nervous" (C12S, fear); "it feels like grass" (C44B, happy). Wrinkles with a smiling mouth: "it has lines that look like a smile" (C4B, happy). Spikes with nails: "it looked like nails" (C29LV, anger). Goosebumps with bubbles: "it is like inflating bubbles" (C12S, anger). And Concaves with holes: "it has holes, looks like hiding people" (C44B, fear). Additionally, movement was often referred to and shown to influence the mapping: "When we cry, breathing is accelerated, I felt the skin stretching and bending like the lungs" (C12S, sad, Wrinkles_D); "It goes up and down, feels like very angry" (C40LV, anger, Spikes_D); "It seems a nervous heartbeat" (C10S, fear, Concaves_D). Dynamic and static tactile stimuli had similar justification levels for all children: the percentage of mappings justified by tactile features was 29.2% for dynamic shapes and 28% for static shapes.
The stimuli most frequently associated with their tactile features were Tentacles_S (TS_N=21, VI=11, S=10), followed by Spikes_D (SD_N=17, VI=8, S=9) and Goosebumps_D (GD_N=16, VI=7, S=9). The least frequent were Wrinkles_S (WS_N=10, VI=4, S=6) and Concaves_D (CD_N=11, VI=3, S=8). This result suggests that the tactile features of Tentacles_S, Spikes_D, and Goosebumps_D are easier to map to an emotion. We also see a lower justification level for Wrinkles and Concaves among children with VI; the same does not apply to sighted children (Tentacles_VI=18, Wrinkles_VI=11, Spikes_VI=12, Goosebumps_VI=15, Concaves_VI=9; Tentacles_S=16, Wrinkles_S=15, Spikes_S=16, Goosebumps_S=15, Concaves_S=16).
Children based their justifications on haptic sensations (Sensation_N=75, VI=31, S=44) more than on mapping to a concept. Interestingly, sensation-based justifications were made by all groups; however, children with blindness used them less often (Sensation_S=56.41%, VI=44.92%, LV=48.64%, B=40.62%), e.g., "it trembles" (C29LV, fear, Goosebumps_D); "it is fluffy" (C25B, happy, Tentacles_S); "It tickles" or "he slipped something over my arm" (C39S, fear, Wrinkles_D). Concept mapping was also used, e.g., "urchin" (C23S, anger, Spikes_S); "waves" (C29LV, anger, Wrinkles_D); "nails" (C29LV, anger, Spikes_D); "tears" (C18B, sad, Tentacles_S); "plant spine" (C18B, fear, Tentacles_D); "wrinkles" (C18B, fear, Wrinkles_S). Overall, children with blindness used concept mapping more often than the others (Concept_S=19.23%, VI=27.53%, LV=24.32%, B=31.25%).
Story-context associations were used less often and showed empathy (Story_N=38, VI=25, S=13): "She is sad because no one heard her" (C6LV, sad, Wrinkles_D); "He was calm sitting on the couch" (C13LV, calm, Goosebumps_D). Children with visual impairment used the story-context strategy more than their sighted counterparts (Story_Rationale_VI=26.59%, Story_Rationale_S=15.21%).

Emotional associations were also tied to children’s past experiences and feelings. For example, a story reminded one child of a frightening moment, "I am afraid of waves" (C15LV, fear, Tentacles_D), and another of a happy one, "I was travelling with my family, and it was a happy time" (C4S, happy, Concaves_S). A tactile cue created a calm feeling, "I like to feel this one with my fingers. It is so relaxing" (C29LV, calm, Tentacles_D), or a frightening one, "This is a very immersive shape. WOW, he slipped something over my arm" (C39S, fear, Wrinkles_D). In four cases, children also made associations between the static and dynamic displays of the same shape unit, selecting the same emotion for both modes: "This shape is similar to the previous one <Concaves_D> but less angry" (C50LV, anger, Concaves_S).
We coded the potential uses of shape-based platforms from children’s answers. Some children had difficulty envisioning a use (N=21); others suggested using the shapes to express emotions (N=11), as a relaxing device for hugging and tapping (N=11), or for playing with new stories and new shapes (N=7).

5 Discussion

This paper explores how shape-changing stimuli elicit emotions in children with and without visual impairment. We aim to inform the design of inclusive interactive systems that combine tactile stimuli to elicit specific emotional percepts. We extend prior work on shape-changing robotic skins by investigating varying skin shapes and movement levels and their effect on emotional mappings in children with and without VI. In the following sections, we answer the research questions and discuss the results in light of previous research on shape-changing feedback and emotional mapping. Finally, we contribute a mapping scheme between shape-changing stimuli and emotions that can be used in interactive systems for mixed-visual-ability groups or for individual use.

5.1 Which emotions do children map to each shape-changing stimulus?

Not all shapes reliably map to a specific emotion. In this tactile vocabulary, tentacles and happiness were the most frequent combination, followed by spikes linked to anger/fear. Sadness was challenging to express using shape-changing stimuli, which aligns with previous research [29]. Although distinct shape units can communicate a wide range of valence levels, arousal was harder to express: introducing movement to a shape unit did not significantly increase perceived arousal.
Children’s justifications support the high variance in their associations, which suggests that the emotional relations of tactile stimuli are highly user-dependent and can be mediated by multiple individual factors. As seen in Figure 4, stimuli can be perceived as pleasant by some and unpleasant by others; for example, Spikes_D was mapped to both anger (N=16) and happy (N=12): "I feel many spikes like they are angry" (C35S, anger) and "it is a thrilled feeling like tickles" (C29LV, happy).
Nevertheless, there is a tendency to associate dynamic/static Tentacles with happiness, dynamic/static Spikes with anger and fear, and static Wrinkles and Concaves with calmness. For Goosebumps and Concaves, the emotional mappings have high variance. This result can be a side effect of using the same shape skin for Goosebumps, Concaves, and Spikes. Some children noticed the hard tips of the Spikes shape unit when exploring the other shapes, which could have influenced their responses, e.g., "angry peaks and fearful mountains" (C21LV, Goosebumps); "It seems a nervous heartbeat with needles" (C10S, Concaves). Therefore, the emotional mappings of goosebumps and concaves should be reassessed with new skins (without contracted spikes).
Finally, the proposed shape-changing designs did not reliably convey sadness, which should be the focus of future research.

5.2 Differences and similarities in shape mapping between children with and without visual impairment

Children with and without visual impairment have different emotional-tactile mappings, and the visual features associated with sadness influenced children’s mapping. A more reliable tactile-emotional association exists for sighted children, as seen in Figure 4, with six stimuli mapped to an emotion: Tentacles_S to happiness, Wrinkles_S and Concaves_S to calm, Spikes_D and Spikes_S to fear and anger, and Concaves_D to sad. For children with visual impairment, the association is less reliable, as only three stimuli mapped to emotions: Tentacles_D to happiness, Wrinkles_S to calm, and Spikes_D to fear and anger; none of the other shape-changing stimuli had a reliable association.
In Table 2, a pattern emerges: children with and without VI consistently associate the same emotions with specific shapes, except for sadness. Notably, tentacles represent happiness, wrinkles convey calmness, and spikes denote fear and anger, demonstrating a shared shape vocabulary across the two groups. It is worth highlighting that only sighted children attribute sadness to dynamic concaves, whereas children with VI make no distinct association with sadness. This overlap in emotional mappings is advantageous, enabling identical shapes with similar meanings in shared group activities involving children with different visual abilities. Another noteworthy observation from Table 2 concerns perception differences related to movement. Children with VI better identified emotions (happy, anger, and fear) when these were expressed through dynamic movement. In contrast, sighted children excelled at associating spikes with anger and fear, irrespective of whether the stimuli were dynamic or static. Interestingly, sighted children tended to associate static tentacles and static wrinkles with happiness and calmness, respectively.
Table 2:
Emotion | Sighted | VI
Happy | tentacles_S | tentacles_D
Calm | wrinkles_S & concaves_S | wrinkles_S
Fear | spikes_D & spikes_S | spikes_D
Anger | spikes_D & spikes_S | spikes_D
Sad | concaves_D | none
Table 2: Emotional mappings for sighted children and children with visual impairment (VI)
Spikes_D consistently mapped to anger and fear regardless of children’s visual ability. Interestingly, sadness was rarely selected by children with VI, but sighted children perceived sadness through the Concaves_D stimulus. This difference can potentially be explained by the visual nature of the bio-inspired metaphor, a "looking inside, self-absorbed" feeling. The analogy may be too "graphic" to be interpreted by children with VI while mapping more naturally to sadness for sighted children. Further research is needed to understand whether visual metaphors play a role in tactile emotional percepts. One promising research avenue is the study of crossmodal correspondences [21, 43] and how they differ depending on users’ visual ability levels [4].
Movement and visual acuity influenced valence and arousal levels. The valence levels of emotions rated by sighted children ranged from mildly negative (for dynamic shapes) to mildly positive (for static shapes), whereas children with visual impairment assigned similarly mild positive valences to both static and dynamic shapes. We also found opposite attributions of arousal by sighted children and children with VI for three shape units in their dynamic and static modes. These differences further support distinct emotional-tactile mappings for children with and without VI. Additionally, one child with VI explicitly stated being more comfortable with dynamic shapes, whose expected behaviour was known (e.g., "it is a thrilled feeling like tickles. Touching a moving stimulus is less stressful, we know what to expect" (C29LV, happy)). Although we do not know whether sighted children feel the same way, this fear of unexpected behaviours should be considered in the future.
For the Spikes stimuli, the emotional mapping differed between visual impairment levels. An additional result worth reflecting on is the tendency for different mappings of the Spikes stimuli between children with blindness and children with low vision, as seen in Figure 7. Children with low vision perceived the spikes shapes similarly to sighted children, with high arousal and negative valence. However, children with blindness showed a more uniform distribution covering all valence and arousal quadrants. The other shapes did not show the same tendency. Future research should aim to understand the meaning and reasons for this unexpected effect.
The study sheds light on the impact of visual acuity on tactile-emotional associations. As with music and material mapping, tactile stimuli showed different emotional associations between children with and without visual impairment [52]. These differences have three main implications for tactile communication. First, our findings suggest that sighted children leverage their visual experiences to make sense of shape-changing feedback, in line with previous research [4]. Thus, designers can explore the visual features of tactile feedback to develop new shape-changing interactive systems. Second, the unexpected result of the Spikes emotional mapping raises awareness of potential differences in tactile perception between children with low vision and children with blindness. Third, designers should consider movement features to reduce children's stress from unexpected events, which contrasts with prior work focused on expressing higher arousal levels [29].

5.3 What are children's rationales for mapping tactile stimuli to emotions?

Children did not have difficulty mapping emotions and describing their associations. The most common strategy disclosed was tactile-related; however, children with VI used it less than their sighted peers. To mitigate order bias, we presented emotions in random order.
Self-reported association strategies revealed links to tactile-stimulus features (form and movement) and to the story context. Aligned with previous research, when using tactile-driven strategies, children drew on their sensations (e.g., fluffy, relaxing, aggressive) more than on predefined concepts (e.g., urchin, tears, waves, nails). Children with VI, especially children with blindness, used more concept-driven justifications and familiar touchable concepts to map the emotions (e.g., tears, wrinkles, plant spines). This finding supports previous research [65] showing that children with blindness extensively train tactile classification and can reach the same skill level as their sighted peers [4].
Moreover, children also leveraged lived experiences in their tactile and contextual associations. They related stimuli to familiar shapes (e.g., human skin, grass, or hair) and rhythms (e.g., heartbeat, breathing rate), or to the story context through self-disclosure (e.g., family vacations, waves, fear) or by empathizing with the story's character (e.g., "It is normal to be scared when you speak to others" (C10S, fear, Peter)). Occasionally, they used a combined strategy, using the story context to justify the character's tactile behaviour (e.g., "John's skin had some ups and downs showing insecurity, fear. He was nervous because of the cat." (C12S, fear, Spikes_S)).
The justifications for the emotion-tactile strategies have two main highlights: (1) shape movement did not affect children's emotional mapping; (2) Wrinkles_S and Concaves_D were harder for children with VI to justify, an effect that did not apply to sighted children. These results suggest that Concaves are unfamiliar shapes for children with VI; we hypothesize that Concaves are not commonly encountered on everyday surfaces. Also, Wrinkles_S may need more tactile expressiveness, perhaps by increasing its roughness.
Children gave an explicit justification rationale in 38% of cases, notably lower than observed in previous research involving sighted adults (73% of cases) [43]. Nevertheless, our study reveals a noteworthy similarity between children with and without visual impairment (VI) and sighted adults in rationales based on geometric features and everyday objects [43]. The decrease in explicit justifications among children, compared to adults, suggests that children have more difficulty expressing mapping strategies, particularly at this developmental stage [20]. Moreover, these findings also suggest that articulating the rationale behind tactile-emotion mappings poses a challenge for both children and adults [43], indicating the complexity of this cognitive process.

5.4 Comparing with prior research on shape-changing emotional mappings

Our findings exhibit similarities and differences compared to a previous study on mapping shape-changing stimuli to emotions by adults [29] (N=40, laboratory experiments). That work used six tactile stimuli, combining two shape units (spikes and goosebumps), two movement speeds (fast vs. slow), and two amplitude levels. Similar to our findings, participants associated the Spikes stimuli with emotions of strongly negative valence and high arousal, such as fear and anger.
In contrast to the previous findings that mapped the Goosebumps to positive-valence emotions, our results indicate that children associate negative-valence emotions with these stimuli. In addition, prior work highlighted the impact of changes in shape’s speed on the perception of emotional arousal; however, we did not observe a significant difference in emotional perception based on variations in movement. Various reasons could explain these differences.
First, the modality users interacted with differs between the two studies. In our study, shape units were placed inside a box, and participants could only rely on their sense of touch to interpret the stimuli. However, in previous research, shape units were not concealed; thus, participants may have relied on visual and tactile features to attribute emotions. This difference in emotional mapping, due to the absence or presence of visual cues, is supported by previous research [4], in which sighted users had a higher categorization accuracy using visual exploration compared to haptic exploration.
Second, participants’ demographics varied in age and background between user studies. Previous work focused on adult participants, while our study focused on children. Participants also differed in terms of visual acuity and cultural backgrounds, which may affect emotional perceptions.
Table 3:
Emotion | Shape     | Material                | Movement             | Sighted | VI      | MVA
Happy   | tentacles | soft and flexible       | inflate/deflate      | static  | dynamic | dynamic
Calm    | wrinkles  | soft with roughness     | slow inflate/deflate | static  | static  | static
Fear    | spikes    | flexible with hard tips | fast inflate/vacuum  | dynamic | dynamic | dynamic
Anger   | spikes    | flexible with hard tips | fast inflate/vacuum  | dynamic | dynamic | dynamic
Sad     | none      | -                       | -                    | -       | -       | -
Table 3: Recommended tactile stimulus features per emotion for sighted children, children with visual impairment (VI), and mixed-visual ability groups (MVA)

5.5 Broader implications and Design Guidelines

Our work yielded a set of reflections and guidelines for emotional tactile expressions and affective computing studies, especially in mixed-visual ability environments. First, we reflect on the benefits and compromises of using stories as a methodological technique for conducting affective computing studies with children. Second, we reflect on tactile-emotional mappings. Lastly, we recommend design guidelines for shapes as an emotional medium for children with and without visual impairment and discuss the trade-offs needed in mixed-visual ability groups.

5.5.1 Using stories in affective computing studies.

Stories can enhance tactile experiences; creating an emotional context can greatly benefit affective computing studies, especially with children. Enacting situations while interacting through tactile stimulation makes it more straightforward for children to identify the elicited emotions. However, there is also a downside: our results showed that some children used the context of the story as the primary cue for emotional recognition. Although the story-driven strategy was used less than tactile-driven associations, it was the main reason in 20.3% of all emotion-association justifications.

5.5.2 Tactile-emotional mapping based on daily "visual" experiences.

Aligned with previous research on affective computing [26], results showed that tactile stimuli are mapped to different emotions based on contextual and lived experiences. Children did not have difficulty mapping tactile stimuli to distinct emotions, as they often used their memories to classify their responses. However, those lived experiences were not exclusively tactile but also visual. During the study, all children perceived the holes when exploring Concaves. For sighted children, it was common to map Concaves to a sad (self-absorbed) feeling. In contrast, children with visual impairment did not naturally map Concaves to emotions, as they did not associate holes with "self-absorbed feelings"; prior work suggests this may be due to their reduced visual experiences [41]. Another example is that Spikes are usually related to aggressive visual cues like a blowfish [29]. Although Spikes are expressive because of their sharp, rigid tips, children with blindness had a less reliable mapping than their sighted or low-vision peers, who consistently selected anger or fear. Results also showed that Goosebumps stimuli were unexpectedly related to negative valence, in contrast to previous tactile-emotion studies [29], possibly because visual cues were absent in our study.
Therefore, designers should consider more than just haptic features when designing tactile-based expressions, as children's visual experiences also influence their emotional mappings. It is vital to evaluate stimuli with children with and without visual impairment, especially children with blindness, to reduce the risk of a lack of shared understanding.

5.5.3 Design guidelines for emotional shapes.

Results link tactile stimuli to emotional categories. Notably, children associate Tentacles with happiness, Spikes with anger and fear, and Wrinkles and Concaves with calmness. Results also showed that silicone-based shapes and movement can provide children comfort and flexibility to express emotions.
However, when designing tactile interactions, it is key to consider the users' visual abilities. We suggest specific tactile stimulus guidelines based on users' visual abilities (Table 3). The best shape to express happiness is a soft, tentacles-like shape (a flexible, hairy form); children with VI preferred the dynamic mode, while sighted children preferred the static one. To express calmness, we suggest a wrinkles-like shape in a soft material with some roughness; designers can opt for a static shape or a slow rhythm that relates to a relaxing heartbeat or breathing rate. For fear and anger, our findings suggest spikes in a flexible material that can vacuum and inflate quickly, with multiple rigid tips (or small contact zones), ideally in dynamic mode; however, a static shape is also a valid option.
For mixed-visual ability group interactions, we suggest balancing shape expressiveness so that skins can be perceived by all users regardless of visual acuity. For calmness, anger, and fear, the suggestion is straightforward, as children with and without visual impairment exhibited the same shape-emotional mapping. However, the mapping for happiness presents a more nuanced scenario. While the shape unit selected by all children is tentacles, sighted children tended to prefer the static mode, whereas those with visual impairment favored the dynamic mode (see Table 2). Recognizing the inherent fluidity of tentacles, even in static mode, we propose a compromise: using dynamic shapes to express happiness in mixed-visual ability groups. This recommendation aligns with findings from prior research [21] on emotions and the mapping of shape-changing objects, in which sighted adults reported affect more accurately for dynamic stimuli than for static ones. Therefore, based on our findings, we suggest: (1) for happiness, a dynamic tentacles-like shape; (2) for calmness, a static wrinkles-like shape; and (3) for anger and fear, a dynamic spikes-like shape. These recommendations can be encoded directly as a lookup table in a device controller, as sketched below.
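To make the guidelines actionable, the recommendations in Table 3 can be expressed as a small lookup structure that a shape-changing device controller might query. The following Python sketch is purely illustrative: the names (StimulusSpec, pick_stimulus) and the audience labels are assumptions of ours, not an API defined by this paper or its hardware platform.

from dataclasses import dataclass

@dataclass(frozen=True)
class StimulusSpec:
    shape: str      # shape unit, e.g., "tentacles"
    material: str   # silicone skin property, from Table 3
    actuation: str  # pneumatic pattern, from Table 3
    mode: dict      # recommended movement mode per audience

# Encodes Table 3; "sighted", "vi", and "mixed" are illustrative labels.
GUIDELINES = {
    "happy": StimulusSpec("tentacles", "soft and flexible", "inflate/deflate",
                          {"sighted": "static", "vi": "dynamic", "mixed": "dynamic"}),
    "calm":  StimulusSpec("wrinkles", "soft with roughness", "slow inflate/deflate",
                          {"sighted": "static", "vi": "static", "mixed": "static"}),
    "fear":  StimulusSpec("spikes", "flexible with hard tips", "fast inflate/vacuum",
                          {"sighted": "dynamic", "vi": "dynamic", "mixed": "dynamic"}),
    "anger": StimulusSpec("spikes", "flexible with hard tips", "fast inflate/vacuum",
                          {"sighted": "dynamic", "vi": "dynamic", "mixed": "dynamic"}),
    # Sadness had no reliably conveyed shape, so it is deliberately absent.
}

def pick_stimulus(emotion: str, audience: str = "mixed"):
    """Return (spec, movement mode) for an emotion, or None if unmapped."""
    spec = GUIDELINES.get(emotion)
    if spec is None:
        return None  # e.g., "sad": fall back to another modality such as sound
    return spec, spec.mode[audience]

# Example: a mixed-visual ability session requesting happiness.
print(pick_stimulus("happy"))  # -> tentacles spec, dynamic mode

The deliberate absence of a "sad" entry mirrors our finding that no shape reliably conveyed sadness; a real controller would need an explicit fallback (e.g., sound or another modality) for that case.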

5.6 Future applications for children with different visual abilities

Our results are based on qualitative and quantitative data from fifty children, split into two groups: 26 children with visual impairment and 24 sighted children. Our qualitative exploration covered all children, revealing mapping strategies and children's classifications of shapes into emotional variables such as valence and arousal in the two groups. Our quantitative analysis explored the different emotional categories of shapes and compared perception differences between children with and without visual impairment. The richness of our sample permits carrying these quantitative and qualitative insights into future studies on the differences between children with and without visual impairment in emotional and tactile perception. However, when we segmented the qualitative analysis of the twenty-six children with visual impairment by visual acuity level, the distinct responses between children with blindness and those with low vision may not generalize, as each visual impairment level was insufficiently represented. Consequently, while our results raise awareness about emotional-tactile perception differences between children with visual impairment and sighted children, the subgroup findings contrasting children with blindness and low vision cannot be universally applied due to the limited number of children in each subgroup (blind, low vision).

5.7 Limitations and Future Work

One limitation of our user study is its low statistical power, which increases the probability of Type II errors. Although we did not perform an a priori power analysis to calculate the ideal sample size, we conducted a post hoc power analysis to evaluate the probability of correctly rejecting the null hypotheses. For the statistical tests related to H1, H3, and H4, we obtained power values above 0.8, supporting the likelihood that these results will generalize. The non-significant results related to H2 (chi-square goodness-of-fit tests in Table 2) showed power values between 0.2 and 0.7, suggesting that a larger sample might reveal new differences for the stimuli with non-significant emotion classifications. However, our results provided enough support to reject the null hypothesis for H2, so we do not see this issue as problematic. Overall, the main findings, claims, and broader implications of our study are drawn from statistically significant differences with moderate effect sizes, which should not be undermined by the lack of statistical power [3].
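As an aside for readers who wish to reproduce this kind of check, the power of a chi-square goodness-of-fit test can be computed post hoc with statsmodels. The sketch below is a minimal illustration, not our analysis script; the effect size, sample size, and number of emotion categories are placeholder assumptions rather than the exact values from our analysis.

from statsmodels.stats.power import GofChisquarePower

# Post hoc power of a chi-square goodness-of-fit test.
analysis = GofChisquarePower()
power = analysis.solve_power(
    effect_size=0.5,  # Cohen's w from observed vs. expected proportions (placeholder)
    nobs=50,          # participants contributing classifications (placeholder)
    alpha=0.05,       # significance level
    n_bins=5,         # five emotion categories -> 4 degrees of freedom
)
print(f"Achieved power: {power:.2f}")  # values below 0.8 flag elevated Type II risk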
Secondly, this study included children from 8 schools in a single country. Although this guarantees some consistency in educational background, future work should explore a broader spectrum of cultural profiles. Also, the number of children with visual impairment was insufficient to quantitatively confirm the differences in tactile-emotion associations and mapping strategies between blind and low-vision perceptions. Further research could replicate the reported study with more children from different cultures, countries, and visual abilities. Additionally, the presence of the spikes, concaves, and goosebumps on the same shape skin could have impacted children's mappings, as our system could not always completely retract and conceal the spikes inside the cavity. Further research should explore novel shapes, isolated in separate skins, or multisensory feedback to express sadness and to distinguish between fear and anger percepts.
Additionally, emotions have subjective and individual factors that our predefined values of arousal and valence did not capture; our work also did not evaluate emotional dominance. Further research with children should incorporate a child-adapted version of the Self-Assessment Manikin (SAM) [10] into a tactile questionnaire, allowing children with and without visual impairment to classify valence, arousal, and dominance [35]. Lastly, future work could explore EEG or GSR sensors to ease the emotional mapping process.

6 Conclusion

We presented a systematic exploration of the potential of shape-changing stimuli to convey emotions to children with and without visual impairment. Results show statistically significant associations between emotions and tactile stimuli. We also found differences in these associations according to children's visual abilities. We provided qualitative data on children's association strategies, highlighting the role of prior daily visual experiences and haptic training. These results extend prior work on tactile-emotion mappings, particularly for soft, shape-based tactile feedback. We also contribute a new, context-driven procedure to help affective computing researchers explore emotion mapping with children. We synthesize the findings into guidelines and reflections that can aid designers in creating multisensory experiences.

Acknowledgments

We thank J. Nogueira, V. Balsinha, Afectos and all schools involved.
FCT support: 10.54499/UIDB/50009/2020, SFRH/BD/06452/2021, SFRH/BD/06589/2021, 10.54499/UIDB/50021/2020, 10.54499/LA/P/0083/2020, 10.54499/UIDP/50009/2020, 2022.00816.CEECIND/CP1713/CT0013, EU DCitizens GA-101079116, IAPMEI/ANI/FCT/CRAI C628696807-00454142, Hybrida PTDC/CCI-INF/7366/2020, Tailor GA-952215, HumanE-AI-Net H2020-ICT-48-2020/952026 and NSF Grant 1830471.

Supplemental Material

MP4 File - Video Preview
Video Preview
MP4 File - Video Presentation
Video Presentation
MP4 File - Shape-Changing Skins Expressions
This video shows the ten shape-changing skin expressions
PDF File - Examples of the stories used in the study
This document presents the stories used in the procedure

References

[1]
Rebecca Andreasson, Beatrice Alenljung, Erik Billing, and Robert Lowe. 2018. Affective touch in human–robot interaction: conveying emotion to the Nao robot. International Journal of Social Robotics 10 (2018), 473–491.
[2]
Cristiana Antunes, Isabel Neto, Filipa Correia, Ana Paiva, and Hugo Nicolau. 2022. Inclusive’R’Stories: An Inclusive Storytelling Activity with an Emotional Robot. In 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 90–100. https://doi.org/10.1109/HRI53351.2022.9889502
[3]
Thom Baguley. 2004. Understanding statistical power in the context of applied research. Applied ergonomics 35, 2 (2004), 73–80.
[4]
Elisabeth Baumgartner, Christiane B Wiebel, and Karl R Gegenfurtner. 2015. A comparison of haptic material perception in blind and sighted individuals. Vision research 115 (2015), 238–245.
[5]
Kirsten Boehner, Rogério DePaula, Paul Dourish, and Phoebe Sengers. 2007. How emotion is made and measured. International Journal of Human-Computer Studies 65, 4 (2007), 275–291. https://doi.org/10.1016/j.ijhcs.2006.11.016
[6]
Tara J. Brigham. 2017. Merging Technology and Emotions: Introduction to Affective Computing. Medical Reference Services Quarterly 36, 4 (2017), 399–407. https://doi.org/10.1080/02763869.2017.1369289. PMID: 29043945.
[7]
Manuel Brum. 2010. Lucas the bat who was afraid of the dark. Minutos de Leitura.
[8]
Hendrik Buimer, Thea Van der Geest, Abdellatif Nemri, Renske Schellens, Richard Van Wezel, and Yan Zhao. 2017. Making Facial Expressions of Emotions Accessible for Visually Impaired Persons. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility (Baltimore, Maryland, USA) (ASSETS ’17). Association for Computing Machinery, New York, NY, USA, 331–332. https://doi.org/10.1145/3132525.3134823
[9]
Hendrik P Buimer, Marian Bittner, Tjerk Kostelijk, Thea M van der Geest, Richard JA van Wezel, and Yan Zhao. 2016. Enhancing emotion recognition in VIPs with haptic feedback. In HCI International 2016–Posters’ Extended Abstracts: 18th International Conference, HCI International 2016 Toronto, Canada, July 17–22, 2016 Proceedings, Part II 18. Springer, 157–163.
[10]
Teah-Marie Bynion and Matthew T. Feldner. 2020. Self-Assessment Manikin. Springer International Publishing, Cham, 4654–4656. https://doi.org/10.1007/978-3-319-24612-3_77
[11]
Stanley J. Cantrell, R. Michael Winters, Prakriti Kaini, and Bruce N. Walker. 2022. Sonification of Emotion in Social Media: Affect and Accessibility in Facebook Reactions. Proc. ACM Hum.-Comput. Interact. 6, CSCW1, Article 119 (apr 2022), 26 pages. https://doi.org/10.1145/3512966
[12]
Bhavya Chopra and Richa Gupta. 2022. StoryBox: Independent Multi-Modal Interactive Storytelling for Children with Visual Impairment. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI EA ’22). Association for Computing Machinery, New York, NY, USA, Article 420, 7 pages. https://doi.org/10.1145/3491101.3519651
[13]
Constance Classen. 1997. Foundations for an anthropology of the senses. International Social Science Journal 49, 153 (1997), 401–412.
[14]
Constance Classen. 2012. The deepest sense: A cultural history of touch. University of Illinois Press.
[15]
Ravinder S. Dahiya, Giorgio Metta, Maurizio Valle, and Giulio Sandini. 2010. Tactile Sensing—From Humans to Humanoids. IEEE Transactions on Robotics 26, 1 (2010), 1–20. https://doi.org/10.1109/TRO.2009.2033627
[16]
Felecia Davis. 2015. The textility of emotion: A study relating computational textile textural expression to emotion. In Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition. 23–32.
[17]
Minke J de Boer, Tim Jürgens, Deniz Başkent, and Frans W Cornelissen. 2021. Auditory and Visual Integration for Emotion Recognition and Compensation for Degraded Signals are Preserved With Age. Trends in hearing 25 (2021), 23312165211045306.
[18]
Anna DeWitt and Roberto Bresin. 2007. Sound design for affective interaction. In Affective Computing and Intelligent Interaction: Second International Conference, ACII 2007 Lisbon, Portugal, September 12-14, 2007 Proceedings 2. Springer, 523–533.
[19]
Minke J. de Boer, Deniz Başkent, and Frans W. Cornelissen. 2020. Eyes on Emotion: Dynamic Gaze Allocation During Emotion Perception From Speech-Like Stimuli. Multisensory Research 34, 1 (2020), 17 – 47. https://doi.org/10.1163/22134808-bja10029
[20]
Murray J Dyck, Charles Farrugia, Ian M Shochet, and Martez Holmes-Brown. 2004. Emotion recognition/understanding ability in hearing or vision-impaired children: do sounds, sights, or words make the difference? Journal of Child Psychology and Psychiatry 45, 4 (2004), 789–800.
[21]
Feng Feng, Dan Bennett, Zhi-jun Fan, and Oussama Metatla. 2022. It’s Touching: Understanding Touch-Affect Association in Shape-Change with Kinematic Features. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 428, 18 pages. https://doi.org/10.1145/3491102.3502003
[22]
Tiffany Field. 2014. Touch. MIT press.
[23]
Louise Fryer. 2016. An introduction to audio description: A practical guide. Routledge.
[24]
Louise Fryer and Jonathan Freeman. 2014. Can you feel what I’m saying? The impact of verbal information on emotion elicitation and presence in people with a visual impairment. Proceedings of the international society for presence research (2014), 99–107.
[25]
Alberto Gallace and Charles Spence. 2010. The science of interpersonal touch: An overview. Neuroscience & Biobehavioral Reviews 34, 2 (2010), 246–259. https://doi.org/10.1016/j.neubiorev.2008.10.004
[26]
Matthew J Hertenstein, Rachel Holmes, Margaret McCullough, and Dacher Keltner. 2009. The communication of emotion via touch. Emotion 9, 4 (2009), 566.
[27]
Matthew J Hertenstein, Dacher Keltner, Betsy App, Brittany A Bulleit, and Ariane R Jaskolka. 2006. Touch communicates distinct emotions. Emotion 6, 3 (2006), 528.
[28]
Laura Hoffmann and Nicole C. Krämer. 2022. The persuasive power of robot touch. PLoS ONE 16, 5, Article e0249554 (2022), e0249554–1 – e0249554–30. https://doi.org/10.1371/journal.pone.0249554
[29]
Yuhan Hu and Guy Hoffman. 2019. Using Skin Texture Change to Design Emotion Expression in Social Robots. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). 2–10. https://doi.org/10.1109/HRI.2019.8673012
[30]
Yuhan Hu and Guy Hoffman. 2022. What Can a Robot’s Skin Be? Designing Texture-Changing Skin for Human-Robot Social Interaction. J. Hum.-Robot Interact. (may 2022). https://doi.org/10.1145/3532772
[31]
Yuhan Hu, Isabel Neto, Jin Ryu, Ali Shtarbanov, Hugo Nicolau, Ana Paiva, and Guy Hoffman. 2022. Touchibo: Multimodal Texture-Changing Robotic Platform for Shared Human Experiences. In Adjunct Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (Bend, OR, USA) (UIST ’22 Adjunct). Association for Computing Machinery, New York, NY, USA, Article 68, 3 pages. https://doi.org/10.1145/3526114.3558643
[32]
Yuhan Hu, Zhengnan Zhao, Abheek Vimal, and Guy Hoffman. 2018. Soft skin texture modulation for social robotics. In 2018 IEEE International Conference on Soft Robotics (RoboSoft). 182–187. https://doi.org/10.1109/ROBOSOFT.2018.8404917
[33]
Gijs Huisman. 2012. A Touch of Affect: Mediated Social Touch and Affect. In Proceedings of the 14th ACM International Conference on Multimodal Interaction (Santa Monica, California, USA) (ICMI ’12). Association for Computing Machinery, New York, NY, USA, 317–320. https://doi.org/10.1145/2388676.2388746
[34]
Gijs Huisman. 2017. Social Touch Technology: A Survey of Haptic Technology for Social Touch. IEEE Transactions on Haptics 10, 3 (2017), 391–408. https://doi.org/10.1109/TOH.2017.2650221
[35]
Gonzalo Iturregui-Gallardo and Jorge Luis Méndez-Ulrich. 2020. Towards the creation of a tactile version of the Self-Assessment Manikin (T-SAM) for the emotional assessment of visually impaired people. International Journal of Disability, Development and Education 67, 6 (2020), 657–674.
[36]
Julie A. Jacko, Leon Barnard, Thitima Kongnakorn, Kevin P. Moloney, Paula J. Edwards, V. Kathlene Emery, and Francois Sainfort. 2004. Isolating the Effects of Visual Impairment: Exploring the Effect of AMD on the Utility of Multimodal Feedback. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria) (CHI ’04). Association for Computing Machinery, New York, NY, USA, 311–318. https://doi.org/10.1145/985692.985732
[37]
Heekyoung Jung, Youngsuk L Altieri, and Jeffrey Bardzell. 2010. Skin: designing aesthetic interactive surfaces. In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction. 85–92.
[38]
Heekyoung Jung, Youngsuk L. Altieri, and Jeffrey Bardzell. 2010. SKIN: Designing Aesthetic Interactive Surfaces. In Proceedings of the Fourth International Conference on Tangible, Embedded, and Embodied Interaction (Cambridge, Massachusetts, USA) (TEI ’10). Association for Computing Machinery, New York, NY, USA, 85–92. https://doi.org/10.1145/1709886.1709903
[39]
Patrik N Juslin. 2013. From everyday emotions to aesthetic emotions: Towards a unified theory of musical emotions. Physics of life reviews 10, 3 (2013), 235–266.
[40]
Orianne Lallemand and Eleonore Thuillier. 2016. The wolf who was afraid of his shadow. Zero a Oito.
[41]
Markus Lang, Manfred Hintermair, and Klaus Sarimski. 2017. Social-emotional competences in very young visually impaired children. British Journal of Visual Impairment 35, 1 (2017), 29–43.
[42]
Lucian Leahu, Steve Schwenk, and Phoebe Sengers. 2008. Subjective Objectivity: Negotiating Emotional Meaning. In Proceedings of the 7th ACM Conference on Designing Interactive Systems (Cape Town, South Africa) (DIS ’08). Association for Computing Machinery, New York, NY, USA, 425–434. https://doi.org/10.1145/1394445.1394491
[43]
Anan Lin, Meike Scheller, Feng Feng, Michael J Proulx, and Oussama Metatla. 2021. Feeling Colours: Crossmodal Correspondences Between Tangible 3D Objects, Colours and Emotions. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 187, 12 pages. https://doi.org/10.1145/3411764.3445373
[44]
Xin Lu, Poonam Suryanarayan, Reginald B Adams Jr, Jia Li, Michelle G Newman, and James Z Wang. 2012. On shape and the computability of emotions. In Proceedings of the 20th ACM international conference on Multimedia. 229–238.
[45]
Shaun Alexander Macdonald, Stephen Brewster, and Frank Pollick. 2020. Eliciting Emotion with Vibrotactile Stimuli Evocative of Real-World Sensations. In Proceedings of the 2020 International Conference on Multimodal Interaction (Virtual Event, Netherlands) (ICMI ’20). Association for Computing Machinery, New York, NY, USA, 125–133. https://doi.org/10.1145/3382507.3418812
[46]
Trace Moroney. 2019. When I am feeling sad, happy, scared, and angry. Porto Editora.
[47]
Akira Nakayasu. 2016. Luminescent tentacles: a scalable SMA motion display. In Adjunct Proceedings of the 29th Annual ACM Symposium on User Interface Software and Technology. 33–34.
[48]
Isabel Neto, Yuhan Hu, Filipa Correia, Filipa Rocha, Guy Hoffman, Hugo Nicolau, and Ana Paiva. 2024. paper repository. https://github.com/IsabelCanicoNeto/Shape-skins-emotions. Accessed: 2024-02-19.
[49]
Elizabete Neves. 2019. The ball of emotions. Porto Editora.
[50]
Masaru Ohkubo, Miki Yamamura, Hiroko Uchiyama, and Takuya Nojima. 2014. Breathing clothes: artworks using the hairlytop interface. In Proceedings of the 11th Conference on Advances in Computer Entertainment Technology. 1–4.
[51]
World Health Organization. 2022. Visual Acuity Levels. "https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment". Accessed: 2022-12-20.
[52]
Hye Young Park and Hyun Ju Chong. 2019. A comparative study of the perception of music emotion between adults with and without visual impairment. Psychology of Music 47, 2 (2019), 225–240.
[53]
Tom Percival. 2021. How to make friends. Jacaranda Editora.
[54]
Rosalind W Picard. 2000. Affective computing. MIT press.
[55]
Shi Qiu, Pengcheng An, Jun Hu, Ting Han, and Matthias Rauterberg. 2020. Understanding visually impaired people’s experiences of social signal perception in face-to-face communication. Universal Access in the Information Society 19 (2020), 873–890.
[56]
Jussi Rantala, Katri Salminen, Roope Raisamo, and Veikko Surakka. 2013. Touch gestures in communicating emotional intention via vibrotactile stimulation. International Journal of Human-Computer Studies 71, 6 (2013), 679–690. https://doi.org/10.1016/j.ijhcs.2013.02.004
[57]
Sandrine Rogeon and Gustavo Mazali. 2007. Pedro is afraid of ghosts. Livraria Civilizacao Editora.
[58]
James A Russell. 1980. A circumplex model of affect. Journal of Personality and Social Psychology 39, 6 (1980), 1161.
[59]
Hasti Seifi and Karon E. MacLean. 2013. A first look at individuals’ affective ratings of vibrations. In 2013 World Haptics Conference (WHC). 605–610. https://doi.org/10.1109/WHC.2013.6548477
[60]
Orit Shaer and Eva Hornecker. 2010. Tangible user interfaces: past, present, and future directions. Foundations and Trends® in Human–Computer Interaction 3, 1–2 (2010), 4–137.
[61]
Ali Shtarbanov. 2021. FlowIO Development Platform–the Pneumatic “Raspberry Pi” for Soft Robotics. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 1–6.
[62]
David Silvera-Tawil, David Rye, and Mari Velonaki. 2015. Artificial skin and tactile sensing for socially interactive robots: A review. Robotics and Autonomous Systems 63 (2015), 230–243. https://doi.org/10.1016/j.robot.2014.09.008
[63]
Walter Dan Stiehl and Cynthia Breazeal. 2005. Affective touch for robotic companions. In Affective Computing and Intelligent Interaction: First International Conference, ACII 2005, Beijing, China, October 22-24, 2005. Proceedings 1. Springer, 747–754.
[64]
Sherry Turkle, Cynthia Breazeal, Olivia Dasté, and Brian Scassellati. 2006. Encounters with kismet and cog: Children respond to relational artifacts. Digital media: Transformations in human communication 120 (2006).
[65]
Annie Vinter, Oriana Orlandi, and Pascal Morgan. 2020. Identification of textured tactile pictures in visually impaired and blindfolded sighted children. Frontiers in Psychology 11 (2020), 345.
[66]
Graham Wilson and Stephen A. Brewster. 2017. Multi-Moji: Combining Thermal, Vibrotactile & Visual Stimuli to Expand the Affective Range of Feedback. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (Denver, Colorado, USA) (CHI ’17). Association for Computing Machinery, New York, NY, USA, 1743–1755. https://doi.org/10.1145/3025453.3025614
[67]
Raymond Michael Winters IV. 2014. Exploring music through sound: Sonification of emotion, gesture, and corpora. McGill University (Canada).
[68]
Steve Yohanan and Karon E. MacLean. 2011. Design and Assessment of the Haptic Creature’s Affect Display. In Proceedings of the 6th International Conference on Human-Robot Interaction (Lausanne, Switzerland) (HRI ’11). Association for Computing Machinery, New York, NY, USA, 473–480. https://doi.org/10.1145/1957656.1957820
[69]
Yongjae Yoo, Taekbeom Yoo, Jihyun Kong, and Seungmoon Choi. 2015. Emotional responses of tactile icons: Effects of amplitude, frequency, duration, and envelope. In 2015 IEEE World Haptics Conference (WHC). 235–240. https://doi.org/10.1109/WHC.2015.7177719
[70]
Yiran Zhao, Yujie Tao, Grace Le, Rui Maki, Alexander Adams, Pedro Lopes, and Tanzeem Choudhury. 2023. Affective Touch as Immediate and Passive Wearable Intervention. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6, 4, Article 200 (jan 2023), 23 pages. https://doi.org/10.1145/3569484
[71]
Ran Zhou, Zachary Schwemler, Akshay Baweja, Harpreet Sareen, Casey Lee Hunt, and Daniel Leithinger. 2023. TactorBots: A Haptic Design Toolkit for Out-of-Lab Exploration of Emotional Robotic Touch. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 370, 19 pages. https://doi.org/10.1145/3544548.3580799
