Content provided by Kai Kunze. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Kai Kunze or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://ko.player.fm/legal.
HCI Deep Dives is your go-to podcast for exploring the latest trends, research, and innovations in Human Computer Interaction (HCI). AI-generated using the latest publications in the field, each episode dives into in-depth discussions on topics like wearable computing, augmented perception, cognitive augmentation, and digitalized emotions. Whether you’re a researcher, practitioner, or just curious about the intersection of technology and human senses, this podcast offers thought-provoking insights and ideas to keep you at the forefront of HCI.
Riku Kitamura, Kenji Yamada, Takumi Yamamoto, and Yuta Sugiura. 2025. Ambient Display Utilizing Anisotropy of Tatami. In Proceedings of the Nineteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '25). Association for Computing Machinery, New York, NY, USA, Article 3, 1–15. https://doi.org/10.1145/3689050.3704924 Recently, digital displays such as liquid crystal displays and projectors have enabled high-resolution and high-speed information transmission. However, their artificial appearance can sometimes detract from natural environments and landscapes. In contrast, ambient displays, which transfer information to the entire physical environment, have gained attention for their ability to blend seamlessly into living spaces. This study aims to develop an ambient display that harmonizes with traditional Japanese tatami rooms by proposing an information presentation method using tatami mats. By leveraging the anisotropic properties of tatami, which change their reflective characteristics according to viewing angles and light source positions, various images and animations can be represented. We quantitatively evaluated the color change of tatami using color difference. Additionally, we created both static and dynamic displays as information presentation methods using tatami. https://doi.org/10.1145/3689050.3704924…
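The abstract notes that tatami color change was quantified "using color difference." A minimal sketch of how such a measurement is commonly computed, assuming sRGB photographs and the CIE76 ΔE*ab metric in CIELAB space under a D65 white point; the paper may well use a different metric (e.g. CIEDE2000), and the sample values below are purely illustrative.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triplet (0-255) to CIELAB under a D65 white point."""
    rgb = np.asarray(rgb, dtype=float) / 255.0
    # Undo sRGB gamma.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (standard sRGB/D65 matrix).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = m @ linear
    # Normalize by the D65 reference white.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[1] - 16.0
    a = 500.0 * (f[0] - f[1])
    b = 200.0 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e_76(rgb1, rgb2):
    """CIE76 color difference: Euclidean distance in CIELAB."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))

# Hypothetical example: the same tatami patch photographed from two viewing angles.
print(delta_e_76((176, 160, 110), (120, 108, 70)))
```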
Hu, Yuhan, Peide Huang, Mouli Sivapurapu, and Jian Zhang. "ELEGNT: Expressive and Functional Movement Design for Non-anthropomorphic Robot." arXiv preprint arXiv:2501.12493 (2025). https://arxiv.org/abs/2501.12493 Nonverbal behaviors such as posture, gestures, and gaze are essential for conveying internal states, both consciously and unconsciously, in human interaction. For robots to interact more naturally with humans, robot movement design should likewise integrate expressive qualities—such as intention, attention, and emotions—alongside traditional functional considerations like task fulfillment, spatial constraints, and time efficiency. In this paper, we present the design and prototyping of a lamp-like robot that explores the interplay between functional and expressive objectives in movement design. Using a research-through-design methodology, we document the hardware design process, define expressive movement primitives, and outline a set of interaction scenario storyboards. We propose a framework that incorporates both functional and expressive utilities during movement generation, and implement the robot behavior sequences in different function- and social-oriented tasks. Through a user study comparing expression-driven versus function-driven movements across six task scenarios, our findings indicate that expression-driven movements significantly enhance user engagement and perceived robot qualities. This effect is especially pronounced in social-oriented tasks.…
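The paper proposes a framework that incorporates both functional and expressive utilities during movement generation. A minimal sketch of that weighting idea, with assumed candidate movements, utility values, and weights (illustrative only, not the authors' implementation):

```python
from dataclasses import dataclass

@dataclass
class Movement:
    name: str
    functional_utility: float  # e.g. task progress, spatial efficiency (0..1)
    expressive_utility: float  # e.g. legibility of intention/attention/emotion (0..1)

def select_movement(candidates, w_expressive=0.5):
    """Pick the candidate maximizing a weighted sum of the two utilities."""
    w_functional = 1.0 - w_expressive
    return max(candidates,
               key=lambda m: w_functional * m.functional_utility
                             + w_expressive * m.expressive_utility)

candidates = [
    Movement("direct_point", functional_utility=0.9, expressive_utility=0.2),
    Movement("look_then_point", functional_utility=0.7, expressive_utility=0.8),
]
# For a social-oriented task, weight expression more heavily.
print(select_movement(candidates, w_expressive=0.7).name)
```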
K. Brandstätter, B. J. Congdon and A. Steed, "Do you read me? (E)motion Legibility of Virtual Reality Character Representations," 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Bellevue, WA, USA, 2024, pp. 299-308, doi: 10.1109/ISMAR62088.2024.00044. We compared the body movements of five virtual reality (VR) avatar representations in a user study (N=53) to ascertain how well these representations could convey body motions associated with different emotions: one head-and-hands representation using only tracking data, one upper-body representation using inverse kinematics (IK), and three full-body representations using IK, motion capture, and the state-of-the-art deep-learning model AGRoL. Participants’ emotion detection accuracies were similar for the IK and AGRoL representations, highest for the full-body motion-capture representation and lowest for the head-and-hands representation. Our findings suggest that from the perspective of emotion expressivity, connected upper-body parts that provide visual continuity improve clarity, and that current techniques for algorithmically animating the lower body are ineffective. In particular, the deep-learning technique studied did not produce more expressive results, suggesting the need for training data specifically made for social VR applications. https://ieeexplore.ieee.org/document/10765392…
The Oscar best picture winning movie CODA has helped introduce Deaf culture to many in the hearing community. The capital "D" in Deaf is used when referring to the Deaf culture, whereas small "d" deaf refers to the medical condition. In the Deaf community, sign language is used to communicate, and sign has a rich history in film, the arts, and education. Learning about the Deaf culture in the United States and the importance of American Sign Language in that culture has been key to choosing projects that are useful and usable for the Deaf.…
J. Lee et al ., "Whirling Interface: Hand-based Motion Matching Selection for Small Target on XR Displays," 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) , Bellevue, WA, USA, 2024, pp. 319-328, doi: 10.1109/ISMAR62088.2024.00046. We introduce “Whirling Interface,” a selection method for XR displays using bare-hand motion matching gestures as an input technique. We extend the motion matching input method, by introducing different input states to provide visual feedback and guidance to the users. Using the wrist joint as the primary input modality, our technique reduces user fatigue and improves performance while selecting small and distant targets. In a study with 16 participants, we compared the whirling interface with a standard ray casting method using hand gestures. The results demonstrate that the Whirling Interface consistently achieves high success rates, especially for distant targets, averaging 95.58% with a completion time of 5.58 seconds. Notably, it requires a smaller camera sensing field of view of only 21.45° horizontally and 24.7° vertically. Participants reported lower workloads on distant conditions and expressed a higher preference for the Whirling Interface in general. These findings suggest that the Whirling Interface could be a useful alternative input method for XR displays with a small camera sensing FOV or when interacting with small targets. https://ieeexplore.ieee.org/abstract/document/10765156…
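Motion-matching selection generally works by correlating the user's continuous motion with the distinct periodic motion of each candidate target and selecting the best match. A minimal sketch of that core idea on synthetic wrist-angle data; the target frequencies, window length, and correlation threshold are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def motion_matching_select(user_signal, target_signals, threshold=0.8):
    """Return (index, score) of the target whose motion best correlates with
    the user's wrist motion over the current window, or (None, score) if no
    target passes the threshold."""
    scores = [np.corrcoef(user_signal, sig)[0, 1] for sig in target_signals]
    best = int(np.argmax(scores))
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Synthetic example: each target oscillates at its own frequency; the user
# whirls their wrist at ~1.5 Hz, matching target index 1.
t = np.linspace(0, 2, 120)                       # 2 s window at 60 Hz
targets = [np.sin(2 * np.pi * f * t) for f in (1.0, 1.5, 2.0)]
user = np.sin(2 * np.pi * 1.5 * t + 0.2) + 0.1 * np.random.randn(t.size)
print(motion_matching_select(user, targets))
```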
Z. Chang et al ., "Perceived Empathy in Mixed Reality: Assessing the Impact of Empathic Agents’ Awareness of User Physiological States," 2024 IEEE International Symposium on Mixed and Augmented Reality (ISMAR) , Bellevue, WA, USA, 2024, pp. 406-415, doi: 10.1109/ISMAR62088.2024.00055. https://doi.org/10.1109/ISMAR62088.2024.00055 In human-agent interaction, establishing trust and a social bond with the agent is crucial to improving communication quality and performance in collaborative tasks. This paper investigates how a Mixed Reality Agent’s (MiRA) ability to acknowledge a user’s physiological state affects perceptions such as empathy, social connectedness, presence, and trust. In a within-subject study with 24 subjects, we varied the companion agent’s awareness during a mixed-reality first-person shooting game. Three agents provided feedback based on the users’ physiological states: (1) No Awareness Agent (NAA), which did not acknowledge the user’s physiological state; (2) Random Awareness Agent (RAA), offering feedback with varying accuracy; and (3) Accurate Awareness Agent (AAA), which provided consistently accurate feedback. Subjects reported higher scores on perceived empathy, social connectedness, presence, and trust with AAA compared to RAA and NAA. Interestingly, despite exceeding NAA in perception scores, RAA was the least favored as a companion. The findings and implications for the design of MiRA interfaces are discussed, along with the limitations of the study and directions for future work. https://ieeexplore.ieee.org/document/10765390…
Uğur Genç and Himanshu Verma. 2024. Situating Empathy in HCI/CSCW: A Scoping Review. Proc. ACM Hum.-Comput. Interact. 8, CSCW2, Article 513 (November 2024), 37 pages. https://doi.org/10.1145/3687052 Empathy is considered a crucial construct within HCI and CSCW, yet our understanding of this complex concept remains fragmented and lacks consensus in existing research. In this scoping review of 121 articles from the ACM Digital Library, we synthesize the diverse perspectives on empathy and scrutinize its current conceptualization and operationalization. In particular, we examine the various interpretations and definitions of empathy, its applications, and the methodologies, findings, and trends in the field. Our analysis reveals a lack of consensus on the definitions and theoretical underpinnings of empathy, with interpretations ranging from understanding the experiences of others to an affective response to the other's situation. We observed that despite the variety of methods used to gauge empathy, the predominant approach remains self-assessed instruments, highlighting the lack of novel and rigorously established and validated measures and methods to capture the multifaceted manifestations of empathy. Furthermore, our analysis shows that previous studies have used a variety of approaches to elicit empathy, such as experiential methods and situational awareness. These approaches have demonstrated that shared stressful experiences promote community support and relief, while situational awareness promotes empathy through increased helping behavior. Finally, we discuss a) the potential and drawbacks of leveraging empathy to shape interactions and guide design practices, b) the need to find a balance between the collective focus of empathy and the (existing and dominant) focus on the individual, and c) the careful testing of empathic designs and technologies with real-world applications. https://dl.acm.org/doi/10.1145/3687052…
Isna Alfi Bustoni, Mark McGill, and Stephen Anthony Brewster. 2024. Exploring the Alteration and Masking of Everyday Noise Sounds using Auditory Augmented Reality. In Proceedings of the 26th International Conference on Multimodal Interaction (ICMI '24). Association for Computing Machinery, New York, NY, USA, 154–163. https://doi.org/10.1145/3678957.3685750 While noise-cancelling headphones can block out or mask environmental noise with digital sound, this costs the user situational awareness and information. With the advancement of acoustically transparent personal audio devices (e.g. headphones, open-ear audio frames), Auditory Augmented Reality (AAR), and real-time audio processing, it is feasible to preserve user situational awareness and relevant information whilst diminishing the perception of the noise. Through an online survey (n=124), this research explored users’ attitudes and preferred AAR strategy (keep the noise, make the noise more pleasant, obscure the noise, reduce the noise, remove the noise, and replace the noise) toward different types of noises from a range of categories (living beings, mechanical, and environmental) and varying degrees of relevance. It was discovered that respondents’ degrees of annoyance varied according to the kind of noise and its relevance to them. Additionally, respondents had a strong tendency to reduce irrelevant noise and retain more relevant noise. Based on our findings, we discuss how AAR can assist users in coping with noise whilst retaining relevant information through selectively suppressing or altering the noise, as appropriate. https://dl.acm.org/doi/10.1145/3678957.3685750…
Pratheep Kumar Chelladurai, Ziming Li, Maximilian Weber, Tae Oh, and Roshan L Peiris. 2024. SoundHapticVR: Head-Based Spatial Haptic Feedback for Accessible Sounds in Virtual Reality for Deaf and Hard of Hearing Users. In Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '24). Association for Computing Machinery, New York, NY, USA, Article 31, 1–17. https://doi.org/10.1145/3663548.3675639 Virtual Reality (VR) systems use immersive spatial audio to convey critical information, but these audio cues are often inaccessible to Deaf or Hard-of-Hearing (DHH) individuals. To address this, we developed SoundHapticVR, a head-based haptic system that converts audio signals into haptic feedback using multi-channel acoustic haptic actuators. We evaluated SoundHapticVR through three studies: determining the maximum tactile frequency threshold on different head regions for DHH users, identifying the ideal number and arrangement of transducers for sound localization, and assessing participants’ ability to differentiate sound sources with haptic patterns. Findings indicate that tactile perception thresholds vary across head regions, necessitating consistent frequency equalization. Adding a front transducer significantly improved sound localization, and participants could correlate distinct haptic patterns with specific objects. Overall, this system has the potential to make VR applications more accessible to DHH users. https://dl.acm.org/doi/10.1145/3663548.3675639…
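One straightforward way to map a spatialized sound source onto several head-mounted transducers is to weight each transducer by how closely its mounting direction aligns with the source direction. A minimal panning-style sketch, assuming four transducers (front, back, left, right); this is an illustrative mapping only, not SoundHapticVR's published algorithm or layout.

```python
import numpy as np

# Assumed transducer layout around the head (unit vectors in the horizontal plane).
TRANSDUCERS = {
    "front": np.array([0.0, 1.0]),
    "right": np.array([1.0, 0.0]),
    "back":  np.array([0.0, -1.0]),
    "left":  np.array([-1.0, 0.0]),
}

def haptic_gains(source_azimuth_deg, amplitude=1.0):
    """Distribute a sound's amplitude over head transducers by angular proximity."""
    theta = np.radians(source_azimuth_deg)
    src = np.array([np.sin(theta), np.cos(theta)])  # 0 deg = straight ahead
    gains = {}
    for name, direction in TRANSDUCERS.items():
        alignment = max(0.0, float(np.dot(src, direction)))  # ignore the opposite side
        gains[name] = amplitude * alignment
    return gains

print(haptic_gains(45))  # a source ahead-right drives mostly the front and right transducers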
Giulia Barbareschi, Ando Ryoichi, Midori Kawaguchi, Minato Takeda, and Kouta Minamizawa. 2024. SeaHare: An omnidirectional electric wheelchair integrating independent, remote and shared control modalities. In Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '24). Association for Computing Machinery, New York, NY, USA, Article 9, 1–16. https://doi.org/10.1145/3663548.3675657 Depending on one's needs, electric wheelchairs can feature different interfaces and driving paradigms, with control handed to the user, a remote pilot, or shared. However, these systems have generally been implemented on separate wheelchairs, making comparison difficult. We present the design of an omnidirectional electric wheelchair that can be controlled using two sensing seats detecting changes in the centre of gravity. One of the sensing seats is used by the person on the wheelchair, whereas the other is used as a remote control by a second person. We explore the use of the wheelchair under different control paradigms (independent, remote, and shared) from both the wheelchair and the remote-control seat with 5 dyads and 1 triad of participants, including wheelchair users and non-users. Results highlight key advantages and disadvantages of the SeaHare in different paradigms, with participants' perceptions affected by their skills and lived experiences, and reflections on how different control modes might suit different scenarios. https://dl.acm.org/doi/10.1145/3663548.3675657…
Giulia Barbareschi, Songchen Zhou, Ando Ryoichi, Midori Kawaguchi, Mark Armstrong, Mikito Ogino, Shunsuke Aoiki, Eisaku Ohta, Harunobu Taguchi, Youichi Kamiyama, Masatane Muto, Kentaro Yoshifuji, and Kouta Minamizawa. 2024. Brain Body Jockey project: Transcending Bodily Limitations in Live Performance via Human Augmentation. In Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '24). Association for Computing Machinery, New York, NY, USA, Article 18, 1–14. https://doi.org/10.1145/3663548.3675621 Musicians with significant mobility limitations face unique challenges in using their bodies to interact with fans during live performances. In this paper we present the results of a collaboration between a professional DJ with advanced Amyotrophic Lateral Sclerosis and a group of technologists and researchers, culminating in two public live performances that leveraged human augmentation technologies to enhance the artist's stage presence. Our system combines a Brain-Machine Interface and an accelerometer-based trigger to select pre-programmed moves performed by robotic arms during a live event, as well as to facilitate direct physical interaction during a “Meet the DJ” event. Our evaluation includes ethnographic observations and interviews with the artist and members of the audience. Results show that the system allowed artist and audience to feel a sense of unity, expanded the imagination of creative possibilities, and challenged conventional perceptions of disability in the arts and beyond. https://dl.acm.org/doi/10.1145/3663548.3675621…
F. Chiossi, I. Trautmannsheimer, C. Ou, U. Gruenefeld and S. Mayer, "Searching Across Realities: Investigating ERPs and Eye-Tracking Correlates of Visual Search in Mixed Reality," in IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 11, pp. 6997-7007, Nov. 2024, doi: 10.1109/TVCG.2024.3456172. Mixed Reality allows us to integrate virtual and physical content into users' environments seamlessly. Yet, how this fusion affects perceptual and cognitive resources and our ability to find virtual or physical objects remains uncertain. Displaying virtual and physical information simultaneously might lead to divided attention and increased visual complexity, impacting users' visual processing, performance, and workload. In a visual search task, we asked participants to locate virtual and physical objects in Augmented Reality and Augmented Virtuality to understand the effects on performance. We evaluated search efficiency and attention allocation for virtual and physical objects using event-related potentials, fixation and saccade metrics, and behavioral measures. We found that users were more efficient in identifying objects in Augmented Virtuality, while virtual objects gained saliency in Augmented Virtuality. This suggests that visual fidelity might increase the perceptual load of the scene. Reduced amplitude in distractor positivity ERP, and fixation patterns supported improved distractor suppression and search efficiency in Augmented Virtuality. We discuss design implications for mixed reality adaptive systems based on physiological inputs for interaction. https://ieeexplore.ieee.org/document/10679197…
S. Cheng, Y. Liu, Y. Gao and Z. Dong, "“As if it were my own hand”: inducing the rubber hand illusion through virtual reality for motor imagery enhancement," in IEEE Transactions on Visualization and Computer Graphics , vol. 30, no. 11, pp. 7086-7096, Nov. 2024, doi: 10.1109/TVCG.2024.3456147 Brain-computer interfaces (BCI) are widely used in the field of disability assistance and rehabilitation, and virtual reality (VR) is increasingly used for visual guidance of BCI-MI (motor imagery). Therefore, how to improve the quality of electroencephalogram (EEG) signals for MI in VR has emerged as a critical issue. People can perform MI more easily when they visualize the hand used for visual guidance as their own, and the Rubber Hand Illusion (RHI) can increase people's ownership of the prosthetic hand. We proposed to induce RHI in VR to enhance participants' MI ability and designed five methods of inducing RHI, namely active movement, haptic stimulation, passive movement, active movement mixed with haptic stimulation, and passive movement mixed with haptic stimulation, respectively. We constructed a first-person training scenario to train participants' MI ability through the five induction methods. The experimental results showed that through the training, the participants' feeling of ownership of the virtual hand in VR was enhanced, and the MI ability was improved. Among them, the method of mixing active movement and tactile stimulation proved to have a good effect on enhancing MI. Finally, we developed a BCI system in VR utilizing the above training method, and the performance of the participants improved after the training. This also suggests that our proposed method is promising for future application in BCI rehabilitation systems. https://ieeexplore.ieee.org/document/10669780…
Pavel Manakhov, Ludwig Sidenmark, Ken Pfeuffer, and Hans Gellersen. 2024. Filtering on the Go: Effect of Filters on Gaze Pointing Accuracy During Physical Locomotion in Extended Reality. IEEE Transactions on Visualization and Computer Graphics 30, 11 (Nov. 2024), 7234–7244. https://doi.org/10.1109/TVCG.2024.3456153 Eye tracking filters have been shown to improve accuracy of gaze estimation and input for stationary settings. However, their effectiveness during physical movement remains underexplored. In this work, we compare common online filters in the context of physical locomotion in extended reality and propose alterations to improve them for on-the-go settings. We conducted a computational experiment where we simulate performance of the online filters using data on participants attending visual targets located in world-, path-, and two head-based reference frames while standing, walking, and jogging. Our results provide insights into the filters' effectiveness and factors that affect it, such as the amount of noise caused by locomotion and differences in compensatory eye movements, and demonstrate that filters with saccade detection prove most useful for on-the-go settings. We discuss the implications of our findings and conclude with guidance on gaze data filtering for interaction in extended reality. https://ieeexplore.ieee.org/document/10672561…
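The finding that "filters with saccade detection prove most useful" suggests smoothing gaze only during fixations and letting saccades pass through unsmoothed. A minimal sketch of that idea: exponential smoothing that resets whenever angular velocity exceeds a saccade threshold. The smoothing factor, sampling rate, and velocity threshold are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def filter_gaze(gaze_deg, fs=120.0, saccade_vel_deg_s=100.0, alpha=0.2):
    """Exponentially smooth a 2D gaze trace (in degrees), but follow the raw
    signal on samples whose angular velocity exceeds the saccade threshold so
    that saccades are not smeared by the filter."""
    gaze_deg = np.asarray(gaze_deg, dtype=float)
    out = np.empty_like(gaze_deg)
    out[0] = gaze_deg[0]
    for i in range(1, len(gaze_deg)):
        velocity = np.linalg.norm(gaze_deg[i] - gaze_deg[i - 1]) * fs
        if velocity > saccade_vel_deg_s:
            out[i] = gaze_deg[i]                                     # saccade: pass through
        else:
            out[i] = alpha * gaze_deg[i] + (1 - alpha) * out[i - 1]  # fixation: smooth
    return out

# Noisy fixation, a saccade, then another fixation 15 degrees to the right.
trace = np.vstack([np.random.randn(60, 2) * 0.3,
                   np.random.randn(60, 2) * 0.3 + [15, 0]])
print(filter_gaze(trace)[-1])
```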
Nicholas Jennings, Han Wang, Isabel Li, James Smith, and Bjoern Hartmann. 2024. What's the Game, then? Opportunities and Challenges for Runtime Behavior Generation. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (UIST '24). Association for Computing Machinery, New York, NY, USA, Article 106, 1–13. https://doi.org/10.1145/3654777.3676358 Procedural content generation (PCG), the process of algorithmically creating game components instead of manually, has been a common tool of game development for decades. Recent advances in large language models (LLMs) enable the generation of game behaviors based on player input at runtime. Such code generation brings with it the possibility of entirely new gameplay interactions that may be difficult to integrate with typical game development workflows. We explore these implications through GROMIT, a novel LLM-based runtime behavior generation system for Unity. When triggered by a player action, GROMIT generates a relevant behavior which is compiled without developer intervention and incorporated into the game. We create three demonstration scenarios with GROMIT to investigate how such a technology might be used in game development. In a system evaluation we find that our implementation is able to produce behaviors that result in significant downstream impacts to gameplay. We then conduct an interview study with n=13 game developers using GROMIT as a probe to elicit their current opinion on runtime behavior generation tools, and enumerate the specific themes curtailing the wider use of such tools. We find that the main themes of concern are quality considerations, community expectations, and fit with developer workflows, and that several of the subthemes are unique to runtime behavior generation specifically. We outline a future work agenda to address these concerns, including the need for additional guardrail systems for behavior generation. https://dl.acm.org/doi/10.1145/3654777.3676358…
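GROMIT compiles LLM-generated behaviors at runtime without developer intervention. A language-agnostic sketch of that pattern in Python (the paper's system targets Unity): a player event becomes a prompt, the returned source is checked by a guardrail and compiled at runtime, and the resulting function is attached to the game. The `request_behavior_code` stub stands in for a real LLM call and simply returns canned code here; the guardrail is intentionally naive.

```python
def request_behavior_code(event_description: str) -> str:
    """Stand-in for an LLM call that returns game-behavior source code."""
    return (
        "def on_trigger(entity):\n"
        "    # Generated behavior: bounce the entity upward.\n"
        "    entity['velocity_y'] = 5.0\n"
    )

def passes_guardrails(source: str) -> bool:
    """Very naive guardrail: reject code that touches the filesystem or network."""
    banned = ("import os", "import socket", "open(", "__import__")
    return not any(token in source for token in banned)

def install_behavior(event_description: str):
    source = request_behavior_code(event_description)
    if not passes_guardrails(source):
        raise ValueError("generated behavior rejected by guardrails")
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)  # compile at runtime
    return namespace["on_trigger"]

player_entity = {"velocity_y": 0.0}
behavior = install_behavior("player hit the mushroom")
behavior(player_entity)
print(player_entity)  # {'velocity_y': 5.0}
```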
We use cross-modal correspondence (the interaction between two or more sensory modalities) to create an engaging user experience. We present atmoSphere, a system that provides users with immersive music experiences using spatial audio and haptic feedback. We focused on the cross-modality of auditory and haptic sensations to augment the sound environment. The atmoSphere consists of spatialized music and a sphere-shaped device that provides haptic feedback. It gives users the impression of a large sound environment even though the haptic sensation is felt only in their hands. First user feedback is very encouraging. According to participants, atmoSphere creates an engaging experience. https://dl.acm.org/doi/10.1145/3084822.3084845…
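A common way to couple music to a handheld haptic device, and a plausible reading of the audio–haptic pairing described above (the actual atmoSphere mapping is not specified in this summary), is to follow the low-frequency amplitude envelope of the audio and use it to modulate vibration intensity. A minimal sketch, assuming a mono signal at 44.1 kHz; the cutoff and frame size are illustrative.

```python
import numpy as np

def haptic_envelope(audio, fs=44100, cutoff_hz=120.0, frame_ms=10):
    """Extract the low-frequency content of an audio signal and return a
    frame-rate envelope (0..1) suitable for driving a vibrotactile actuator."""
    # Crude low-pass: moving average whose length matches the cutoff period.
    k = int(fs / cutoff_hz)
    low = np.convolve(audio, np.ones(k) / k, mode="same")
    # RMS per frame.
    frame = int(fs * frame_ms / 1000)
    n_frames = len(low) // frame
    rms = np.sqrt(np.mean(low[: n_frames * frame].reshape(n_frames, frame) ** 2, axis=1))
    return rms / (rms.max() + 1e-9)

# One second of a 60 Hz "bass" tone plus noise.
t = np.arange(44100) / 44100
audio = 0.8 * np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.randn(t.size)
print(haptic_envelope(audio)[:5])
```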
Navigating in a natural way in augmented reality (AR) and virtual reality (VR) spaces is a major challenge. To this end, we present ArmSwingVR, a locomotion solution for AR/VR spaces that preserves immersion while having a lower profile than current solutions, particularly walking-in-place (WIP) methods. The user simply swings their arms naturally to navigate in the direction the arms are swung, without any foot or head movement. The benefits of ArmSwingVR are that arm swinging feels natural to bipedal organisms, second only to leg movement; no additional peripherals or sensors are required; swinging our arms is less obtrusive than WIP methods; and it requires less energy, allowing prolonged use in AR/VR. A user study found that our method does not sacrifice immersion while also being lower profile and consuming less energy compared to WIP. https://dl.acm.org/doi/10.1145/3152832.3152864…
This paper presents a new approach to implement wearable haptic devices using Shape Memory Alloy (SMA) wires. The proposed concept allows building silent, soft, flexible and lightweight wearable devices, capable of producing the sense of pressure on the skin without any bulky mechanical actuators. We explore possible design considerations and applications for such devices, present user studies proving the feasibility of delivering meaningful information and use nonlinear autoregressive neural networks to compensate for SMA inherent drawbacks, such as delayed onset, enabling us to characterize and predict the physical behavior of the device. https://dl.acm.org/doi/10.1145/3267242.3267257…
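The abstract mentions nonlinear autoregressive neural networks used to characterize and predict the SMA's delayed, history-dependent response. A minimal NARX-style sketch, assuming logged drive commands and measured pressure values and using scikit-learn's MLPRegressor on lagged features; the synthetic data, lag depth, and network size are illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(drive, pressure, lags=5):
    """Build NARX-style features: past drive commands and past measured outputs."""
    X, y = [], []
    for i in range(lags, len(drive)):
        X.append(np.concatenate([drive[i - lags:i], pressure[i - lags:i]]))
        y.append(pressure[i])
    return np.array(X), np.array(y)

# Synthetic stand-in for a real SMA actuation log: pressure responds to drive
# with lag and saturation.
rng = np.random.default_rng(0)
drive = rng.uniform(0, 1, 2000)
pressure = np.zeros_like(drive)
for i in range(1, len(drive)):
    pressure[i] = 0.9 * pressure[i - 1] + 0.1 * np.tanh(2 * drive[i - 1])

X, y = make_lagged(drive, pressure)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
print("one-step prediction error:", np.mean(np.abs(model.predict(X) - y)))
```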
As the population ages, many will acquire visual impairments. To improve design for these users, it is essential to build awareness of their perspective during everyday routines, especially for design students. Although several visual impairment simulation toolkits exist in both academia and as commercial products, analog and static visual impairment simulation tools do not simulate effects related to the user's eye movements. Meanwhile, VR and video see-through-based AR simulation methods are constrained by smaller fields of view than the natural human visual field and also suffer from vergence-accommodation conflict (VAC), which correlates with visual fatigue, headache, and dizziness. In this paper, we enable an on-the-go, VAC-free, visually impaired experience by leveraging our optical see-through glasses. The FOV of our glasses is approximately 160 degrees horizontally and 140 degrees vertically, and participants can experience both loss of central vision and loss of peripheral vision at different severities. Our evaluation (n=14) indicates that the glasses can significantly and effectively reduce visual acuity and visual field without causing typical motion sickness symptoms such as headaches or visual fatigue. Questionnaires and qualitative feedback also showed how the glasses helped to increase participants' awareness of visual impairment. https://dl.acm.org/doi/10.1145/3526113.3545687…
What we wear (our clothes and wearable accessories) can represent our mood at the moment. We developed Emolleia to explore how to make aesthetic wearables more expressive, so they can become a novel form of non-verbal communication for expressing our emotional feelings. Emolleia is an open wearable kinetic display in the form of three 3D-printed flowers that can dynamically open and close at different speeds. With our open-source platform, users can define their own animated motions. In this paper, we describe the prototype design, hardware considerations, and user surveys (n=50) to evaluate the expressiveness of 8 pre-defined animated motions of Emolleia. Our initial results showed that animated motions are feasible for communicating different emotional feelings, especially along the valence and arousal dimensions. Based on the findings, we mapped the eight pre-defined animated motions to the reported, user-perceived valence, arousal, and dominance and discuss possible directions for future work. https://dl.acm.org/doi/10.1145/3490149.3505581…
Robotic avatars can help disabled people extend their reach in interacting with the world. Technological advances make it possible for individuals to embody multiple avatars simultaneously. However, existing studies have been limited to laboratory conditions and did not involve disabled participants. In this paper, we present a real-world implementation of a parallel control system allowing disabled workers in a café to embody multiple robotic avatars at the same time to carry out different tasks. Our data corpus comprises semi-structured interviews with workers, customer surveys, and videos of café operations. Results indicate that the system increases workers’ agency, enabling them to better manage customer journeys. Parallel embodiment and transitions between avatars create multiple interaction loops where the links between disabled workers and customers remain consistent, but the intermediary avatar changes. Based on our observations, we theorize that disabled individuals possess specific competencies that increase their ability to manage multiple avatar bodies. https://dl.acm.org/doi/10.1145/3544548.3581124…
Wheelchair dance is an important form of disability art that is still subject to significant levels of ableism and art exclusion. Wheelchair dancers face challenges finding teachers and choreographers who can accommodate their needs, and documenting and sharing choreographies that suit their body shapes and their assistive technologies. In turn, this hinders their ability to share creative expressions. Accessible resources and communication tools could help address these challenges. The goal of this research is the development of a visualization system grounded in Laban Movement Analysis (LMA) that notates movement quality while opening new horizons on perceptions of disabled bodies and the artistic legitimacy of wheelchair dance. The system uses video to identify the body landmarks of the dancer and wheelchair and extracts key features to create visualizations of expressive qualities from LMA basic effort. The current evaluation includes a pilot study with the general public and an online questionnaire targeting professionals to gain feedback supporting practical implementation and real-world deployment. Results from the general public evaluation showed that the visualization was effective in conveying basic effort movement qualities even to a novice audience. Experts consulted via the questionnaire stated that the tool could be employed for reflective evaluation, as well as performance augmentation. The LMA visualization tool can support the artistic legitimization of wheelchair dancing through education, communication, performance, and documentation. https://dl.acm.org/doi/10.1145/3597628…
In this paper, we propose a method for utilizing musical artifacts and physiological data as a means for creating a new form of live music experience that is rooted in the physiology of the performers and audience members. By utilizing physiological data (namely Electrodermal Activity (EDA) and Heart Rate Variability (HRV)) and applying this data to musical artifacts including a robotic koto (a traditional 13-string Japanese instrument fitted with solenoids and linear actuators), a Eurorack synthesizer, and Max/MSP software, we aim to develop a new form of semi-improvisational and significantly indeterminate performance practice. The approach has since evolved into a multi-modal methodology that honors improvisational performance practices and utilizes physiological data, offering both performers and audiences an ever-changing and intimate experience. In our first exploratory phase, we focused on developing a means of controlling a bespoke robotic koto in conjunction with a Eurorack synthesizer system and Max/MSP software for handling the incoming data. We integrated a reliance on physiological data to infuse a more directly human element into this artifact system. This allows a significant portion of the decision-making to be directly controlled by the incoming physiological data in real time, thereby affording a sense of performativity within this non-living system. Our aim is to continue the development of this method to strike a novel balance between intentionality and impromptu performative results. https://dl.acm.org/doi/10.1145/3623509.3633356…
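As a concrete example of how physiological data could steer a musical system like the one described above, a standard short-term HRV feature is RMSSD computed from successive RR intervals, which can then be scaled into a control value (e.g. a modulation depth) for a synthesizer or Max/MSP patch. A minimal sketch; the mapping range is an illustrative assumption, not the authors' patch.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (ms),
    a standard short-term HRV measure."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def hrv_to_control(rr_intervals_ms, lo=10.0, hi=80.0):
    """Map RMSSD into a 0..1 control value for a synth parameter."""
    value = (rmssd(rr_intervals_ms) - lo) / (hi - lo)
    return float(np.clip(value, 0.0, 1.0))

rr = [812, 790, 845, 830, 802, 860, 815]   # example RR intervals in milliseconds
print(rmssd(rr), hrv_to_control(rr))
```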
Running and jogging are popular activities for many visually impaired individuals thanks to the relatively low entry barriers. Research in HCI and beyond has focused primarily on leveraging technology to enable visually impaired people to run independently. However, depending on their residual vision and personal preferences, many choose to run with a sighted guide. This study presents a comprehensive analysis of the partnership between visually impaired runners and sighted guides. Using a combination of interaction and thematic analysis on video and interview data from 6 pairs of runners and guides, we unpack the complexity and directionality of three layers of vocal communication (directive, contextual, and recreational) and distinguish between intentional and unintentional corporeal communication. Building on the understanding of the importance of synchrony, we also present some exploratory data examining physiological synchrony between 2 pairs of runners with different levels of experience and articulate recommendations for the HCI community. https://dl.acm.org/doi/10.1145/3613904.3642388…
Detecting interpersonal synchrony in the wild through ubiquitous wearable sensing invites promising new social insights as well as the possibility of new interactions between humans and humans, and between humans and agents. We present the Offset-Adjusted SImilarity Score (OASIS), a real-time method of detecting similarity, which we show working on visual detection of Duchenne smiles between a pair of users. We conduct a user study survey (N = 27) to measure a user-based interoperability score on smile similarity and compare the user score with OASIS as well as with the rolling-window Pearson correlation and the Dynamic Time Warping (DTW) method. Ultimately, our results indicate that our algorithm has intrinsic qualities comparable to the user score and compares well with the statistical correlation methods. It takes the temporal offset between the input signals into account, with the added benefit of being an algorithm that can be adapted to run in real time with less computational intensity than traditional time-series correlation methods. https://dl.acm.org/doi/10.1145/3544549.3585709…
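The summary describes a similarity measure that accounts for a temporal offset between two users' signals. A minimal sketch of that general idea (not the published OASIS algorithm): find the lag that maximizes correlation within a bounded window, then report the Pearson correlation at that lag.

```python
import numpy as np

def offset_adjusted_similarity(a, b, max_lag=30):
    """Pearson correlation between a and b after shifting b by the lag (within
    +/- max_lag samples) that best aligns the two signals."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    best_lag, best_r = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[: len(b) - lag]
        else:
            x, y = a[: len(a) + lag], b[-lag:]
        if len(x) < 2:
            continue
        r = np.corrcoef(x, y)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# Two smile-intensity traces where one user reacts about 10 samples later.
t = np.linspace(0, 6, 180)
lead = np.exp(-((t - 3) ** 2))
follow = np.roll(lead, 10) + 0.05 * np.random.randn(t.size)
print(offset_adjusted_similarity(lead, follow, max_lag=20))
```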
The use of wearable sensor technology opens up exciting avenues for both art and HCI research. To be effective, such work requires close collaboration between performers and researchers. In this article, we report on the co-design process and research insights from our work integrating physiological sensing and live performance. https://dl.acm.org/doi/10.1145/3557887…