What if AI assistants like Siri, Cortana, or Alexa had a human face? What would they look like? In an episode of The Big Bang Theory, Rajesh daydreams about what it would be like to date Siri in person.

Rajesh’s example, exaggerated as it is, reveals that humans are wired to look for a physical, visual character “behind” a voice or text during conversation. This is because, among all our sensory channels (sight, touch, hearing, smell, and taste), we rely most heavily on visual input. One neurological finding states that roughly 30% of the neurons in our brain are devoted to sight, compared with only 8% for touch and 2% for hearing (Grady, 1993). Moreover, more than 60% of the brain is involved with vision in some way, including neurons that handle vision together with touch, motor control, attention, and spatial navigation (Goldstein, 2013).

Consciously or not, visual cues help us interact with others more smoothly and enjoyably. This holds true for human-to-human as well as human-to-robot interaction. In a 2013 IEEE study, a fluffy dog tail was attached to a Roomba vacuum robot to signal its working status through different motion patterns. For instance, when the robot was cleaning a room smoothly, the tail wagged to indicate its happiness. Participants in the study found it easier to understand what the robot was “feeling” and were amused by it.

Similarly, an AI assistant with a physical body could convey much more information than today’s voice-only agents. Its facial expressions could show emotions, gestures such as a simple nod could provide feedback, and movements such as pointing in a direction could suggest what to do next. We also become more engaged in a conversation when we can see the other party: people spend more than 30% of the time trying to make eye contact during face-to-face interactions (Schwind & Jäger, 2016).

Adding visual cues can aid communication with robots: a Roomba with a fluffy dog tail.

ObEN’s artificial intelligence technology quickly and easily creates lively personal avatars that look and talk like you from a single selfie and a brief voice recording. Imagine listening to songs recommended by Adele’s avatar, or catching up on today’s NBA news reported by LeBron James’ avatar. Advances in AI technology are making this possible. As depicted in an episode of Black Mirror, you could even have yourself as the AI assistant who knows and understands you best.

In the not-so-distant future, our personalized avatars will be able to interact with each other in mobile, virtual reality, and augmented reality worlds, and speak the famous line from the blockbuster movie Avatar (2009) to our loved ones: “I see you.”

About the Author: Jackie is a researcher on ObEN’s computer vision team.

A digital self who knows you best as your AI assistant. From Black Mirror.

ObEN’s proprietary artificial intelligence technology simply combines a person’s 2D image and voice to create a unique 3D avatar. Bring your own avatar into virtual and augmented reality environments for deeper, more immersive social experiences. ObEN, an HTC VIVE X portfolio company, was founded in 2014 and is based at Idealab, a leading technology incubator in Pasadena, California.