PoV – The Final Project

It all started after moving to New York. Being away from my family and friends, and using many kinds of communication apps and devices, made me think about long-distance conversations and realize that there are still so many missing aspects that could make the experience feel more natural and engaging. One of those is the use of space. I started looking for ways to let people walk and, more importantly, look around while they are engaged in a conversation. The ability to move freely in a different space, somewhere around the world, can make people feel they are actually there, free of the annoying limitations of holding the device and seeing the other side from a point of view decided by the person on the other end. Being able to look and walk around can keep us engaged in a wider and longer conversation with more than one person.

When I started working on the project, I suddenly noticed the bigger picture. I realized that I am actually trying to give the user a different point of view of the world. I then decided to change the name of the project to PoV. What is PoV?

PoV allows us to enter and act in a different body, but unlike other technologies that let us do that in a virtual environment, PoV gives us a different body in the real world. PoV works on both sides: the puppet, or avatar, and the puppeteer, or user. All the user needs is a smartphone and a cheap VR headset, and he is ready to go.

I put a lot of effort into the shape of the puppet. At first, I wanted to create a human-sized avatar to make the experience feel as close as possible to being an average person. In the end, I decided to prototype with a wooden manikin, as its material and shape are unique and noticeable, and by that inherently provoke a discussion about the way we see the world.

I believe that we perceive ourselves through the way the world sees us. Whether we are humans, animals, or any other creature, the way our surroundings look at and react to us shapes the way we see ourselves. This product can be applied to any other form or shape, allowing us to experience the world as any creature we would like.

When I started presenting the product at its different stages, people referred to it as a robot. I found myself explaining again and again that this body has no control over itself, and as such I would rather call it a puppet and not a robot, creature, or any other word that describes an independent entity in our world. Still, those conversations made me think I am on the right track: people perceive it as an entity of its own, which reassures me that the people interacting with it in the future will make the user feel as if he is truly experiencing the world from a different point of view.

Physical Components

I started with a very vague idea of how to implement all the features I thought the prototype should have. After experimenting with Arduino throughout the semester, I attached three motors to my board: one 180-degree servo motor to control the head movements and two continuous-rotation servo motors to drive the manikin around. I tried three different motor models until I found one that was reliable enough for the project's requirements.

The Balancing Box

The other important feature was the video camera, which provides a live video feed of the world the manikin “sees”. I bought a cheap security camera and started disassembling it and changing its code, but after working with it I decided that its latency was way too high for the user to feel immersed. Instead, I connected the Arduino to a Raspberry Pi that serves as “the brain” of the product. I used the Raspberry Pi camera module and connected it directly to the Pi, alongside the Arduino. The Pi runs a Node.js server which controls all the components.

[Breadboard diagram: pov_bb]

Coding

I started coding the server side. I learned how to work with Node.js (thanks to Dror Ayalon) and used the serialport npm package to transfer data to the Arduino and the socket.io npm package to receive data from the controlling user (a.k.a. the client).
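As a rough illustration, a minimal version of that relay could look like the sketch below. The serial path, baud rate, event name, and command format are assumptions for the example, not the exact protocol I used.

```js
// server.js - a minimal sketch of the relay between the client and the Arduino.
const http = require('http').createServer();
const io = require('socket.io')(http);
const SerialPort = require('serialport');

// Serial path and baud rate are assumptions; adjust them to your Arduino.
const arduino = new SerialPort('/dev/ttyACM0', { baudRate: 9600 });

io.on('connection', (socket) => {
  // The phone emits its orientation; forward it to the Arduino
  // as a simple comma-separated command.
  socket.on('orientation', (data) => {
    arduino.write(`${data.pan},${data.tilt}\n`);
  });
});

http.listen(8080);
```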

The client side was pretty straightforward. I used the p5.js library to extract the rotation data from the user’s mobile device and transfer it to the server in real time.
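Something along these lines, using the rotation values p5.js exposes on mobile; the event name and axis mapping are illustrative:

```js
// sketch.js - a minimal sketch of the client side.
// Assumes the socket.io client script is loaded on the page.
let socket;

function setup() {
  createCanvas(windowWidth, windowHeight);
  socket = io(); // connect back to the Node.js server
}

function draw() {
  // p5.js exposes the phone's orientation as rotationX/Y/Z (in degrees).
  // Which axis maps to pan vs. tilt depends on how the phone sits in the headset.
  socket.emit('orientation', {
    pan: rotationZ,
    tilt: rotationX,
  });
}
```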

Dealing with video on the Raspberry Pi was a struggle, as the device cannot handle high-quality video streaming properly. After trying several applications and methods, I came across RaspiMJpeg, which essentially takes about 30 JPEG pictures per second and streams them to the client one by one, making it feel like a smooth video stream.
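In practice, that means the server only ever has to hand the browser the most recent frame. Below is a sketch of that hand-off; the frame path (/dev/shm/mjpeg/cam.jpg) and the route are assumptions based on a common RaspiMJpeg setup, not necessarily my exact configuration.

```js
// cam-server.js - serving the latest JPEG frame that RaspiMJpeg writes to disk.
const fs = require('fs');
const http = require('http');

http.createServer((req, res) => {
  if (req.url.startsWith('/cam.jpg')) {
    // Serve whatever frame RaspiMJpeg wrote last.
    res.writeHead(200, { 'Content-Type': 'image/jpeg' });
    fs.createReadStream('/dev/shm/mjpeg/cam.jpg').pipe(res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8081);
```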

While coding the client side, I kept in mind the experience I believe the user should have. I duplicated the “video stream” and faked a stereoscopic view, as I wanted the user to wear a headset that isolates him from his surroundings and correlates his head movements with his smartphone, allowing the puppet to imitate them.
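The duplication itself is simple: fetch the latest frame and draw it twice, side by side, once per eye. A sketch of that idea in p5.js follows; the /cam.jpg route comes from the assumed setup above.

```js
// Fake stereoscopic view: the same frame drawn once per eye.
let frame;
let loading = false;

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function draw() {
  // Request the next frame only once the previous one has arrived;
  // the timestamp query defeats the browser cache.
  if (!loading) {
    loading = true;
    loadImage('/cam.jpg?t=' + Date.now(), (img) => {
      frame = img;
      loading = false;
    });
  }
  if (frame) {
    image(frame, 0, 0, width / 2, height);          // left eye
    image(frame, width / 2, 0, width / 2, height);  // right eye
  }
}
```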

I coded the Arduino to react to the user’s head movements. The puppet’s head movements were straightforward: it simply imitates the user’s. Driving the puppet around, though, required more thinking. As I did not want the user to need extra controllers or buttons, I coded the wheels to move forward when the user looks up and backward when he looks down. The puppet also turns left or right when the user tilts his head.
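On the prototype this mapping runs on the Arduino, but the decision logic is easy to sketch in JavaScript. The dead-zone threshold here is an assumption; some band around level is needed so the puppet stands still while the user just looks around.

```js
// Head-orientation-to-drive mapping, sketched in JavaScript for readability.
const DEADZONE = 15; // degrees around level in which the puppet stays still

function driveCommand(pitch, roll) {
  let drive = 'stop';
  if (pitch > DEADZONE) drive = 'forward';        // looking up
  else if (pitch < -DEADZONE) drive = 'backward'; // looking down

  let turn = 'none';
  if (roll > DEADZONE) turn = 'right';            // head tilted right
  else if (roll < -DEADZONE) turn = 'left';       // head tilted left

  return { drive, turn };
}
```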

All the code can be found in my GitHub repository.

Fabrication

While fabricating the puppet, I tried to keep the mechanics as clean and unobtrusive as possible. I started prototyping with a standard wooden manikin, and once I had a proper plan I bought a bigger manikin to produce the final prototype.

Together with Imri, a designer and a talented carpenter, we drilled the necessary holes, removed the unnecessary springs, and attached all the parts together. We built a balancing box using a laser cutter to contain most of the electrical components. The box was designed to hide the wheels in order to give the puppet a more natural feel.

The biggest challenge was placing the camera in one of the manikin’s eyes. We had to cut the head into two pieces and glue the camera inside. After gluing the two halves back together, the cable came detached, and we had to open the head again and find a new way to connect the cable.

Acknowledgments

This process would not have been possible without the help of many people: both those who shared their experiences over the internet and, mainly, those around me who patiently answered my questions and even learned new subjects together with me.

I especially want to thank Vick Giasov, Imri Givon, Or Fleisher, Dror Ayalon, and Mithru Vigneshwara.
