ARTICLE

DATE
November 26th, 2015
005

Boundaries Between Life And Non-life

KAME KAI


In order to realize collaboration across various domains, Dentsu Lab Tokyo occasionally hosts talk sessions where artists, technologists, data scientists, scholars, and writers come together.
This talk session takes its name from Cai Guo-Qiang’s turtle fountain “Caretta Fountain”: it is called “Kamekai” (literally, “Turtle Gathering”), and it aims to bring together the participants’ varied thoughts and viewpoints and to create an interactive session rather than a mere lecture.

Ivan Poupyrev & Shiho Fukuhara / イワン・プピレフ& 福原志保


Dr. Ivan Poupyrev
Dr. Ivan Poupyrev is an award-winning scientist, inventor, and designer working at the cutting edge of interaction design and of technologies blending the digital and physical realities. Ivan is currently a Technical Program Lead at Google’s Advanced Technology and Projects (ATAP) division, where he directs efforts focused on interaction technologies and design for future digital lifestyles. Prior to Google he was Principal Research Scientist at the Walt Disney Imagineering research division, and before that at Sony Corporate Research laboratories in Tokyo. He also did stints at the University of Washington as a visiting scientist while working on his dissertation at Hiroshima University, Japan, and at the Princeton University School of Architecture as a Visiting Lecturer.

Shiho Fukuhara
Shiho Fukuhara is a bio artist. After graduating from Central Saint Martins College of Art and Design in London, she drew worldwide attention when she released “Biopresence” while at the Royal College of Art. After joining “Le Pavillon”, she founded BCL together with Georg Tremmel. Her work transcends the boundaries of science, art, and design. She currently exhibits “Ghost in the Cell” at the 21st Century Museum of Contemporary Art in Kanazawa, and also conducts research for Project Jacquard and Project Soli at Google ATAP.


About Dr. Ivan Poupyrev

Currently, I’m working at a place that is Google but at the same time not quite Google: Google ATAP (Advanced Technologies and Projects). There, every project is given a duration of two years and is expected to produce results within that time.

Originally, I used to work at Sony’s Computer Science Laboratories, doing projects with budgets of $10,000 to $100,000. At Google ATAP, within the two years, you are required to produce the mock-ups and everything beyond them. It’s very exciting but, at the same time, very rigorous. I am running two projects at the moment.

There are terms such as “ubiquitous” and “wearable”, but I think there is a broader movement in which technology is becoming more and more invisible. The important question is what form technology will take once it becomes invisible. I have always been interested in this discussion.

Let me introduce two projects today, both of which I presented at Google I/O 2015, four months ago.

Project Soli


This project starts from an appreciation of how amazing human hands are. They are capable of some of the most delicate movements there are. This is something noteworthy.

The gesture this watchmaker performs is hardly something any machine can replicate. Human hands are capable of very agile yet precise movements. Research on this amazing apparatus, on how to quantify the movement of hands, has been carried on since the 1940s.

If you think about it, going from the elbow to the wrist and then to the fingertip, the human hand is designed to gain more movement toward the extremity. At the fingertip, the amount of movement is four times that at the elbow.

As seen in the scrolling and swiping gestures on smartphones, you could say that an existing object defines the interaction. The presence of a physical object expands the interaction capability the fingers provide.


In this case, the gesture is modeled on something physical, yet the physical object itself doesn’t exist. And precisely because the physical object doesn’t exist, one can say the possibilities of this interaction are limitless.

As you can see in this video, when no object accompanies a gesture, the gesture itself defines the object. Without physicality, one can interact with the system, and the hand itself becomes the interface.

Using one’s body as an interface is an interesting idea with great potential. Using your own hands and fingers creates tactile feedback (although it differs from the tactile feedback of objects). The ideal would be to detect this contact and incorporate it into the interface; however, camera-based vision cannot deliver the precision required. In fact, no sensor yet exists that can detect this.

I just said no such sensor exists yet, but if we were to build one, it would have several requirements.

First, it must detect motion precisely. Second, it must accurately detect the structure and position of the fingers. Needless to say, the system has to detect all this in 3D. Better still if it can also detect the movement of the arm. The system also has to be usable in sunlight or in a dark room. And it has to be small.


The technology worth considering here today is radar. It would meet most of the requirements I just described. However, size has always been the issue, so in this project we set out to combat that and built several prototypes. As a result, it became very small.


The 60 GHz sensor measures 9 mm × 9 mm. It took about ten months to get here, which is very fast as development speed goes.

In this sensor, we analyze the object with the time-of-flight method. The problem is that, just like a radar speed gun, the basic technique detects only a single point. To analyze something complex, such as the movement of a hand, we need to detect several points. So our approach was to make the device capable of detecting several points. By applying machine learning to the signals from those points, we made the device capable of understanding gestures.
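The talk does not spell out Soli’s actual signal pipeline, so the following is only a minimal sketch of the two ideas mentioned: ranging by time of flight, and learning gestures from signals at several detected points. The class name, the feature layout, and the gesture labels are illustrative assumptions, not Soli’s API.

```python
# Sketch only: time-of-flight ranging plus a toy gesture learner
# over multi-point range profiles. Not Soli's actual pipeline.
import numpy as np

C = 3.0e8  # speed of light, m/s


def tof_distance(round_trip_s: float) -> float:
    """Range from a round-trip time-of-flight measurement."""
    return C * round_trip_s / 2.0


class NearestCentroidGestures:
    """Toy gesture recognizer: average labelled range profiles into
    per-gesture centroids, then classify by the nearest centroid."""

    def fit(self, profiles, labels):
        self.labels_ = sorted(set(labels))
        X = np.asarray(profiles, dtype=float)
        y = np.asarray(labels)
        self.centroids_ = np.stack(
            [X[y == g].mean(axis=0) for g in self.labels_])
        return self

    def predict(self, profile):
        d = np.linalg.norm(self.centroids_ - np.asarray(profile), axis=1)
        return self.labels_[int(np.argmin(d))]
```

A real pipeline would extract far richer features (velocity, Doppler shifts over time) and use a stronger learner; the nearest-centroid step is only meant to show where the machine learning fits after the multi-point signals are captured.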


You can use gestures to move up and down. Different functions can be assigned based on the movement and position of the hand: for instance, you can control the minutes when your hand is close, and the hours when it is farther away. This is possible because the radar is aware of the 3D space. As shown in the demo, you can even play a simple game.

Currently, we are preparing to provide this Soli chip together with an API so that developers can use it. The goal of this project, as my earlier slide showed, is to capture detailed body gestures, such as those performed by a skilled craftsman, and replicate them in the digital realm.

Project Jacquard


I would like to present another project, Project Jacquard. The project I presented earlier was mainly about replicating a certain gesture in open space. The second project differs in that the focus is on how we can use a certain gesture on a particular thing.

By “particular thing” I mean textile. The original idea came from the observation that the structure of a textile is very similar to that of a touch screen. We thought that if we could weave conductive threads into a garment, it would essentially function as a touch screen. And if it becomes cloth, then perhaps we can change its colors, or even change its shape by bending and folding. If something like this were possible, the same technology used in gadgets would become available on something we wear every day.
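As a rough illustration of the touch-screen analogy, here is a toy sketch assuming each conductive thread in the weave yields a capacitance-like reading that rises when a finger is near. The reading values and the threshold are invented for the example; real Jacquard sensing electronics are not described in the talk.

```python
# Toy sketch: locate a touch on a woven grid of conductive threads.
# `rows` and `cols` are assumed per-thread readings (higher = finger
# nearer); this stands in for unspecified sensing hardware.

def locate_touch(rows, cols, threshold=0.5):
    """Return (row_index, col_index) of the strongest touch,
    or None if no reading exceeds the threshold."""
    r = max(range(len(rows)), key=lambda i: rows[i])
    c = max(range(len(cols)), key=lambda j: cols[j])
    if rows[r] < threshold or cols[c] < threshold:
        return None
    return (r, c)
```

The point of the analogy is exactly this: once the threads behave like the row and column electrodes of a touch screen, the familiar grid-scanning logic carries over to cloth.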

The most important thing we realized about this project is that it is not about something to exhibit once at Maker Faire; it is about being able to scale and mass-produce an interactive device.

We visited various factories around the world, and as a result, we decided to produce this product at a factory in Japan.

If you take a look at the actual product, there is a conductive wire in the middle, and fibers cover the wire intricately. The production process is multi-layered and very complex. Once you have the thread made, you weave it into a cloth. And this is what we got first.


As we proceeded, we stepped back once again and thought about how the textile should be made. The conclusion we came to is that we should not make the entire garment sensitive, but rather make certain areas sensitive and interactive. By making the sensor area asymmetrical, we were able to secure a spot to install the electronics. As a result, we believe we made something with high applicability: the size and texture of the textile remain designable.


We’ve tried the sensor. It can actually count how many fingers are touching, and how they are touching. The system can recognize gestures as well. With this system we can, for instance, call someone with a certain gesture, and call an Uber with the gesture from the opposite side.

In this project, we are hoping to create a platform. We want to enable people to make garments with various features, to offer the possibilities of this technology, and to create an ecosystem.

We briefed the tailors only minimally and had them make clothes. Could they use their own expertise to make something totally different and new? As you can see here, they could.

We don’t want to dominate this technology, but rather provide it as a tool and let it permeate so that everyone can use it. We want to collaborate with everyone.

Currently, we are trying to collaborate with Levi’s, a brand born in the West Coast gold rush. If we come up with something nice, please buy one! Thank you very much.

Q&A for Ivan Poupyrev

Audience: Most of the people here work in advertising. I’m sure many of them would like to collaborate with you; in that case, do you think there is a chance to work together on an advertising-based project?

Yes, of course. As I mentioned, what we are trying to build is a platform, and an open one. Rather than limiting the technology to clothes, we believe that expanding it further, into realms such as spatial design and product design, would open up possibilities for those who use it. The reason we started by working with an apparel company is that it is the most challenging part.

Audience: In Japan, there is a cloth called a “noren” (a shop curtain) that moves in complex ways. Serious gesture-recognition techniques would be needed here, but do you think using machine learning to overcome this problem is viable? Personally, I find the idea of collaborating with Japanese products very interesting.

I think there is a lot of potential there, and I find it interesting. Combining completely different things gives something amazing the chance to be born. I like the idea of mixing up different things, rather than merely combining technologies.

Audience: I have a question regarding machine learning. For finger movements, I imagine you need quite a large amount of data; how do you collect it?

There are many gestures, and I can’t generalize across all of them, but what we do is cluster and learn little by little. Within the project, the machine learning is merely per-person at the moment, and we need to expand the learning to be more universal. There is a great deal of research in computer vision, but very little using radar, so we need to advance little by little.
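As one concrete picture of “clustering and learning little by little”, here is a plain k-means sketch over fixed-length gesture feature vectors. The hard part, turning raw radar signals into such vectors, is assumed away, and nothing here reflects the project’s actual algorithm.

```python
# Toy k-means over pre-extracted gesture feature vectors.
# Illustrative only; feature extraction from radar signals is assumed.
import numpy as np


def kmeans(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    # initialize centers at k distinct samples
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each sample to its nearest center
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        # recompute centers; keep the old center if a cluster empties
        centers = np.stack([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)])
    return labels, centers
```

Clusters discovered this way could then be inspected and labelled as gestures one by one, which is one plausible reading of learning “a little by little” before the model generalizes across users.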

About Shiho Fukuhara


Today, I want to talk about the boundaries between life and non-life. People often say “digital” today, but this term is actually derived from “finger”: counting numbers such as 0 and 1 on your fingers is where the Latin word “digitus” comes from. There is also the term “auris,” Latin for “ear,” which resembles the word “aura.” The term “machina” means machine, and I have drawn a red line in between here.


In this space, I believe there is something as yet unseen, something in between life and non-life. What is important is not the machine as something digital, but what is born within this space.

It’s actually very hard to define where life begins. If you take the “cell” as the definition of life, you could say that a membrane forming within a soup draws a line between the two, and that this becomes life as matter. In that sense, the membrane is the line that differentiates what is life from what is not. You could also define life by more societal factors. Is this person alive or dead? If the brain is not functioning, is the person dead or alive? What if the heart is not functioning: does that make them dead? But perhaps the cells are still alive? It’s a difficult question, and the more you think about it, the more the lines blur.


You don’t really need a line between art, science, and design. But if you look at society, all expertise is divided into domains. The things I find interesting do not have these lines between them. One may be science, the other art; one new, the other old. It could be a traditional performing art, or a new technology. I consider it important that new ideas and meanings emerge from the line between two such domains. I, as an artist, work on the “line” between two things.

The 20th century was said to be the century of computers, and the 21st century is said to be about biology. Computers are getting smaller and smaller, almost as small as a grain of rice. Yet no matter how small computers become, our hands remain the same size. I think that sheds light on why projects such as Soli and Jacquard come forth.

Today, computers are getting smaller and smaller and we wear them almost naturally, but if we assume technology is becoming ever more invisible, perhaps the next step is going under the skin. Perhaps biology will come closer to computers. If technology is advancing and changing at such a rate, we need to look ahead as well.

In 1997, Professor Charles Vacanti at Harvard University grew an ear-shaped object on the back of a mouse. It looked as if the mouse had grown an ear from its back. Around the same time, the Human Genome Project was being completed and the ACGT structure of genes was becoming evident, so a misunderstanding arose that it was now possible to combine a human ear and a mouse.

Currently, there is a movement toward creating organs with 3D printers. You can use a calcium-based powder to make bones.

There is similar research at the University of Tokyo; what they are printing there is liver tissue. There is also research by Autodesk: they are trying to obtain a DNA sample from a descendant of Van Gogh and to claim that they have reproduced Van Gogh’s ear. It’s pretty crazy.


There are also researchers at Cambridge and Harvard who made the smallest blood vessels yet with a 3D printer. If we are advancing to this point, we might be able to make organs in the same way. If that becomes possible, we may become capable of building human bodies with machines.


By the way, have any of you ever seen Suntory’s “blue carnation”? Carnations originally have no blue gene, but through genetic recombination Suntory was able to make a blue carnation. I was stunned.


Out of curiosity, I asked Suntory where they make it, and they told me they do it in Colombia. I went on to ask why they don’t make it in Japan, even though it is sold in Japan. Their response was that it had something to do with the soil, but something didn’t feel right to me. I thought that if I could reproduce the same thing using Japanese soil, I could make my point. Using a Japanese sake cup, I was able to reproduce it without any difficulty.


Scientists were surprised by my work when I exhibited the carnation in Belgium. Over the following two years, I sent the protocols I used to scientists around the world and asked for ways to refine the process. I updated the methods so that almost anybody can follow them. At workshops, I teach children how to make genetically modified products. It has become biotechnology anyone can do in a kitchen.

After succeeding at the cloning and making the protocols open source, my next challenge was to see whether it is possible to turn this once-blue product back to white, that is, to “reverse” the protocol. Speaking from the results, I haven’t succeeded so far. Various things occur during the reversing process that make it difficult. Looking at this, I would say that the fear people have of genetically modified food is well founded, because once a gene is altered, it is impossible to go back. If this reversing process ever succeeds, though, I would say it would be published in Nature.


Allow me to introduce another project. Many of you may already know this, but generally speaking, houses near graveyards are cheap. Death is something to be hidden, and is often kept away from our daily lives.

I wanted to overcome this notion somehow. I came to think that humans fear death not because of the pain, but because of the idea of their own memories disappearing. Starting from that premise, I presented the “Biopresence” project.

The concept of this project is to make a memorial tree bearing human DNA. First, you take a gene from a human. The genetic information is converted into binary information and then put into the tree in a way that does not harm the tree’s own genetic information. I didn’t develop this method myself; it was initially created by the artist Joe Davis, currently at MIT and Harvard.
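Joe Davis’s actual encoding scheme is not described in the talk. As a general illustration of how information could be “converted into binary” and carried as DNA, here is a common textbook mapping of two bits per nucleotide; it is only an assumption for the example, not the Biopresence method.

```python
# Toy illustration of storing binary data as a DNA sequence:
# two bits per base (00->A, 01->C, 10->G, 11->T). Not the actual
# Biopresence/Joe Davis encoding, just the general idea.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}


def encode(data: bytes) -> str:
    """Turn raw bytes into a string of nucleotides."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]]
                   for i in range(0, len(bits), 2))


def decode(dna: str) -> bytes:
    """Recover the original bytes from the nucleotide string."""
    bits = "".join(BASE_TO_BITS[b] for b in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

A scheme like this is lossless in both directions, which matches the project’s premise that the inserted sequence carries the information without functioning as (or disturbing) the tree’s own genes.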


What would things be like 50 years from now if this takes hold? Perhaps your great-great-grandson might confess his first love to the tree for you to hear. Graveyards could become memorial parks. The meaning of everyday actions would change as well: for instance, if a tree carrying your grandmother’s genes bears fruit, that fruit would also contain your grandmother’s genetic information. Would you eat this fruit or not?


My latest work is a collaboration with Miku Hatsune, currently exhibited at the 21st Century Museum of Contemporary Art.

Apart from denoting living matter, “cell” originally means a small room. But when you think about it, Miku Hatsune has no physical entity. She has a voice, but no body. She has pictures of herself, but since she has no body, she also has no cells.

I wanted to see how the fans of Miku Hatsune could feel closer to her.

Various universities publish, as open source, genetic information associated with particular physical appearances in humans. I collected such genetic information and combined it to represent Miku Hatsune, such as her pinkish skin and her eight-heads-tall figure. Then I injected this DNA information into iPS cells to make cardiac muscle cells. This is what I am exhibiting at the museum, using the genetic information of Miku Hatsune.


The cardiac muscle cells are kept in a machine that maintains a certain temperature and humidity, and are exhibited as an actual living organism. Many fans of Miku Hatsune visit the museum, and watching Twitter and Facebook, I saw someone saying they wanted to die while listening to the heartbeat of Miku Hatsune. The cardiac sound played at the museum was made by Nao Tokui of Qosmo: Miku Hatsune’s heartbeat was analyzed through video, and the cardiac sound was generated from it. I asked him to make it sound the way you would hear it if you were inside the heart.

There is a high-pitched sound that synthesizers make from time to time, and we made it loop intentionally. This sound is usually explained away as machine noise, but when you think about it, it is like a ghost that emerged within the machine, and I thought it would be interesting to feature it. It was my first attempt to use this sound, and it became music one can never forget.

I also did a machine-learning project using video of Miku Hatsune. Using Google’s Deep Dream algorithm, I went around spots in Kanazawa famous for ghost sightings and used Miku Hatsune to generate images.

The resulting video is an image that people react to in different ways after watching it. When people talk about ghosts, their accounts all differ even though they are looking at the same thing.

The exhibition runs until March 19th, so it would be great if you have a chance to see it. Thank you very much!

Q&A for Shiho Fukuhara

Audience: Art projects involving biology always risk conflicting with ethics. Were similar issues or topics raised while you were making the cardiac cells of Miku Hatsune?

I was fairly sure I would be criticized for the project at first, but surprisingly, that was not the case. The fans of Miku Hatsune didn’t know how to react at first, but gradually came to understand the project as a way to have their very own Miku Hatsune. Perhaps one can say that Miku Hatsune was an open-source platform to begin with.

I believe this project exists because I chose Miku Hatsune as the entity to represent. Originally, I had the idea of using a real pop idol group, Dempagumi.inc, but they became famous as I carried on with the project, so I had to go with something different. So I asked the creators of Miku Hatsune whether it was OK to use her for this project, and they told me that as long as I didn’t alter her voice, they were fine with it. So it seems that Miku Hatsune’s soul resides in her voice. There is no other character like her.

Another point a few people made was that genetically modified products are not allowed to be released into public spaces, and so neither is this project. But I asked the people in charge of the Cartagena Protocol on Biosafety, and they said it is perfectly fine. When I asked why, they told me it is because this project deals with the genes of a human being. Nobody around me knew that a genetically modified product can be made public, as long as it derives from a human being.

On the other hand, there are projects at Ars Electronica that deal with modified genes and are still open to the public. Strictly speaking, these are not allowed. However, little known to most, Ars Electronica has an underground bio lab, and its address is exactly the same as the address on the surface. As long as the address is the same, these projects are allowed to be exhibited.

In many of my projects, unless you find a loophole like this, the project hits a dead end. But if you are persistent enough, sometimes there is a secret passage past the barriers. I believe that if you break down silos like that, something new will most likely emerge.

Thank you very much!


 
