BRIAN ENO'S THE SHIP - Behind the scenes


[Project Overview]

Brian Eno is known as a pioneer of the ambient music genre. Dentsu Lab Tokyo has collaborated with him since the Grammy-nominated “LUX” in 2012, and produced the music video for his latest solo album, “The Ship.” This entry describes the project in detail.

[Concept Development]

In discussing the music video, Brian Eno explained the underlying concept of “The Ship” as follows: “Humankind seems to teeter between hubris and paranoia: the hubris of our ever-growing power contrasts with the paranoia that we’re permanently and increasingly under threat.” He repeatedly mentioned the Titanic, the ultimate collection of the cutting-edge technologies of her time, often referred to as “the unsinkable ship,” which nevertheless met her end in the North Atlantic on her maiden voyage.

When we heard this from Brian, we considered what the present-day equivalent might be, and began talking about artificial intelligence and machine intelligence, and how perfectly they relate to Brian’s statement.

In light of the score’s concept, we decided that the best approach for this project would be to build a machine intelligence and let this software generate the music video experience for the viewers.


Build a machine intelligence, and the software generates the video.

[Machine Intelligence Study]

Having decided to utilize machine intelligence, we thoroughly researched projects in this field around the world. As many readers will be aware, the topic advances at an astounding rate: self-driving cars and machine translation are just a few of the examples that appear in the news. One of the most famous of these works is Google’s Deep Dream.


Google’s Deep Dream: the striking images it generated made headlines all over the world.

The more projects we studied, the more intriguing we found the “errors” that machine intelligence produces. For instance, Samim Winiger, who also appeared in our interview previously, has done an amazing project called “Generating Stories about Image.”


Sample text (right) generated from the input image (left). The generated caption is modeled on romantic novels.

The machine intelligence built for that project, after special training, generates a caption that describes an image. When someone uploads an image to Google Photos, for instance, a caption describing the image is generated and used to tag the image automatically, enabling the user to manage their images.

In the case of Samim’s project, by having the machine intelligence study a vast amount of romantic novels as training data, the captions it generates become very passionate rather than ordinary descriptions of the image.
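The effect described above, that the same generator takes on the tone of whatever corpus it was trained on, can be illustrated with a deliberately tiny sketch. The bigram model and the two corpora below are our own invented illustration, not the actual model or training data used in Samim’s project:

```python
import random

def train_bigrams(corpus):
    """Map each word to the list of words observed to follow it."""
    model = {}
    for sentence in corpus:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            model.setdefault(a, []).append(b)
    return model

def generate(model, seed, length=6):
    """Walk the bigram chain from a seed word, picking random followers."""
    words = [seed]
    for _ in range(length - 1):
        followers = model.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

# Two toy corpora: a neutral one and a "romantic" one.
plain = train_bigrams(["a ship sails on the sea"])
romantic = train_bigrams([
    "the ship longed for the distant sea",
    "her heart sailed with the ship",
])

print(generate(plain, "a"))       # neutral tone
print(generate(romantic, "the"))  # tone inherited from the romantic corpus
```

Real caption generators condition on image features rather than a seed word, but the principle is the same: the style of the output is inherited from the training text.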

We considered that the “error” described here held potential for this project. What if a machine intelligence that had studied the collective images of human history were to interpret images from present-day society? How would it associate the past with the present? We started building demos to answer these questions.



Sample images from the actual demo.

This machine intelligence is given an image (left) and displays another image (right) that the software considers “similar.” Strictly speaking the result is erroneous, yet on closer inspection it is not a complete mistake; there is even a touch of irony to it.
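A common way to build this kind of “similar image” lookup is to compare feature vectors with cosine similarity. The sketch below uses invented three-dimensional vectors for clarity; in practice the vectors would come from a deep network (for example, the activations of a CNN’s penultimate layer), and the demo’s actual method may differ:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical feature vectors standing in for real image embeddings.
query = [0.9, 0.1, 0.4]
candidates = {
    "archive_photo_A": [0.8, 0.2, 0.5],
    "archive_photo_B": [0.1, 0.9, 0.2],
}

# The "similar" image is the candidate with the highest similarity score.
best = max(candidates, key=lambda k: cosine_similarity(query, candidates[k]))
print(best)
```

Because the match is only the nearest point in feature space, not a true duplicate, the results are exactly the kind of near-miss “errors” described above.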

[Approach and Finished Product]

We aimed to incorporate this quality into the actual project as well.


Images collected from over a century to represent the history of humankind.


The machine intelligence analyzes the images of current society.


Associations between the past and the present are computed.

The machine intelligence is equipped with images spanning more than a century, and is then fed news images collected online from present-day society. It associates the present images with the past, generating a video experience as if the machine intelligence were reminiscing.
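The association step described above can be sketched as a nearest-neighbor lookup into the historical archive. The filenames, embeddings, and distance metric below are all hypothetical placeholders, a minimal sketch of the idea rather than the project's actual implementation:

```python
import math

# Toy archive: (historical image, feature vector). In the real system
# this would hold embeddings for a century's worth of images.
ARCHIVE = [
    ("1912_ocean_liner.jpg", (0.9, 0.1)),
    ("1969_moon_landing.jpg", (0.2, 0.8)),
]

def associate(news_embedding):
    """Return the archive image whose features are nearest to the query."""
    return min(ARCHIVE, key=lambda item: math.dist(item[1], news_embedding))[0]

# A present-day news photo (toy embedding) recalls an image from the past;
# the two would then be composited into the video stream.
print(associate((0.8, 0.2)))
```

Running this lookup continuously over an incoming news feed is what makes the video feel like the machine “remembering” the past through the present.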

The images captured below are results of this video generation. On the left of the screen, images from the latest news sources are loaded. At the top of the screen, images from the past related to the current news are loaded. In the middle of the screen, these images are combined to create the video experience.




Actual images that are generated.

This project is never in a constant state; rather, a different video experience is generated depending on when it is visited. We aim for this project to go beyond its current form as a website and expand into other forms, such as installations.

Dentsu Lab Tokyo