Cube: Fashion Takes Shape
Eduardo Maluf de Campos, Alexander Scholten, Bastien van Delft
10 million data points
4 neural networks
CUBE: Fashion Takes Shape is a complex interactive phygital (physical and digital) piece showcasing the online presence of more than 1200 top luxury and fashion brands, made in collaboration with Google. More than 10 million data points and 4 state-of-the-art neural networks were involved!
Secret key: press Alt+H to toggle the 4K mode.
To create the CUBE, we used the latest machine learning techniques in natural language processing and really pushed the limits of web technologies.
Just to give you an idea of what’s behind the making of the CUBE… it required 4 different state-of-the-art neural networks, 80 shaders, 10 million data points, and trillions of parameters, with a T, to give shape to the data. For comparison, we estimate it would take a human 40 years to count to 1 billion without eating or sleeping. 1 trillion would take 40’000 years!
So, how does it work, and how do you interact with it?
Creation of the Fashion Shape
Our journey started with a first mapping of the entire fashion and luxury digital ecosystem. More than 1200 top brands and fashion groups were first selected and crafted into a unique dataset, enriched with their product categories and country-of-origin data. From these two attributes, we computed a measure of product and origin affinity to create the first connections among brands. At this point, the initial, cloudy dataset was reordered by a machine learning algorithm able to cluster our brands into 30 different groups according to their measure of affinity, which is reflected in the artwork by the colour scale you can see on the sides of the CUBE.
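The clustering step can be sketched as a plain k-means over per-brand feature vectors. Only the brand count (1200) and the number of clusters (30) come from the project itself; the feature vectors, distance metric, and all values below are illustrative assumptions, not our actual dataset or algorithm.

```python
import numpy as np

# Hypothetical brand features: each row encodes one brand's product
# categories and country of origin as a small affinity vector.
# These random values stand in for the real enriched dataset.
rng = np.random.default_rng(0)
brands = rng.random((1200, 12))   # 1200 brands, 12 toy affinity features
K = 30                            # cluster count, as in the artwork

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns a cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each brand to its nearest cluster centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned brands
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

labels = kmeans(brands, K)
print(len(set(labels.tolist())))  # number of groups actually used
```

Each of the resulting labels would then be mapped to one colour of the scale shown on the sides of the CUBE.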
What each brand stands for
As you may have guessed by now, each dot of the artwork represents a brand, and the network shows how it connects with the broader fashion and luxury ecosystem. But let’s go deeper into the artwork’s logic. When interacting with the artwork, you’ll discover that behind each point, each brand, there is a unique AI-driven brand signature reflecting that brand’s online footprint and representation.
But how was this generated?
Each brand included in the artwork has an incredibly rich presence online. For this reason, we were able to associate with each of them thousands and thousands of data points coming from the web.
Those contents were then translated into English text strings and projected into a latent space, the “brain” of a neural network we had to train on billions of sentences to identify and classify content according to its topics and reach.
At this point, the neural network distributed brands according to the 7 most recurring topics for fashion and luxury: Corporate Comms, Geographical Footprint, Range of Collections, Advertisement, Partnerships and Collaborations, Events and Shows, Sustainable Practices.
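One common way to assign a text to the closest topic in a latent space is to embed both the text and each topic label as vectors and pick the topic with the highest cosine similarity. The toy bag-of-words `embed()` below is a deliberately crude stand-in for the large sentence encoder we actually used; only the 7 topic names come from the artwork.

```python
import math
import zlib

TOPICS = ["Corporate Comms", "Geographical Footprint", "Range of Collections",
          "Advertisement", "Partnerships and Collaborations",
          "Events and Shows", "Sustainable Practices"]

def embed(text, dim=64):
    """Toy embedding: hash each word into a bucket of a fixed-size vector.
    A real pipeline would use a pretrained sentence encoder instead."""
    v = [0.0] * dim
    for word in text.lower().split():
        v[zlib.crc32(word.encode()) % dim] += 1.0
    return v

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def classify(sentence):
    """Return the topic whose embedding is closest to the sentence's."""
    vec = embed(sentence)
    return max(TOPICS, key=lambda t: cosine(vec, embed(t)))

print(classify("Versace announces new partnerships and collaborations"))
```

The same nearest-topic idea scales to a trained latent space: only the embedding function changes.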
We then introduced different attractors and repulsors into the same space to distribute each particle and finally create unique online brand signatures.
But what does this signature stand for?
It is a visual representation of how a brand might be perceived through its online presence by the whole fashion and luxury ecosystem and most importantly by its clients.
By exploring the artwork yourself, you’ll be able to visually discover the portion of a brand’s online presence focused on financial topics or products, for instance. Each category is a different colour, and together they create a representation of the brand’s identity as seen by the outside world.
Deeper tech details
For those who want more details (feel free to skip this section).
For each brand, we’ve gathered thousands of results from the Web to better understand the reach and the kinds of topics associated with it. This process is actually very complex and involves 4 state-of-the-art neural networks.
The first step is to gather data from Google Search for each brand. These results are then fed to a first machine learning model that detects the language of each result. In the second step, we translate all the content into English (10 million data points), which took weeks of GPU compute time.
In a third step, we use sentiment analysis to detect weird or negative results and filter them out.
Finally, we project each text result into a latent space, which is basically the “brain” of a neural network, and classify each result according to a list of suitable topics determined by the neural network itself. In simpler words, we ask the neural network which topic best fits each individual result. It has been trained on billions, with a B, of sentences and has a fairly good understanding of the English language. For instance, “Adidas opens a new store..” would be classified as “Growth”, while “Versace makes a collaboration with…” would be labelled as “Partnership”.
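The four steps above can be wired together roughly like this. Every model here is a trivial keyword-based placeholder standing in for the actual neural networks (language identification, translation, sentiment analysis, and topic classification); the function names and heuristics are assumptions for illustration only.

```python
def detect_language(text):
    # Placeholder for the language-ID network: accented characters => French.
    return "fr" if any(c in "éèàç" for c in text) else "en"

def translate(text, lang):
    # Placeholder for the neural translation step (in the real pipeline,
    # 10 million data points and weeks of GPU time).
    return text if lang == "en" else f"<{lang}->en> {text}"

def sentiment_ok(text):
    # Placeholder sentiment filter: drop weird or negative results.
    return not any(w in text.lower() for w in ("scandal", "lawsuit"))

def classify_topic(text):
    # Placeholder for the latent-space topic assignment.
    lowered = text.lower()
    if "opens" in lowered or "store" in lowered:
        return "Growth"
    if "collaboration" in lowered:
        return "Partnership"
    return "Corporate Comms"

def pipeline(raw_results):
    """Detect language, translate, filter by sentiment, classify topic."""
    english = [translate(t, detect_language(t)) for t in raw_results]
    clean = [t for t in english if sentiment_ok(t)]
    return [(t, classify_topic(t)) for t in clean]

labelled = pipeline([
    "Adidas opens a new store in Berlin",
    "Versace makes a collaboration with a streetwear label",
    "Brand X hit by lawsuit",
])
for text, topic in labelled:
    print(topic, "<-", text)
```

Swapping each placeholder for a real model gives the shape of the actual production pipeline.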
We then used this distribution of topics as a percentage to create the art, with different attractors, repulsors and forces for the 260’000 particles we display. Because each brand’s data is unique, its visual signature is also unique.
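A minimal sketch of how a topic distribution could drive a particle signature: each topic owns an attractor, particles are assigned to topics in proportion to the brand’s topic percentages, and every particle is pulled toward its attractor while a softened inverse-square repulsor pushes it away from the centre. All positions, constants, and counts below are illustrative assumptions, not the artwork’s actual parameters.

```python
import random

# Hypothetical topic percentages for one brand, and one attractor per topic.
topic_share = {"Growth": 0.5, "Partnership": 0.3, "Sustainability": 0.2}
attractors = {"Growth": (1.0, 0.0),
              "Partnership": (-1.0, 0.5),
              "Sustainability": (0.0, -1.0)}
REPULSOR = (0.0, 0.0)  # central repulsor keeps particles off the middle

random.seed(42)
n = 1000  # toy count; the real artwork displays 260'000 particles
topics = list(topic_share)
weights = [topic_share[t] for t in topics]
particles = [{"topic": random.choices(topics, weights)[0],
              "pos": [random.uniform(-2, 2), random.uniform(-2, 2)]}
             for _ in range(n)]

def step(p, dt=0.05, k_att=1.0, k_rep=0.3, soften=0.5):
    """One integration step: attraction toward the particle's topic
    attractor plus softened inverse-square repulsion from the centre."""
    ax, ay = attractors[p["topic"]]
    x, y = p["pos"]
    fx, fy = k_att * (ax - x), k_att * (ay - y)
    dx, dy = x - REPULSOR[0], y - REPULSOR[1]
    d2 = dx * dx + dy * dy + soften
    fx += k_rep * dx / d2
    fy += k_rep * dy / d2
    p["pos"][0] += fx * dt
    p["pos"][1] += fy * dt

for _ in range(200):
    for p in particles:
        step(p)
```

After a few hundred steps the particles settle into clouds around their topic attractors, so a brand with a different topic distribution ends up with a visibly different shape.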
I hope you now have a better understanding of the artwork and the (insane) amount of work that went into making this data-driven art piece.
Data art is the future for brands, as it combines valuable insights and meaningful stories. It doesn’t depend on a purely subjective perception but is rooted in the data behind it. Thus any kind of data, even boring spreadsheets, can be transformed into interactive and, hopefully, beautiful art.
Thank you very much to Google for making this possible and to Media.Monks and Event Management for building the Cube. Massive thanks to Eduardo Maluf de Campos, Alexander Scholten and Bastien van Delft for helping me make this project a reality.