Introduction to Computational Creativity
With the rise of Artificial Intelligence and Deep Learning technologies, humankind is constantly on the lookout for their new applications. In addition to automating routine manual operations (object recognition, predictive analytics, natural language processing), AI has penetrated a domain where the human mind reigned for centuries – art and creativity. Computers have long been used in art as tools (Neural Style Transfer being one example), as aids, and as canvas, but with the help of neural networks machines can now do what seemed impossible even a decade ago – become artists, creators in their own right.
This has resulted in an entirely new field of Artificial Intelligence called Computational Creativity. Computers already have an immense impact on fine arts, architecture, music, cinema, and even joke writing. The relationship between human- and computer-made works of art is still under discussion. However, we believe that machine-powered creations have the potential to grow into a full-fledged direction in modern art, design, and fashion. One of the basic examples of Computational Creativity in action is Neural Style Transfer. It has already been described in numerous publications such as this one, but we would like to share our own experience with this rising technology.
Neural Style Transfer is a transformation technique that allows an image or video to adopt the appearance or style of another image or video. It blends two images, keeping the content and composition of one while painting it with the colors, shapes, and style of the other.
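The blending described above is usually formalized as two loss terms: a content loss that keeps the generated image close to the content image's feature maps, and a style loss that matches the correlations (Gram matrices) of feature channels from the style image. Here is a minimal NumPy sketch of those two losses on toy feature maps; in a real system the features would come from a pretrained CNN such as VGG, and the function names here are illustrative, not our engine's actual API:

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.
    `features` has shape (height, width, channels)."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def content_loss(content_feat, generated_feat):
    # Mean squared difference between raw feature maps.
    return np.mean((content_feat - generated_feat) ** 2)

def style_loss(style_feat, generated_feat):
    # Mean squared difference between Gram matrices.
    return np.mean((gram_matrix(style_feat) - gram_matrix(generated_feat)) ** 2)

# Toy feature maps standing in for CNN activations (e.g. VGG layers).
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 8, 4))
style = rng.standard_normal((8, 8, 4))
generated = content.copy()  # optimization usually starts from the content image

# The weight on the style term controls how strongly the style dominates.
total = content_loss(content, generated) + 10.0 * style_loss(style, generated)
print(total)
```

Minimizing `total` with respect to the generated image's pixels (or, in fast feed-forward variants, with respect to a generator network's weights) is what produces the stylized result.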
The applications of this technology are versatile. The most obvious use is in fashion. When creating clothes, designers have to take into account the cut, the pattern aesthetics, the fabric, and so on. With neural style transfer, however, new apparel designs can be created in no time: the machine considers all these aspects and automatically applies the selected style and pattern, significantly streamlining the process and cutting down on manual labor.
Lately, we’ve been getting many requests from our clients about Neural Style Transfer. That is why we decided to prepare a little demo for you on how to create this algorithm and where it can be used.
In any business, time is of the essence. As the fashion example above shows, Neural Style Transfer can save a significant amount of time. Still, the deep learning algorithms used in NST require training time to learn a style before they can apply it. Besides, most implementations available on the internet offer only a limited set of styles to choose from when styling your image.
We decided to develop an engine capable of creating as many styles as needed, with training time as the main factor we focused on. The particular task we embarked on was an algorithm that would turn any photo into a work of art with the help of Neural Style Transfer. From the very start we achieved good training times, but that was only the beginning. Read on to find out how we did it.
Technical Side of Our Style Transfer Engine
We are going to describe, in broad strokes, how we created a Neural Style Transfer engine that sits at the intersection of Machine Learning, Image Processing, Convolutional Neural Networks, and Image Denoising. We built everything with the following tools: Python, TensorFlow, Keras, and NumPy.
First, we moved the style image out of the model inputs. Style features need to be computed only once, so the system precomputes them and caches the result as plain numeric tensors, which are then reused throughout model training.
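The idea of precomputing the style features once can be sketched as follows. Note that `extract_features` here is a hypothetical stand-in for a pretrained CNN layer (the real engine would use something like VGG activations via Keras); the point is that the style targets become fixed constants computed outside the training loop:

```python
import numpy as np

def extract_features(image):
    """Hypothetical stand-in for a pretrained CNN layer.
    A fixed random projection keeps the example self-contained."""
    rng = np.random.default_rng(42)
    proj = rng.standard_normal((3, 16))
    return image @ proj  # (h, w, 3) -> (h, w, 16)

def gram_matrix(features):
    # Channel-correlation matrix used as the style representation.
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)

# Process the style image ONCE, before training starts ...
style_image = np.random.default_rng(1).random((32, 32, 3))
style_targets = gram_matrix(extract_features(style_image))  # just numbers now

# ... then every training step compares against the cached constants,
# never touching the style image again.
def style_loss(generated_image):
    gen_gram = gram_matrix(extract_features(generated_image))
    return np.mean((gen_gram - style_targets) ** 2)
```

Because `style_targets` is a plain array rather than a model input, each training step skips one full feature-extraction pass, which is where the time savings come from.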
To train the model properly, just like in any Data Science task, we needed not one image but a vast corpus of roughly 80 thousand images. As a rule, the more data you train your model on, the more accurate the results you get.
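A corpus of that size is streamed in shuffled batches rather than loaded into memory at once. The sketch below shows the generic batching pattern (the corpus paths and batch size are made up for illustration; our actual pipeline used TensorFlow's input facilities):

```python
import numpy as np

def batch_iterator(paths, batch_size, seed=0):
    """Yield shuffled batches of file paths, streaming a large
    corpus instead of loading every image into memory."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(paths))
    for start in range(0, len(paths), batch_size):
        idx = order[start:start + batch_size]
        yield [paths[i] for i in idx]

# Hypothetical corpus of 80,000 image paths.
corpus = [f"img_{i:05d}.jpg" for i in range(80_000)]
batches = list(batch_iterator(corpus, batch_size=16))
print(len(batches))  # 5000 batches of 16 images each
```

Each epoch reshuffles the order (a different `seed` per epoch), so the model never sees the images in the same sequence twice.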
Our next challenge was image noise. Implementing style transfer was not enough for us; we wanted the quality of the output images to be as high as possible. To achieve this, we applied total variation denoising. As a result, the images were not only a beautiful combination of the original content and the selected style but also contained very few artifacts (in some cases – none at all).
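Total variation measures how much an image changes between neighbouring pixels; penalizing it during optimization suppresses the high-frequency speckle that style transfer tends to produce. A minimal NumPy sketch of the (anisotropic) total variation term, shown on a toy checkerboard versus a flat image:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute differences
    between vertically and horizontally adjacent pixels."""
    dh = np.abs(img[1:, :] - img[:-1, :]).sum()   # vertical neighbours
    dw = np.abs(img[:, 1:] - img[:, :-1]).sum()   # horizontal neighbours
    return dh + dw

noisy = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])   # checkerboard: maximal pixel-to-pixel change
smooth = np.full((3, 3), 0.5)     # flat image: no change at all

print(total_variation(noisy), total_variation(smooth))  # noisy >> smooth
```

In practice the TV term is added to the style-transfer loss with a small weight, so the optimizer trades a little style fidelity for much smoother, artifact-free output.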
Through these manipulations, we arrived at a model that needs approximately 1.5 hours to learn a new image style. Once trained, it can apply that style to any image or video almost instantly.
Note that this model was created for research purposes, and we weren’t overly picky about it. We spent only as much time as necessary to achieve our goals and learn in the process; we didn’t optimize or polish anything. The performance of the entire pipeline can be further improved to reduce training time, achieve a higher FPS for video, and raise the overall style transfer quality.
We already have a few ideas on how to shorten the time needed to learn a new style to 10 minutes or less, ensuring minimal waiting time for users.
Style Transfer Applications: How It Can Boost Your Business
Neural Style Transfer has the potential to be used in any industry. Whether it’s fine arts, fashion, or architecture, this technique can automatically change the shapes, colors, and even the mood of your image or video.
Make Anything Art is a fun new application and a successful business case: it lets people transform their selfies or videos into surreal art-inspired creations mimicking the style of Paul Cézanne, Jackson Pollock, or Willem de Kooning. A few apps like that are already on the market, and we hope that in the near future we’ll be able to watch a movie styled after the works of Picasso or Paul Gauguin.
In addition to entertainment, Neural Style Transfer can significantly reduce manual human input in manufacturing. Pieces of furniture, accessories, and even cars can now be designed in a matter of minutes, as opposed to hours of arduous manual work. What is more, we know of a few cases where Neural Style Transfer was used in web design to transform a mobile app GUI.