Neural Style Transfer — Deep Learning

Stylizing images with neural networks using PyTorch

Tharun P
6 min read · Aug 23, 2021
Dog with Neural Style Transfer

Introduction

This tutorial explains how to implement the Neural-Style algorithm, which allows you to take an image and reproduce it with a new artistic style. The algorithm takes three images (an input image, a content image, and a style image) and changes the input to resemble the content of the content image and the artistic style of the style image. We will explore this method of style transfer using deep convolutional neural networks, with the pre-trained VGG19 network as our feature extractor.

The principle is simple: we define two distances, one for the content (D_C) and one for the style (D_S). D_C measures how different the content is between two images, while D_S measures how different the style is between two images. Then we take a third image, the input, and transform it to minimize both its content-distance from the content image and its style-distance from the style image. Now we can import the necessary packages (the full import block follows the list below) and begin the neural transfer.

  • torch, torch.nn, numpy (indispensable packages for neural networks with PyTorch)
  • torch.optim (efficient gradient descents)
  • PIL, PIL.Image, matplotlib.pyplot (load and display images)
  • torchvision.transforms (transform PIL images into tensors)
  • torchvision.models (train or load pre-trained models)
  • copy (to deep copy the models; system package)
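A complete import block consistent with the names used in the rest of this post (torch.nn.functional is also needed, for F.mse_loss, even though it isn't in the list above):

import copy

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

import torchvision.transforms as transforms
import torchvision.models as models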

Next, we need to choose which device to run the network on, and import the content and style images. Running the neural transfer algorithm on large images takes longer, and it will go much faster on a GPU. We can use torch.cuda.is_available() to detect whether a GPU is available. Next, we set the torch.device for use throughout the tutorial; the .to(device) method moves tensors or modules to the desired device.
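Setting this up is a one-liner:

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")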

Loading the Images

Now we will import the style and content images. The original PIL images have values between 0 and 255, but when transformed into torch tensors, their values are converted to be between 0 and 1. The images also need to be resized to have the same dimensions. An important detail to note is that neural networks from the torch library are trained with tensor values ranging from 0 to 1. If you try to feed the networks with 0 to 255 tensor images, then the activated feature maps will be unable to sense the intended content and style.

However, pre-trained networks from the Caffe library are trained with 0 to 255 tensor images.

# desired size of the output image
imsize = 512 if torch.cuda.is_available() else 128  # use small size if no GPU

loader = transforms.Compose([
    transforms.Resize(imsize),  # scale imported image
    transforms.ToTensor()])     # transform it into a torch tensor

def image_loader(image_name):
    image = Image.open(image_name)
    # fake batch dimension required to fit network's input dimensions
    image = loader(image).unsqueeze(0)
    return image.to(device, torch.float)

style_img = image_loader("./data/images/neural-style/picasso.jpg")
content_img = image_loader("./data/images/neural-style/dancing.jpg")

assert style_img.size() == content_img.size(), \
    "we need to import style and content images of the same size"

Now, let's create a function that displays an image by reconverting a copy of it to PIL format and displaying the copy using plt.imshow. We will display the content and style images to ensure they were imported correctly.
unloader = transforms.ToPILImage()  # reconvert into PIL image

plt.ion()

def imshow(tensor, title=None):
    image = tensor.cpu().clone()  # clone the tensor so we don't modify the original
    image = image.squeeze(0)      # remove the fake batch dimension
    image = unloader(image)
    plt.imshow(image)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated

plt.figure()
imshow(style_img, title='Style Image')

plt.figure()
imshow(content_img, title='Content Image')

Left (content), right (style)

Loss Functions

Content Loss

We will add this content loss module directly after the convolution layer(s) being used to compute the content distance. This way, each time the network is fed an input image, the content losses will be computed at the desired layers, and because of autograd, all the gradients will be computed. To make the content loss layer transparent, we define a forward method that computes the content loss and then returns the layer's input. The computed loss is saved as a parameter of the module.

class ContentLoss(nn.Module):
    def __init__(self, target):
        super(ContentLoss, self).__init__()
        # we 'detach' the target content from the tree used
        # to dynamically compute the gradient: this is a stated value,
        # not a variable. Otherwise the forward method of the criterion
        # will throw an error.
        self.target = target.detach()

    def forward(self, input):
        self.loss = F.mse_loss(input, self.target)
        return input

Style Loss

Now the style loss module looks almost exactly like the content loss module. The difference is that the style distance is computed between Gram matrices of the feature maps rather than between the feature maps themselves, so the target is gram_matrix(target_feature) instead of the raw features.
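Both the target and the forward pass below call a gram_matrix helper that never appears in this post. Here is the standard implementation: flatten each feature map into a row, multiply the resulting matrix by its transpose, and normalize by the number of elements.

def gram_matrix(input):
    # a = batch size (=1), b = number of feature maps,
    # (c, d) = dimensions of a feature map
    a, b, c, d = input.size()

    features = input.view(a * b, c * d)   # flatten each feature map into a row

    G = torch.mm(features, features.t())  # compute the gram product

    # normalize the values of the gram matrix
    # by dividing by the number of elements in each feature map
    return G.div(a * b * c * d)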

class StyleLoss(nn.Module):
    def __init__(self, target_feature):
        super(StyleLoss, self).__init__()
        self.target = gram_matrix(target_feature).detach()

    def forward(self, input):
        G = gram_matrix(input)
        self.loss = F.mse_loss(G, self.target)
        return input

Importing the Model

Now we need to import a pre-trained neural network. We will use a 19-layer VGG network (VGG19), like the one used in the paper.

The architecture of VGG19 Network

PyTorch’s implementation of VGG is a module divided into two child Sequential modules: features (containing convolution and pooling layers), and classifier (containing fully connected layers). We will use the features module because we need the output of the individual convolution layers to measure content and style loss. Some layers have different behaviour during training than evaluation, so we must set the network to evaluation mode using .eval().
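With that in mind, loading the network is a one-liner (note that recent torchvision releases replace pretrained=True with a weights argument, e.g. weights=models.VGG19_Weights.DEFAULT):

cnn = models.vgg19(pretrained=True).features.to(device).eval()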

Additionally, VGG networks are trained on images with each channel normalized by mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225]. We will use them to normalize the image before sending it into the network.

cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device)

# create a module to normalize input image so we can easily put it in a
# nn.Sequential
class Normalization(nn.Module):
    def __init__(self, mean, std):
        super(Normalization, self).__init__()
        # .view the mean and std to make them [C x 1 x 1] so that they can
        # directly work with image Tensor of shape [B x C x H x W].
        # B is batch size. C is number of channels. H is height and W is width.
        self.mean = torch.tensor(mean).view(-1, 1, 1)
        self.std = torch.tensor(std).view(-1, 1, 1)

    def forward(self, img):
        # normalize img
        return (img - self.mean) / self.std

A Sequential module contains an ordered list of child modules. For instance, vgg19.features contains a sequence (Conv2d, ReLU, MaxPool2d, Conv2d, ReLU…) aligned in the right order of depth. We need to add our content loss and style loss layers immediately after the convolution layers they are detecting. To do this, we must create a new Sequential module with the content loss and style loss modules correctly inserted, as sketched below.
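Here is a sketch of that construction, closely following the official PyTorch tutorial; the layer choices (conv_4 for content, conv_1 through conv_5 for style) are the tutorial's defaults, not the only option:

# desired depth layers to compute style/content losses:
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']

def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
                               style_img, content_img,
                               content_layers=content_layers_default,
                               style_layers=style_layers_default):
    normalization = Normalization(normalization_mean, normalization_std).to(device)

    content_losses = []
    style_losses = []

    model = nn.Sequential(normalization)

    i = 0  # increment every time we see a conv layer
    for layer in cnn.children():
        if isinstance(layer, nn.Conv2d):
            i += 1
            name = 'conv_{}'.format(i)
        elif isinstance(layer, nn.ReLU):
            name = 'relu_{}'.format(i)
            # in-place ReLU doesn't play nicely with the loss modules we insert
            layer = nn.ReLU(inplace=False)
        elif isinstance(layer, nn.MaxPool2d):
            name = 'pool_{}'.format(i)
        elif isinstance(layer, nn.BatchNorm2d):
            name = 'bn_{}'.format(i)
        else:
            raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))

        model.add_module(name, layer)

        if name in content_layers:
            # add content loss right after this conv layer
            target = model(content_img).detach()
            content_loss = ContentLoss(target)
            model.add_module('content_loss_{}'.format(i), content_loss)
            content_losses.append(content_loss)

        if name in style_layers:
            # add style loss right after this conv layer
            target_feature = model(style_img).detach()
            style_loss = StyleLoss(target_feature)
            model.add_module('style_loss_{}'.format(i), style_loss)
            style_losses.append(style_loss)

    # trim off the layers after the last content and style losses
    for i in range(len(model) - 1, -1, -1):
        if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss):
            break
    model = model[:(i + 1)]

    return model, style_losses, content_losses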

Next, we select the input image. You can use a copy of the content image or white noise.

Input Image

Input image
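Starting from a copy of the content image:

input_img = content_img.clone()
# if you want to use white noise instead, uncomment the line below:
# input_img = torch.randn(content_img.data.size(), device=device)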

Finally, we must define a function that performs the neural transfer. For each iteration, the network is fed an updated input and computes new losses. We will run the backward methods of each loss module to dynamically compute their gradients. The optimizer requires a “closure” function, which re-evaluates the module and returns the loss.
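The official tutorial uses L-BFGS here, with the pixels of the input image as the parameters being optimized:

def get_input_optimizer(input_img):
    # this line tells the optimizer that input_img requires a gradient
    optimizer = optim.LBFGS([input_img.requires_grad_()])
    return optimizer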

We still have one final constraint to address: the network may try to optimize the input with values that exceed the 0-to-1 tensor range of the image. We can address this by clamping the input values to be between 0 and 1 each time the network is run, as shown in the sketch below.
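Putting it all together, here is a run_style_transfer sketch in the spirit of the official tutorial (style_weight=1000000 and content_weight=1 are the tutorial's defaults); note the clamp inside the closure, which enforces the 0-to-1 constraint on every step:

def run_style_transfer(cnn, normalization_mean, normalization_std,
                       content_img, style_img, input_img, num_steps=300,
                       style_weight=1000000, content_weight=1):
    """Run the style transfer."""
    print('Building the style transfer model..')
    model, style_losses, content_losses = get_style_model_and_losses(
        cnn, normalization_mean, normalization_std, style_img, content_img)
    optimizer = get_input_optimizer(input_img)

    print('Optimizing..')
    run = [0]
    while run[0] <= num_steps:

        def closure():
            # correct the values of the updated input image to stay in [0, 1]
            input_img.data.clamp_(0, 1)

            optimizer.zero_grad()
            model(input_img)

            style_score = 0
            content_score = 0
            for sl in style_losses:
                style_score += sl.loss
            for cl in content_losses:
                content_score += cl.loss

            style_score *= style_weight
            content_score *= content_weight

            loss = style_score + content_score
            loss.backward()

            run[0] += 1
            if run[0] % 50 == 0:
                print("run [{}]:".format(run[0]))
                print('Style Loss : {:4f} Content Loss: {:4f}'.format(
                    style_score.item(), content_score.item()))

            return loss

        optimizer.step(closure)

    # a last clamp to make sure the output stays in range
    input_img.data.clamp_(0, 1)

    return input_img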

Finally, we can run the algorithm.
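With all of the pieces above defined, the final call is:

output = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std,
                            content_img, style_img, input_img)

plt.figure()
imshow(output, title='Output Image')

plt.ioff()
plt.show()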

Output Image


Result:

Building the style transfer model..
Optimizing..
run [50]:
Style Loss : 4.169304 Content Loss: 4.235330
run [100]:
Style Loss : 1.145476 Content Loss: 3.039176
run [150]:
Style Loss : 0.716769 Content Loss: 2.663749
run [200]:
Style Loss : 0.476047 Content Loss: 2.500893
run [250]:
Style Loss : 0.347092 Content Loss: 2.410895
run [300]:
Style Loss : 0.263698 Content Loss: 2.358449

The images above were generated with sample images; I used a photo of Elon Musk as the content image for this experiment!

Bibliography:

Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge, “A Neural Algorithm of Artistic Style”, arXiv:1508.06576 (2015).

Feedback :)

If you found this article interesting, give it some claps, and please feel free to take my code and improve it. Share this story with your contacts, and leave your feedback below :)
