Introduction to Autoencoders

In this series of tutorials, I explain generative models in deep learning.

Part 1

Autoencoders are an interesting subject in computer vision. The topic is very popular in academia, and autoencoders are mostly used as a gateway to generative models. A nice property is that they can be visualized quite well, so we can get a good understanding of what is happening behind the scenes. In this tutorial we first look at simple autoencoders and try to visualize their layers; then we can have a look at variational autoencoders.

Autoencoders are unsupervised, or self-supervised, models in deep learning (auto means self in Greek), because we use the data itself as the label to train the model. If you have studied machine learning, they work similarly to the PCA algorithm. We can use them to denoise or compress data, and early segmentation models had an architecture similar to autoencoders.

One downside of autoencoders is that they can't work as a stand-alone compression method: we need training data to train the model first, and only then can we compress data using the trained model.

Autoencoders consist of two parts: the encoder encodes the input and outputs the essence of the data (the code), and the decoder tries to use this code to reconstruct the original image.

So, how can we build an autoencoder model?

In our typical neural network models for classification and other tasks, we try not to create an information bottleneck. What is an information bottleneck? It is a layer that is smaller than the original input, which means that some information about the input will be lost; this is where compression happens.

There is an important relationship between information and entropy, which I may explain in another post.

Let's look at the simplest version of an autoencoder.

First, we import the packages:
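The original code listing is not shown, so here is a minimal sketch of the imports, assuming a TensorFlow/Keras setup (which matches the Dense layers, Adam optimizer, and mse loss used below):

```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, Model
```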

We use the MNIST dataset for this autoencoder. We don't need labels, so we discard them. The images have values between 0 and 255, so we divide them by 255.0 to convert them to floats between 0 and 1.
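A sketch of the loading and normalization step, using Keras's built-in MNIST loader:

```python
import tensorflow as tf

# Load MNIST; the labels are discarded (underscore) since the
# autoencoder is self-supervised and uses the images as their own targets.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()

# Scale pixel values from integers in [0, 255] to floats in [0, 1].
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
```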

Now it's time to define our model.

Our model will be a stack of dense layers, so we flatten the input data before passing it to them. Our autoencoder has 4 trainable layers with 128, 64, 256, and 784 nodes (28 * 28 * 1 = 784). We then reshape the output to compare it with the original image, and train with the Adam optimizer and mse loss. (Note that we don't use the same variable for all the layers: we use x, y, and z so we can separate the encoder and decoder later on.)
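Since the original listing is not shown, here is one way to write this model with the Keras functional API. The layer sizes and the x/y/z variable names follow the description above; the activations (relu for hidden layers, sigmoid for the 784-node output) and the training hyperparameters in the commented line are my assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(28, 28))
flat = layers.Flatten()(inputs)                 # 28*28 -> 784 values

# Encoder: 128 -> 64; the 64-value output y is the code.
x = layers.Dense(128, activation="relu")(flat)
y = layers.Dense(64, activation="relu")(x)

# Decoder: 256 -> 784, then reshape back to 28x28 to compare with the input.
z = layers.Dense(256, activation="relu")(y)
z = layers.Dense(784, activation="sigmoid")(z)  # 28*28*1 = 784
outputs = layers.Reshape((28, 28))(z)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Because y was kept as a separate variable, the encoder can be split out:
encoder = Model(inputs, y)                      # images -> 64-value codes

# Training uses the images as their own targets (hyperparameters assumed):
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
#                 validation_data=(x_test, x_test))
```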

The first two dense layers form our encoder, and their output y is the code. After that comes the decoder, which tries to reconstruct the original image from those 64 values.

Let's see how the model performs on the test data:
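The plotting code is not shown in the original, so here is a self-contained sketch that rebuilds and briefly trains the model from the previous step (one epoch only, just so the example runs end to end; the original likely trained longer), then plots originals above reconstructions:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
import matplotlib.pyplot as plt

# Data: discard labels, scale to [0, 1].
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Compact rebuild of the autoencoder described above.
inputs = layers.Input(shape=(28, 28))
h = layers.Flatten()(inputs)
h = layers.Dense(128, activation="relu")(h)
y = layers.Dense(64, activation="relu")(h)
z = layers.Dense(256, activation="relu")(y)
z = layers.Dense(784, activation="sigmoid")(z)
outputs = layers.Reshape((28, 28))(z)
autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=1, batch_size=256, verbose=0)

# Reconstruct the first 10 test images.
decoded = autoencoder.predict(x_test[:10], verbose=0)

# Top row: originals; bottom row: reconstructions.
fig, axes = plt.subplots(2, 10, figsize=(20, 4))
for i in range(10):
    axes[0, i].imshow(x_test[i], cmap="gray")
    axes[0, i].axis("off")
    axes[1, i].imshow(decoded[i], cmap="gray")
    axes[1, i].axis("off")
plt.show()
```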

The top images are the originals, and the bottom ones are the reconstructions. As you can see, the reconstructions are quite reasonable for 64/(28*28) ≈ 8% compression.

We could also use convolutional layers for feature extraction, which wouldn't change this autoencoder much. We will use convolutional layers for variational autoencoders, which we look at next.
