AI Neural Network Implemented In Blender

I just finished this neural network visualization.
It is driven by a real neural network that is actually predicting handwritten digits (from the famous MNIST dataset). The handwritten digits are the inputs, and the number behind the hidden layers (the neurons in the center) is the predicted output; in other words, it is what the neural network thinks the input digit is.
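For reference, the idea in code is roughly this (a minimal Keras sketch, not my exact architecture):

```python
# Minimal sketch of an MNIST classifier like the one driving the render.
# Assumption: Keras and this tiny architecture; the real model may differ.
import numpy as np
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(784,)),  # hidden layer (the neurons in the center)
    keras.layers.Dense(10, activation="softmax"),                   # one output per digit 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128)

# The predicted digit is the arg-max of the output layer.
digit = int(np.argmax(model.predict(x_test[:1])))
print("predicted:", digit)
```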

It is possible to change the network dynamically (the number of neurons) straight from Animation Nodes. I haven't built the light animation that fires when a prediction is made. Yet.
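As a rough idea of what that means in plain bpy (a hypothetical sketch, not my actual Animation Nodes setup):

```python
# Hypothetical bpy sketch: one small sphere per neuron, one column per layer.
# Changing layer_sizes re-drives the whole layout, which is what the
# Animation Nodes tree does dynamically.
import bpy

layer_sizes = [784, 16, 16, 10]  # editable; drives the layout

for layer_idx, size in enumerate(layer_sizes):
    shown = min(size, 20)  # cap huge layers so the scene stays manageable
    for neuron_idx in range(shown):
        bpy.ops.mesh.primitive_uv_sphere_add(
            radius=0.1,
            location=(layer_idx * 2.0, 0.0, neuron_idx * 0.5),
        )
```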


This link shows another approach I used.


No replies yet? I am not sure most of us here even realize how awesome this project is. I am just in love with your render.

This reminds me of OpenAI's Microscope (https://openai.com/blog/microscope/), where they visualize the internal representations of neural nets. These "internal representations" kinda look like normal-map textures. It basically shows us how the model interprets what it is "understanding"…

AI is both exciting and scary! I wonder what applications there could be for a visualization like the one you made. I mean, as you say, this bad boy is actually making predictions. So, for example, we could in fact visualize the individual weights of these neurons, etc., perhaps along the lines of the sketch below…
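For example (a hypothetical Keras sketch, just to make the idea concrete; the author's real model may differ), per-neuron weights are just arrays you can read out and map to colors or emission strength:

```python
# Sketch: read a hidden layer's weights and normalize them to [0, 1]
# so each value could drive a material property (color, emission, scale...).
import numpy as np
from tensorflow import keras

# Stand-in model (assumption); in practice this would be the trained network.
model = keras.Sequential([keras.layers.Dense(16, input_shape=(784,))])

weights, biases = model.layers[0].get_weights()  # weights shape: (784, 16)
normalized = (weights - weights.min()) / (weights.max() - weights.min())
print(normalized.shape)  # one value in [0, 1] per input-to-neuron connection
```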

Maybe when a few models are pipelined together to solve complex problems, such as animating a still photo of a human face, visualizations like yours could help us humans better understand these complex systems. Isn't it crazy? Over the last decade neural nets have basically dominated most of the problems they've been applied to, and we still have no idea how they work internally! :smiley:


I’m glad you like it.
Regarding this project, it is very complicated to get an organic view when you are working with millions, or even billions, of parameters.
Take a look at this example I did.

Here I visualized a simple convolutional neural network (CNN): with only four filters, the generated feature maps already require lots of polygons. And a simple CNN usually has 32 filters, across multiple layers. What I'm trying to say is that visualizing an entire production CNN is computationally very demanding, but visualizing just part of it can still show what's going on inside the network. It is really fascinating to watch how the filters, and as a consequence the feature maps, change autonomously during training. I mean, we are talking about something that looks alive, yet it's just numbers changing image after image. Really fascinating.
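To make that concrete, here is a minimal sketch of how a feature map can be tapped from a conv layer (assuming Keras; my real code is in the GitHub project below):

```python
# Sketch: extract the feature maps of a small CNN's first conv layer.
# Assumption: Keras and a toy architecture, not my production model.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(4, 3, activation="relu",
                        input_shape=(28, 28, 1)),  # 4 filters, as in the render
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])

# A second model that stops at the conv layer exposes its activations.
tap = keras.Model(inputs=model.inputs, outputs=model.layers[0].output)
image = np.random.rand(1, 28, 28, 1).astype("float32")  # stand-in digit
feature_maps = tap.predict(image)
print(feature_maps.shape)  # (1, 26, 26, 4): one 26x26 map per filter
```

Every one of those 26 × 26 × 4 activation values becomes geometry in the render, which is why even four filters already demand lots of polygons.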
If you want to take a look at the code, I created a GitHub project.

Thank you.
