Neural nets + Blender?

https://affinelayer.com/pixsrv/

I came across this project, which uses Google's TensorFlow, and wondered what neural nets could do for Blender.

Couldn't this be very useful for procedural texture generation? The user draws some rough shapes, and building facades, wood textures, machinery, … come out as output?

And I don't know what method Lukas Stockner's denoiser uses, but couldn't neural nets also be very good at denoising (if they get access to all the important render layers)?

Those methods require a tremendous amount of training data. The more flexible you want it to be, the more data you need.
The central question is: who should do it? Do you want the developers to spend a huge amount of time researching where and how it could be used?

The denoiser isn’t using a neural network.

It seems that these days a fascinating new result comes out of machine learning research every other day; be sure to follow Two Minute Papers so you don't miss out on some of that stuff.

As for practical applications: It really depends. Just because a result is fascinating doesn’t mean it’s all that useful. Yes, you can generate textures of building facades from a sketch, but then you need a large amount of data on building facades and an equally large amount of such sketches (though I believe in this case sketches were derived from the photos). The results aren’t that good either - at least not consistently and predictably so.

In the case of Blender, at least in theory such data could be collected through crowdworking.

Well, that's not a full answer. Yes, it does cost a tremendous amount of training.
However, the training only has to happen once (and it doesn't have to be on your machine).
A neural network can load its settings from a file and, with further training, apply that model to similar problems. Applying goes really fast and doesn't take a lot of computing time; only the training part is expensive.
Talking of Google, they made TensorFlow especially for this purpose (training once and then solving other people's problems), so that trained models can run as lighter networks on the CPU (hence why they want a neural net chip in their new phones).
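As a toy sketch of that train-once / apply-cheaply split (plain NumPy instead of TensorFlow, and a trivial linear model standing in for a real network; all names and numbers here are made up for illustration):

```python
import os
import tempfile
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w                          # noiseless toy data

# "Training" (the expensive part, done once, possibly on another machine)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
path = os.path.join(tempfile.gettempdir(), "model_weights.npy")
np.save(path, w)                        # ship this file to users

# "Applying" (the cheap part): load the weights and predict instantly
w_loaded = np.load(path)
prediction = X[:5] @ w_loaded           # a single matrix multiply
```

The point is only the shape of the workflow: the slow fitting step happens once, and what gets distributed is just a small weights file that anyone can load and apply quickly.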

Well, the AI race will move faster than we can imagine; expect a lot more of this in the years to come.

I know of a user on the forum here who is working on a denoising network.
And he already seems to have it working for grayscale images.
I contacted him when he asked for image input on the forum here.

Google now has a GAN writing AI itself.

Crazy times.

No, I don't expect anything. I was just looking forward to hearing some opinions.

Maybe I will do some experiments myself, but I don't know yet how easy TensorFlow is to use from Python. The papers about neural nets always go way over my head mathematically. :slight_smile:
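For what it's worth, the core maths can fit in a few lines. Here is a hypothetical single "neuron" trained by gradient descent in plain NumPy; this is essentially the loop that TensorFlow automates for you, just at a much larger scale:

```python
import numpy as np

# Learn y = 2x + 1 from samples with a single linear "neuron".
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    err = pred - y                  # prediction error
    w -= lr * np.mean(err * x)      # gradient of the squared error w.r.t. w
    b -= lr * np.mean(err)          # ... and w.r.t. b

print(round(w, 2), round(b, 2))     # close to 2.0 and 1.0
```

A real network is many of these stacked with nonlinearities in between, and the gradients are computed automatically, but the "nudge the weights against the error gradient" idea is the same.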

Yes, the facades that come out of the example aren't always that great. Maybe they could be used for background objects.

But somehow I see the potential. Imagine something simpler, like just a brick texture where you can paint in broken or dirty areas or easily control the joint pattern with some lines, or a wood texture where you have control over the grain, the knotholes, and the gaps between the boards.

You are heavily oversimplifying this, in my opinion. Let's have a look at a neural network for denoising. I have seen the forum post and I am really curious to see the first results! However, this is very new, and it is unknown what it takes to make it production-ready.
The straightforward approach is to have the noisy image as input and expect the denoised one as output. With enough training data, that might work. But since it is more likely that this amount of data cannot be gathered, it might be useful to have additional information as input to help the network. This might be as simple as a viewport snapshot (maybe through Eevee). It could also be individual images from the rendering process, or internals. Depending on the choice, we are talking about different neural networks that need to be retrained from scratch, as far as I can see.
And we haven't even talked about the input image size yet. How can we make sure it just works? There are techniques which allow different input sizes, but they usually require more training data, which automatically leads to longer training time.
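A minimal sketch of what "additional information as input" could mean in practice, assuming the render passes are available as NumPy arrays (the pass names and shapes here are hypothetical): the auxiliary passes are simply stacked with the noisy image as extra channels before being fed to the network.

```python
import numpy as np

H, W = 64, 64
noisy_rgb = np.random.rand(H, W, 3)   # noisy beauty pass
normals   = np.random.rand(H, W, 3)   # shading-normal pass
albedo    = np.random.rand(H, W, 3)   # diffuse-colour pass
depth     = np.random.rand(H, W, 1)   # z-depth pass

# The network input is just the channel-wise concatenation of the passes,
# so the net can correlate noise with geometry and material information.
features = np.concatenate([noisy_rgb, normals, albedo, depth], axis=-1)
print(features.shape)                 # (64, 64, 10)
```

Note that changing which passes go into this stack changes the input layer's channel count, which is one concrete reason a different choice of inputs means retraining from scratch.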

Talking in this context about what Google is doing doesn't really help. They have a huge number of people working on those topics, and they have the data and the computational power.

I think such a network is not restricted by the image size (as he posted); most likely an image-processing kernel would be used, i.e. collect a small area of n by n pixels and denoise an area of (n-p) by (n-p) pixels. Training time is not really the issue for neural networks these days. And denoising should not be that hard compared to speech recognition, object recognition, stock trading, car driving, or game playing (relatively simple nets can play Mario Bros., but chess or the game of Go are complex).
By the way, Google is offering computing time to open-source projects, if I remember correctly.
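A rough sketch of that kernel idea in plain NumPy, with a trivial crop standing in for the trained network (the function names and the n, p values are made up): because the net only ever sees n by n patches, the full image size doesn't matter, and the denoised tiles are simply stitched back together.

```python
import numpy as np

def denoise_tiled(img, net, n=8, p=4):
    """Run an (n x n) -> (n-p x n-p) patch 'network' over a grayscale image.
    For brevity, H and W are assumed divisible by the tile size n - p."""
    s = n - p                         # output tile size
    pad = p // 2                      # context pixels on each side
    padded = np.pad(img, pad, mode="reflect")
    H, W = img.shape
    out = np.empty_like(img)
    for i in range(0, H, s):
        for j in range(0, W, s):
            patch = padded[i:i + n, j:j + n]   # n x n input with context
            out[i:i + s, j:j + s] = net(patch) # (n-p) x (n-p) output tile
    return out

def identity_net(patch, n=8, p=4):
    """Stand-in for a trained net: crop the patch centre (an 'identity
    denoiser'), so the reassembled image equals the input exactly."""
    pad = p // 2
    return patch[pad:n - pad, pad:n - pad]

img = np.arange(16 * 16, dtype=float).reshape(16, 16)
result = denoise_tiled(img, identity_net)
```

In a real setup, `identity_net` would be replaced by the trained model's prediction function, and the p border pixels give it surrounding context without ever tying it to one image resolution.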