There’s no doubt AI will play an ever larger role in the design pipeline, opening up endless possibilities where the only limit is your imagination.
Thankfully there are plenty of resources online to learn and experiment with these cutting-edge tools, and for a long time I entertained the idea of integrating some kind of neural network into Blender, taking advantage of Python’s strong presence in the deep learning field.
I finally took up the challenge and picked Nvidia’s StyleGAN2-ada network to try to generate textures from pretrained models. After a couple of days of struggling with it, I got it running inside Blender.
These are some tests of what can be achieved by plugging the generated textures into both the colour and the displacement inputs of the shader.
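To give an idea of the glue code involved, here is a minimal sketch of converting a generated image into the pixel format Blender expects. The value range and layout of the generator output are assumptions based on StyleGAN2-ada’s public repo (images in [-1, 1], height × width × RGB); the actual Blender assignment is left as comments since it only runs inside Blender.

```python
import numpy as np

def gan_output_to_blender_pixels(img):
    """Convert a StyleGAN2-style output image (H, W, 3) with values in
    [-1, 1] into the flat RGBA float list that bpy.types.Image.pixels
    expects (row-major, bottom row first, values in [0, 1])."""
    rgb = (np.clip(img, -1.0, 1.0) + 1.0) / 2.0           # [-1, 1] -> [0, 1]
    alpha = np.ones(rgb.shape[:2] + (1,), dtype=rgb.dtype)
    rgba = np.concatenate([rgb, alpha], axis=-1)          # add opaque alpha
    rgba = rgba[::-1]                                     # Blender stores rows bottom-to-top
    return rgba.ravel().tolist()

# Inside Blender the result could then be written into a new image and
# wired to the shader (untested sketch, names are illustrative):
#   image = bpy.data.images.new("gan_texture", width=W, height=H)
#   image.pixels = gan_output_to_blender_pixels(fake_img)
# The image can then feed an Image Texture node connected to the
# Base Color input and, via a Displacement node, to the material output.
```

The same flattened pixel list works for any map you generate, so the conversion step stays identical whether the texture drives colour or displacement.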
Of course there’s plenty to improve, and I’ll keep working on it in my spare time. In the future it would be great to train models on a dataset of seamless textures. We could even have models trained to generate diffuse maps along with normal, roughness and other maps!
Feel free to collaborate!