I apologize if the topic has been talked to death already, but what's the general consensus on using AI to generate textures? I'm asking because, for now, I want to use it as a stopgap while I develop my drawing abilities.
I would like to use it, but at the same time I don't feel 100% on board, knowing some eagle-eyed Ricky R. is going to bash the whole thing because of it. On the other hand, documenting the progression from AI to self-developed, hand-cranked textures seems like it'd make a good skill-development portfolio/video.
It really depends on what your goal is. If your goal is to create a portfolio that will get you hired, I would not recommend doing this. Generally, hiring managers are looking for your abilities, and a reliance on external texturing (AI generated or not) isn’t a great look for that.
On the other hand, if you just want to have fun and make art, then I say you should use whatever tools you're comfortable with to make things easier.
I don't have a lot of experience with AI, but the last time I tried, it was very difficult to get a good result when you want something really precise.
But maybe someone with a better feel for AI can manage something better than I did.
It's basically a tool: if it can help you work faster and more simply, then it's great, but most of the time AI tools are only helpful in the hands of someone experienced.
I can use ChatGPT to produce Python code for me, or to help me figure out some algorithm, but then I'll need some Python skills anyway to clean that up and produce something useful.
Same with AI-generated images: if it's your only way to produce images, it won't be enough to fit into a professional environment.
Then, as Joseph said, it really depends on your end goal. If you want to build a professional portfolio, avoid all this fancy stuff and concentrate on learning and showcasing fundamental skills.
A lot of beginners skip the basics, since they seem boring at first, and go for shinier, fancier advanced stuff, but it's generally the lack of fundamentals that betrays a beginner's work, and nothing else can save that.
But if your goal is to have fun, then do whatever you like. Just because something doesn't fit a professional workflow doesn't mean it's bad or uninteresting; quite the opposite, I think!
There are companies, though (like Adobe), working on a middle ground: tools that are controlled by the user, but with AI assistance.
Adobe, for instance, now has demos where you can drag and scale objects while AI fills in the gaps that result from the operation, and only the gaps (though AI as a tool inside an existing program is currently overshadowed by the visually stunning demos of AI doing all the work for you).
People in the art community are generally more okay with "AI as tool" than they are with "AI output as art". Don't use AI (or pre-made assets) for parts of the workflow you plan to market as skills; beyond that, most people probably won't complain if your immaculately modeled building has a wall with texture components derived from AI.
The biggest rule of using AI for art is: don't lie about it. Artstation's trash-heap of so-called "3D renders" that are clearly just barely touched-up 2D AI output with lies attached is a source of ridicule and frustration, in part because it's such a bold-faced lie, and so egregious it just feels scammy. So if you use AI for the textures, don't say you made them yourself from scratch in Substance or Photoshop or whatever.
Selling mass produced AI generated texture assets will similarly provoke a negative reaction. It’s lazy and drowns out the real texture artists producing quality work in the marketplace. That’s not what you were talking about doing, but it’s the other way AI could be used and provoke backlash.
There may be some intellectual-property issues to worry about down the road, depending on how things shake out in the legal world, particularly if you plan to use your work commercially. But as a layperson who is definitely not a lawyer and is absolutely not giving you legal advice: using AI to generate a texture that is merely a component of a greater piece seems likely to count as sufficiently transformative even then.
There are so many high-res free PBR textures out there that I don't see the point of AI textures. The generated ones I tried all had little flaws, or tiled weirdly.
I would recommend building your own library of materials you like: searchable, with good previews, and asset-manager-ready. For me that works better than searching for a specific one when you're on a deadline. You can make a mix of AI and downloaded PBRs if you find they can compete.
One thing AI materials are better at: mixes. A dry grass patch with yellow flowers, say. For that I would need a mix node, two PBRs, and a greyscale fac image.
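That mix-node setup boils down to a simple per-pixel lerp between the two materials, driven by the greyscale fac image. A minimal sketch in Python/NumPy (the colour values and map sizes here are made up purely for illustration):

```python
import numpy as np

def mix_textures(tex_a, tex_b, fac):
    """Blend two (H, W, 3) texture arrays with a (H, W) greyscale factor map,
    like a Mix node: fac=0 gives tex_a, fac=1 gives tex_b."""
    fac = fac[..., np.newaxis]  # broadcast the mask over the RGB channels
    return (1.0 - fac) * tex_a + fac * tex_b

# Tiny 2x2 demo: hypothetical dry-grass and yellow-flower albedos
grass   = np.full((2, 2, 3), [0.55, 0.45, 0.20])
flowers = np.full((2, 2, 3), [0.90, 0.80, 0.10])
mask    = np.array([[0.0, 1.0],
                    [0.5, 0.0]])  # the greyscale fac image

blended = mix_textures(grass, flowers, mask)
print(blended[0, 1])  # fully the flower colour
```

The same math applies per channel to roughness, normal, or any other map in the PBR set.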
It would seem so… but if you look at the Unity Asset Store or Unreal Marketplace you'll find tons of them, some with reviews (which means someone did buy them). It's strange, really… Maybe the general community cares less than the vocal anti-AI crowd does?
Aside from the uncertain parts, one thing seems certain right now: the output of an AI system isn't protected by copyright at all (since that requires human authorship). I'm not a lawyer either, but just yesterday I watched a recent talk about it from lawyers (in the EU) specializing in IP in gamedev.
I used some at the beginning of the hype cycle (I even made some YT tutorials on how to convert those to PBR and use them with Blender and Unity). A few points where I can see them making sense:
- the effort to get something is lower (especially with integrated tools, like the Dream Textures addon)
- you need weird or abstract textures
- project-from-view for quick renders or background stuff
- you want unique textures
- if you are good at prompting, you can get quite nice, consistent results across the textures you generate
I installed Dream Textures and generated a photorealistic cracked old yellow brick wall with vines. Wow.
No PBR though; everything is baked, and shadows are often included in the albedo. Limited to 2k, and it takes forever when bigger than 512x512. Very much trial and lots of error: unpredictable, with many visible mistakes. Good for background items, I guess.
Yea, you've got to take it into Materialize or Substance Sampler to get to PBR (in the tutorial above I show a workflow with Materialize). I recall the author of DT was thinking about adding some of that to the addon.
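The core of that albedo-to-PBR step can be approximated very naively: estimate a height map from luminance, then derive a tangent-space normal map from the height gradients. A rough NumPy sketch of the idea (this is not what Materialize or Sampler actually do internally, just the basic principle):

```python
import numpy as np

def height_from_albedo(albedo):
    """Crude height estimate: luminance of the (H, W, 3) albedo (Rec.709 weights)."""
    return albedo @ np.array([0.2126, 0.7152, 0.0722])

def normal_from_height(height, strength=1.0):
    """Tangent-space normal map from a height map via central differences
    (np.roll wraps at the edges, which suits tileable textures)."""
    dx = (np.roll(height, -1, axis=1) - np.roll(height, 1, axis=1)) * strength
    dy = (np.roll(height, -1, axis=0) - np.roll(height, 1, axis=0)) * strength
    nz = np.ones_like(height)
    n = np.stack([-dx, -dy, nz], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return n * 0.5 + 0.5  # pack from [-1, 1] into [0, 1] for saving as an image

# Demo on random stand-in "albedo" data
albedo = np.random.default_rng(0).random((8, 8, 3))
normal = normal_from_height(height_from_albedo(albedo))
```

A perfectly flat height map yields the uniform "flat" normal colour (0.5, 0.5, 1.0), which is a handy sanity check. The luminance trick is also why baked-in shadows are a problem: they end up as fake depth in the normal map.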
I am not sure if Dream Textures supports that, but it's not hard to upscale. I used DT as an example of an 'easy' workflow in my tutorial, but TBH I was using the A1111 web UI most of the time (which has a couple of upscalers).
(Oh, I just checked the repo: DT does support upscaling now.)
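For what it's worth, the resizing step itself is trivial; what the dedicated upscalers in those tools add is plausible hallucinated detail. A nearest-neighbour sketch in NumPy, for illustration only:

```python
import numpy as np

def upscale_nearest(tex, factor):
    """Nearest-neighbour upscale of an (H, W, C) texture array.
    ML upscalers (as bundled in the A1111 web UI) invent new detail instead;
    this just duplicates pixels."""
    return tex.repeat(factor, axis=0).repeat(factor, axis=1)

small = np.arange(12, dtype=float).reshape(2, 2, 3)
big = upscale_nearest(small, 2)  # (2, 2, 3) -> (4, 4, 3)
```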
It's one of the big limitations. With practice you get a feel for how to prompt it to achieve more or less what you want, but you'll never get as much control as with, say, Designer.
(Though maybe I shouldn't say 'never': with ControlNet and some new advances in the tooling, one may be able to control it better.)
Yea, or for prototyping and quick renders.
I think with effort you could make it work for foreground stuff, but I'm not sure it would be a faster or more efficient workflow compared to using Designer/Painter. But I'm also thinking this will get better and better with time, so I'm trying to keep track of what happens in this space.
Yes! These tools are definitely worth learning, and probably less controversial, since they only save some time without automating creative decisions.
On the other hand, some image or texture generators let you create something from a prompt, like a "cracked concrete wall texture with moss and dirt". These tools are definitely useful too, but it turns out (at least for me) that they never provide something relevant in a real-world scenario.
That's why some regular skills are probably always going to be needed alongside the latest technologies.