Background Image Replacement

The “draw type” analogy occurred to me the moment I saw this proposal… so it doesn’t seem like an “abuse” to me…

I see Matt’s point, but am excited by the possibilities of using this as a clone source…

When in texture paint mode doing it Matt’s way would presumably allow you to stay in that mode and show UI widgets to manipulate the image plane… whereas doing it the “hacked” way would mean going to object mode and back for positioning… not to mention changing objects…

so long term I’d hope for widgets…

Lots of good and useful functionality seems to get deferred in the name of doing things properly and not cluttering the code, though the “proper way” often never seems to get done.

From a user perspective this is quite bad.
It seems to me that with the new “addon” system, stuff that isn’t acceptable long term but is extremely useful to users could be a good way to proceed in cases like this…

I’m using the term addon in what I took to be the original intention of adding Python and C plugins, though it’s pretty much Python-only so far… but I guess that’d need some kind of API and would need the plugins compiled for all platforms…

I think you’re more excited by the prospect of using ‘anything’ as a clone source, not necessarily getting locked into doing it this way.

Lots of good and useful functionality seems to get deferred in the name of doing things properly and not cluttering the code, though the “proper way” often never seems to get done.

From a user perspective this is quite bad.

Speaking in general terms here, not specific to this proposal, this is more of a problem with the open source development method - it can be too open sometimes :slight_smile: Users get to see all the little experiments and ideas and first attempts from inexperienced coders that you’d never end up seeing in a closed source model, and can be easily tempted by the idea of that functionality, but are often shielded from the practical implications (at least directly). Almost a bit like wanting candy in the shop front when a more wholesome baked dinner is better in the long term :wink:

There are a few issues - firstly, functionality needs a good design and framework for it to fit into, workflow-wise. Once you start adding nifty, easier-to-code tricks rather than planning and doing it the ‘right way’, it can get you into problems in the future when it needs to be extended. If more functionality needs to be added, you’re stuck without elegant ways to do that if you continue piling extraneous things onto a foundation that’s not really meant to deal with them. I mean this in a workflow sense as well as a code sense.

Taking the example of the grease pencil convert-to-curves button - the general idea of the functionality is good (drawing 3D curves with a tablet), and it’s the sort of thing that excites people, but the implementation of using grease pencil for it causes serious workflow limitations. What if you want a nice way to edit those strokes, like continuing a stroke where it left off, or editing or smoothing the points, or any of the other things you need to do in curve edit mode? It becomes a lot more clunky, switching back and forth between modes from disparate areas of the application (sound familiar? :slight_smile: ). The ‘right way’ IMO would be to implement freehand drawing tools within curve edit mode, which have plenty of room to grow within the established curve editing workflow.

And then you have the problem that if something’s been added already but you later want to get around to implementing it the right way, either you have to remove the old way (which people get annoyed about, both for the re-learning and for not being able to replicate obscure behaviour from the old way) or you get stuck with multiple different methods for doing the same thing, which is confusing to use and learn, and also doubles the maintenance cost. Remember when Brecht was proposing removing old features? This is exactly why: the more complexity that is added, the more effort it takes to maintain, finding obscure bugs etc. rather than working on genuinely cool stuff. Brecht seemed frustrated by this, and I was too, when I thought I’d be spending my time making good progress on developing good improvements for 2.5 but ended up having to fix bugs for several months.

Blender 2.5 has been stuck in ‘stabilisation mode’ for about a year now (I started my fulltime stint on 2.5 work last November), and a lot of this huge amount of effort has been spent on tracking down bugs. The more complexity there is, the more of this there is, and the less ‘right’ the code is, the worse things get too. Of course users are shielded from a lot of this, making the rationale hard to understand, especially when the candy’s sitting right there on the shelf (the patch tracker) out of reach. Perhaps there’d be more time to spend on implementing these features the ‘right way’ if there wasn’t so much icky code to fix in the first place :wink:

Anyway, I’m not trying to suggest that adding Cam’s proposal will be the end of the world or cause these sorts of problems, but I am trying to express that there are indeed serious costs to not doing things the ‘right way’, and that the bigger and more complex Blender gets, the more these costs amplify.

Thanks Broken for this interesting insider view of Blender development.

I like this idea. The same manipulator could also be used to move the backdrop for compositing nodes, or to move the background without moving UVs in the UV editor.

Taking the example of the grease pencil convert to curves button - the general idea of the functionality is good (drawing 3D curves with a tablet), and is the sort of thing that excites people, but the implementation of using grease pencil for it causes serious workflow limitations.

I agree.
Another example of a good tool partially implemented is in the timeline: Bind Camera to Markers.

These markers should be local to the scene; since the cameras are objects within the scene, editing camera bindings as global markers in the timeline is messy.

That, or do it like the compositor backdrop by moving with Alt + MMB. This could become an issue if there are multiple images, so perhaps select an active one, then move it.

Thanks Broken, good response and I do get it.

The flip side would be that most commercial software is full of these “hacks” and “sub-optimal” features…

In the long game I agree that having high standards for what is and isn’t in “trunk” is a good thing.

You’re right of course that once “in”, a subsequent feature removal usually causes a big uproar in the community… It does make me wonder about “addons”, which seem to be in some debate about whether they’re “officially” supported or “user beware” features…

though I’d hope that if this patch were an addon, it’d already be in trunk…

…and some things that are currently addons should probably be core functionality too.

Slightly off topic I know.

Wasn’t more generic widgets a SoC project this year? They’ll open up the possibilities of Blender massively!

@broken, funny you mention doing things correctly. To me, Blender 2.5 was half-written before Ton wanted to call it stable, so we have old crappy design mixed with new/good/unfinished/undefined design.

So I was thinking: hey, let’s get this right before the 2.5x stable release, it can’t be THAT hard :D.

  • Having images be able to exist in 3D space is IMHO important, but for this reason I don’t see them as UI elements.
    So ignore clone source for now and consider setting up images to model from, where you want to position them in 3D space, display them over/under, manipulate them, etc.

This is the problem I’d like to try and solve. I’d be interested if there’s some design you think would be acceptable.

@Michael W, addons are a big topic. To give a vague answer: addons must work well by release date for their intended use case, or they will get demoted to contrib SVN until someone wants to maintain them. By “their intended use case” I mean that if an architectural addon hangs on a high-poly human model, this is an unsupported use case and the bug can be ignored. This is the advantage of having specialized tools: you can target specific workflows.

Is it a lot of work to add a new datablock to Blender?
It seems to be the more elegant way to add a lot of new features.
2.5 is well designed to explore data.
Shouldn’t it be easier to add new types of data?

It’s actually pretty easy to add a new data type to Blender with makes{dna,rna}, but integrating it with the old, crufty code can be challenging at times.

Like this example: getting empties to use image datablocks is super easy (since they didn’t have any datablock previously), but getting them to draw in the 3D view needed work done to the raw OpenGL mesh and custom bone (for ‘image as bones’) drawing code.

Plus there are some rough edges that still need to be solved, like the image icon showing up in the panel header instead of the empty one (since it’s hardcoded that no datablock == empty object). Fixing these reveals assumptions made in the code, which IMHO will only make Blender better in the long run.


I do agree to a certain extent that sometimes the owner of an area of code will make summary judgments based on their idea of a ‘perfect implementation’ (that they will never have time to code) to kill a working example that’s just as good (or even better), because it doesn’t fit within their vision of what Blender should be.

But what’re you gonna do other than say “Oh well, your loss, fsckers” and move on to the next project?

The only thing I wish for the background image is that the option were named “alpha” rather than “transparency”,
because at the moment the values are totally backwards.

Alpha 0.0 in a material’s transparency, e.g., is invisible.
Transparency 0.0 in the viewport makes the image fully visible.

It’s backwards. I had a modeling scene with 8 different images for backgrounds and I was totally lost until I realized why it displayed the wrong image: because 1.0 didn’t mean visible, but rather invisible.

@broken, I was thinking I could try to get a patch together this weekend to do with this bg image stuff.

Probably we’re mostly on the same page but a few things…

  • Won’t try to solve clone-source image overlay for tools; this can be done separately.
  • This can be an image for image/modeling reference.
  • Doesn’t have to use empties, though I do worry about defining new scene data which users have to manipulate.

I’ve come to the conclusion there are two acceptable ways to do this which are not disruptive and give enough control.

  1. Use objects, either empties or a new object type defined especially for this. I know you don’t like this, but I disagree it’s as bad a design as you are suggesting.
  2. Keep the existing image method but add an option to attach images to objects; while we’re at it, move these into a list stored in the scene to avoid window-data annoyance.

@aermartin, RNA can invert these options while keeping the internal data compatible. Should be a 5-line patch.

@ideasman42
I figure if you use objects then you can render them like any other object, but would it be possible to use the world image/texture/buffer as a background image?

Wrote a new patch which uses the existing background image options but adds an ‘Object’ field.
This way the patch is kept quite small.
I’d still like to move this out of the viewport data into the scene, but that can be done as a second patch.

https://projects.blender.org/tracker/index.php?func=detail&aid=21740&group_id=9&atid=127
See: EmptyImage_Alternative.patch

@sx-1, no, these should not be rendered. Once you start rendering them, you will want to have control over shading and ztransp/raytransp options.
For rendering it’s better to have an image on a quad.

+1
This seems to be a good new property but the background image is useful too.

After getting to try this firsthand in 2.58, I really appreciate the ability to scale and move the image in 3D space without messing up my view. I also really love the fact that it is easy to work in wireframe on my mesh and see the image from every angle while modeling! This is a very valuable addition, and now I really understand just how frustrating the old system of background image placement was :slight_smile: The coolest part is that you can immediately access the data of the loaded image anywhere else in Blender after loading it there - the UV Image Editor, the Texture panel, etc. Big thumbs up!!

Thank you Ideasman42!