GSOC 2015 user ideas

Some retopology tools!

Yes! This would be great.
A free select tool for quicker masking.

I think that no matter what users wish for, Ton has made it clear several times that the focus of the Dev/BF is to make Blender even better for artists and, only if time permits, for other users.
So I consider this business of making proposals for GSoC pointless.
Remember: year in and year out, the focus of the BF always seems to be making the best open-source film possible, as if that would change anything for the Blender community.
To be honest, 3D tool development in general has ceased to be strictly focused on film… everyone has known for quite some time which area actually attracts professionals’ attention.
So let’s stop deceiving ourselves.
I’m not being pessimistic, just realistic.

More to the point, this is really off-topic. If any prospective GSoC student is looking even for vague ideas on what to propose as Blender features, this could act as a good place to start. If you want to be helpful rather than just randomly stating an opinion, start a thread analysing the potential issues with, and solutions for, the Blender Foundation and its current model of development :stuck_out_tongue:

To add to the list: better UV unwrapping methods, particularly to handle hard-edged surfaces better. Better UV island packing would also be good. The OP should probably compile a list of requests in the first post so anyone looking for ideas doesn’t have to search through the entire thread, perhaps along with some artists they can contact about any particular idea.

Improvements to the mesh deform modifier would be very welcome.

How about paying a GSoC student to add all those patches that people make but never get committed?

So someone who may only be starting to understand the Blender code should be the one who reviews and commits these patches? You may as well remove code review altogether.

What do you mean ‘utilise NGons’ ? In what way do you want it different to the current implementation with ngons ?

I think these are easy but very useful:

  • more texture generators, more noise types, fractals; since those can drive so many parts of Cycles, they would be very useful.

  • A new kind of sky background, with some generated clouds (think of what we see in Terragen, or in flight simulators).
    The idea here is to have this as a background image, not as an object.

  • Almost the same as the previous, but as an object.
    Although I’ve seen several methods to make clouds, none of them is really handy or good; they are all more like lucky accidents.
    What we need is a more advanced density distribution for volume shaders, so that a cube will render as a real cloud,
    with options for how “puffy”, “woolly”, “scattered” or “rippled” the density distribution is.
    There are fractals that can draw clouds; we need this as a 3D volume distribution, and we need a few types of them.
    It might also be possible without fractals, based on a random seed and some cell-like logic, where the cell pattern density leads to clouds with a level of detail based upon distance.

    • the clouds might also have options for a slow variation in their distribution
    • it would be OK if such clouds needed to be baked, if the fractal math is a bit complex.
  • scalable rasterization filters (RGB and BW), for print effects

  • for post-processing, have options on lights so we could add a new node type that acts like a layer:
    a layer recording how much influence each light had on each pixel.
    It would allow changing the lighting of a rendered scene in post-processing.
    Think of a room in daylight, and then being able to turn down the external light’s strength.

  • for post-processing, some advanced denoise filters (some complex node setups for this already exist, but I can imagine that C code could have more impact on denoising here).

  • if possible, support VirtualDub filters inside post-processing.
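The cloud idea above can be sketched without Blender at all: a fractal (fBm) noise field evaluated in 3D, thresholded so that empty sky stays empty. This is a minimal pure-Python sketch; the `puffiness` and `coverage` parameter names are my own illustrative knobs, not existing Blender settings.

```python
import math

def _hash3(x, y, z, seed=0):
    """Deterministic pseudo-random value in [0, 1) for an integer lattice point."""
    h = (x * 73856093 ^ y * 19349663 ^ z * 83492791 ^ seed * 2654435761) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return ((h ^ (h >> 16)) & 0xFFFFFFFF) / 4294967296.0

def _lerp(a, b, t):
    return a + (b - a) * t

def value_noise3(x, y, z, seed=0):
    """Trilinearly interpolated value noise in [0, 1)."""
    xi, yi, zi = math.floor(x), math.floor(y), math.floor(z)
    def fade(t):  # smoothstep curve so the interpolation has no visible grid seams
        return t * t * (3.0 - 2.0 * t)
    xf, yf, zf = fade(x - xi), fade(y - yi), fade(z - zi)
    def corner(i, j, k):
        return _hash3(xi + i, yi + j, zi + k, seed)
    x0 = _lerp(_lerp(corner(0, 0, 0), corner(0, 0, 1), zf),
               _lerp(corner(0, 1, 0), corner(0, 1, 1), zf), yf)
    x1 = _lerp(_lerp(corner(1, 0, 0), corner(1, 0, 1), zf),
               _lerp(corner(1, 1, 0), corner(1, 1, 1), zf), yf)
    return _lerp(x0, x1, xf)

def cloud_density(x, y, z, octaves=4, puffiness=2.0, coverage=0.4, seed=0):
    """fBm density: sum octaves of noise, then remap so values below the
    'coverage' cutoff become empty air. 'puffiness' doubles as the lacunarity
    (frequency step between octaves); both names are illustrative only."""
    total, amp, freq, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amp * value_noise3(x * freq, y * freq, z * freq, seed)
        norm += amp
        amp *= 0.5
        freq *= puffiness
    d = total / norm  # normalised fBm in [0, 1)
    return max(0.0, (d - coverage) / (1.0 - coverage))
```

In practice such a density field would be sampled per shading point by a volume shader; in today's Cycles one can approximate the effect by plugging a Noise Texture into the density input of a Volume Scatter node.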

Those would be hard, targeted at animation rendering:
Blender’s render methods require every pixel of a whole frame to be calculated.
When people create animations, however, often not all pixels change, and even when some do, in most cases not by much,
because video is usually 24, 25 or 30 fps, so the differences between frames are small.
So… maybe some kind of study into whether it would be possible to, for example, render every Xth frame and morph the frames in between.

  • I’d like to point out the work of Lukas here with his adaptive sampling: it renders until a certain noise threshold is reached.
    When rendering the next frame, one could check statistically whether a pixel’s likely value is within range, and if so reuse the old pixel.

  • In about the same area: Cycles uses methods that require a lot of calculation to solve the lighting.
    For animations (flashing lights aside), in most cases the solved lighting might be very much the same,
    with small but not huge differences; perhaps the light solution could be ‘remembered’ to solve the next frames faster.
    Maybe a student could investigate whether our render engines could make use of previous light solutions.
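The pixel-reuse idea above can be sketched quite simply: take a cheap, low-sample estimate of the new frame, keep the previous frame's converged pixel wherever the estimate barely differs, and only re-render the rest. A toy grayscale sketch under those assumptions (a real implementation would work on RGB tiles and per-pixel variance statistics, as adaptive sampling does; the function name `reuse_pixels` is hypothetical):

```python
def reuse_pixels(prev_frame, cur_estimate, threshold=0.02):
    """Per pixel: keep the previous frame's converged value when the cheap new
    estimate differs by less than `threshold`; otherwise flag it for re-rendering.
    Frames are flat lists of grayscale floats in [0, 1]."""
    out, rerender = [], []
    for i, (old, new) in enumerate(zip(prev_frame, cur_estimate)):
        if abs(old - new) < threshold:
            out.append(old)          # pixel considered unchanged: reuse it
        else:
            out.append(new)          # changed: would be fully re-rendered
            rerender.append(i)
    return out, rerender
```

With a mostly static shot, `rerender` stays short, so only a fraction of the frame needs full sampling.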

  • DAP audio support (there are many free DAP filters available; it would be very nice if they could be used inside Blender).

I heard at some point that it wasn’t using NGons, but I think I just got confused with some of the topology cuts it makes, so what I said was probably BS. My bad :x

Richard I was being ironic…

Anyway, how about making the OpenGL renderer an authentic render engine? That would be awesome for motion graphics and video titling work. The rasteriser in Masks is already fast and lots of basic 3D stuff is real-time now; it just gets stuck in a render mode that makes it hard to mix with non-OGL stuff.

Remesh, not like we have right now, but with results at least like ZBrush’s Dynamesh: clean quads (maybe even guided with strokes or something like that, but clean quads) without nasty wavy poly patterns.

Oh yes, snapping in Blender is a little, uhm, sad.

Fix NURBS finally.

Build in TinyCAD as a hard-coded feature.

Otherwise I must say I am quite happy with Blender.

This is exactly what I was talking about when I said to stay realistic. This is PhD level stuff that ZB is just now getting right. Expecting this from a single student coder over the summer is completely unreasonable, as are a lot of the other suggestions so far.

I’m not talking about full retopo tools, just Dynamesh-style “dirty” results: not ready for production, but enough to proceed with sculpting using multires instead of dyntopo.

And yes, I’m not sure about the complexity from a coder’s point of view.

I’d like to be able to add notes to objects, maybe located under the Object tab, where descriptions of the object can be typed in. If the item is modeled after a real-world counterpart, the specs, order number, manufacturer name and any additional information could be paired with that specific object for later use. That’d be neat.
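Worth noting: Blender’s Python API already lets you store free-form metadata on objects via ID custom properties, which are saved with the .blend file and shown in the Object properties under Custom Properties; what’s missing is only a dedicated notes UI. A small sketch (the helper name `attach_notes` is my own, not a Blender API):

```python
def attach_notes(ob, **notes):
    """Store free-form notes as item assignments on `ob`.
    Works on any mapping-style container; in Blender, pass a bpy Object --
    its ID custom properties support item assignment exactly like this."""
    for key, value in notes.items():
        ob[key] = value
    return ob

# Inside Blender one would call, for example:
#   import bpy
#   attach_notes(bpy.context.object,
#                manufacturer="Acme", order_number="A-1234",
#                specs="600x400x25 mm, anodised aluminium")
```

A GSoC project could then be scoped down to a friendly multi-line notes panel on top of this existing storage, which is far more realistic for one summer.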

This:

I know, weights are fun, but muscles would be better. There is a muscle addon for blender, but doing stuff like this automatically would be great. It’s just a proposal.

And hair tools. Styling hair in Blender sucks, because if you use hair dynamics (on long hair), the hair falls through the scalp. But maybe there’s just an option in Blender that I haven’t discovered yet.

  1. Fresnel improvements
  2. IES lights

Antialiasing of the surface/material ID pass

And in the depth pass.