I just saw that the MakeHuman team “finished” their new pose engine. It is fantastic, and it would be huge if the rig could be exported to Blender along with shapes, weight painting, etc. But I’m not yet sure what microrotation technology is, so I’m a bit worried that we won’t be able to export to Blender and animate.
Didn’t MakeHuman start out as a script, but it’s now a stand-alone tool!?
It was a script and is now stand-alone, but you can export as OBJ and import into Blender. So it’s already awesome: you can design a character, pose it, then export the mesh as OBJ and import it into Blender. But I’m hoping we’ll be able to import bones, shapes, and weights too.
MakeHuman (http://www.dedalo-3d.com/) has long been associated with Blender. The older but still good plugins are still available from their site.
The MakeHuman standalone is currently at version Makehuman_08a_beta_win_setup.exe. For news, visit the site.
They now have their own wiki section, and there is a forum on the site. There is nothing there to suggest that MH is not going to be compatible with future versions of Blender. The only other 3D program mentioned is K3D. I would presume that the MakeHuman team use Blender for much of their testing, as MH and Blender have a long history together. We all hope that compatibility with Blender will continue.
Makehuman has a pose engine? Where?
(Going to look it up right now)
Koba
Basically, it’s an interpolation of lots and lots of hand-modeled samples in which a modeler corrects the mesh, a difficult and boring task. That is why it takes so long to finish.
so I’m a bit worried that we won’t be able to export to Blender and animate.
Most likely both systems will be incompatible, since Blender calculates deformations from bone rotations and influences, and there is no equivalent to that in Makehuman.
An importer is not impossible though.
Maybe all the shapes could be exported (I realize there’d be hundreds) and bones could be used to drive them. In other words, the bones wouldn’t be deforming the mesh at all; they would just be driving the different shapes.
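For what it’s worth, here is a minimal sketch of that idea using Blender’s Python API as it looks in current builds; the object, armature, bone, and shape-key names are just placeholders I made up:

```python
import bpy

# Placeholder names: a mesh object "Human" with a shape key "ArmRaise",
# and an armature object "Rig" containing a bone "upper_arm".
mesh_obj = bpy.data.objects["Human"]
rig = bpy.data.objects["Rig"]
key = mesh_obj.data.shape_keys.key_blocks["ArmRaise"]

# Add a driver on the shape key's value.
drv = key.driver_add("value").driver
drv.type = 'SCRIPTED'

# One driver variable: the bone's local Z rotation, in radians.
var = drv.variables.new()
var.name = "rot"
var.type = 'TRANSFORMS'
tgt = var.targets[0]
tgt.id = rig
tgt.bone_target = "upper_arm"
tgt.transform_type = 'ROT_Z'
tgt.transform_space = 'LOCAL_SPACE'

# Map roughly 0..90 degrees of rotation onto 0..1 of the shape key.
drv.expression = "min(max(rot / 1.5708, 0.0), 1.0)"
```

The bone never deforms the mesh here; it only feeds its rotation into the shape key, exactly as described above.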
What I’ve wondered about for a long time is this: why can’t Blender load a mesh sequence into the scene? Mesh 1 would be in place at frame 1, mesh 2 at frame 2, and so on. I know it would use a lot of hard drive space, but it would be a workaround. If MakeHuman supports animation and could export a series of meshes, this should be possible.
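To make the workaround a bit more concrete, here is a rough sketch assuming one mesh object per frame has already been imported and named frame_0001, frame_0002, and so on (the naming convention is my own assumption); it keyframes visibility so each mesh only appears on its own frame:

```python
import bpy

# Assumed naming convention: one imported mesh object per animation frame.
frame_objs = sorted(
    (o for o in bpy.data.objects if o.name.startswith("frame_")),
    key=lambda o: o.name,
)

for index, obj in enumerate(frame_objs, start=1):
    # Visible only on its own frame, hidden on the neighbouring ones.
    for frame, hidden in ((index - 1, True), (index, False), (index + 1, True)):
        obj.hide_viewport = hidden
        obj.hide_render = hidden
        obj.keyframe_insert("hide_viewport", frame=frame)
        obj.keyframe_insert("hide_render", frame=frame)
```

It eats disk space, as said above, but it needs no rig at all.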
Koba
http://www.dedalo-3d.com/
Here are a couple of extracts from the MakeHuman forum:
It’s very hard to have a precise timeline.
The key factor is the activity/number of developers. With the current team and development speed, I think it can be done not in the upcoming release, but in the next one.
This is the idea: export all MH data in XML format, then provide various tools to use this format in various programs. For example: a Python script that reads it and imports everything into Blender.
Regards,
Manuel
The texture will be in TIFF format.
The OBJ will be exported with UV info, so you can apply UV textures without problems.
Regards,
Manuel
human-powered wrote:
What about armatures when importing into Blender? As far as I know, the OBJ format can’t store armatures, so MH would need to export a .blend in order to do it.
The idea (not in the upcoming release, but high priority) is to write an XML file with all the data, and later write a reader for Blender, using Python.
OK. A small amount of research on this topic turned up the above at the MakeHuman site. So obviously this is of high concern/priority for the MakeHuman team. Let’s wait and see: they have to get the program further developed before the above things can even be tackled, but they are being thought about and can be considered as in the pipeline.
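Just to make Manuel’s idea concrete, here is a very rough sketch of what such a Blender-side reader could look like, written against the current Blender Python API; the XML element and attribute names are pure guesswork on my part, since the actual format hasn’t been published:

```python
import xml.etree.ElementTree as ET
import bpy

# Guessed layout: <human> <vertex x=".." y=".." z=".."/> ... <face v="0 1 2 3"/> ... </human>
root = ET.parse("/path/to/mh_export.xml").getroot()

verts = [
    (float(v.get("x")), float(v.get("y")), float(v.get("z")))
    for v in root.iter("vertex")
]
faces = [
    tuple(int(i) for i in f.get("v").split())
    for f in root.iter("face")
]

mesh = bpy.data.meshes.new("MakeHumanMesh")
mesh.from_pydata(verts, [], faces)
mesh.update()

obj = bpy.data.objects.new("MakeHuman", mesh)
bpy.context.collection.objects.link(obj)
```

Targets, bodysettings, and weights would simply be more elements in the same file, read the same way.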
No doubt, once it’s ready, the script wizards here, at Blender, and at MH will all work on importers/exporters.
The forum at MH is quite new; I wouldn’t go spamming them with issues about this, as it may only serve to slow them down.
Keyword: Patience.
M.A.
That is possible, but very difficult. The MH samples are modeled in “posed” positions. In Blender, poses are made with armatures. You would need to pose an armature to match the pose of the sample, then run the opposite of the process Blender uses when deforming a mesh (rotation matrices and influences) to unpose the sample and get a corrective shape key that is driven to 100% by a bone when it reaches that particular pose.
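To put that “opposite process” in concrete terms: with ordinary linear blend skinning a posed vertex is v' = (Σ wᵢ Mᵢ) · v, so unposing a sample just means inverting the blended matrix per vertex. A small self-contained sketch (the weights and matrices here are placeholders, not real MH or Blender data):

```python
import numpy as np

def unpose_vertex(posed_v, bone_matrices, weights):
    """Invert linear blend skinning for a single vertex.

    posed_v       : (3,) posed vertex position from the MH sample
    bone_matrices : list of (4, 4) bone deform matrices for that pose
    weights       : per-bone influences for this vertex (summing to 1)
    """
    blended = sum(w * M for w, M in zip(weights, bone_matrices))
    v = np.append(posed_v, 1.0)            # homogeneous coordinates
    rest = np.linalg.inv(blended) @ v
    return rest[:3]

# Placeholder example: a vertex fully weighted to one bone rotated 90 degrees about Z.
rot_z = np.array([[0.0, -1.0, 0.0, 0.0],
                  [1.0,  0.0, 0.0, 0.0],
                  [0.0,  0.0, 1.0, 0.0],
                  [0.0,  0.0, 0.0, 1.0]])
print(unpose_vertex(np.array([0.0, 1.0, 0.0]), [rot_z], [1.0]))
# -> approximately [1, 0, 0], the vertex back at its rest position
```

The unposed positions would then be stored as a corrective shape key and driven to 100% when the bone reaches the sample’s pose.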
You would not need to import all the samples; that would be absurd. Blender can do without that many samples, because good Ipo curves and good corrective shape keys can do the needed interpolation and blending of shapes.
I was in the MH project for a short time and made a few of the original muscle samples. I am a better modeler now than I was a year ago and I am not happy with them, but I can’t do anything about it. I wish Ipo drivers had existed back then.
In my idea for an (unofficial) importer, instead of trying to learn all the math and getting frustrated trying to force the samples to fit an armature, I would simply create new corrective shapes from scratch. Fewer than 20, plus their mirrored counterparts (easily generated), would be needed.
(edited) Also, I know the purpose of using Wavefront OBJ files is to allow MH models to be exported to several 3D programs, but a Blender-specific importer could easily be written that directly reads the bodysettings, target, and geometry files; all of those are simple, easy-to-understand text files.
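A tiny parser sketch in plain Python, assuming each line of a target file holds a vertex index followed by an x/y/z offset from the base mesh (that layout is my assumption, not something confirmed for the new release):

```python
def load_target(path):
    """Parse a MakeHuman-style target file.

    Assumed layout: one line per affected vertex,
    "vertex_index dx dy dz", offsets relative to the base mesh.
    """
    offsets = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 4 or parts[0].startswith("#"):
                continue  # skip comments and malformed lines
            offsets[int(parts[0])] = tuple(float(c) for c in parts[1:])
    return offsets


def apply_target(base_verts, offsets, amount=1.0):
    """Return new vertex positions with the target applied at 'amount' (0..1)."""
    verts = [list(v) for v in base_verts]
    for index, (dx, dy, dz) in offsets.items():
        verts[index][0] += dx * amount
        verts[index][1] += dy * amount
        verts[index][2] += dz * amount
    return [tuple(v) for v in verts]
```

Scaling the offsets by a fraction gives partial morphs, and blending several targets is just a matter of adding their scaled offsets together.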
It seems like what we need is someone who already understands the math etc. very well (surely there is someone). It would be infinitely better to have an automated system to get a character into Blender. For a skilled modeler such as yourself, it is “easy” to add 20 shapes. But for animators, who are not always the same people as modelers, it could be very difficult. In fact, I would assume MH is most attractive to those with lesser modelling skills who could nevertheless be excellent animators. I realize the man-hours are limited and I’ll be patient, but I think that exporting a ready-to-animate character from MH is the goal, and it seems we are so close when you see how well they are deforming in MH.
I was not saying that modeling new shapes would not be part of such an automated importer, for anybody to use. I know I should not have said that I would “simply” create new shapes. It is not “easy”; every shape takes many hours to model and tweak. It may also seem like redundant work, but that is the only way I can do it.
Such a shape can be saved as a special target which is adjusted automatically for any imported model, using the same concept of morphing.
As an example I made one of the most radical shape keys, a raised arm:
I then loaded Cicca’s fabulous fat man target. There are some conflicts around the neck, but it doesn’t really look bad, and most importantly, it is 100% automated Blender animation.
That looks pretty good, toloban! Let’s see if I understand you. You are saying that a Python script could take the exported MH file, add corrective shapes, and save it all in .blend format? And those standard corrective shapes would work pretty well whether you created a muscular man or a little girl in MH? (Or I guess there could be corrective shapes for each base type, and the Python script could determine which to use?)
Anyway, you don’t necessarily have to explain all the details, though I like knowing what’s going on so I can give helpful suggestions. As long as I know someone capable like you is working on it, I’m happy.
OK, I am confused. I have MakeHuman 0.8 beta for Windows on this machine. I just ran it to be sure I was not missing the subject of this post. Where is the “pose engine” in that program? I know you can set different values for different parts of the human mesh’s anatomy (morph targets). I know there are presets for different types of humans: male, female, older, muscular, etc. But I do not see anywhere in there where you can actually put the human into a different pose. Am I missing something?
There’s a Blender 2.5??!
There is no pose engine in version 0.8 beta . . . it will be in an upcoming release (the next one, or the one following that, I believe).
Not yet. I think the thread starter is hopeful for the next release of Blender (which may be version 2.5).
There is no pose engine in 0.8, but there is in makehuman 0.9!
Blender 2.5 will not be released for a few more months, but if we want to be exporting from MH 0.9 to Blender 2.5, we’ve got to start now.
Absolutely.
Basically they work, just as you can use the same wide nose target on a woman or a monster.
But since all these targets are made by different modelers, and since in my experience corrective shape keys only blend well with other very well modeled shapes that have been tuned for animation, artifacts can occur in some cases. For now I can only speculate, since the new version of MH has not been released.
Toloban, you’re getting me very excited about this! It seems to me that we actually could have this working within the 6-8 months that have been predicted before the next Blender release. I think that would be HUGE. Please don’t die or go blind or anything, we need you :). If only we could get good cloth by then…
I grabbed my free copy of Poser a while ago when I realised that fast creation of humanoid models was essential to my current project. Of course I felt guilty that I wasn’t using Makehuman - the problem being that there was no way of posing models there. Makehuman has been around a long time and deserves support.
That is why I have personally decided to donate £10 to Makehuman when the next version is released (with posable characters). I’ll give £15 if it is released sooner than I expect (say within a month). Open Source CG needs a project like Makehuman, and while I may not be able to donate much personally (I am a student), hopefully other people will feel the same way and support the Makehuman team.
As I need humanoid models for stills and not animation just yet, I would be happy if Makehuman were released earlier without Blender animation export, as that can come later. That said, no one replied to my simple idea of using a mesh sequence. It isn’t completely stupid, as an old version of Lightwave had that functionality. In fact, the current fluid sim works exactly that way. I suppose the problem would be keyframing positions in Makehuman for animation.
Koba