[English is not my native language, so this is a raw copy of my post in the blender forums.]
Hi!
I hope I am in the right forum section, because my script is complete and I consider it a tool.
So, what is it for? As I am playing around with imported game models in Blender 2.5 now, I don’t like Blender’s internal Join Tris to Quads algorithm anymore, because it produces too many wrong joins of the tris. For that reason I have written this Python script that works the way a human would: first, it finds nearly square join candidates and joins them. Then it works its way through the mesh, expanding from the quads it has found.
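The two-pass idea could be sketched roughly like this in plain Python (a minimal toy, not the actual script: 2D coordinates, hypothetical helper names, and only the first "near-square candidates" pass; the expansion pass would reuse the same scoring on the neighbours of already-joined quads):

```python
# Toy sketch: pair triangles that share an edge, preferring pairs
# whose merged quad is closest to a square.  Triangles are tuples of
# vertex indices; coords maps index -> 2D point.  Names are made up.
import itertools
import math

def shared_edge(t1, t2):
    """Return the two shared vertex indices, or None."""
    common = set(t1) & set(t2)
    return tuple(sorted(common)) if len(common) == 2 else None

def quad_verts(t1, t2, edge):
    """Vertex loop of the quad formed by removing the shared edge."""
    a, b = edge
    c = next(v for v in t1 if v not in edge)
    d = next(v for v in t2 if v not in edge)
    return (c, a, d, b)  # c-a-d-b walks around the quad boundary

def squareness(coords, quad):
    """1.0 for equal side lengths, smaller for skewed quads."""
    pts = [coords[i] for i in quad]
    sides = [math.dist(pts[i], pts[(i + 1) % 4]) for i in range(4)]
    return min(sides) / max(sides)

def join_pass(tris, coords, threshold=0.8):
    """First pass: greedily join the most square-like candidates."""
    candidates = []
    for t1, t2 in itertools.combinations(tris, 2):
        edge = shared_edge(t1, t2)
        if edge:
            q = quad_verts(t1, t2, edge)
            candidates.append((squareness(coords, q), t1, t2, q))
    candidates.sort(reverse=True)  # best-looking quads first
    used, quads = set(), []
    for score, t1, t2, q in candidates:
        if score < threshold or t1 in used or t2 in used:
            continue
        used.update((t1, t2))
        quads.append(q)
    leftover = [t for t in tris if t not in used]
    return quads, leftover
```

On a unit square split into two tris, the single candidate pair scores 1.0 and gets joined; skewed pairs below the threshold stay as tris for the expansion pass.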
A comparison between an original imported mesh, Blender’s output and my script’s output can be found here: http://grunz.bplaced.net/downloads/trijoin-face.png
And the script is here: http://grunz.bplaced.net/downloads/trijoin.py
Any (really, any) comments are welcome.
BTW: Keep an eye on the console when you run this script, because it runs for over an hour on a mesh with 8k tris, and even 1k tris takes some time…
– Syron
Looks like a useful script, although it leaves some tris that may be a problem for the mesh topology…
Anyway, I attempted to try it, but I got (at least) two error messages like these:
Traceback (most recent call last):
File "Text.001", line 3, in <module>
ImportError: No module named mathutils
..............................................
Traceback (most recent call last):
File "Text.001", line 244, in <module>
AttributeError: 'ToolSettings' object has no attribute 'mesh_select_mode'
So I was unable to actually start the script. Maybe I’m using an inappropriate Blender version from the 2.5 series… I don’t know!
Regards,
I am using a trunk build on both Linux and Windows, and I do not have any problems. I can’t imagine why this doesn’t run for you, except that you are using an outdated build.
This may be too much of a stretch, but I believe that if you manage to compile either
http://www.dimap.ufrn.br/~mfsiqueira/Marcelo_Siqueiras_Web_Spot/CQMesh.html or
the tool behind this sample mesh: http://openflipper.org/uploads/media/Quadrangulations_01.zip ( achieved with http://openflipper.org/index.php?id=271#c2029 )
( approach described in this paper: http://openflipper.org/uploads/media/bommes_zimmer_2009_siggraph_01.pdf ),
and then
export the interfaces of that code as a Python module, say with http://www.swig.org/ , and extend your script with that module,
then you may achieve a correct ( and adaptive ) quadrangulation for any mesh,
and that is what most artists would thank you for.
I think what you are pointing at is remeshing, i.e. basically recreating the mesh while preserving its shape, but without using any of the existing vertices. AFAIK this is either a GSoC project or a project already being developed. My script is simply an improved join-tris-to-quads algorithm, which is a different approach, even if it also aims at quad-dominant meshes.
Not quite. CQMesh ( though, for me, the server could only be reached via Google’s cache ) is about making a quad mesh from triangles: no remeshing, just bringing tris to quads accurately. The second link has some remeshing capabilities, but it can still be used for the same purpose.
As for the GSoC project: yes, I have heard of it, but it seems the way it is implemented is a bit different from both links.
Here I wasn’t thinking of implementing the whole thing, just of bringing different tools together. CQMesh already works, takes tris as input and produces quads as output, so that was the suggestion.
The CQMesh thing looks interesting. I was able to download the published document for it by opening Google’s cached PDF in the quick preview, exporting it to Google Documents and then exporting it from there as a PDF… I didn’t know Google also caches documents, but this time I’m happy about the data crawling.
After I downloaded the latest Blender version, the script works for me too… but I’m unhappy about having to download new builds of Blender all the time just to run a new script!!! :no:
Anyway… after running the script on several test objects, nothing really changed… Perhaps it is a matter of object topology…
Because you mentioned serious timings at > 1k tris, I tried to arrange some measurements. I measured the total processing time and the individual total timings of most of the procs in the script. Below is a listing of my results on two of my test objects. First you have the numbers of verts, edges and faces. Then you have a pair of figures plus a description to the right. Each pair shows (1) how many times the respective proc was run during the script’s work and (2) the total timing of ALL such runs of that proc. In the row with the description “Total processing time…”, the first figure is the number of runs of the joinTris() proc, which is in a separate cycle. Obviously, the timing for allNeighbours(edges,faces,excl) includes the timing for getEdgeNeighbour(edge,faces,excl).
Here’s my results list:
(V,E,F) = (154,328,176)
0 504 <-- Total processing time...
9 0 <-- Making tris in quadExpand1()
1344 203 <-- getEdgeNeighbour(edge,faces,excl)
432 218 <-- allNeighbours(edges,faces,excl)
0 0 <-- fjoin(me,fi1,fi2)
906 0 <-- joinSort(v1,v2)
906 125 <-- isQuad(me,verts)
48 0 <-- longestEdge(me,edges)
(V,E,F) = (1500,3288,1775)
0 46407 <-- Total processing time...
9 0 <-- Making tris in quadExpand1()
15216 29321 <-- getEdgeNeighbour(edge,faces,excl)
4896 29367 <-- allNeighbours(edges,faces,excl)
0 0 <-- fjoin(me,fi1,fi2)
9424 48 <-- joinSort(v1,v2)
9424 934 <-- isQuad(me,verts)
528 16 <-- longestEdge(me,edges)
It is now quite clear what causes the long processing time: getEdgeNeighbour, which runs thousands of times and which I think is inefficient. This proc takes 40% and 63% of the total processing time, and I suspect that at higher values of (V,E,F) this share will go even higher!!! :eek: Sooooo… you may wish to rework it, plus the related sections, so that a similar proc runs once, collecting the data, and then only the analysis remains; this would be a great time-saving measure, I think!
In 2.49, I use the following to collect data for adjacent verts:
def Make_viv(me):
    viv = {}
    for edge in me.edges:
        v1 = edge.v1.index
        v2 = edge.v2.index
        try: viv[v1].append(v2)
        except KeyError: viv[v1] = [v2]
        try: viv[v2].append(v1)
        except KeyError: viv[v2] = [v1]
    return viv
Again in 2.49, I use the following to collect data for edges in faces:
def Make_ekif(me):  # edge keys (+ ed.index) in faces
    ekif = dict([(ed.key, [ed.index]) for ed in me.edges])
    for face in me.faces:
        for key in face.edge_keys:
            ekif[key].append(face.index)  # add this face to the edge as a user
    return ekif

def Make_eif(me):
    ekif = Make_ekif(me)
    eif = dict([(fi_list[0], fi_list[1:]) for ek, fi_list in ekif.iteritems()])
    return eif
Then it is just a matter of referring to the correct data in that dictionary, and NOT collecting it each time it is needed… My 2.49 functions can be reworked to get adjacent edges, i.e. connected edges, and translated into Blender 2.54 Python… I think… :spin:
Hope this helps!
Regards,
So it is a long way.
I uploaded the CQMesh code here: http://www.sim-ai.org/CQMesh-2.zip ; CGAL can be taken from here:
https://gforge.inria.fr/frs/?group_id=52
and finally http://subversion.assembla.com/svn/ull_dr_jin_graphics_docs/Install%20CGAL%20Procedure.pdf describes how to build CGAL.
Maybe you can try it.
@Abidos: Thanks. I’ll start developing a new internal data structure now, because on every join of two tris I currently have to re-parse the mesh data; writing my own data handlers should speed everything up enormously.
@skudakov: Well, CQMesh is a fairly complex algorithm that would be better implemented in C/C++, and once the bmesh code is merged into Blender trunk, the development of quad-dominant remeshing will be more important than CQMesh. But thanks anyway.
@ Syron - OK… This one:
def Make_eie(me):
    eie = dict([(ed.index, []) for ed in me.edges])
    vie = Make_vie(me)
    for ed in me.edges:
        for vi in ed.key:
            ed_list = vie[vi]
            for ei in ed_list:
                eie[ed.index].append(ei)
        lst = list(set(eie[ed.index]))
        lst.remove(ed.index)
        eie[ed.index] = lst
    return eie
effectively creates a dictionary of the form {ei : ei_list} for each edge in the mesh, where ei is the respective edge index and ei_list is a list of its adjacent edges (i.e. those connected to it). This code uses the Make_vie proc from my previous posting here (see above).
For the default cube in Blender it gives you:
eie = {0: [1, 10, 3, 8], 1: [0, 2, 8, 9], 2: [11, 1, 3, 9], 3: [0, 11, 10, 2],
4: [8, 9, 5, 7], 5: [8, 10, 4, 6], 6: [10, 11, 5, 7], 7: [9, 11, 4, 6],
8: [0, 1, 4, 5], 9: [1, 2, 7, 4], 10: [0, 3, 5, 6], 11: [2, 3, 6, 7]}
You just need to translate it into 2.54 style, because I’m not an expert in it.
Regards,
Thanks, Abidos! Even though I didn’t use any of your code, I used your ideas to code a dynamically growing cache system. The problem is that I still flush the cache on every join, so it remains slow when there is much to do. But I can hopefully create a workaround if I find out how Blender handles the indices of faces, verts and edges, so that I can write a simple remapping function to update the stored indices.
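The remapping idea could look something like this (a sketch with hypothetical names, assuming that deleted elements are simply compacted out so the remaining indices shift down; this is not the actual script’s code):

```python
# Sketch: instead of flushing the cache after every join, build a
# map from old indices to new ones once elements are removed, and
# rewrite the cached adjacency data with it.  Names are made up.

def build_remap(count, removed):
    """old index -> new index after the removed ones are compacted out."""
    remap, new_i = {}, 0
    for old_i in range(count):
        if old_i in removed:
            continue
        remap[old_i] = new_i
        new_i += 1
    return remap

def remap_cache(cache, remap):
    """Rewrite a cached adjacency dict {index: [indices]}.

    Entries for removed elements are dropped, surviving indices
    are translated to their new values."""
    return {remap[k]: [remap[v] for v in vs if v in remap]
            for k, vs in cache.items() if k in remap}
```

For example, deleting faces 1 and 3 out of five gives the remap {0: 0, 2: 1, 4: 2}, and a cached neighbour list can then be translated in one pass instead of being rebuilt from the mesh.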
Sorry for double post, but I have a major update here: http://grunz.bplaced.net/downloads/trijoin_v2.py
It now caches data, and testing it on a 4k tri mesh gave a huge speed improvement.
It still doesn’t work perfectly, but I think it is a good help, and suggestions are welcome.
Hi there!
I haven’t taken a look at your new version yet… Just an idea (one that I usually try to implement when dealing with complex tasks that require handling more than one mesh’s data): establish arrays (lists or dicts) of data for verts, edges and faces, either starting empty or filled with the current mesh data, then rework them as if they were the mesh… This usually runs much faster than working on the real mesh and having to update it after each change (or bunch of changes). BUT you will need to take MUCH care to modify your three arrays correctly, so that after finishing the work you can delete the current mesh data, replace it with your calculated data, and get a meaningful result.
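A minimal sketch of that idea (hypothetical names; the Blender read/write calls are omitted, this only shows the bookkeeping shape of copying the data out once, editing the plain lists, and writing back once at the end):

```python
# Sketch: keep verts and faces in plain Python lists while joining,
# so the live mesh is only touched twice (read once, write once).

class MeshBuffer:
    def __init__(self, verts, faces):
        self.verts = list(verts)               # [(x, y, z), ...]
        self.faces = [list(f) for f in faces]  # vertex-index loops

    def join(self, fi1, fi2, quad):
        """Replace two tris by one quad.

        Delete the higher face index first so the lower one
        is still valid afterwards."""
        for fi in sorted((fi1, fi2), reverse=True):
            del self.faces[fi]
        self.faces.append(list(quad))
```

After all joins are done, `self.verts` and `self.faces` would be pushed back into a fresh mesh in one go, which avoids the per-join update cost described above.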
Regards,
That’s exactly what I’ve done.
//edit
ATTENTION! I have fixed a serious bug and changed the behavior a little bit: http://grunz.bplaced.net/downloads/trijoin_v3.py
This version gives far fewer wrong results (erm… mainly because of the bug), and you have to assist it by joining faces manually in “complicated” areas and then running it again. This is the first release I’d like to declare a release candidate. Please test it and report whether it works. Thanks.
//edit 2
Testing it on a 7k tri mesh, it ran for about 10 minutes and reduced the mesh to ~4k faces. By manually joining some tris as new starting points, it reduced the tri count again.