Page 2 of 2 (Results 21 to 23 of 23)
  1. #21
    Originally Posted by Rekov View Post
    After sorting mesh elements by distance from cursor, I still get something of an inconsistency, though not as great. Same with rebuilding the mesh.
    What mesh elements did you sort? For me sorting the vertices by distance from cursor (with the 3d cursor at the origin) gives the same symmetrical result as the mirror method.
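A minimal, Blender-free sketch of that sorting step (the vertex coordinates and cursor position here are made up for illustration; this is not the actual mesh code):

```python
import math

def sort_verts_by_cursor_distance(verts, cursor=(0.0, 0.0, 0.0)):
    """Return vertices ordered by Euclidean distance from the cursor.

    `verts` is a list of (x, y, z) tuples. With the 3D cursor at the
    origin, a mirror-symmetric model gets a symmetric processing order,
    since mirrored vertices have equal distances.
    """
    return sorted(verts, key=lambda v: math.dist(v, cursor))

# A symmetric set of vertices: mirrored pairs end up adjacent in the order.
verts = [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0), (0.5, 0.5, 0.0), (-0.5, 0.5, 0.0)]
ordered = sort_verts_by_cursor_distance(verts)
```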

  2. #22
    Thank you for clarifying, howardt. I suppose I just find it unfortunate that in what seems like an almost ideal test case, like my model above, you can get such a divergence from what you might expect.

    Initial thoughts: perhaps make the "adjustment" phase optional, controlled by a checkbox. Alternatively, add a checkbox to reorder vertices before the adjustment phase. Both options risk being non-intuitive to users, and potentially not very useful outside of a few ideal cases; I don't know.

    I wonder if it might be better to split offset bevel into different functions that prioritize different constraints. Offset bevel already almost seems to do so. Take these two examples, both with a segment count of 2 and a profile of 1.00:

    In this case, the distance from the beveled edge to the new edge, measured along existing edges, is consistent, though not at the specified bevel amount of 0.05. The perpendicular distance from the beveled edge to the new edge (the green and pink lines) is not preserved at all.

    In this case, basically the opposite occurred: the offset along existing edges varies, but the perpendicular offset of the bevel itself is constant everywhere, at the specified amount of 0.1.
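For intuition on why those two constraints cannot both hold: if an adjacent edge meets the beveled edge at angle θ, sliding a distance s along it produces a perpendicular offset of s·sin θ, so a constant perpendicular width forces the along-edge distance to vary with θ. A hypothetical illustration (not the actual bevel code):

```python
import math

def slide_for_width(width, angle_deg):
    """Along-edge slide distance needed to achieve a given perpendicular
    bevel width, for an adjacent edge meeting the beveled edge at
    `angle_deg` degrees. Pure trigonometry: width = slide * sin(angle).
    """
    return width / math.sin(math.radians(angle_deg))

# The same 0.1 perpendicular offset needs different slides per edge:
s90 = slide_for_width(0.1, 90.0)  # perpendicular edge: slide equals width
s30 = slide_for_width(0.1, 30.0)  # shallow edge: slide must be larger
```

So a bevel that keeps the along-edge distances equal must let the perpendicular width drift, and vice versa, whenever the adjacent edge angles differ.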

  3. #23
    Member howardt (joined Apr 2009, Westfield, NJ)

    Hi Rekov,

    I agree that it is unfortunate to get such divergence in your test model.

    Yes, it would be nice to give users some control over the adjustment phase. There is already some logic that partially does this: try setting the 'clamp overlaps' checkbox, which limits the amount something can be out of spec to about 10%.

    There are also some cases (usually helixes) where the adjustment phase can spin out of control and balloon the edges to ridiculous sizes, so something kicks in to prevent adjustment if the edges are getting more than 300% out of spec. And if you turn off 'loop slide', one of the constraints that makes it hard to satisfy everything simultaneously goes away. But none of these makes for a good bevel in the case of your model, I admit.

    One problem is that there is a non-deterministic adjustment that happens while the bevel is initially being built: if the vertex at the other end of an edge has already been processed, the widths at the current vertex are set to match, even if that doesn't match the spec.
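Those two guards can be sketched as a simple clamp (the thresholds come from the description above; this is an illustrative sketch, not the real implementation):

```python
def adjusted_width(spec, proposed, clamp_overlaps=False):
    """Guard an adjusted bevel width against drifting from the spec.

    - If the adjustment balloons past 300% of spec (as can happen on
      helixes), give up on adjusting and fall back to the spec.
    - With 'clamp overlaps' on, keep the width within ~10% of spec.
    Thresholds mirror the forum description above, not the real code.
    """
    if proposed > 3.0 * spec:          # runaway adjustment: skip it
        return spec
    if clamp_overlaps:                 # limit deviation to ~10% of spec
        lo, hi = 0.9 * spec, 1.1 * spec
        return min(max(proposed, lo), hi)
    return proposed
```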

    I could add more checkboxes to give users more control over these things, but it feels to me like that moves far into the realm of users asking "what is this for??". Most users probably aren't even aware of the need to compromise on the specified amounts in order to meet other constraints.

    So I am trying to rethink how to do the adjustments in a more principled way: probably drop the 'even out widths at both ends' adjustment during building, then truly allow a 'no adjustments' checkbox mode, and finally try to model this as a global least-squared-error problem, hopefully getting smaller and more consistent adjustments. I am afraid it might be computationally expensive, though.
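A toy version of that least-squares idea, for a single edge: suppose each end of a beveled edge proposes a width, and we want one width that minimizes the total squared deviation from the spec and from both proposals. Setting the derivative of the sum of squares to zero gives the mean of the targets. (This is only a one-edge sketch with made-up weighting; the global problem howardt describes couples widths across the whole mesh.)

```python
def least_squares_width(spec, end_proposals):
    """Width minimizing sum((w - t)**2) over targets t, where the
    targets are the specified width plus each endpoint's proposed
    width. The minimizer of a sum of squared deviations is the mean.
    """
    targets = [spec] + list(end_proposals)
    return sum(targets) / len(targets)

# Spec 0.05, with the two ends proposing 0.04 and 0.06:
w = least_squares_width(0.05, [0.04, 0.06])
```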

