DigiKlay: A New Digital Sculpting Tool – Feedback Wanted!

Thank you, I’m glad you like the name of the tool!

Regarding Unity, you're right that it might seem unusual to build a sculpting tool on a game engine. The idea came from Unity's strengths in real-time rendering, performance optimization, and cross-platform compatibility. By leveraging these features, DigiKlay offers smooth interactions, fast feedback during sculpting, and relatively low system requirements compared to traditional 3D sculpting software.

Technically, Unity provides a flexible framework for handling complex 3D meshes and allows me to easily integrate features like GPU-based rendering, real-time material previews, and pen pressure support. While there are certainly challenges, the engine’s capabilities in real-time manipulation of high-poly models and responsiveness to user input make it a good fit.

As for licensing, it’s indeed a different story! Unity’s licensing model allows me to use the engine under their personal plan, given the project’s current scale.

It's interesting that you mention MotionBuilder; it's a great example of how game technology can be adapted for production tools. While DigiKlay is still in its early stages, I see the potential to push the boundaries of real-time sculpting and keep improving performance and user experience.

Thanks again for your feedback! If you have more questions or suggestions, I’d love to hear them.

1 Like

Yeah, all very interesting.

More or less what I was thinking. But also curious, how are you developing it?

I mean, you use the Unity editor to write a sculpt tool and then create an executable? Or is there a separate version of the Unity editor that developers can use to create tools?

How does that work exactly?

Something I have never thought of or been aware of before.

On that note…

Unity also has some quite advanced rendering (HDRP) and FX handling capabilities.

Does this mean you have access to all of this potentially as well?

Then there is the ProBuilder tool for modeling. (3rd party?)

This is an intriguing aspect I had not thought of before. Certainly piques my interest.

And finally, why not Unreal Engine? Why Unity?

1 Like

Great questions, and I appreciate your curiosity!

Yes, I’m developing DigiKlay directly within the Unity editor. Essentially, I write custom scripts and UI elements using Unity’s C# API to create the sculpting and object manipulation tools. Once everything works as intended in the editor, I build an executable version of the software that users can run independently of Unity. There’s no separate version of Unity specifically for tool developers—it’s the same Unity editor used for game development, which makes it very versatile.
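To give a concrete (and very simplified) picture, a sculpting brush in Unity can start out as nothing more than a script that raycasts from the mouse and pushes vertices along their normals. This is an illustrative sketch, not DigiKlay's actual code; all names here are made up:

```csharp
using UnityEngine;

// Minimal illustrative sculpt brush (not DigiKlay's actual code):
// raycast from the mouse and push nearby vertices along their normals.
[RequireComponent(typeof(MeshFilter), typeof(MeshCollider))]
public class SimpleSculptBrush : MonoBehaviour
{
    public float brushRadius = 0.25f;   // in local units; assumes uniform scale
    public float brushStrength = 0.02f;

    void Update()
    {
        if (!Input.GetMouseButton(0)) return;

        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (!Physics.Raycast(ray, out RaycastHit hit)) return;
        if (hit.transform != transform) return;

        Mesh mesh = GetComponent<MeshFilter>().mesh;
        Vector3[] vertices = mesh.vertices;
        Vector3[] normals = mesh.normals;
        Vector3 hitLocal = transform.InverseTransformPoint(hit.point);

        for (int i = 0; i < vertices.Length; i++)
        {
            float d = Vector3.Distance(vertices[i], hitLocal);
            if (d > brushRadius) continue;
            float falloff = 1f - d / brushRadius;   // soft edge toward the rim
            vertices[i] += normals[i] * brushStrength * falloff;
        }

        mesh.vertices = vertices;
        mesh.RecalculateNormals();
        GetComponent<MeshCollider>().sharedMesh = mesh; // keep raycasts in sync
    }
}
```

The real tool is of course far more involved (spatial acceleration, undo, pen pressure, falloff curves), but everything builds on this same C# API.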

Regarding rendering, DigiKlay currently uses Unity’s Universal Render Pipeline (URP). I initially explored using HDRP, but I encountered challenges implementing a wireframe system that worked reliably across different platforms. Since a good wireframe system is essential for future developments and for providing a better sculpting experience, I opted for URP, which offers a great balance between visual quality, performance, and flexibility.

ProBuilder is definitely an interesting tool, but DigiKlay is currently focused purely on digital sculpting with a streamlined workflow. That said, Unity’s extensibility means that integrating more advanced modeling features down the line is a possibility.

And finally, why Unity over Unreal Engine? While I’ve used Unreal for a few projects, I’m much more familiar with Unity. Its intuitive C# scripting environment, extensive documentation, and large community support make it ideal for developing a tool like DigiKlay. Unity also allows rapid prototyping, which has been crucial for me in iterating quickly and improving the software.

Thanks again for your thoughtful questions! If you have more, I’m always happy to discuss them.

1 Like

Wow. OK. I never realized you could do all of that with Unity, or game engines in general.

Well this is interesting.

One overall advantage of Unreal Engine is a single render solution.

Another is that Blueprints could possibly tackle the extensibility.

Anyway, just thoughts.

But this certainly puts a different spin on development potential.

As for sculpting:

Lots of things I am forgetting about ZBrush/Mudbox.

But here are the basics:

  • Boolean (cut, join, intersect)
  • Masking
  • Cut tools
  • Layers
  • Alpha brushes
  • Undo history - that strip across the top of ZBrush that lets you go back and forth along history, but with a much better visual guide of thumbnails

Some things I think might be interesting:

Node-based operators for non-destructive editing: Booleans, cut, remesh, subdivision, masking, material assignments based on masks.

Advanced:

  • Vector displacement
  • UV mapping
  • Retopology - auto and hand
  • Replication and duplication along surfaces with alphas, and also painting objects onto the surface, like rivets etc.

Interesting challenge, as I had mentioned, to make all of these operations and more node-based.

2 Likes

Thank you so much for your detailed feedback and the feature suggestions! I really appreciate the time you took to outline all these key aspects of sculpting software and potential advanced features.

You’re absolutely right, Unreal’s Blueprint system is incredibly powerful for extensibility, and it’s definitely something that got me thinking. While Unity doesn’t have something identical to Blueprints out of the box, there are ways to approach visual scripting or node-based systems, which could be very interesting for non-destructive operations like Booleans, cuts, remeshing, and even advanced things like replication or vector displacement.

Regarding your list of features:

  • Boolean operations, masking, and cut tools: These are on my near-term roadmap since they’re essential for many workflows.
  • Alpha brushes and undo history: Definitely important, and I’d love to replicate something similar to ZBrush’s timeline with thumbnails for better navigation.
  • Node-based operators for non-destructive editing: This is a brilliant idea and aligns well with my long-term vision for DigiKlay. Making tools modular and composable could open up endless possibilities for users.
  • Remeshing: I already have a remeshing algorithm in place, though there’s certainly room for improvement. I plan to refine it further to ensure better topology and smoother results.
  • UV mapping, retopology, and replication tools: I hadn’t fully considered replication along surfaces, but it’s a great addition to the list of potential features.

That said, I also want to remain very focused on keeping the software as simple and intuitive as possible. I believe that simplicity is DigiKlay’s biggest strength and a key differentiator from giants like Blender and ZBrush. My goal is to provide a tool that lowers the barrier to entry for those who want to approach digital sculpting without being overwhelmed by complex features. Simplicity will remain my guiding principle as I continue developing the software.

I’m currently focused on stabilizing the core experience, but your input gives me a lot of exciting directions to explore as I move forward. If you or anyone in the community would like to contribute further ideas or test features in development, I’d be more than happy to collaborate!

Thanks again, and feel free to share more thoughts anytime.

1 Like

It's extremely obvious all of your replies are written by ChatGPT… You need to write your own posts here. You can use AI to help with translation or to write parts, but the burden of what you post here has to be your own words.

5 Likes

Hi @joseph,
You're right, I've been using AI primarily to help me with translation and to improve how I present my ideas in English. Writing in a non-native language can be challenging.

That said, everything I’ve shared here reflects my real thoughts, experiences and work on DigiKlay.

1 Like

Ok cool, let me explain a few features better.

And I will get to it later, but regarding keeping it simple, I could not agree more. And I will go further and say that the best advanced features are those that are presented in a simple way, easy to understand and use.

More details later.

So features explained:

Replication is best thought of in terms of stroke.

All accessed through the stroke settings of any brush.

So there are more features within stroke. Both ZBrush and Mudbox have these features, in varying degrees of interface simplicity.

Stroke:

Continuous - the normal default most apps use

This creates a blended line from the selected brush along the user's stroke, like a real-world paint brush.

Dot - rather than continuous, it takes the brush and stamps it once at a time in succession along the user's brush stroke.

If the brush is a default circle, for example (as in Photoshop), it would make a line of dots along the user's stroke with user-defined space in between.

Any other brush would have this same ability. An alpha of a footprint would make a line of footprints, etc.

Two extra features here would be

  • mirror stroke across the center line of the stroke.

  • stroke offset

Vertical: away from the center of the brush, along the path.

Horizontal: distance from the center of the brush.

To visualize these settings, think of the distance between the feet, and the offset and distance from the center of the stride, in a line of footprints.

Would also be useful for stitching, or punched holes in shoes, etc.
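To make the dot/offset/mirror idea concrete, here is a rough sketch of the kind of logic involved - Unity-style C#, since that is what you are working in, and all names here are made up:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical "Dot" stroke operator: stamp the brush at fixed intervals
// of arc length along the user's stroke, optionally offset from and
// mirrored across the centerline (think two rows of footprints).
public static class DotStroke
{
    public static List<Vector2> GetStampPositions(
        List<Vector2> strokePath, // sampled pointer positions
        float spacing,            // distance between stamps
        float offset,             // perpendicular distance from the centerline
        bool mirror)              // also stamp on the opposite side
    {
        var stamps = new List<Vector2>();
        if (spacing <= 0f) return stamps;

        float distanceSinceStamp = spacing; // so the very first point stamps
        for (int i = 1; i < strokePath.Count; i++)
        {
            Vector2 a = strokePath[i - 1];
            Vector2 b = strokePath[i];
            float segLen = Vector2.Distance(a, b);
            if (segLen <= 0f) continue;
            Vector2 dir = (b - a) / segLen;
            Vector2 perp = new Vector2(-dir.y, dir.x); // 90 degrees to stroke

            // Walk the segment, dropping a stamp every `spacing` units.
            float pos = 0f;
            while (distanceSinceStamp + (segLen - pos) >= spacing)
            {
                pos += spacing - distanceSinceStamp;
                distanceSinceStamp = 0f;
                Vector2 center = a + dir * pos;
                stamps.Add(center + perp * offset);
                if (mirror) stamps.Add(center - perp * offset);
            }
            distanceSinceStamp += segLen - pos;
        }
        return stamps;
    }
}
```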

Smooth - an adjustable curve strength, interpolated along the stroke to produce a smooth curve rather than the user's normally erratic stroke, which is usually uneven.
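A common way to get that behavior (just one possible approach, with invented names) is to have the brush chase the raw pointer:

```csharp
using UnityEngine;

// Hypothetical stroke smoothing: the brush position chases the raw
// pointer, filtering out jitter so the path becomes a smooth curve.
public class SmoothedPointer
{
    Vector2 current;

    public void Reset(Vector2 start) => current = start;

    // smoothing = 0 follows the pointer exactly; values near 1 lag heavily.
    public Vector2 Step(Vector2 rawPointer, float smoothing)
    {
        current = Vector2.Lerp(rawPointer, current, Mathf.Clamp01(smoothing));
        return current;
    }
}
```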

I will pause here to discuss how all of these advanced features and settings could be hidden from the user as well as exposed in a simple predictable interface.

I will start this by discussing what a brush actually is.

A brush is simply an empty container.

Think of it as a node. It has inputs, operators, and an output.

In its bare-minimum state, a brush would be a dot with blurred edges (an alpha plugged into the input), with a default stroke operator setting plugged in, and of course the output would be what occurs on the surface when the user paints.

Alternatively, a brush node could be plugged into a 3D node input, and then that is output to the surface of the sculpt as a bump.

This would be your default draw brush in Blender and ZBrush.
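In code, the "brush is an empty container" idea might look something like this - purely a sketch with invented names, not a spec:

```csharp
using UnityEngine;

// Hypothetical sketch of "a brush is an empty container": inputs and
// operators are pluggable, and a saved brush is just a configuration.
public interface IBrushInput { float Sample(Vector2 uv); } // 2D alpha, 3D stamp...
public interface IStrokeOperator { }                       // continuous, dot, smooth...

// The bare-minimum input: a dot with blurred edges.
public class SoftCircleAlpha : IBrushInput
{
    public float Sample(Vector2 uv) => Mathf.Clamp01(1f - uv.magnitude);
}

public class BrushNode
{
    public IBrushInput input;       // what gets stamped
    public IStrokeOperator stroke;  // how it is applied along the path
    public float size = 1f;         // parameters the author chooses to expose
    public float strength = 1f;

    // Output: the contribution at a point on the surface, in brush space.
    public float Evaluate(Vector2 brushSpaceUV) =>
        input.Sample(brushSpaceUV / size) * strength;
}
```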

Sidebar: every brush is made this way.

A menu of thumbnails of a variety of brushes would be nothing more than brushes with different node settings - inputs, outputs, operations - saved as brushes.

By default, the average beginner would never even have to know that each brush can be edited and saved, or that they could create any of these brushes on their own with custom settings in the brush node editor, or even that the node editor exists.

Parameters for each brush could be exposed by the brush author (just as with shader nodes in Unity) for the user to adjust: strength, size, stroke settings, etc.

So also in the Brush Node Editor would be the ability to choose a 2D or 3D input.

Enter the replication of objects along a surface.

It's just another input to the node. It's not a special brush or an extra feature for the initiated. Even a basic user could find this and make one too.

Enter brush libraries.

Now here is where community input could be helpful.

Similar to Photoshop and ZBrush, or Substance.

Now you have set the stage for an infinite number of brushes to be created.

But by default - eventually - there could be shelves of brushes available. For the basic user, these would just be the easy-to-access default brushes. Say a dozen or so.

As they get more advanced, they could learn that they can add additional libraries.

Then that these libraries could be found online.

Then that, in fact, they could make their own brushes in a simple node interface, which is far more straightforward than making custom brushes in ZBrush.

So, the design of the nodes for brushes is not my area of expertise, but I think the simpler, more extensible, and more repeatable, the better.

Think of the main brush node as a shader.

Ok this is long.

Layers:

Simply put, like Photoshop.

Also…

Again nodes.

Sculpt layers could be simple interface elements. But digging deeper, you find the layers are in fact nodes within a node tree.

And a layer has inputs, operators, and an output.

One layer is plugged into another through a mix node.

It can mix on a simple level, or it can take the output of the brush and mix that output with another layer.

So, for example, you could make a mask layer, which would simply erase the layer underneath by the value of the brush.

Or it could add.

Or it could cut 3D parts away from the underlying layer, or add 3D parts to the underlying layer etc.

Again, these would in fact be node connections.

And again, hidden from users by exposing simple layer settings.
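As a sketch of what those hidden connections could be (again invented names, nothing more than an illustration):

```csharp
// Hypothetical sculpt-layer nodes: each layer holds a per-vertex
// displacement, and layers chain through a mix operation, so a "mask"
// layer is just a mix node that erases the layer underneath.
public enum MixOp { Add, Subtract, MaskErase }

public class SculptLayer
{
    public float[] displacement;  // offset along each vertex normal
    public SculptLayer below;     // the layer this one mixes onto
    public MixOp op = MixOp.Add;

    public float Evaluate(int vertex)
    {
        float under = below != null ? below.Evaluate(vertex) : 0f;
        float mine = displacement[vertex];
        switch (op)
        {
            case MixOp.Add:       return under + mine;
            case MixOp.Subtract:  return under - mine;
            case MixOp.MaskErase: return under * (1f - mine); // mine in [0, 1]
            default:              return under;
        }
    }
}
```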

That’s more than enough for now… ha ha ha!

3 Likes

Your ideas are very useful for creating a simple, modular interface that allows infinite variations. Giving users the ability to create infinite brushes is certainly feasible by exposing the variables I currently use to create brushes and manage their behavior.

As for the dot and continuous line modes, these depend on how the brush application cycles are managed: continuously or at regular intervals. Currently all my brushes are managed by continuous cycles except for the tessellation brush. The calculations are performed several dozen times per second, and without managing an interval, the tessellation would generate a very dense mesh!

Adding the dot mode to the other brushes is not very complicated.

Returning to the main node: giving the user the ability to manage the variables is a simple way to let them create infinite variations. This also applies to materials. For example, having now exposed two variables for each material in sculpting mode (metalness and smoothness), each user can create the material they deem most appropriate.

At the moment I'm working on maximizing calculation speed to allow managing millions of polygons, as in ZBrush. I think that with Unity's Job System and the Burst compiler this result is achievable.
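For readers unfamiliar with those, the idea is to move the per-vertex work into Burst-compiled parallel jobs. A minimal sketch (illustrative only, with made-up names, assuming the Burst, Jobs, Collections, and Mathematics packages):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// Hypothetical Burst-compiled job: displace millions of vertices in
// parallel across worker threads (the kind of hot loop Burst speeds up).
[BurstCompile]
public struct BrushDisplaceJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float3> normals;
    public NativeArray<float3> vertices;
    public float3 brushCenter;
    public float radius;
    public float strength;

    public void Execute(int i)
    {
        float d = math.distance(vertices[i], brushCenter);
        if (d > radius) return;
        float falloff = 1f - d / radius;            // soft edge
        vertices[i] += normals[i] * strength * falloff;
    }
}

// Usage: new BrushDisplaceJob { /* fill fields */ }
//            .Schedule(vertexCount, 1024).Complete();
```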

Thanks for your thoughts, you are a volcano of ideas and a great source of inspiration!
(P.S. This post was written without the help of AI :) )

3 Likes

Cool. It was such a long post that many things got buried.

But there is one theme I would like to highlight.

And that is the concept of the user making changes, and the resulting structure also being available to them.

Maya is a good example, but Houdini is even better, because people know going in that it is node-based, whereas with Maya, by and large, they don't.

But both apps create a node network when you edit. In Maya it depends on what you are doing, but let's say for editing meshes: if history is on, it will create a chain of nodes for each action.

In Houdini, the nodes are always on, as I recall.

The point is that these advanced features are hidden from the average user.

If you think of this from the point of view of "keeping a simple interface that even beginners can use", with a little planning in the beginning you can build advanced feature capabilities into a simple interface.

Even if you decided to hide that underlying network until you have time to build a user interface for it, the network will be there.

And as you add features, they can be added to this framework that you have established early on.
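Something like this, as a sketch (invented names; MeshData is just a placeholder for whatever mesh representation you use):

```csharp
using System.Collections.Generic;

// MeshData stands in for whatever mesh representation the app uses.
public class MeshData { /* vertices, triangles, ... */ }

// Hypothetical Maya/Houdini-style construction history: every edit is
// recorded as a node, even if no UI exposes the network yet.
public interface IHistoryNode
{
    string Name { get; }
    MeshData Apply(MeshData input);  // non-destructive: never mutate input
}

public class HistoryChain
{
    readonly List<IHistoryNode> nodes = new List<IHistoryNode>();
    public MeshData baseMesh;

    public void Record(IHistoryNode node) => nodes.Add(node);

    // Re-evaluate the chain from the base mesh; once this exists,
    // disabling, reordering, or editing past steps comes almost for free.
    public MeshData Evaluate()
    {
        MeshData current = baseMesh;
        foreach (var node in nodes)
            current = node.Apply(current);
        return current;
    }
}
```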

Hope that makes sense.

1 Like

I understand the concept you want to convey: simplify in the sense of hiding the complexity, and create basic tools that can interact with each other to build more complex tools. In my initial conception, probably because of my work background, simplifying meant above all separating the necessary from the superfluous and then eliminating everything superfluous. That conception, however, can frustrate users as they gain experience, because the software ends up requiring too many operations to obtain a specific result. Your approach is certainly better, and I will treasure it.
As for nodes, Blender itself is a great example.
Another thing that came to mind regarding simplification: I don't know if it's still like this, but when I used DynaMesh in ZBrush there was a certain point where, if you wanted to add details, you were forced to scale the mesh. This currently also happens with DigiKlay. I thought about automating this process: instead of showing the user a message that the mesh is too dense when they try to create too many subdivisions, and then forcing them to scale the mesh, I could replace it with a function that automatically scales the mesh without the user knowing. The user would only know that they can add an infinite (maybe infinite is a bit excessive!) number of subdivisions. What do you think?
Thanks again, your help is invaluable.

1 Like

For materials and other render features like environment, yes. But I think it ends there. Nodes are powerful, of course.

I think that with the "everything nodes" initiative, more features (modeling, dynamics, and constraints) will act the same as, for example, materials: changes you make in the material panel will create the underlying node network in the shader editor.

But for all of this to work as nodes, I think they have to tackle dependencies first.

Yes, DynaMesh is a confusing feature, being tied to scale. But this is a workflow issue. I don't think you would go into DynaMesh with a plan to keep scaling your mesh. If you are doing it like that, it is probably not a good idea.

Scale is like a sacred "don't touch" attribute when working with assets in a pipeline. And you absolutely do not want to be scaling a mesh in the background without the user knowing it.

Having an asset show up down the pipeline at a different scale will add a ton of unnecessary randomness and even production error, especially with a team, and add unnecessary time to sort out and fix.

DynaMesh is a good blocking tool. It basically takes the place of ZSpheres. And this idea of re-tessellating or remeshing is only useful for blocking.

It is an unfortunately linear workflow in a process that should be more fluid and forgiving, as sculpting is. But it is what it is.

The usual workflow would be to block out your model with DynaMesh, then copy it in case you need to go back, and then freeze it.

Then subdivide and add details.

Another process - one of a few options - is to retopo it, then subdivide and add details.

So you plan ahead with DynaMesh. If you know you need more detail than you can get at the current size, scale it in increments of 10, usually 10x or 100x, and note it.

So that when the asset goes down the pipeline it has not been scaled randomly and it can be easily scaled back to where it was.

DynaMesh is powerful, but you have to know how and where to use it.

1 Like

I don't actually want to use DynaMesh, but I have similar behavior when applying subdivisions. At a certain point the vertices are so close that there are constraints due to the precision of the variable type being used (in my case, float). For example: if I have to put a vertex in a central position between two other vertices that are at 0 and 1 (I consider a single dimension for simplicity), I position it at 0.5; continuing, I will go to 0.25, then to 0.125, and so on. The number of digits after the decimal point increases up to the maximum allowed by the variable type.
To overcome this, a possible solution is to scale the mesh. Going back to the previous example, the two initial vertices would no longer be at 0 and 1 but, for example, at 0 and 10, and the central point would be inserted at 5, the next at 2.5, and so on.
I’m not sure I explained myself well.
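Maybe a quick snippet shows it better (plain C#, just an illustration, not code from DigiKlay):

```csharp
// Keep inserting midpoints between two vertices that start 1e-7 apart
// on one axis. A float has a 24-bit mantissa (~7 significant digits),
// so after only a few subdivisions no distinct float exists between
// the two neighbors and the midpoint collapses onto one of them.
float a = 0.1234567f;
float b = a + 1e-7f;
int level = 0;
while (true)
{
    float mid = (a + b) * 0.5f;
    if (mid == a || mid == b) break;  // precision exhausted
    b = mid;
    level++;
}
System.Console.WriteLine($"ran out of precision after {level} midpoints");
```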

As for remeshing, I have currently adopted a simpler solution: creating triangles as homogeneous as possible, with sides equal to the average of the sides of the triangles present at the time of remeshing. Compared to DynaMesh, the user does not have to decide or change the resolution. That seems like a simplification to me.
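The target edge length can be derived with something as simple as this (a sketch with invented names, assuming Unity's Mesh vertex and triangle arrays):

```csharp
using UnityEngine;

// Hypothetical helper: derive the remeshing target from the mesh itself,
// using the mean edge length of the current triangles, so the user never
// has to pick a resolution.
public static class RemeshTarget
{
    public static float AverageEdgeLength(Vector3[] vertices, int[] triangles)
    {
        double total = 0;
        int edges = 0;
        for (int t = 0; t < triangles.Length; t += 3)
        {
            Vector3 a = vertices[triangles[t]];
            Vector3 b = vertices[triangles[t + 1]];
            Vector3 c = vertices[triangles[t + 2]];
            total += Vector3.Distance(a, b)
                   + Vector3.Distance(b, c)
                   + Vector3.Distance(c, a);
            edges += 3; // shared edges counted twice; fine for an average
        }
        return edges == 0 ? 0f : (float)(total / edges);
    }
}
```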

1 Like

I think I understand the math, at least in your simple example.

And I am just using DynaMesh as a workflow example.

Any method used by the sculptor to regenerate and increase the resolution, if tied to scale, should come with the documented understanding that you only use one scale of your model.

Even the ZBrush documentation, as I recall, simply states that resolution is tied to mesh scale.

It does not say to scale up the mesh as you are working. That is a bad workflow. And it points to the limits of any such tool, and the intent for it to be used as a blocking tool.

So I would avoid that as a solution. And definitely don’t scale the mesh without them knowing it.

So the documentation should simply state that the scale of the mesh determines the possible resolution. But that only one scale of the mesh should be used.

I can’t emphasize enough the great trouble you will cause an artist if you advocate scaling the mesh as a solution for resolution during the working process.

I could come up with numerous examples. But I will leave it at that for now.

1 Like

I understand; in fact I am abandoning this strategy, also because it does not create problems only for the artist. I saw that it would create a chain of other problems in my code as well! I also did some tests temporarily applying the scale and then rescaling in reverse, but even this path seems to have some critical issues. So at the moment I have only increased the limit I had imposed for the density of the mesh (it was too restrictive), and when the user tries to exceed that limit a message appears saying that the mesh is too dense and that they can try scaling the mesh if they want to add further levels of detail.

Anyway, changing the subject: I am finally managing meshes with millions of polygons. I still have to work a bit to improve performance and review some functions, but in the next version this result will surely be fully achieved. Last night I was sculpting a mesh with more than three million polygons in my software. It seems like a great result and I am very happy with it. :)

1 Like

Nice!

Let me know when it becomes available and I will pick up a copy from Gumroad.

Maybe I can have my sculpting artist give it a try.

Curious how he would find the performance compared to ZBrush, which he uses every day.

1 Like

That would be awesome!
This motivates me to work even harder. ZBrush is a very high benchmark, but I will do my best.
I will let you know as soon as it is available, thank you very much!

1 Like

Indeed. I am not worried about competing with ZBrush. I would just like to see my lead sculptor take it for a spin.

Maybe I could contribute by having him do some sculpting and record a time lapse.

I have one or more artists who also sculpt but he is the top dog.

But before I commit to that, let’s let him give it a go.

My sculpting team:

1 Like

Awesome, I would be honored.
The models in the video are amazing; some have so much detail that they are true works of art!

1 Like