It would be fun to play with an example if you can provide one.
OIDN runs on any Intel CPU with SSE4.1 support (introduced in 2007) or better, or on any Apple Silicon CPU, so there's no need for CUDA or anything else special, and it should work pretty much everywhere.
I think OIDN tries to do everything automagically, and there really isn't any input to the process that lets you tweak it at that level. Assuming you're using the default final-render Cycles settings, with OIDN enabled for Denoise and the Albedo and Normal passes selected for it (which I presume enables those passes internally, since they no longer show as automatically enabled in the Passes properties), then the Normal pass is the hint to the denoiser that your fine detail is real and not noise.
But if the detail is substantially smaller than a pixel, then both the intra-pixel detail and its associated Normal information may get blurred out, causing the denoiser to smooth it out even more. I haven't experimented with this enough to get a feel for the behavior, though.
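If you'd rather set this up from a script than the UI, here's a minimal Blender Python sketch, run from inside Blender; the property names and enum values assume a recent Blender 3.x-era Cycles API, so check them against your version:

```python
# Sketch (run inside Blender): enable OIDN final-render denoising
# with the Albedo and Normal auxiliary passes.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.use_denoising = True
scene.cycles.denoiser = 'OPENIMAGEDENOISE'
# Color + Albedo + Normal: the Normal pass is what hints to the
# denoiser that fine geometric detail is real rather than noise.
scene.cycles.denoising_input_passes = 'RGB_ALBEDO_NORMAL'
```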
Some things I can think of offhand that might be worth experimenting with:
Turn off denoising and rely on the new(ish) progressive refinement to give you a clean final render in a reasonable amount of time.
Try increasing the render resolution. If you double the X and Y resolution, you give the denoiser 4x the pixels to work with, which might make the Normal data more effective. You can then reduce the image to the desired final resolution as a post-process, using the tool of your choice, if that preserves the detail better.
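As a toy illustration of that downscaling step (pure Python, no real image I/O; in practice you'd use ImageMagick, OpenImageIO, Blender's compositor Scale node, etc.), averaging each 2x2 block of the doubled render produces one pixel of the final image:

```python
# Toy 2x downscale: average each 2x2 block of a grayscale "image"
# (a 2D list of floats with even dimensions). This is a simple box
# filter; real tools offer better filters (Lanczos, etc.).
def downscale_2x(img):
    h, w = len(img), len(img[0])
    return [
        [
            (img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

full = [[1.0, 3.0],
        [5.0, 7.0]]          # a 2x2 "render"
half = downscale_2x(full)    # -> [[4.0]]
```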
You could even try getting fancy for specific problems by using the Denoise compositor node (which is OIDN) rather than the automatic render denoise option, and you could probably use something like Cryptomatte to mask off objects you don't want to denoise, etc. You would turn off the render denoise option and turn on the Denoising Data pass to get the passes needed to feed the Denoise node in the compositor.
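For that compositor route, here's a hedged sketch of the node wiring as a Blender Python script (again run inside Blender; the node and socket names assume a recent 3.x API, and the Cryptomatte masking step is left out for brevity):

```python
# Sketch (run inside Blender): manual denoising in the compositor
# instead of the automatic render denoise. Needs the Denoising Data
# pass enabled on the view layer and the render denoise turned off.
import bpy

scene = bpy.context.scene
scene.cycles.use_denoising = False                           # no automatic denoise
bpy.context.view_layer.cycles.denoising_store_passes = True  # Denoising Data pass

scene.use_nodes = True
tree = scene.node_tree
layers = tree.nodes.new('CompositorNodeRLayers')
denoise = tree.nodes.new('CompositorNodeDenoise')            # this node is OIDN
composite = tree.nodes.new('CompositorNodeComposite')

# Wire the noisy image plus the auxiliary passes into the Denoise node.
tree.links.new(layers.outputs['Noisy Image'], denoise.inputs['Image'])
tree.links.new(layers.outputs['Denoising Normal'], denoise.inputs['Normal'])
tree.links.new(layers.outputs['Denoising Albedo'], denoise.inputs['Albedo'])
tree.links.new(denoise.outputs['Image'], composite.inputs['Image'])
```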
For an extreme option, one that might be used in, say, a commercial production of an animated series, it's actually possible to train your own custom AI denoising model for OIDN rather than using the default one. So you could try to teach OIDN to recognize the kinds of textures you use. I'm not sure what would be involved in swapping out the model that Blender loads with OIDN, though.
GitHub - OpenImageDenoise/oidn: Intel® Open Image Denoise library