Someone’s probably going to mention the nVidia filter or GIMP, so I’ll just say that that’s not how it’s done.
There are two ways of creating a normal map. The first, and most reliable, is to generate it from actual geometry by baking a high-poly mesh onto a low-poly one.
The second method is 2D conversion, from a greyscale image to a normal map. It’s important to understand what this does: it reads your greyscale values and treats anything dark as a valley and anything bright as a hill. So if your image is brighter where a surface is close to the camera and darker where it’s far away - great. The problem is that that’s not how lighting works, so pictures that weren’t specifically made to be height maps will pretty much always come out wrong. Your image has high contrast in the middle because that’s where you’re looking through the ice and can see the darkness below, but there isn’t actually more height variation there.
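To make the brightness-equals-height assumption concrete, here’s a minimal sketch of what a 2D converter does under the hood. This isn’t any particular tool’s code - just the general idea: take the slope of the brightness values and turn it into a tangent-space normal. The function name and the `strength` parameter are made up for illustration.

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Convert a 2D greyscale height field (values in [0, 1]) to an
    RGB normal map. Dark pixels are treated as low, bright as high -
    which is exactly why ordinary photos, where brightness encodes
    lighting rather than height, come out wrong."""
    # Per-pixel slope of the brightness values in y and x.
    dy, dx = np.gradient(height.astype(np.float64) * strength)
    # The surface normal is perpendicular to that slope; z points "up".
    nx, ny, nz = -dx, -dy, np.ones_like(height, dtype=np.float64)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    normals = np.stack([nx, ny, nz], axis=-1) / length[..., None]
    # Remap from [-1, 1] to the usual [0, 255] tangent-space encoding.
    return ((normals * 0.5 + 0.5) * 255).astype(np.uint8)
```

Feed it a dark-to-bright gradient and you get tilted normals; feed it a photo where the brightness comes from lighting, and it will happily invent slopes that don’t exist on the real surface.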
To be fair, this image will probably produce a good-enough result at a glance, but taking an unedited photo and converting it straight into a normal map is the wrong way to go about it.