If the displacement input socket is grey (it expects a plain scalar value), you only need to scale the value from the texture to whatever strength is suitable. Use adaptive subdivision (an experimental feature) if you can, or regular subdivision if you can’t.
If the displacement input socket is blue (it expects a vector), you need to translate the value from the texture using a Displacement node (not the Vector Displacement node). The same subdivision rules apply.
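For intuition, the Displacement node’s height-to-displacement conversion can be sketched roughly as below. This is a minimal approximation of the scalar part (the node also multiplies by the surface normal to produce a vector); Midlevel and Scale are the node’s own inputs, with 0.5 and 1.0 as their defaults.

```python
# Rough sketch of the Displacement node's scalar math (an approximation,
# ignoring the normal-direction part of the output vector).
def displacement(height, midlevel=0.5, scale=1.0):
    # Heights below midlevel push the surface inward, heights above push it outward.
    return (height - midlevel) * scale

# A mid-grey height value produces no displacement at the defaults:
print(displacement(0.5))
# Pure white displaces outward by half the scale:
print(displacement(1.0, scale=2.0))
```

This is why textures authored around mid-grey sit flat at the default Midlevel, while pure black/white heights need Midlevel set to 0.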
Depending on the use case, you can choose to ignore it completely, or use it as a (secondary) bump map. Microdisplacement in Blender can be extremely memory intensive (I’ve seen 72GB, and I only have 32, so…), so I rarely bother with it. On the positive side, you won’t get dark spots (especially in sharp reflections), because real geometry avoids the physically impossible “illegal normals” that bump maps can produce.
Sorry for the late reply! I think I’m starting to get what you’re saying… but I’m still a little confused. If I’m using an earlier version of Blender (2.79 to be exact), would I just plug in the image texture node to the displacement output?
Yes, though it may depend on which 2.79 build you’re using. I can’t remember, since I haven’t used it in ages except for importing stuff at work.
If your displacement socket is grey, plug the texture in directly, via a Multiply math node if you need to control the strength.
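The Multiply math node in that setup does nothing more than scale the height before it reaches the socket; a trivial sketch, where `strength` is a hypothetical name for the multiply factor:

```python
# What a Multiply math node does to the height value on its way
# to the grey displacement socket. "strength" is just the second
# input of the Multiply node.
def scaled_height(height, strength):
    return height * strength

print(scaled_height(0.5, 2.0))
```

A strength of 0 flattens the displacement entirely, and values above 1 exaggerate it.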
If your displacement socket is blue, you need to use a displacement node to convert the height value into something the displacement socket understands.