Using just the accelerometers isn’t enough. An accelerometer only measures the apparent “down” direction, so you can’t tell yaw from it at all. And on top of that, it measures acceleration; you then need to convert that to speed, and then to position. Without yaw you can’t turn the camera to the sides, or rotate it while facing down or up, without getting out of alignment; and because you have to integrate acceleration into speed, and then speed into position, each step accumulates even more error from the already noisy values you get from cheap accelerometers.
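To see how fast that double integration blows up, here’s a toy simulation (all numbers are made up, not real sensor data): the phone lies perfectly still, so the true acceleration is zero, but the accelerometer reports a small uncorrected bias plus random noise. Integrating twice turns that tiny error into a position estimate that runs away roughly quadratically.

```python
import random

random.seed(0)
dt = 0.01      # 100 Hz sample rate (assumed)
bias = 0.05    # m/s^2, a small uncorrected sensor bias (assumed value)

velocity = 0.0
position = 0.0
for step in range(1000):  # 10 seconds of samples, phone actually still
    accel = bias + random.gauss(0.0, 0.2)  # measured accel; true value is 0
    velocity += accel * dt                 # first integration: accel -> speed
    position += velocity * dt              # second integration: speed -> position

print(f"after 10 s the position estimate is off by about {position:.2f} m")
```

Even though the phone never moved, the estimate ends up metres away after just ten seconds, which is why accelerometer-only dead reckoning falls apart so quickly.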
To help with that, you could also take the readings from the gyroscope into account (strictly a rate gyro, since it measures rotation speed rather than orientation): it lets you correct for apparent rotations that are actually caused by linear motion, and it gives you a reading for the yaw axis. Measuring rotation speed rather than acceleration means one less integration step than with the accelerometers, which helps; but gyros are still quite noisy, so the calculated rotation will still drift with time.
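The same kind of toy sketch (again with made-up numbers) shows why one integration is better than two, but still drifts: a small gyro bias accumulates into the yaw angle linearly instead of quadratically.

```python
import random

random.seed(0)
dt = 0.01          # 100 Hz sample rate (assumed)
gyro_bias = 0.01   # rad/s, an uncorrected bias (assumed value)

yaw = 0.0
for step in range(6000):  # one minute of samples, phone actually still
    rate = gyro_bias + random.gauss(0.0, 0.02)  # measured yaw rate; true value is 0
    yaw += rate * dt                            # single integration: rate -> angle

print(f"after 60 s the yaw estimate has drifted by about {yaw:.3f} rad")
```

The drift here grows with time rather than time squared, so it is tamer than the accelerometer case, but with no absolute reference it never stops accumulating.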
To correct that, you would also need to take the reading from the magnetometer (the compass) into account. But the magnetometer isn’t perfect either: it is quite slow, and it gives bad values near anything made of metal. (The very hardware of the phone distorts its reading, but that is usually calibrated out at the factory, since the manufacturer knows the shape and position of the metal parts and that they will always be there.)
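One common way to combine the two sensors is a complementary filter (a minimal sketch, all numbers synthetic): trust the fast-but-drifting gyro in the short term, and pull the estimate slowly toward the slow-but-absolute compass heading so the drift can’t accumulate.

```python
import random

random.seed(0)
dt = 0.01
alpha = 0.98         # how much to trust the gyro each step (assumed value)
gyro_bias = 0.01     # rad/s uncorrected gyro bias, as before
true_yaw = 1.0       # the phone is actually held at a fixed heading

yaw = true_yaw
for step in range(6000):  # one minute of samples
    gyro_rate = gyro_bias + random.gauss(0.0, 0.02)  # drifting gyro reading
    mag_yaw = true_yaw + random.gauss(0.0, 0.1)      # noisy but unbiased compass
    # blend: integrate the gyro for responsiveness, then nudge the result
    # toward the magnetometer so bias can't build up over time
    yaw = alpha * (yaw + gyro_rate * dt) + (1 - alpha) * mag_yaw

print(f"fused yaw after 60 s: {yaw:.3f} rad (true value is 1.000)")
```

Unlike the gyro-only case, the error here stays bounded: the small constant pull toward the compass cancels the bias instead of letting it accumulate for a whole minute.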
So even if there were an app that logged all that data in perfect sync with the video recording, the results would degrade badly if you moved a lot or quickly, or shot anything but a very short video.
With camera tracking in Blender, you pretty much just need many easily trackable points (i think the minimum is 8 in common between neighbouring frames of the video, but the more the better): anything with good contrast that doesn’t look too much like its surroundings. From those, Blender can figure out not only the motion of the camera but even its parameters (things like lens distortion and field of view). Depending on the scene you might not even need to place any markers yourself; window corners, flowers, patches of dirt etc. might be all you need. And it’s not hard to add your own: throw some confetti (if it’s not windy), or a few rocks, or just stick some post-its around. You just need to make sure the tracked points aren’t coplanar, meaning they don’t all lie on a single plane, and that you have enough of them in view at all times.
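If you want to sanity-check the “not coplanar” requirement, here is a hypothetical little helper (not part of Blender): four or more 3D points are coplanar when every point lies in the plane spanned by the first three, i.e. when the dot product of that plane’s normal with each remaining difference vector is (near) zero.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def coplanar(points, tol=1e-9):
    """True if all points lie (within tol) in one plane.

    Assumes the first three points are not collinear; if they are,
    the normal degenerates to zero and the test is meaningless.
    """
    p0 = points[0]
    n = cross(sub(points[1], p0), sub(points[2], p0))  # plane normal
    return all(abs(dot(n, sub(p, p0))) < tol for p in points[3:])

flat = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (2, 3, 0)]    # all in the z=0 plane
spread = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]  # one point off the plane
print(coplanar(flat), coplanar(spread))  # -> True False
```

In practice you obviously won’t measure your confetti with a ruler; the point is just that tracked features spread over depth (floor plus walls plus objects) give the solver the 3D information a single flat wall cannot.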
YouTube has many tutorials on this; a couple might be a bit outdated, but not much has changed. If you get lost, try the manual or search for other camera tracking tutorials.
Here’s one that looks good that i just found:
(if you want a shorter one, just do a search and look at the durations of the videos in the search results)
ps: since i’ve only just found that one in particular, i haven’t watched it fully; if it turns out something important is missing from it, just try another one