most accurate way to determine location and accuracy #146
Comments
Sorry for the delay; this question has pointed out a bug we will need to address in camera. GPSU is written live, at the end of the payload timing window. The "live" part is key. The location data within GPS5 is synced with the video, which is delayed some frames due to the HyperSmooth computations. So the metadata in GPS5 is in sync with the pipeline delay of HyperSmooth (a good thing), but the sticky metadata like GPSU is not (the bug). This is why there is a disparity between the first and second payloads for GPSU. GPS5/GPSU was implemented before there was HyperSmooth, so this small error went unnoticed. As GPSU is live and GPS5 is delayed, GPSU will be about 1 to 2 seconds ahead of the time it should be. In multi-camera GoPro shoots this is not an issue, as all the cameras have the same bug and therefore agree on time. The time error is constant (after the first payload) and can be compensated for; however, there may be a small difference between camera models, so it should be addressed in camera. A solution for now: smooth the GPSU values over a number of payloads, skipping the first. The smoothing will remove sampling jitter. With some experimentation you can determine the amount of time to subtract for accurate wall time, likely near 1.5 s.
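Something along these lines, as a minimal sketch (assuming Python, evenly spaced 1.001 s payloads, and a placeholder 1.5 s offset that you would calibrate for your own camera model):

```python
from datetime import datetime, timedelta

HS_DELAY_S = 1.5            # assumed pipeline delay to subtract; calibrate per camera model
PAYLOAD_INTERVAL_S = 1.001  # assumed spacing between payloads (stts = 1001)

def smoothed_wall_clock(gpsu_per_payload: list) -> list:
    """Jitter-smoothed wall-clock estimates for payloads 1..N (payload 0 is skipped)."""
    usable = gpsu_per_payload[1:]                      # skip the first payload
    base = usable[0]
    # Average each GPSU's deviation from an evenly spaced timeline to remove jitter.
    residuals = [(t - base).total_seconds() - i * PAYLOAD_INTERVAL_S
                 for i, t in enumerate(usable)]
    mean_residual = sum(residuals) / len(residuals)
    start = base + timedelta(seconds=mean_residual - HS_DELAY_S)
    return [start + timedelta(seconds=i * PAYLOAD_INTERVAL_S)
            for i in range(len(usable))]
```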
So, going on a slight tangent, how would one best use the GPSU datetime value in relation to the logged points across GoPro models? Since GPS5 may contain 17-19 points per payload (presumably depending on timing) that have no individual relative timestamp, I've taken an average and just attached the GPSU value to that (i.e. "accurate enough", but perhaps not?). I also need a relative timeline for the points, and similar to @gsimko I've assumed the samples are spread evenly across the payload. While most of the footage I process is shot at walking speed, which lowers the need for extreme accuracy, I'd still like to be as correct as possible.
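Roughly what I mean, as a sketch (the even spacing within a payload and attaching GPSU to the payload midpoint are my own assumptions, not anything the format guarantees):

```python
from datetime import datetime, timedelta

PAYLOAD_S = 1.001  # assumed payload duration (stts = 1001)

def tag_samples(gpsu: datetime, gps5_samples: list, payload_index: int) -> list:
    """Return (relative_video_time_s, estimated_wall_clock, sample) tuples."""
    n = len(gps5_samples)  # typically 17-19 per payload
    tagged = []
    for i, sample in enumerate(gps5_samples):
        rel = (payload_index + i / n) * PAYLOAD_S                  # relative video time
        wall = gpsu + timedelta(seconds=(i / n - 0.5) * PAYLOAD_S)  # GPSU treated as payload midpoint
        tagged.append((rel, wall, sample))
    return tagged
```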
We too hoped GPSU could be used for camera sync. It is good, but not great: not good enough for audio sync. It is better between like cameras, as the errors are similar, so a bunch of HERO8s will sync better than mixed camera models. We are working to fix the time precision for future cameras.
@dnewman-gpsw Yes, audio needs precision. I work with language researchers, and it's usually a good old clap, visible and audible to all cameras, that does the trick. It requires manual work afterwards, of course. Is this what you referred to? For the non-GoPro-branded cameras we've used, GPS clock sync between devices is usually off by up to several hundred ms. For future updates, could one perhaps have one camera sync its clock via satellite and then act as the master clock that the other cameras sync with directly, or even follow (re-syncing every X seconds/minutes)? Via e.g. Bluetooth or whatever wireless protocols are available?
I'm looking at the data from my GoPro9 and wondering what's the most accurate way to estimate my position in space and time (wall-clock time, not video time).
For the sake of argument, let's assume a video at 29.97 fps. The payloads in the MP4 are stored with stts = 1001. If I understand correctly, this means that 1001 milliseconds is the time difference between neighboring payloads. Typically each payload has 18 GPS5 samples and 1 GPSU sample.
I'm unsure about the interpretation of the GPSU field: is it the GPS time belonging to the first GPS5 sample, which in turn corresponds to the 1st video frame of the payload? Or does it belong to the last (18th) GPS5 sample, and therefore roughly correspond to the GPS time at the 30th video frame, i.e. the end of the 1001 ms interval?
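To make the two readings concrete, here is a small sketch of how they would differ (both are just hypotheses, assuming 1.001 s payloads with 18 evenly spaced samples):

```python
from datetime import datetime, timedelta

PAYLOAD_S = 1.001
SAMPLES = 18
STEP_S = PAYLOAD_S / SAMPLES

def sample_wall_time(gpsu: datetime, i: int, gpsu_is_first_sample: bool) -> datetime:
    """Wall-clock estimate for GPS5 sample i under either reading of GPSU."""
    if gpsu_is_first_sample:
        return gpsu + timedelta(seconds=i * STEP_S)              # GPSU = sample 0
    return gpsu - timedelta(seconds=(SAMPLES - 1 - i) * STEP_S)  # GPSU = sample 17

# The two readings disagree by (SAMPLES - 1) * STEP_S, roughly 0.95 s, for every sample.
```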
There's also an STMP field, which I suspect could be used to further improve the accuracy. If it says 153000 µs, does that mean that the GPSU time lags the video by 153 ms, i.e. the first GPSU value corresponds to 153 ms into the video?
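As a sketch of that hypothesis only (STMP's meaning is exactly what I'm asking about):

```python
# If STMP really were the lag of GPSU behind the video in microseconds,
# the first payload's GPSU would land STMP / 1e6 seconds into the video.
def gpsu_video_time_s(stmp_us: int) -> float:
    return stmp_us / 1_000_000.0

print(gpsu_video_time_s(153_000))  # -> 0.153, i.e. 153 ms into the video
```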
Finally, in my test video my first two payloads have GPSU times with a difference of 2200ms. Could that mean that GPSU is only a best effort and either of the scenarios described above (GPSU reflects the beginning/end of the 1001ms time interval) could and does happen within a single recording?
I found another issue that asked a similar question, but there was no clear resolution there, as the answer was to rely on video timing, which I cannot do in this case because I need the actual wall-clock time. Any help is greatly appreciated, and big thanks for open-sourcing this GPMF parser!