One of the technical challenges in this project is figuring out the exact time offset between two waveforms. I think I've solved that well enough.
The Good
My algorithm correctly detects the time skew between two waveforms. Here's the raw data from two mics:
And here’s after the skew is corrected for:
The two waveforms are a very good match.
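For the curious, here's a minimal sketch in Python of one way to estimate the skew: plain cross-correlation with a parabolic fit around the peak for sub-sample resolution. This isn't necessarily what my code does, just the general shape of the computation; estimate_skew and its arguments are illustrative names.

import numpy as np
from scipy.signal import correlate

def estimate_skew(a, b, sample_rate):
    # Return how far waveform b lags waveform a, in seconds.
    # A positive result means b is delayed relative to a.
    a = a - np.mean(a)  # remove DC so the peak reflects shape, not offset
    b = b - np.mean(b)

    corr = correlate(b, a, mode="full")
    lags = np.arange(-(len(a) - 1), len(b))
    peak = int(np.argmax(corr))

    # Parabolic interpolation around the peak gives sub-sample resolution.
    frac = 0.0
    if 0 < peak < len(corr) - 1:
        y0, y1, y2 = corr[peak - 1], corr[peak], corr[peak + 1]
        denom = y0 - 2 * y1 + y2
        if denom != 0:
            frac = 0.5 * (y0 - y2) / denom

    return (lags[peak] + frac) / sample_rate

Usage would be something like skew = estimate_skew(mic1, mic2, 48000) with the two recordings as NumPy arrays.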
The Bad
The algorithm is very computationally intensive. With my first pass at the code, finding the skew took 10-20 minutes for two 50 millisecond waveforms. With a little optimization (caching interpolation results and discarding excess precision), I got it down to 1-2 minutes, which is much better but still pretty slow. I may be able to get another factor of 2 or 3 by switching from Python to C++, but getting the code right will be more difficult.
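As a rough illustration of the caching idea (not my actual code; the function name and the upsampling factor are made up): interpolate each waveform onto a finer grid once and reuse that array for every candidate skew, instead of re-interpolating each time. Storing the result as float32 is the "discarding excess precision" part.

import numpy as np
from scipy.interpolate import interp1d

def upsample_once(samples, sample_rate, factor=8):
    # Interpolate onto a finer time grid a single time, then reuse the
    # cached result for every candidate skew.
    t = np.arange(len(samples)) / sample_rate
    fine_t = np.arange(len(samples) * factor) / (sample_rate * factor)
    interp = interp1d(t, samples, kind="cubic",
                      bounds_error=False, fill_value=0.0)
    # float32 is plenty of precision here and cuts memory traffic in half.
    return fine_t, interp(fine_t).astype(np.float32)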
The Skewed
It has occasionally detected skews in the range I'd expect for two microphones next to each other (a few hundred microseconds), but most of the skews have been in the 2-2.5 millisecond range, about 10 times what I'm hoping for. Apparently more work on time sync is needed.
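For scale, here's the back-of-the-envelope math (the 10 cm spacing is just an illustrative number, not a measurement):

SPEED_OF_SOUND = 343.0   # m/s at room temperature
mic_spacing = 0.10       # meters -- illustrative assumption for "next to each other"

print(mic_spacing / SPEED_OF_SOUND * 1e6, "microseconds")    # ~292 us expected
print(0.0025 * SPEED_OF_SOUND, "meters")                     # ~0.86 m for a 2.5 ms skew

A 2-2.5 millisecond skew corresponds to roughly 0.7-0.85 meters of acoustic path difference, far more than the mic geometry can explain, which is why the clocks look like the likelier culprit.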