Clipping test videos automatically.

Screenshot of the application I developed as part of a system to automatically edit, name, and save test videos to match test data timescales.


While I was working at Lear Corporation, one of the major pain points for the test technician running the static safety tests for seats (essentially pulling on them to ensure they meet strength and deflection standards) was the videos recording these tests. There was no good, simple triggering system to start and stop recording from multiple camera angles. Instead, the technician had to turn every camera on before the test, turn them all off afterward, and manually trim each clip to what looked like the start and end of the test. This was very time-consuming and very inaccurate: the timescale in the videos could not be correlated with the timescale in the test data, which hindered engineering analysis of any issues that may have occurred in a given test.


I tried a few approaches to fix this problem. My first attempt used a wireless shutter trigger remote with a receiver on each camera: I set the test bench software to run a small C# program that sent a command over serial, making a microcontroller "press the button" on the remote at the right time.

The microcontroller wired directly to the wireless shutter remote's PCB through a logic level converter.

This didn't work out well: the cameras would not start at the same time, taking several seconds to adjust focus before recording began, and sometimes failing to focus at all. Because of this, automatically triggering the cameras was out.


This left only the possibility of including some distinct marker of the test duration in the videos themselves, and automating the editing after the fact using that marker. Off the bat, a visual indicator was out, because it would either be too difficult and inefficient to locate with automated image processing, or would interfere with the footage or pose a safety concern (e.g. a bright light flash).

This left sound as the remaining alternative. So I tested different tones with a simple tone generator app, recorded the results with one of the cameras, and ultimately found that a 15kHz tone stood out clearly against the background noise on the test lab floor while still sitting within the frequency pickup range of our cameras.
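Finding that tone in a spectrogram image comes down to knowing which pixel row corresponds to 15kHz. The sketch below (in Python, for illustration; the real program was C#) assumes a linear frequency axis running from 0 Hz at the bottom of the image to the Nyquist frequency (half the sample rate) at the top, which is why the tone shows up near the top of the spectrogram. The sample rate and image height here are illustrative values, not the ones from the actual setup.

```python
def frequency_to_row(freq_hz: float, sample_rate: int, image_height: int) -> int:
    """Return the pixel row (0 = top of image) closest to freq_hz,
    assuming a linear frequency axis from 0 Hz (bottom) to Nyquist (top)."""
    nyquist = sample_rate / 2
    if not 0 <= freq_hz <= nyquist:
        raise ValueError("frequency outside the spectrogram's range")
    fraction_up = freq_hz / nyquist          # how far up the axis the tone sits
    return round((1 - fraction_up) * (image_height - 1))  # flip: row 0 is the top

# For 48 kHz audio rendered into a 1024-pixel-tall image, 15 kHz lands
# in the upper portion of the spectrogram, matching the line visible
# near the top of the example image above.
row = frequency_to_row(15_000, 48_000, 1024)
```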

An example audio spectrogram with a brief 15kHz tone over shop noise, seen clearly as a solid line in the top-left.

As it so happens, ffmpeg can generate spectrogram images easily via the command line. This made it relatively simple to develop a C# program that generated these spectrograms, parsed the pixels at the y-height for 15kHz, and found the rising edge of a pulse of that tone against the background noise.

So, I repurposed the microcontroller from the earlier attempt to generate a 15kHz tone, and wired up an amp and some beefy speakers at the test station. Now, at the beginning of each static safety test at Lear Corporation, a 15kHz tone plays.
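For the curious, the signal the cameras are listening for is just a pure sine wave. The real tone came from the microcontroller driving the amp, but a Python sketch of the waveform (with an assumed 48 kHz sample rate) looks like this:

```python
import math

def tone_samples(freq_hz: float, sample_rate: int, duration_s: float) -> list[float]:
    """Generate duration_s seconds of a pure sine tone as floats in [-1, 1]."""
    count = int(sample_rate * duration_s)
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(count)]

# Half a second of 15 kHz at 48 kHz: 5/16 of a cycle per sample, so the
# waveform repeats exactly every 16 samples.
samples = tone_samples(15_000, 48_000, 0.5)
```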

The rest was pretty clear-cut: grabbing the test duration from the test data made it simple to calculate the end time of the test, since the tone gave the start time. From there, it was just a matter of implementing the rest of the ffmpeg magic in the C# application to clip the video, transcode it, name the file, and save it where it was supposed to go.
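That final step boils down to building one ffmpeg invocation per camera angle. A Python sketch of the idea (the real program was C#, and the file names, padding, and codec choices here are illustrative assumptions):

```python
def build_clip_command(src: str, dst: str, tone_start: float,
                       test_duration: float, padding: float = 1.0) -> list[str]:
    """Build an ffmpeg argv that trims src to the test window and re-encodes it.
    `tone_start` is the detected tone timestamp; `padding` adds a little
    breathing room on both ends of the clip."""
    clip_start = max(tone_start - padding, 0.0)
    clip_length = test_duration + 2 * padding
    return [
        "ffmpeg", "-y",
        "-ss", f"{clip_start:.3f}",   # seek to just before the tone
        "-i", src,
        "-t", f"{clip_length:.3f}",   # keep the test window plus padding
        "-c:v", "libx264",            # transcode for easy playback
        "-c:a", "aac",
        dst,
    ]

# A tone detected at t = 40 s marking a 30-second test:
cmd = build_clip_command("raw_cam1.mp4", "Test_042_cam1.mp4", 40.0, 30.0)
```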


With this new system, static safety test videos were no longer a dreaded backlog task for the test technician; he could process them trivially, making them readily available for review when needed and improving our overall testing workflow.
