Gotta Go Fast: Image Based Comparisons For Speedrunning


Within the past couple of years, Speedrunning has emerged as a form of popular entertainment. For those who do not know, Speedrunning is just trying to finish a game of the runner's choice as fast as possible. Usually, this amounts to breaking the game in ways that were not known to the developers. To gauge their progress throughout a run, most runners use a program called LiveSplit. This open-source program allows the user to set up named “Splits” and then move through them, recording the time whenever they press a button. This lets a Speedrunner see whether they are ahead of or behind their Personal Best time for a segment of the game. One problem this leads to is that when the competition for first place is very close, as in less than half a second, the player's reaction time in pressing the button can factor into the final time. What I aimed to achieve was to remove the human component of splitting and make it an automated process based on a user-supplied image.

The first task I attempted was simply to receive a live video feed from a USB capture device. Since LiveSplit is written in C#, I started writing a simple program just to capture the input. Thankfully, I discovered AForge, a group of libraries that includes both video capture and image comparison. After looking over some documentation and making a simple video viewer that compared every new frame with a given image, it was time to implement it into LiveSplit.
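A stripped-down sketch of that kind of frame-comparison loop looks roughly like the following. The device index, the reference image path, and the console output are placeholders (the real viewer also drew the frames to a window), but it shows the AForge pieces involved: FilterInfoCollection and VideoCaptureDevice for the capture feed, and ExhaustiveTemplateMatching for the comparison.

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using AForge.Imaging;
    using AForge.Video;
    using AForge.Video.DirectShow;

    class CaptureComparer
    {
        static void Main()
        {
            // Grab the first video input device (e.g. a USB capture card)
            var devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
            var capture = new VideoCaptureDevice(devices[0].MonikerString);

            // The image every incoming frame gets compared against.
            // Clone to 24bpp RGB, since ExhaustiveTemplateMatching only accepts
            // 8bpp grayscale or 24bpp color images. The reference must be no
            // larger than the captured frame.
            Bitmap reference = AForge.Imaging.Image.Clone(
                new Bitmap("reference.png"), PixelFormat.Format24bppRgb);

            // Similarity threshold of 0 means every comparison reports a result
            var tm = new ExhaustiveTemplateMatching(0);

            capture.NewFrame += (sender, args) =>
            {
                // AForge reuses the frame buffer, so clone the frame before working with it
                using (var frame = AForge.Imaging.Image.Clone(args.Frame, PixelFormat.Format24bppRgb))
                {
                    TemplateMatch[] matchings = tm.ProcessImage(frame, reference);
                    Console.WriteLine("Similarity: {0:P1}", matchings[0].Similarity);
                }
            };

            capture.Start();
            Console.ReadLine();      // run until Enter is pressed
            capture.SignalToStop();
        }
    }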

Forking LiveSplit itself was no problem, as it should be; working through the program, however, was a much more arduous task. There is little to no documentation on how anything functions, and even fewer comments. After F12'ing and Finding All References enough times, I figured out what I needed to do by looking at how the existing AutoSplitter component works (the AutoSplitter splits by observing RAM values). The following code snippet is the primary code that runs within my component's update loop:

                // Only compare when we have a captured frame and the current split has a reference image
                if (currentFrame != null && ComparisonImages[state.CurrentSplitIndex] != null)
                {
                    // Exhaustive template matching: compare the live frame against the split's reference image
                    var matchings = tm.ProcessImage(currentFrame, ComparisonImages[state.CurrentSplitIndex]);

                    // Similarity is 0..1; split once the frame is at least (100 - threshold)% similar to the reference
                    if (matchings[0].Similarity * 100 >= 100 - state.CurrentSplit.ImageSimilarityThreshold)
                    {
                        Model.Split();
                        // Queue the frame for later disposal so we do not leak Bitmaps
                        ImagesToDispose.Add(currentFrame);
                    }
                }
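For context, tm is AForge's ExhaustiveTemplateMatching matcher, a single instance constructed ahead of time along the lines of the following, with a similarity threshold of 0 so that matchings[0] always has a result to read:

                ExhaustiveTemplateMatching tm = new ExhaustiveTemplateMatching(0);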

The main problem with this section of code only matters if you need the image comparison done quickly. The primary method of comparison I was using was exhaustive template matching, which examines the images on a pixel-by-pixel basis. When comparing two 1920×1080 images, that is over two million pixel comparisons per frame, all of which have to happen within the timer's update interval. When I finally got everything working, I found that the time it took to process even two 640×480 images was enough to make the timer chug.
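One way to see this for yourself is to time a single comparison with a Stopwatch. The file names below are placeholders, but the call is the same one used in the update loop; anything close to the frame interval (roughly 16 ms at 60 fps) means the comparison alone is eating an entire frame's worth of time.

    using System;
    using System.Diagnostics;
    using System.Drawing;
    using System.Drawing.Imaging;
    using AForge.Imaging;

    class ComparisonTiming
    {
        static void Main()
        {
            // Two same-sized test images, converted to 24bpp RGB for ExhaustiveTemplateMatching
            Bitmap frame = AForge.Imaging.Image.Clone(
                new Bitmap("frame_640x480.png"), PixelFormat.Format24bppRgb);
            Bitmap reference = AForge.Imaging.Image.Clone(
                new Bitmap("reference_640x480.png"), PixelFormat.Format24bppRgb);

            var tm = new ExhaustiveTemplateMatching(0);

            // Time a single exhaustive comparison
            var sw = Stopwatch.StartNew();
            TemplateMatch[] matchings = tm.ProcessImage(frame, reference);
            sw.Stop();

            Console.WriteLine("Similarity: {0:P1}  Time: {1} ms",
                matchings[0].Similarity, sw.ElapsedMilliseconds);
        }
    }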