LS: No More All-Nighters

It has become almost a weekly thing for a few members of the team to pull an all-nighter the day before (or day of) class in order to make sure that the game is ready for presenting. In my entire career as a college student I have never pulled an all-nighter for school work, or really anything. One of the main reasons is that I would much rather have sleep than none. But the mental impact of working on the same thing for over 10 hours straight with no breaks is also huge. It is neither healthy nor good for the team. I've suggested that we hold another work meeting during the week instead of everyone rushing at the 11th hour to make sure everything is ready and bug free beforehand, but it falls on deaf ears.

Swiping To The Answer

I’ve got a confession to make: I love Tinder. Not because of the dating aspect, but because the swiping just feels so good. However, objectifying people into a Yes or No within a couple of seconds is morally questionable. Because of that, I decided to write a game to quench my swiping thirst without being interrupted by a notification that I have a match.

It feels so good

Because I do not own a Mac or an iOS device, I decided to use libGDX to build an Android game. After following the setup guides, I was all set to start making a game within an hour. Looking at the example games, I noticed that libGDX is very similar to XNA. Because of this, I structured my code as I did in a Graphics/Game Engine Programming class I took years ago: Managers that process multiple entities, with each entity responsible for updating and drawing itself. However, just like the game industry, I started to realize that a Manager system is not the best way. In hindsight, I should have made this game in a more Entity Component way.

The game itself could be seen as a rhythm game like Tap Tap Revenge: notes in different lanes travel down the screen. However, you must swipe each note into a collection bucket at the bottom of the screen when the note is ready to be collected. After reading a Reddit/Ludum Dare post about how to make a rhythm game that does not fall out of sync, I decided to adopt the Conductor system described in the post. The system essentially works by tying the position of everything rhythm-dependent to the music’s current position, and never updating a variable without applying the music’s crotchet to it (the crotchet is the length of time of a single quarter note of a beat). The game itself was never really finished due to other mandatory projects taking up time, so its current state can be found on my GitHub here.
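
A minimal sketch of the Conductor idea in Python (class and function names here are illustrative, not the actual libGDX code): every rhythm-dependent position is re-derived from the music's playback position each frame, so timing drift can never accumulate.

```python
class Conductor:
    """Resyncs to the music clock every frame instead of accumulating
    frame deltas, so rhythm-dependent positions can never drift."""

    def __init__(self, bpm, song_position_seconds=0.0):
        self.bpm = bpm
        self.crotchet = 60.0 / bpm      # seconds per quarter note
        self.song_position = song_position_seconds

    def update(self, music_position_seconds):
        # Read the position straight from the music player each frame.
        self.song_position = music_position_seconds

    def beats_elapsed(self):
        return self.song_position / self.crotchet


def note_y(conductor, note_beat, spawn_y, target_y, beats_on_screen=4.0):
    """Interpolate a falling note's screen position from the music clock,
    never from accumulated delta time."""
    progress = (conductor.beats_elapsed() - (note_beat - beats_on_screen)) / beats_on_screen
    return spawn_y + (target_y - spawn_y) * max(0.0, min(1.0, progress))
```

Because the note's position is a pure function of the song position, pausing, stuttering, or a slow frame cannot knock the notes out of sync with the audio.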

Gotta Go Fast: Image Based Comparisons For Speedrunning

Within the past couple of years, Speedrunning has emerged as a form of popular entertainment. For those that do not know, Speedrunning is just trying to finish a game of the runner’s choice as fast as possible. Usually, this amounts to breaking the game in ways that were not known to the developers. To gauge one’s progress throughout a run, most runners use a program called LiveSplit. This open source program allows the user to set up named “Splits” and then move through them, recording the time when they press a button. This lets a Speedrunner see whether they are ahead of or behind their Personal Best time for a segment of the game. One problem this leads to is that when the competition for first place is very close, as in less than half a second, the reaction time of the player pressing the button can become a factor in the final time. What I aimed to achieve was to remove the human component of splitting, and make it an automated process based on a user-supplied image.

The first task I attempted was simply to receive a live video feed from a USB capture device. Since LiveSplit is written in C#, I started writing a simple program just to capture the input. Thankfully, I discovered AForge, a group of libraries that includes both video capture and image comparison. After looking over some documentation and making a simple video viewer that compared every new frame with a given image, it was time to implement it in LiveSplit.

Forking LiveSplit itself was no problem, as it should be; however, working through the program was a much more arduous task. There is little to no documentation on how anything functions, and even fewer comments. After F12’ing and Finding All References enough times, I figured out what I needed to do by looking at how the existing AutoSplitter component works (it splits automatically by observing RAM values). The following code snippet is the primary code that runs within the update loop:

    // Runs once per update tick: compare the newest captured frame
    // against the user-supplied image for the current split.
    if (currentFrame != null && ComparisonImages[state.CurrentSplitIndex] != null)
    {
        // tm is the AForge template matcher; matchings[0] is the best match.
        var matchings = tm.ProcessImage(currentFrame, ComparisonImages[state.CurrentSplitIndex]);

        // Similarity is in [0, 1]; split once it clears the per-split threshold.
        if (matchings[0].Similarity * 100 >= 100 - state.CurrentSplit.ImageSimilarityThreshold)
        {
            Model.Split();
            ImagesToDispose.Add(currentFrame);
        }
    }

The main problem with this section of code only matters if you need image comparison done quickly. The primary method of comparison I was using was exhaustive template matching, where you examine the images on a pixel-by-pixel basis. When comparing two 1920×1080 images, the number of calculations that must be performed is enormous relative to the very short time available. When I finally got everything working, I found that the time it took to process even two 640×480 images was enough to make the timer chug.
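
To make the cost concrete, here is a back-of-the-envelope sketch in pure Python of a per-pixel similarity check (an illustration of the general technique, not AForge's actual implementation):

```python
def similarity(frame, template):
    """Mean absolute-difference similarity between two same-sized
    grayscale images (lists of rows of 0-255 values), in [0.0, 1.0]."""
    assert len(frame) == len(template) and len(frame[0]) == len(template[0])
    total = 0
    # Every single pixel pair costs work; there is no early exit.
    for row_f, row_t in zip(frame, template):
        for a, b in zip(row_f, row_t):
            total += abs(a - b)
    max_diff = 255 * len(frame) * len(frame[0])
    return 1.0 - total / max_diff

# Two 1920x1080 frames mean 1920 * 1080 = 2,073,600 pixel comparisons
# per frame; at 60 fps that is over 124 million comparisons per second,
# which is why even 640x480 made the timer chug.
```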


LS: My Time Is Now

Tonight is the night of the Senior Show. After one last rough weekend, those who did not go to PAX worked very hard to make sure the game was done, pulling all-nighters, against my advice, in order to pull it off. As a team, we decided to all wear matching Dan Shredder shirts to the show; the shirts themselves look like a Dan Shredder tour shirt. All that is left is to stand in front of the audience and network with recruiters to try and find a job.

LS: We’re Going to PAX!

I’m currently writing this from the couch of the apartment I am staying at for PAX East. Half the team has come to PAX to see everything it has to offer; I am here myself as a volunteer Enforcer. The Gold Master for Dan Shredder is due Monday evening, and I won’t be back in time to work on the game. The five who are not down in Boston for the weekend have taken on the duty of working overdrive to finish the game. They have already hit a breaking point: a commit from the super work meeting we had on Wednesday broke the game, and we had no idea why. Everyone who could was testing older commits to find out which one broke the game (even me, through the power of a 4G connection and remote desktop).

LS: The C++ is dead, Long Live the C++

For the entire semester, programming duties have been split between gameplay and an engine rewrite. Coming into this semester, the majority of Dan Shredder was written in Blueprints, and it showed: the game could not run at 60 FPS on anything except the lowest graphics settings. The goal of the C++ transition was to help us reach 60 FPS on the highest graphics settings. Today we decided to cut the rewrite that had been worked on for over five weeks.

The visible progress we made on the C++ rewrite in those five weeks was very little. Externally, there was none; internally, we were only getting to the point where the game would be playable, but not in the way it currently was. The player could strum notes, songs could be played, and all of these individual functions were written. However, when it came time to start putting everything together, things started breaking left and right. One of the biggest perpetrators was that every system was tightly coupled to the others, unable to be tested until other classes were done. For instance, the InputHandler class required functions in the PlayerGuitarController class to be written, those functions required the Fretboard class to be written, and that in turn required the Note Receivers to be written. This might seem like a worst-case scenario, layers upon layers of requirements just to get one thing working, but it was commonplace.
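
For illustration, here is a small Python sketch (with hypothetical names, not our actual Unreal classes) of how a thin interface would have let a class like InputHandler be tested before the rest of the chain existed:

```python
class StrumTarget:
    """Minimal interface the input handler actually needs, instead of a
    concrete guitar controller that drags in the whole dependency chain."""
    def strum(self, lane):
        raise NotImplementedError


class InputHandler:
    def __init__(self, target):
        # Depends on the interface, not on PlayerGuitarController,
        # Fretboard, or the Note Receivers.
        self.target = target

    def on_swipe(self, lane):
        self.target.strum(lane)


class RecordingStub(StrumTarget):
    """Test double: records strums so InputHandler can be exercised
    before any of the real downstream classes are written."""
    def __init__(self):
        self.strums = []

    def strum(self, lane):
        self.strums.append(lane)
```

With a stub like this, each layer can be written and verified in isolation instead of waiting for four other classes to exist first.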

The hole that was metaphorically dug while doing this C++ rewrite was also very deep. So deep, in fact, that I never actually came out of it. I entered that hole when other team members were thinking about what to do this semester and how we wanted to move forward as a new team. I emerged from that hole to a different landscape, as if someone had built a city on top of it. The game itself looked completely different. Ideas that had been just ideas were now implemented in the game. There was a very real disconnect between the C++ rewrite team and the rest of the team.

The problem with a hole is that by the time it is fully dug, there is more to dig. We realized that by the time we rewrote the entire game, we would have to rewrite the new features that had been implemented in the non-C++ code. And by the time we finished that, there would be new things to write. We would always be playing catch-up, never fully caught up to the rest of the team.

LS: Dr. Unreal or: How I Learned to Love the Crash

One of the most annoying parts of transitioning from Unity development to Unreal development is crashing. In Unity, if a project crashed, someone did something wrong, very wrong; usually Unity will just throw exceptions and break out of the current function. Unreal, on the other hand, does not seem to believe in exceptions, and if one is thrown, it will straight up crash to desktop. Unreal even goes as far as disallowing try/catch statements in C++ code, refusing to let the code compile. I can see both the pros and cons of the Unreal approach to error handling compared to Unity’s.

On the pro side, this ensures that when something goes wrong in your game, you know something went wrong. This helps make sure that when your game is packaged into an executable, it should not ever crash. I know developers who, when they see the red error text in Unity, just ignore it and accept that it happens. If it doesn’t break the game, then it never gets fixed. When Unreal forces a crash to desktop, it makes sure you know the error is not good. This is a good thing to experience in school now, since in the game development world there are no exceptions, so making sure not to create any now is a good thing.

However, the downside to crashing whenever an error is thrown is the frustration factor. The process of getting back to where you were includes reopening UE4, hoping you saved before you pressed Play, and only then can you start making changes again. When this happens multiple times in a night, it becomes demoralizing: you spend a few minutes getting back to the spot you were just at, and then it crashes within seconds. Thankfully, Unreal provides a stack dump from the crash to help you know exactly where things went wrong. Sometimes, however, it provides no help, as in the featured image of this article. When that happens, you just have to start commenting out recently written code, which can be a fun process in itself.

LS: How To Fake Being a Rock God

One of the coolest things about rhythm games like Guitar Hero is the feeling that the notes you are playing actually make sense to play and feel nice to perform flawlessly. This blog post is mostly going to be about Guitar Hero note theory.

There is a set of patterns in Guitar Hero that is common across many songs. One of the most common is a scale that ascends or descends.

Mosh 1 showcases descending four-note scales

The most extreme case of this in Guitar Hero 3 is the Mosh 1 section of Slayer’s Raining Blood. This entire section is just the pattern in the image repeating, with occasional breaks for a chord. It is often considered the hardest part of the song; however, it is in essence the same pattern throughout. When played correctly, one rarely has to strum, and it evokes a feeling of zen where all there is is you, and the feeling of a cascading waterfall, of blood. This pattern does not have to be four descending notes; it has been used in many other places, including ascending three-note patterns, as in the Guitar Break in Jordan by Buckethead, or as a repeated three-note triplet, as in Solo A of Metallica’s One.

Another popular technique that appears in a variety of Guitar Hero songs is the trill: two notes repeated over a stretch of time. Trills can be at a decent pace, as in Tonight I’m Gonna Rock You by Spinal Tap in Guitar Hero 2, or at an insane, blistering pace, as in Surfing With the Alien by Joe Satriani. There are various techniques to play these notes; the one I use is to root my index finger on the lower note and use my middle or ring finger for the hammer-ons. For faster trills, most higher-level players actually bring their strumming hand over to help fret.

This may seem like a weird blog post, but it has a purpose. One of the main goals of Guitar Hero is to make the player feel like they are playing the actual instrument, and not just some plastic toy. Using both hands to fret notes, especially trills, has been common in guitar playing since Eddie Van Halen’s heyday. Using all four fingers to play a riff is how Raining Blood is actually played.

Team SAOS: The Root of the Problem

A Chinese philosopher once said, “千里之行,始於足下”: “A journey of a thousand miles begins with a single step.” This is true for both grand conquests and game production. Going back to the very formation of Team NAH, it was never a stressful one. Some teams formed from friendships, others formed from the remains of Production 2 teams, and some were “formed” out of the leftovers. Team NAH, in the beginning, had never worked together on a single game. We quickly formed as a team and left it at that for months. It was not until halfway through the summer that we realized we should do some preparation and discussion for the coming semester. Still nameless at this point, we would suggest game ideas, usually tagged with some joke or gimmick. When it came time to actually name ourselves, we were stumped. On a walk home from work one day, I jokingly said we should just call ourselves “Names Are Hard.” We all liked the name, and it somewhat reflected our relaxed nature as a team. It may have taken a couple of weeks to get the workflow going, but when it did, we were walking at a brisk pace. We planned everything in advance so we knew when things needed to be done and what was needed from us for that to happen. By the end, all of us had a product we were happy with. Even though we did not move forward into Production, I feel like we all moved forward in our respective disciplines.

Individually, I believe that I am not a creative person at all. Aiding in the creative process of the team is something I can do in a roundabout way as a Programmer. Something I heard other Programmers talking about near the beginning of the semester was the “Programmer’s Veto.” In essence, the Programmer’s Veto is the idea that since you are the primary way ideas or mechanics get into the game, you have some extra say in how they exist in the game. Now, some people think of this as “I don’t like this idea so I’m not going to do it,” which is a toxic mentality; however, the Programmer’s Veto can be used in a positive light. Since I was usually closest to the game integration-wise, I feel I had a better sense of what was in and out of scope given the current state of the game and technology. This may sound like I was hindering the creative process instead of contributing to it, but I can think of only one time where I said, “I don’t think we can do this with the time we have left.” After I explained to the team why I thought this, they agreed with me. The way I most often added to the creative process was in how I added features to the game, sometimes sparking new ideas for other features.

There was one time throughout the development of The Root of the Problem where I had to think about how what I was doing would be used by other members of the team. Rewriting how enemies spawned came out of nowhere, and was a great way to spend a restless night. Previously, enemies could only be spawned in one wave, and in a very specific way, something that needed to change so that spawning enemies could be done more easily. When I was rewriting the system, the one thing I had in mind was, “What would be the easiest way for Scott [our Designer] to make interesting enemy arrangements?” After a couple hours of hacking away, I finally reached a point where spawning enemies was much simpler and more intuitive than it was before.

Some key decisions I had to make throughout the semester were Git vs. SVN, Unity vs. Unreal, and The Root of the Problem vs. our other prototype, Blind Faith. To be fair, the first two were questions raised before we even started development. The Git vs. SVN debate seems to be a foreign one to many of my peers. I know many teams that went with Subversion because everyone on their team already knew it, which is a valid reason. After talking with the team, they said there was no preference between Git and SVN, which might raise the question, “Why did you go with Git then?” The answer is that over the course of the summer, I had grown accustomed to Git and saw that it was used much more than Subversion outside the educational environment we were all used to. It may have caused some trouble when everyone had to switch from the SVN mindset to a Git one, but I still think it was the right choice.

The next major decision was Unreal vs. Unity. One of the main reasons we wanted to go with Unreal was that, from the start, we did not want our game to have the “Unity” look that so many games before it had. We wanted to stand out and have people think our product was not made in Unity. The downside to Unreal, however, was that very few of us had ever done anything in it, let alone make a game. After spending a couple of weeks during the summer trying to navigate Unreal’s, at the time, atrociously lacking C++ documentation, I told the group that if we were to use Unreal, we could make the same thing in Unity at a much faster pace, and it would probably be much more stable. In the end, the main way to avoid the “Unity Game” look is to create a unique art style that either abandons the Standard Shader or uses it to its fullest potential.

The final decision was probably the hardest and most important one I, and the whole team, had to make. We had all been very into the idea of The Root of the Problem since its inception, but most of the faculty we talked to did not like the idea and loved the premise of Blind Faith, a game where the player helps guide damned souls out of hell by creating a path for them. I vividly remember sitting at dinner one night with the team, discussing which game we should go forward with. Whenever we talked about Blind Faith, we all had a much less passionate tone, compared to The Root of the Problem, where we would all sound excited to be working on it. We all knew it would be the harder game to pull off, and we went forward with it knowing that. The deciding factor for choosing The Root of the Problem over Blind Faith was that we all wanted to make a game we would have fun playing and have fun developing.

Moving forward into Production, there are a few things I learned that are not just nice to have, but required in order to have a functioning team. The biggest are camaraderie and communication. Throughout the development of The Root of the Problem, I wanted to keep working on the game because I wanted to keep working with the team. Looking at our Slack, you would see channels where we would just spam gifs alongside daily scrum channels. This allowed us, or at least me, to feel comfortable addressing any concerns there were with the game. Wanting to work with your team is probably the most important factor when it comes to game development.

Team NAH: We’re Going To Woodstock!

“By the way, you guys, I signed us up for GMGF.” Being told this was somewhat frightening news. GMGF, the Green Mountain Games Festival, was about two weeks away, and our game was not in a state I would be proud to show to the public. It was in a raw state that many would consider visibly still in the early stages of development. Not much thought had been put into the game itself; the systems were there, but not much attention had been paid to how they were structured. Various bugs meant the game had to be restarted very often during QA tests. Getting the game to a point where I would want to show it off to the public would take a large amount of work, but we managed to pull it off.

One of the primary concerns with the game was that enemy spawning was very rudimentary: you hit a trigger and it would spawn X amount of Y enemies spread across Z spawners. There was no way to offset enemy spawning or make spawning conditional. One of the first tasks I set for myself was to rewrite how enemy spawning works. After a few hours working on a side branch, I emerged with a much more robust way to handle enemy spawning. Now, each area has a set number of waves. Each wave defines which spawners it can spawn from, what types of enemies can appear in it, and how many enemies will spawn. The most important feature of the new system is that a designer can set whether the next wave spawns after a time delay or once a percentage of the current enemies are dead. This allowed us to set up a much longer level and more unique spawning setups. The best part of the new spawning system is that it is very simple to create a new area and set up how enemies spawn there.
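
A sketch of the kind of per-wave data the rewrite exposes to a designer, using illustrative field names rather than the actual Unity code:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Wave:
    spawners: List[str]        # which spawn points this wave may use
    enemy_types: List[str]     # which enemy types can appear
    count: int                 # how many enemies to spawn
    trigger: str = "time"      # "time" or "percent_dead"
    time_delay: float = 0.0    # seconds, used when trigger == "time"
    percent_dead: float = 1.0  # fraction, used when trigger == "percent_dead"


def next_wave_ready(wave, elapsed, killed, spawned):
    """Decide whether the wave after this one should start."""
    if wave.trigger == "time":
        return elapsed >= wave.time_delay
    return spawned > 0 and killed / spawned >= wave.percent_dead
```

Putting the trigger choice in data rather than code is what makes it simple for a designer to set up a new area without touching the spawning logic.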

The second feature I primarily worked on to get the game ready for GMGF was adding quality-of-life features to our grenade weapons. Heading into this sprint, players had an unlimited number of grenades, couldn’t control how far they went, and couldn’t tell where they would end up. Addressing these took a few simple changes and some more complex ones. The simple one was to give the grenades an ammo system replenished by enemy drops. The more complex fix was previewing how far a grenade would go. Due to how grenades gain their trajectory, a grenade’s path is unknown until the object itself is spawned. To allow for a preview, I had to move a lot of the calculations from the grenade itself to the object that spawns the grenades. After that, it was a matter of graphing a kinematic equation, which, thanks to some google-fu, turned out to be something Line Renderers can draw.
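
The preview boils down to sampling the kinematic equation p(t) = p0 + v0·t + ½·g·t² into a list of points for a line renderer to draw. A minimal 2D Python sketch of that sampling (illustrative, not the actual game code):

```python
def arc_points(p0, v0, gravity=-9.81, steps=20, duration=2.0):
    """Sample a projectile arc into (x, y) points a line renderer
    could draw. p0 is the launch position, v0 the launch velocity."""
    points = []
    for i in range(steps + 1):
        t = duration * i / steps
        # Kinematic equation per axis: constant velocity in x,
        # constant acceleration (gravity) in y.
        x = p0[0] + v0[0] * t
        y = p0[1] + v0[1] * t + 0.5 * gravity * t * t
        points.append((x, y))
    return points
```

Because the points come from the launch parameters alone, the preview can be drawn before the grenade object is ever spawned, which is exactly why the calculations had to move to the spawner.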

Adding these changes to TRotP made me feel much more confident about how we showed the game off at GMGF. A full report on how it went will be written in the upcoming weeks.