The Chrome T-Rex game, except the dinosaur is you

This weekend, I modified the Chrome T-Rex game to bring it into the physical world. The game is projected onto a wall, and you play by standing where the T-Rex would typically stand, and then physically hopping over the obstacles. It’s a lot of fun to play, and much harder than the original (at least for me; I’m a better spacebar-pusher than athlete). The game detects when you hit a cactus using a deep learning model that runs in real time on the webcam feed.

Check out the source code.

I started by extracting the T-Rex game from Chromium’s source. Like many Google projects, the JavaScript code includes Closure Compiler annotations, which translate nicely to TypeScript. The code is beautifully structured, and it was a snap to figure out which parts I needed to modify: the main game loop and the collision detection functions.
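To give a flavor of the change, here’s a minimal sketch of what a pose-driven collision check might look like. The simplified Keypoint and Box types, the poseHitsObstacle helper, and the confidence threshold are my own illustrations, not the project’s actual code:

```typescript
// Simplified stand-ins for PoseNet's keypoint output and the game's
// axis-aligned obstacle collision boxes.
interface Keypoint {
  part: string; // e.g. "leftKnee"
  score: number; // per-joint confidence, 0..1
  position: { x: number; y: number };
}

interface Box {
  x: number;
  y: number;
  width: number;
  height: number;
}

const MIN_KEYPOINT_SCORE = 0.5; // ignore joints PoseNet isn't sure about

// True if any confidently detected joint falls inside the obstacle's box.
// Keypoints are assumed to already be converted into game coordinates
// (the calibration step described below).
function poseHitsObstacle(keypoints: Keypoint[], obstacle: Box): boolean {
  return keypoints.some(
    ({ score, position }) =>
      score >= MIN_KEYPOINT_SCORE &&
      position.x >= obstacle.x &&
      position.x <= obstacle.x + obstacle.width &&
      position.y >= obstacle.y &&
      position.y <= obstacle.y + obstacle.height
  );
}
```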

To detect the position of the player’s body, I used PoseNet, a pre-trained pose estimation model that takes a video feed and returns a confidence score and the coordinates of the player’s joints. I was blown away by its accuracy and speed—it’s fast enough to run every frame, which is especially impressive given that it runs entirely in the browser!
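Driving PoseNet from the browser looks roughly like this. Exact option names vary a bit between PoseNet releases, so treat this as an illustrative sketch rather than the project’s actual setup:

```typescript
import '@tensorflow/tfjs'; // registers the backend PoseNet runs on
import * as posenet from '@tensorflow-models/posenet';

async function trackPlayer(video: HTMLVideoElement): Promise<void> {
  // Download and initialize the pre-trained model once.
  const net = await posenet.load();

  async function onFrame(): Promise<void> {
    // Estimate the single most prominent pose in the current frame.
    const pose = await net.estimateSinglePose(video, {
      flipHorizontal: true, // mirror, so movement matches the projection
    });

    // pose.score is the overall confidence; pose.keypoints has one entry per
    // joint, e.g. { part: "leftAnkle", score: 0.98, position: { x, y } }.
    console.log(pose.score, pose.keypoints);

    requestAnimationFrame(onFrame);
  }

  requestAnimationFrame(onFrame);
}
```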

One complication: PoseNet can help locate the player’s body in the webcam feed, but where is the player’s body in relation to the game and the obstacles? This was one of the more challenging aspects of the project to wrap my head around, and I ended up designing a calibration routine to figure out how to convert between the webcam’s coordinates and the game’s coordinates.

There are four markers positioned at known locations in the game viewport. By locating where those markers appear in the webcam feed, we can compute the perspective transform that maps webcam coordinates to game coordinates. I used the change-perspective library to do this for me, since I didn’t want to work out the linear algebra by hand. But if you’re interested in the math, this StackExchange answer is a great resource.
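In code, the calibration boils down to something like the sketch below. I’m assuming change-perspective’s default export takes flattened source and destination corner arrays and returns a point-mapping function (check its README for the exact signature), and the coordinates here are made up:

```typescript
import createPerspective from 'change-perspective';

// Where the four calibration markers live in game coordinates...
const gameCorners = [0, 0, 600, 0, 600, 150, 0, 150];

// ...and where they showed up in the webcam frame during calibration.
const webcamCorners = [112, 96, 543, 88, 561, 410, 98, 423];

// Returns a function mapping a webcam-space point into game space.
const toGameCoords = createPerspective(webcamCorners, gameCorners);

// Convert a detected joint position before feeding it to the game.
const [gameX, gameY] = toGameCoords(320, 240);
```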

By the way, industry lingo for those markers is “fiducial markers.” If I wanted to get really fancy, I could replace them with QR codes in order to do the calibration automatically, like a lot of augmented reality games do.

Special thanks to Swarthmore College for lending me the projector that I used to develop and test this project.

Full source code is available on GitHub. The README includes instructions for setting it all up if you have the equipment and would like to give the game a try.

https://github.com/veggiedefender/projectordino