By Anthony F. Voellm (aka Tony the @p3rfguy / G+) and Emily Bedont


On Wednesday, October 24th, while sitting under the Solar System, 30 software engineers from the Greater Seattle area came together at Google Kirkland to partake in the first-ever Test Edition of Ship Wars. Ship Wars was created by two Google Waterloo engineers, Garret Kelly and Aaron Kemp, as a 20% project. Yes, 20% time does exist at Google! The object of the game is to code a spaceship that will outperform all others in a virtual universe: algorithm vs. algorithm.

The Kirkland event marked the 7th iteration of the program, which was also recently run in NYC. Kirkland, however, was the first time the game had been customized to encourage exploratory testing. For "Ship Wars: The Test Edition," we planted four bugs and awarded prizes to the engineering participants who found them. Well, we ran out of prizes and were quickly reminded that when you put a room full of testing-minded people together, many bugs will be unveiled! One of the best bugs uncovered was not one of the four we had planted in the simulator: when you told your ship to turn 90 degrees, it actually turned -90 degrees. Oops!
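For the curious, here is a minimal, purely hypothetical sketch of how that kind of sign-flip bug can slip into a turn routine. The Ship class and its methods are invented for illustration and are not the actual Ship Wars simulator code:

    // Hypothetical illustration only -- not the Ship Wars simulator code.
    class Ship {
        private double headingDegrees = 0.0;

        void turn(double degrees) {
            // Bug: subtracting instead of adding flips every turn,
            // so a requested +90 degree turn comes out as -90 degrees.
            headingDegrees -= degrees;        // buggy
            // headingDegrees += degrees;     // intended
            headingDegrees = ((headingDegrees % 360) + 360) % 360;  // normalize to [0, 360)
        }

        double heading() {
            return headingDegrees;
        }
    }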

Participants were encouraged to build and test their spaceship on their own machine or on a Google Chromebook. While the coding was done in the browser, the simulator and web server ran on Google Compute Engine. Throughout the 90 minutes, people challenged other participants to duels, and head-to-head battles took place on Chromebooks at the front of the room. Many accolades were called out, but in the end there could be only one champion, who walked away with a brand spankin’ new Nexus 7. Check out our video of the evening’s activities.

Sounds fun, huh? We sure hope our participants, including our first place winner shown receiving the Nexus 7 from Garret, enjoyed the evening! Beyond the battles, our guests were introduced to the revived Google Testing Blog, heard firsthand that GTAC will be back in 2013, learned about testing at Google, and interacted with Googlers in a "Googley" environment. Achievement unlocked.

Special thanks to all the Googlers that supported the event!



By Anthony F. Voellm (aka Tony the perfguy)

It’s amazing what has happened in the field of test in the last 20 years... a lot of “art” has turned into “science”. Computer scientists, engineers, and many other disciplines have worked on provable systems and calculus, pioneered model-based testing, invented security fuzz testing, and even settled on a common pattern for unit tests called xUnit. The xUnit pattern shows up in open-source software like JUnit as well as in Microsoft development test tools.
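To make that pattern concrete, here is a minimal sketch of an xUnit-style test written against the JUnit 4 API; the class under test (a plain JDK ArrayDeque) is just a stand-in chosen for illustration:

    import java.util.ArrayDeque;
    import java.util.Deque;

    import org.junit.Before;
    import org.junit.Test;

    import static org.junit.Assert.assertEquals;

    public class StackTest {
        private Deque<Integer> stack;   // the shared fixture

        @Before                          // setup: runs before each test method
        public void setUp() {
            stack = new ArrayDeque<>();
        }

        @Test                            // exercise the code under test, then verify
        public void pushThenPopReturnsSameValue() {
            stack.push(42);
            assertEquals(Integer.valueOf(42), stack.pop());
        }
        // An @After teardown method would complete the classic
        // setup / exercise / verify / teardown cycle of the xUnit pattern.
    }

The same setup / exercise / verify / teardown shape appears, under different annotations and naming conventions, across the rest of the xUnit family.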

With all this innovation in test, it’s no wonder test is dead. The situation is no different from the late 1800s, when patents were declared dead. Everything had been invented. So now that everything in test has been invented, it’s dead.

Well... if you believe everything in test has been invented then please stop reading now :)

As an aside: “Test is dead” was a keynote at the Google Test Automation Conference (GTAC) in 2011. You can watch that talk and many other GTAC test talks on YouTube, and I definitely recommend you check them out here. The talks span a wide range of topics, from GUI automation to the cloud.

What really excites me these days is that we have closed a chapter on test. A lot of the foundation of writing and testing great software has been laid (the examples at the beginning of this post, tools like WebDriver for UI, FIO for storage, and much more), which I think of as Testing 1.0. We all use Testing 1.0 day in and day out. In fact, at Google, most of the developers (called Software Engineers, or SWEs) do the basic Testing 1.0 work, and we hold a high bar on quality. Knuth once said, "Be careful about using the following code -- I've only proven that it works, I haven't tested it."
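As one small taste of what Testing 1.0 tooling looks like in practice, here is a minimal sketch of a browser check using the Selenium WebDriver Java API. The URL and expected title are illustrative placeholders, and the sketch assumes a chromedriver binary is available on the PATH:

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class PageTitleSmokeTest {
        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver();   // assumes chromedriver is on the PATH
            try {
                driver.get("https://www.example.com");          // illustrative URL
                // Real UI tests would locate and interact with elements via By locators;
                // here we just verify the page loaded by checking its title.
                if (!driver.getTitle().contains("Example")) {
                    throw new AssertionError("Unexpected title: " + driver.getTitle());
                }
            } finally {
                driver.quit();   // always release the browser session
            }
        }
    }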

This brings us to the current chapter in test, which I call Testing 1.5. This chapter is being written by computer scientists, applied scientists, engineers, developers, statisticians, and many other disciplines. These people come together in the Software Engineer in Test (SET) and Test Engineer (TE) roles at Google. SETs and TEs focus on developing software faster, building it better the first time, testing it in depth, releasing it more quickly, and making sure it works in all environments. We often put deep test focus on Security, Reliability, and Performance. I sometimes think of SETs and TEs as risk assessors whose role is to figure out the probability of finding a bug and then work to reduce that probability. These are super interesting computer science problems, and we take a solid engineering approach rather than a process-oriented, manual, people-intensive one. We always look to scale with machines wherever possible.

While Testing 1.0 is done and 1.5 is alive and well, it’s Testing 2.0 that gets me up early in the morning to start my day. Imagine if we could reinvent how we use and think about tests.  What if we could automate the complex decisions on good and bad quality that humans are still so good at today? What would it look like if we had a system collecting all the “quality signals” (think: tests, production information, developer behavior, …) and could predict how good the code is today, and what it most likely will be tomorrow? That would be so awesome...

Google is working on Testing 2.0 and we’ll continue to contribute to Testing 1.0 and 1.5. Nothing is static... keep up or miss an amazing ride.

Peace.... Tony

Special thanks to Chris, Simon, Anthony, Matt, Asim, Ari, Baran, Jim, Chaitali, Rob, Emily, Kristen, Annie, and many others for providing input and suggestions for this post.