Today, Elon Musk and Sam Altman’s OpenAI non-profit published its first batch of open-source code, aimed at making AI programs smarter, more diverse, and less murderous. Called “OpenAI Gym,” the new code consists of a series of “environments” designed to test and improve various machine learning systems. Most of those environments are standard tasks like completing algorithms, managing physics simulators, or playing Go against a fixed opponent. The hope is that the tasks will give OpenAI and others a way to rank and improve various AI approaches, potentially pointing the way to new methods for teaching machines to learn.
The environments also include 59 different classic Atari games, including Space Invaders, Ms. Pac-Man, Pitfall, Q*bert, and Pong. Prospective AIs can play each game either from the visual output of the screen or directly from the RAM of the simulation itself. As with the more conventional challenges, the goal is to score as well as possible, effectively racking up high scores. OpenAI will keep a running tally of who does best on the challenges, although organizers insist those scores will be more like reviewed results than a conventional leaderboard.
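In practice, each of these environments follows the same agent-environment loop: the program observes the current state, picks an action, and receives a reward that feeds into its running score. Below is a minimal Python sketch of that loop using a hypothetical toy environment; the `reset()`/`step()` method names mirror the interface Gym exposes, but the coin-guessing task itself is an invented stand-in, not code from the actual release.

```python
import random

class ToyEnvironment:
    """Hypothetical stand-in task: guess which side a hidden coin landed on.
    It mimics the reset()/step() shape of a Gym-style environment."""

    def reset(self):
        # Begin a new episode and return the first observation.
        self.coin = random.choice(["heads", "tails"])
        self.steps = 0
        return "new round"

    def step(self, action):
        # Advance one timestep and return (observation, reward, done, info),
        # the same four-tuple a Gym environment's step() returns.
        self.steps += 1
        reward = 1.0 if action == self.coin else 0.0
        done = self.steps >= 10  # episodes last ten guesses in this toy
        self.coin = random.choice(["heads", "tails"])
        return "next round", reward, done, {}

def run_episode(env, policy):
    """Play one full episode and return the total score racked up."""
    env.reset()
    total, done = 0.0, False
    while not done:
        _, reward, done, _ = env.step(policy())
        total += reward
    return total

# A random "agent" scores somewhere between 0 and 10 per episode.
score = run_episode(ToyEnvironment(), lambda: random.choice(["heads", "tails"]))
print(f"episode score: {score}")
```

The same loop applies whether the environment is this toy coin game or Space Invaders; only the observations, actions, and reward signal change, which is what lets one tally compare very different AI approaches.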
It’s a more open version of a tactic that Google’s DeepMind team adopted last year, although it’s unclear how well training algorithms on arcade games actually worked out for them. Now that the environments are open source, anyone’s free to compete.