During training, players first face simple single-player games, such as finding a purple cube or placing a yellow ball on a red floor. They then move on to more complex multiplayer games like hide-and-seek or capture the flag, where teams compete to be the first to find and grab their opponent's flag. The game manager has no specific goal, but strives to improve the players' general ability over time.
Why is this cool? AIs like DeepMind's AlphaZero have beaten the world's best chess and Go players, but they can only learn one game at a time. As DeepMind co-founder Shane Legg put it when I spoke to him last year, it's like having to swap out your chess brain for your Go brain every time you want to switch games.
Researchers are now trying to build AIs that can learn multiple tasks at once, which means teaching them general skills that make adaptation easier.
One exciting trend in this direction is open-ended learning, in which AIs are trained on many different tasks without a specific goal. In many ways, this is how humans and other animals learn: through aimless play. But it requires a huge amount of data, and XLand generates that data automatically, as an endless stream of tasks. It is reminiscent of POET, an AI training dojo in which two-legged bots learn to overcome obstacles in a 2D landscape. XLand's world, however, is far more complex and detailed.
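To make the idea of an "endless stream of tasks" concrete, here is a minimal toy sketch of procedural task generation. All the names, goal types, and object lists below are illustrative assumptions for this article, not DeepMind's actual XLand API; the point is only that tasks can be sampled indefinitely from a combinatorial space of worlds and goals.

```python
import random

# Toy sketch of open-ended task generation (hypothetical, not XLand's real API):
# tasks are drawn endlessly by combining objects, colors, and goal types.

COLORS = ["purple", "yellow", "red", "blue"]
OBJECTS = ["cube", "ball", "pyramid"]
GOALS = ["find", "place_on_floor", "capture_flag"]

def sample_task(rng):
    """Sample one task description from the combinatorial space."""
    return {
        "goal": rng.choice(GOALS),
        "object": f"{rng.choice(COLORS)} {rng.choice(OBJECTS)}",
        "players": rng.choice([1, 2, 4]),  # single- or multiplayer
    }

def task_stream(seed=0):
    """Yield an endless stream of procedurally generated tasks."""
    rng = random.Random(seed)
    while True:
        yield sample_task(rng)

# Draw a few tasks from the stream
stream = task_stream(seed=42)
tasks = [next(stream) for _ in range(3)]
for t in tasks:
    print(t)
```

Even this trivial generator never repeats a fixed curriculum: the space of combinations is large, and a real system would also adapt which tasks it samples based on how the players are doing.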
XLand is also an example of AI learning to create itself, or what Jeff Clune, who helped develop POET and leads a team working on this topic at OpenAI, calls AI-generating algorithms (AI-GAs). "This work expands the boundaries of AI-GAs," says Clune. "It's very exciting to see."