SAN FRANCISCO — OpenAI, AIcrowd, Carnegie Mellon University, and DeepMind will co-organize two competitions at NeurIPS 2020. Built on the Procgen Benchmark and MineRL, the competitions aim to drive advances in reinforcement learning.
Procgen Competition
The Procgen Competition challenges participants to improve sample efficiency and generalization in reinforcement learning. Contestants will optimize agent performance within a fixed budget of environment interactions, across the 16 publicly released environments of the Procgen Benchmark plus four held-out test environments created specifically for this competition. Evaluating across such a diverse set of environments discourages solutions that overfit to any single game and gives a more robust measure of algorithmic progress. Detailed round information can be accessed here.
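For reference, the Procgen environments are distributed through the open-source procgen Python package and follow the standard Gym interface. The sketch below shows a random agent consuming a fixed number of environment steps in coinrun, one of the 16 public environments; the budget figure is illustrative, not the official limit.

```python
import gym

# coinrun is one of the 16 public Procgen environments; the four
# competition test environments are not publicly released.
env = gym.make(
    "procgen:procgen-coinrun-v0",
    num_levels=0,              # 0 = sample from the full level distribution
    start_level=0,
    distribution_mode="easy",
)

BUDGET = 10_000                # illustrative budget, not the official limit
obs = env.reset()
for _ in range(BUDGET):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
```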
Because Procgen environments are procedurally generated, agents must adapt to level layouts they have never seen rather than memorize specific trajectories, which makes the benchmark a rigorous test of generalization. The environments are also designed to be simple and fast, so baseline results are easy to reproduce and new reinforcement learning methods can be iterated on quickly.
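A common way to probe this generalization, mirroring the protocol of the Procgen paper, is to train on a fixed set of procedurally generated levels and evaluate on the full level distribution. The 200-level split below is illustrative, not the competition's official configuration:

```python
import gym

# Train on a fixed set of 200 levels...
train_env = gym.make("procgen:procgen-coinrun-v0", num_levels=200, start_level=0)

# ...then evaluate on the full (effectively unlimited) level
# distribution, so test levels are almost surely unseen in training.
test_env = gym.make("procgen:procgen-coinrun-v0", num_levels=0, start_level=0)
```

The gap between training and test returns then serves as a direct measure of overfitting.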
MineRL Competition
Building on the success of previous editions, the MineRL 2020 Competition continues to explore how human demonstrations can improve the sample efficiency of reinforcement learning. Participants will develop algorithms that complete specific tasks in Minecraft, most notably obtaining a diamond from raw pixels, within fixed limits on simulator samples and computational resources. To support this, the competition provides the MineRL-v0 dataset, a large corpus of recorded human gameplay.
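For illustration, the dataset is accessible through the minerl Python package. The sketch below assumes the minerl 0.3-era data API and uses the unobfuscated MineRLObtainDiamond-v0 task name (the 2020 competition itself used obfuscated variants of these environments); the local data directory is hypothetical.

```python
import minerl

# Download the demonstrations once (tens of gigabytes).
minerl.data.download(directory="data")  # "data" is a hypothetical local path

pipeline = minerl.data.make("MineRLObtainDiamond-v0", data_dir="data")

# Iterate over batched (state, action, reward, next_state, done)
# transitions from human gameplay, e.g. to pretrain a policy by
# behavioral cloning before any environment interaction.
for state, action, reward, next_state, done in pipeline.batch_iter(
    batch_size=32, seq_len=64, num_epochs=1
):
    pov = state["pov"]  # image observations, shape (batch, seq, 64, 64, 3)
    # ...update a policy on (pov, action) pairs here...
    break
```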
To ensure fairness, finalists' models are retrained under stringent hardware and compute constraints. This both rewards genuinely sample-efficient methods and mitigates the risk of solutions that overfit to the simulation environment. Detailed competition guidelines are available here.
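One practical way for entrants to mirror these limits during development is to count every simulator step. The SampleBudget wrapper below is a sketch, not part of the official tooling, which enforces budgets in its own evaluation harness:

```python
import gym

class SampleBudget(gym.Wrapper):
    """Stops an experiment once a fixed number of simulator steps is used."""

    def __init__(self, env, max_samples):
        super().__init__(env)
        self.max_samples = max_samples
        self.samples_used = 0

    def step(self, action):
        if self.samples_used >= self.max_samples:
            raise RuntimeError("Sample budget exhausted")
        self.samples_used += 1
        return self.env.step(action)

# Illustrative usage; see the official rules for the actual sample limit.
env = SampleBudget(gym.make("MineRLObtainDiamond-v0"), max_samples=8_000_000)
```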
Both competitions are open to researchers and practitioners around the globe and are designed to drive measurable progress in reinforcement learning methods.