# vime

**Repository Path**: mirrors_openai/vime

## Basic Information

- **Project Name**: vime
- **Description**: Code for the paper "Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks"
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2022-01-11
- **Last Updated**: 2026-02-08

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

**Status:** Archive (code is provided as-is, no updates expected)

# How to run VIME

Variational Information Maximizing Exploration (VIME) as presented in *Curiosity-driven Exploration in Deep Reinforcement Learning via Bayesian Neural Networks* by R. Houthooft, X. Chen, Y. Duan, J. Schulman, F. De Turck, and P. Abbeel (http://arxiv.org/abs/1605.09674).

To reproduce the results, you should first have [rllab](https://github.com/rllab/rllab) and MuJoCo v1.31 configured. Then, run the following commands in the root folder of `rllab`:

```bash
git submodule add -f git@github.com:openai/vime.git sandbox/vime
touch sandbox/__init__.py
```

Then you can do the following:

- Execute TRPO+VIME on the hierarchical SwimmerGather environment via `python sandbox/vime/experiments/run_trpo_expl.py` (a sketch of how such a launcher is typically structured follows below).
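
For orientation, here is a minimal sketch of what an rllab-style launcher for this experiment might look like, using rllab's standard `stub`/`run_experiment_lite` pattern. The `sandbox.vime.algos.trpo_expl` import path, the `eta` bonus weight, and the specific hyperparameter values are assumptions for illustration only; the provided `run_trpo_expl.py` script remains the authoritative entry point and may require additional arguments (e.g., for the Bayesian dynamics model).

```python
# Hypothetical launcher sketch, not the repository's actual script.
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.mujoco.gather.swimmer_gather_env import SwimmerGatherEnv
from rllab.envs.normalized_env import normalize
from rllab.misc.instrument import stub, run_experiment_lite
from rllab.policies.gaussian_mlp_policy import GaussianMLPPolicy

# Assumption: the exploration-augmented TRPO lives at this path; check
# sandbox/vime/algos/ in the repository for the exact module and class name.
from sandbox.vime.algos.trpo_expl import TRPO

stub(globals())

# Hierarchical SwimmerGather task from rllab's MuJoCo gather environments.
env = normalize(SwimmerGatherEnv())
policy = GaussianMLPPolicy(env_spec=env.spec, hidden_sizes=(64, 32))
baseline = LinearFeatureBaseline(env_spec=env.spec)

algo = TRPO(
    env=env,
    policy=policy,
    baseline=baseline,
    batch_size=5000,
    max_path_length=500,
    n_itr=1000,
    discount=0.995,
    step_size=0.01,
    eta=0.0001,  # assumed weight of the VIME information-gain bonus
)

run_experiment_lite(
    algo.train(),
    n_parallel=4,
    seed=1,
    exp_prefix="trpo-vime-swimmer-gather",
)
```

Launching through `run_experiment_lite` serializes the stubbed experiment and runs it with rllab's logging and seeding infrastructure, which is the same pattern the repository's experiment scripts follow.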