# iSEE
**Repository Path**: mirrors_allenai/iSEE
## Basic Information
- **Project Name**: iSEE
- **Description**: Interpretability method to find what navigation agents learn
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2022-06-17
- **Last Updated**: 2026-02-14
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# What do navigation agents learn about their environment?
#### Kshitij Dwivedi, Gemma Roig, Aniruddh Kembhavi, Roozbeh Mottaghi
#### CVPR 2022
#### Project Page
We present iSEE, a framework for interpreting the dynamic representations of navigation agents in terms of human-interpretable concepts.
The navigation agents were trained using the AllenAct framework. In this repository, we provide:
- Dataset of RNN activations and concepts
- Code to evaluate how well trained agents predict concepts
- Code to get top-K relevant neurons for predicting a given concept
### Citation
If you find this project useful in your research, please consider citing:
```
@InProceedings{Dwivedi_2022_CVPR,
author = {Dwivedi, Kshitij and Roig, Gemma and Kembhavi, Aniruddha and Mottaghi, Roozbeh},
title = {What Do Navigation Agents Learn About Their Environment?},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {10276-10285}
}
```
## 💻 Installation
To begin, clone this repository locally
```bash
git clone https://github.com/allenai/iSEE.git
```
Install Anaconda and create a new conda environment
```bash
conda create -n iSEE
conda activate iSEE
```
Install xgboost-gpu using the following command
```bash
conda install -c anaconda py-xgboost-gpu
```
Install other requirements
```bash
pip install -r requirements.txt
```
## 📊 Dataset
Please download the dataset from here. Then unzip it inside the ```data``` directory.
## Concept Prediction
Run the following script to evaluate how well concepts can be predicted by a trained agent (Resnet-objectnav) and to compare it against the corresponding untrained baseline:
```bash
python predict_concepts.py --model resnet --task objectnav
```
Arguments:
+ ```--model```: We used two architectures, Resnet and SimpleConv. Options are ```resnet``` and ```simpleconv```.
+ ```--task```: Options are ```objectnav``` and ```pointnav```.
The script will generate plots and save them in the ```results/<task>/<model>/plots``` directory.
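For intuition, the prediction step presumably fits an XGBoost model (which the installation step installs) on the recorded RNN activations to predict each concept, then scores it on held-out steps. Below is a minimal, self-contained sketch of that idea; the random arrays stand in for the real activations and concept values from the dataset, and the hyperparameters are illustrative rather than the script's actual settings.

```python
# Sketch: predict a scalar concept from RNN activations with XGBoost and score it.
# Random data stands in for the real activations/concepts from the dataset.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 512))                          # stand-in activations (steps x neurons)
y = X[:, :8].sum(axis=1) + 0.1 * rng.standard_normal(5000)    # stand-in concept values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = xgb.XGBRegressor(n_estimators=200, max_depth=4)       # add GPU options if using py-xgboost-gpu
model.fit(X_train, y_train)

print("held-out R^2:", r2_score(y_test, model.predict(X_test)))
```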
## TopK neurons
Run the following script to find which neurons were most relevant for predicting a given concept (e.g., front reachability) by a trained agent (Resnet-objectnav):
```bash
python get_topk_neurons.py --model resnet --task objectnav --concept reachable_R=2_theta=000
```
Arguments:
+ ```--model```: We used two architectures, Resnet and SimpleConv. Options are ```resnet``` and ```simpleconv```.
+ ```--task```: Options are ```objectnav``` and ```pointnav```.
+ ```--concept```: The concepts used in the paper are ```reachable_R=2_theta=000``` (reachability at 2× grid size, directly in front) and ```target_visibility```. For the full list of concepts in the dataset, refer to the columns of the ```data/trajectory_dataset/train/objectnav_ithor_default_resnet_pretrained/metadata.pkl``` file (a small snippet for listing them follows this list).
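To list the available concept names, one option is to load the metadata file with pandas; this assumes the file is a pickled DataFrame, which the mention of columns suggests, and the path below is the objectnav/resnet split from the bullet above.

```python
# List concept columns in the trajectory dataset metadata (assumes a pickled DataFrame).
import pandas as pd

meta = pd.read_pickle(
    "data/trajectory_dataset/train/objectnav_ithor_default_resnet_pretrained/metadata.pkl"
)
print(list(meta.columns))
```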
The script will generate a SHAP beeswarm plot for the concept and save it in the ```results/<task>/<model>/shap_plots``` directory.
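As a rough picture of the technique, the top-K ranking can be reproduced in miniature by computing SHAP values for a fitted concept predictor and sorting neurons by mean absolute attribution. The sketch below reuses the hypothetical ```model``` and ```X_test``` names from the concept-prediction sketch above; it illustrates the general recipe, not the script's exact implementation.

```python
# Sketch: rank neurons by mean |SHAP| value for a fitted concept predictor
# and draw a beeswarm-style summary plot (reuses `model`/`X_test` from above).
import numpy as np
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)      # shape: (samples, neurons)

mean_abs = np.abs(shap_values).mean(axis=0)      # average attribution per neuron
top_k = np.argsort(mean_abs)[::-1][:10]
print("top-10 most relevant neuron indices:", top_k)

shap.summary_plot(shap_values, X_test, max_display=10)  # beeswarm of the top neurons
```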
## Acknowledgements
We thank the SHAP authors for their easy-to-use code and the ManipulaThor authors for the README template.