# Zero-shot GCN

This code is a re-implementation of the zero-shot ImageNet classification described in the paper [Zero-shot Recognition via Semantic Embeddings and Knowledge Graphs](https://arxiv.org/abs/1803.08035). The code is built on the [TensorFlow framework](https://www.tensorflow.org/) and the Graph Convolutional Network (GCN) [repo](https://github.com/tkipf/gcn/tree/master/gcn).

Our pipeline consists of two parts: a CNN and a GCN.

- **CNN**: takes an image as **input** and **outputs** deep features for that image.
- **GCN**: takes the word embedding of every object class as **input** and **outputs** a visual classifier for every object class. Each visual classifier (a 1-D weight vector) can be applied to the deep features for classification (a sketch of both steps is given at the end of this README).

## Citation

If you use our code in your research or wish to refer to the benchmark results, please use the following BibTeX entry.

```
@inproceedings{wang2018zero,
  title={Zero-shot Recognition via Semantic Embeddings and Knowledge Graphs},
  author={Wang, Xiaolong and Ye, Yufei and Gupta, Abhinav},
  booktitle={CVPR},
  year={2018}
}
```

## Using Our Code

```bash
git clone git@github.com:JudyYe/zero-shot-gcn.git
cd zero-shot-gcn/src
```

Unless otherwise specified, we take `zero-shot-gcn/src` as the root directory in what follows.

## Dataset Preparation

Please read [`DATASET.md`](DATASET.md) for downloading images and extracting image features.

## Testing Demo

With the extracted features and semantic embeddings in place, we can now perform zero-shot classification with the [model](https://www.dropbox.com/sh/q9mid4wjj5vy0si/AADg8_NobfxkDot3VM7tE8Fua?dl=0) we provide.

```bash
wget -O ../data/wordnet_resnet_glove_feat_2048_1024_512_300 https://www.dropbox.com/s/e7jg00nx0h2gbte/wordnet_resnet_glove_feat_2048_1024_512_300?dl=0
python test_imagenet.py --model ../data/wordnet_resnet_glove_feat_2048_1024_512_300
```

The command above defaults to the `res50` + `2-hops` combination and tests under two settings: unseen classes only, and unseen classes together with seen classes (see the paper for a detailed explanation). We also provide other configurations; please refer to the code for details.

## Main Results

We report the results obtained with the testing demo above (using ResNet-50 visual features and GloVe word embeddings). All experiments are conducted on the ImageNet dataset. We first report results when testing on unseen classes only, comparing our method against the state-of-the-art method [`SYNC`](https://arxiv.org/abs/1603.00550) on this benchmark.
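
## Appendix: Pipeline Sketches

For readers unfamiliar with GCNs, the sketch below illustrates the layer-wise propagation rule of the [Kipf & Welling GCN](https://github.com/tkipf/gcn/tree/master/gcn) that our code builds on: each layer computes `H' = act(A_hat @ H @ W)`, where `A_hat` is the symmetrically normalized adjacency matrix of the knowledge graph and `H` starts out as the per-class word embeddings. This is a plain-NumPy illustration, not the repo's TensorFlow implementation; the toy graph, layer widths, and activation are assumptions chosen for readability.

```python
import numpy as np

def normalize_adjacency(adj):
    """Symmetrically normalize an adjacency matrix with self-loops:
    A_hat = D^(-1/2) (A + I) D^(-1/2), following Kipf & Welling."""
    adj = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(adj.sum(axis=1))
    return adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(h, a_hat, w, act=lambda z: np.maximum(z, 0.0)):
    """One graph-convolution layer: H' = act(A_hat @ H @ W)."""
    return act(a_hat @ h @ w)

# Toy 4-node knowledge graph (hypothetical edges between classes).
adj = np.array([[0., 1., 0., 0.],
                [1., 0., 1., 0.],
                [0., 1., 0., 1.],
                [0., 0., 1., 0.]])
a_hat = normalize_adjacency(adj)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 300))            # a 300-d word embedding per class
w1 = 0.01 * rng.normal(size=(300, 512))  # illustrative layer widths
w2 = 0.01 * rng.normal(size=(512, 2048))

h = gcn_layer(x, a_hat, w1)
classifiers = gcn_layer(h, a_hat, w2, act=lambda z: z)  # linear output layer
print(classifiers.shape)  # (4, 2048): one visual classifier per class
```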
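
Once the GCN has produced a weight vector per class, zero-shot prediction reduces to dot products between the CNN features of an image and each class's classifier, followed by an argmax over classes. Again a hedged sketch: the function name, the shapes, and the L2 normalization are illustrative assumptions, not the interface of `test_imagenet.py`.

```python
import numpy as np

def zero_shot_predict(features, classifiers):
    """Score every image against every class classifier and take the argmax.

    features:    (num_images, d) CNN features, e.g. d = 2048 for ResNet-50.
    classifiers: (num_classes, d) GCN-predicted visual classifiers.
    """
    # L2-normalize both sides so the scores are cosine similarities
    # (whether the real pipeline normalizes here is an implementation detail).
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    classifiers = classifiers / np.linalg.norm(classifiers, axis=1, keepdims=True)
    scores = features @ classifiers.T   # (num_images, num_classes)
    return scores.argmax(axis=1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 2048))      # hypothetical ResNet-50 features
clfs = rng.normal(size=(10, 2048))      # hypothetical classifiers, 10 unseen classes
print(zero_shot_predict(feats, clfs))   # one predicted class index per image
```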