# LIP

**Repository Path**: frontxiang/LIP

## Basic Information

- **Project Name**: LIP
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-01-19
- **Last Updated**: 2024-01-19

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# LIP: Local Importance-based Pooling

PyTorch implementations of LIP (ICCV 2019). [[paper link]](https://openaccess.thecvf.com/content_ICCV_2019/papers/Gao_LIP_Local_Importance-Based_Pooling_ICCV_2019_paper.pdf)

This codebase is now complete and contains:

- [x] the implementation of LIP based on PyTorch primitives,
- [x] LIP-ResNet,
- [x] LIP-DenseNet,
- [x] ImageNet training and testing code,
- [x] a CUDA implementation of LIP.

## News

[2021] SoftPool, the special case of LIP in which `G(I) = I`, was accepted to ICCV 2021. See [SoftPool](https://github.com/alexandrosstergiou/SoftPool).

## A Simple Step to Customize LIP

LIP is a learnable, generic pooling operator, and its core is simple (in PyTorch):

```
def lip2d(x, logit, kernel=3, stride=2, padding=1):
    weight = logit.exp()
    return F.avg_pool2d(x * weight, kernel, stride, padding) / F.avg_pool2d(weight, kernel, stride, padding)
```

To produce the logit, you need a small fully convolutional network (FCN) as the logit module, whose output has the same shape as its input.
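Since `lip2d` is just a ratio of two average pools over exponentiated logits, the computation can be checked outside PyTorch. Below is a minimal NumPy sketch of the same formula; the naive `avg_pool2d` loop is an illustrative stand-in for `F.avg_pool2d` (single 2D array, zero padding), not the library implementation:

```
import numpy as np

def avg_pool2d(x, kernel=3, stride=2, padding=1):
    # Naive average pooling over a single 2D array with zero padding
    # (counts padded zeros in the mean, like F.avg_pool2d's default).
    h, w = x.shape
    xp = np.pad(x, padding)
    out_h = (h + 2 * padding - kernel) // stride + 1
    out_w = (w + 2 * padding - kernel) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            win = xp[i * stride:i * stride + kernel,
                     j * stride:j * stride + kernel]
            out[i, j] = win.mean()
    return out

def lip2d(x, logit, kernel=3, stride=2, padding=1):
    # exp(logit) acts as a per-pixel importance weight; the ratio of the
    # two average pools is a locally normalized weighted average.
    weight = np.exp(logit)
    return (avg_pool2d(x * weight, kernel, stride, padding)
            / avg_pool2d(weight, kernel, stride, padding))
```

With an all-zero logit every weight is 1, so each output is the plain (padding-ignoring) average of its window; with a large positive scale on the logit, the highest-valued pixel dominates the window and the result approaches max pooling, matching the customizations below.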
You can customize the logit module, for example:

```
logit_module_a = nn.Identity()
lip2d(x, logit_module_a(x))  # gives SoftPool

logit_module_b = lambda x: x.mul(20)
lip2d(x, logit_module_b(x))  # approximates max pooling

logit_module_c = lambda x: x.mul(0)
lip2d(x, logit_module_c(x))  # gives average pooling

logit_module_d = nn.Conv2d(in_channels, in_channels, 1)  # the simple projection-form logit module
lip2d(x, logit_module_d(x))

logit_module_e = MyLogitModule()  # your customized logit module (an FCN) begins here
lip2d(x, logit_module_e(x))
```

## Dependencies

1. Python 3.6
2. PyTorch 1.0
3. tensorboard and tensorboardX

## Pretrained Models

You can download ImageNet-pretrained models [here](https://drive.google.com/drive/folders/1KCt22JTob1hHiPmpLOlgZo3fvTRc11SJ).

## ImageNet

Please refer to [imagenet/README.md](./imagenet/).

## CUDA LIP

Please refer to [cuda-lip/README.md](./cuda-lip/).

## Misc

If you find our research helpful, please consider citing our paper:

```
@InProceedings{LIP_2019_ICCV,
  author = {Gao, Ziteng and Wang, Limin and Wu, Gangshan},
  title = {LIP: Local Importance-Based Pooling},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month = {October},
  year = {2019}
}
```