# nlp-journey

**Repository Path**: collecthub/nlp-journey

## Basic Information

- **Project Name**: nlp-journey
- **Description**: Documents, papers, and code related to NLP, covering topic models, word embeddings, named entity recognition, text classification, text generation, text similarity, machine translation, and other NLP-related algorithms, based on TensorFlow 2.0. https://github.com/msgi/nlp-journey
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 5
- **Forks**: 2
- **Created**: 2020-02-09
- **Last Updated**: 2022-09-15

## README

# nlp journey

> Your Journey to NLP Starts Here!
> Pull requests are sincerely welcome!

***Fully embracing TensorFlow 2: all code has been migrated to TensorFlow 2.0.***

## 1. Fundamentals

* [Basic knowledge](docs/basic.md)
* [Tool tutorials](tutorials/)
* [Practice notes](docs/notes.md)
* [FAQ](docs/fq.md)
* [Implementation code](nlp/)

## 2. Classic Books ([`Baidu Cloud`](https://pan.baidu.com/s/1sE_20nHCfej6f9yRaisz7Q), access code: b5qq)

* The Joy of Algorithms (算法的乐趣). [`book`](http://www.ituring.com.cn/book/1605)
* Introduction to probabilistic graphical models. [`book`](https://stat.ethz.ch/~maathuis/papers/Handbook.pdf)
* Deep Learning. Essential reading for deep learning. [`book`](https://www.deeplearningbook.org/)
* Neural Networks and Deep Learning. Essential introductory reading. [`book`](http://neuralnetworksanddeeplearning.com/)
* Neural Networks and Deep Learning (《神经网络与深度学习》), Prof. Qiu Xipeng, Fudan University. [`book`](https://nndl.github.io/)
* Speech and Language Processing (3rd ed.), Stanford. Essential NLP reading. [`book`](http://web.stanford.edu/~jurafsky/slp3/ed3book.pdf)
* CS224d: Deep Learning for Natural Language Processing. [`slides`](http://cs224d.stanford.edu/)

## 3. Essential Papers

### 01) Models and Optimization

* LSTM (Long Short-Term Memory). [`link`](http://www.bioinf.jku.at/publications/older/2604.pdf)
* Sequence to Sequence Learning with Neural Networks. [`link`](https://arxiv.org/pdf/1409.3215.pdf)
* Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. [`link`](https://arxiv.org/pdf/1406.1078.pdf)
* Dropout (Improving neural networks by preventing co-adaptation of feature detectors). [`link`](https://arxiv.org/pdf/1207.0580.pdf)
* Residual Network (Deep Residual Learning for Image Recognition). [`link`](https://arxiv.org/pdf/1512.03385.pdf)
* Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. [`link`](https://arxiv.org/pdf/1502.03167.pdf)
* How transferable are features in deep neural networks. [`link`](https://arxiv.org/pdf/1411.1792.pdf)
* A Critical Review of Recurrent Neural Networks for Sequence Learning. [`link`](https://arxiv.org/pdf/1506.00019.pdf)
* Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks. [`link`](https://arxiv.org/pdf/1810.09536.pdf)
* Distilling the Knowledge in a Neural Network. [`link`](https://arxiv.org/pdf/1503.02531.pdf)

### 02) Surveys

* An overview of gradient descent optimization algorithms. [`link`](https://arxiv.org/pdf/1609.04747.pdf)
* Analysis Methods in Neural Language Processing: A Survey. [`link`](https://arxiv.org/pdf/1812.08951.pdf)
* Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. [`link`](https://arxiv.org/pdf/1910.10683.pdf)

### 03) Text Augmentation

* EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. [`link`](https://arxiv.org/pdf/1901.11196.pdf)

### 04) Text Pre-training

* A Neural Probabilistic Language Model. [`link`](https://www.researchgate.net/publication/221618573_A_Neural_Probabilistic_Language_Model)
* word2vec Parameter Learning Explained. [`link`](https://arxiv.org/pdf/1411.2738.pdf)
* Language Models are Unsupervised Multitask Learners. [`link`](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
* An Empirical Study of Smoothing Techniques for Language Modeling. [`link`](https://dash.harvard.edu/bitstream/handle/1/25104739/tr-10-98.pdf?sequence=1)
* Efficient Estimation of Word Representations in Vector Space. [`link`](https://arxiv.org/pdf/1301.3781.pdf)
* Distributed Representations of Sentences and Documents. [`link`](https://arxiv.org/pdf/1405.4053.pdf)
* Enriching Word Vectors with Subword Information (FastText). [`link`](https://arxiv.org/pdf/1607.04606.pdf). [`explainer`](https://www.sohu.com/a/114464910_465975)
* GloVe: Global Vectors for Word Representation. [`website`](https://nlp.stanford.edu/projects/glove/)
* ELMo (Deep contextualized word representations). [`link`](https://arxiv.org/pdf/1802.05365.pdf)
* BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. [`link`](https://arxiv.org/pdf/1810.04805.pdf)
* Pre-Training with Whole Word Masking for Chinese BERT. [`link`](https://arxiv.org/pdf/1906.08101.pdf)
* XLNet: Generalized Autoregressive Pretraining for Language Understanding. [`link`](https://arxiv.org/pdf/1906.08237.pdf)

### 05) Text Classification

* Bag of Tricks for Efficient Text Classification (FastText). [`link`](https://arxiv.org/pdf/1607.01759.pdf)
* A Sensitivity Analysis of (and Practitioners’ Guide to) Convolutional Neural Networks for Sentence Classification. [`link`](https://arxiv.org/pdf/1510.03820.pdf)
* Convolutional Neural Networks for Sentence Classification. [`link`](https://arxiv.org/pdf/1408.5882.pdf) (a minimal TF2 sketch follows this list)
* Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification. [`link`](http://www.aclweb.org/anthology/P16-2034)
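Kim's TextCNN above is compact enough to sketch in the repository's stated stack. Below is a minimal, hypothetical TensorFlow 2 / Keras rendering of the idea (parallel convolutions with filter widths 3/4/5, max-over-time pooling, dropout, softmax); it illustrates the paper, it is not code taken from this repository's `nlp/` directory, and every hyperparameter shown (`vocab_size`, `max_len`, filter counts) is an illustrative assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_text_cnn(vocab_size=20000, max_len=100, embed_dim=128, num_classes=2):
    """Minimal TextCNN in the spirit of Kim (2014); all sizes are illustrative."""
    inputs = layers.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, embed_dim)(inputs)
    # Parallel convolution branches with different filter widths,
    # each followed by max-over-time pooling.
    pooled = []
    for width in (3, 4, 5):
        conv = layers.Conv1D(filters=100, kernel_size=width, activation="relu")(x)
        pooled.append(layers.GlobalMaxPooling1D()(conv))
    x = layers.Concatenate()(pooled)
    x = layers.Dropout(0.5)(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)

model = build_text_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```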
### 06) Text Generation

* A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence Natural Language Generation. [`link`](https://arxiv.org/pdf/1805.06553.pdf)
* SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. [`link`](https://arxiv.org/pdf/1609.05473.pdf)
* Generative Adversarial Text to Image Synthesis. [`link`](https://arxiv.org/pdf/1605.05396.pdf)

### 07) Text Similarity

* Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks. [`link`](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.723.6492&rep=rep1&type=pdf)
* Learning Text Similarity with Siamese Recurrent Networks. [`link`](https://www.aclweb.org/anthology/W16-1617)
* A Deep Architecture for Matching Short Texts. [`link`](http://papers.nips.cc/paper/5019-a-deep-architecture-for-matching-short-texts.pdf)

### 08) Question Answering

* A Question-Focused Multi-Factor Attention Network for Question Answering. [`link`](https://arxiv.org/pdf/1801.08290.pdf)
* The Design and Implementation of XiaoIce, an Empathetic Social Chatbot. [`link`](https://arxiv.org/pdf/1812.08989.pdf)
* A Knowledge-Grounded Neural Conversation Model. [`link`](https://arxiv.org/pdf/1702.01932.pdf)
* Neural Generative Question Answering. [`link`](https://arxiv.org/pdf/1512.01337v1.pdf)
* Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots. [`link`](https://arxiv.org/abs/1612.01627)
* Modeling Multi-turn Conversation with Deep Utterance Aggregation. [`link`](https://arxiv.org/pdf/1806.09102.pdf)
* Multi-Turn Response Selection for Chatbots with Deep Attention Matching Network. [`link`](https://www.aclweb.org/anthology/P18-1103)
* Deep Reinforcement Learning For Modeling Chit-Chat Dialog With Discrete Attributes. [`link`](https://arxiv.org/pdf/1907.02848.pdf)

### 09) Machine Translation

* Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. [`link`](https://arxiv.org/pdf/1406.1078v3.pdf)
* Neural Machine Translation by Jointly Learning to Align and Translate. [`link`](https://arxiv.org/pdf/1409.0473.pdf)
* Transformer (Attention Is All You Need). [`link`](https://arxiv.org/pdf/1706.03762.pdf) (see the attention sketch after this list)
* Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. [`link`](https://arxiv.org/pdf/1901.02860.pdf)
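To make the Transformer entry above concrete, here is a minimal TensorFlow 2 sketch of its core operation, scaled dot-product attention: Attention(Q, K, V) = softmax(QKᵀ / √d_k) · V. This is a generic rendering of the formula from "Attention Is All You Need", not this repository's implementation; the additive `-1e9` masking is just one common convention for zeroing out padded or future positions.

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as in Vaswani et al. (2017)."""
    d_k = tf.cast(tf.shape(k)[-1], tf.float32)
    scores = tf.matmul(q, k, transpose_b=True) / tf.sqrt(d_k)  # (..., seq_q, seq_k)
    if mask is not None:
        scores += mask * -1e9  # drive masked positions toward zero attention weight
    weights = tf.nn.softmax(scores, axis=-1)
    return tf.matmul(weights, v), weights

# Toy usage: self-attention over a batch of 2 sequences, 4 positions, dim 8.
q = tf.random.normal((2, 4, 8))
out, attn = scaled_dot_product_attention(q, q, q)
print(out.shape, attn.shape)  # (2, 4, 8) (2, 4, 4)
```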
### 10) Automatic Summarization

* Get To The Point: Summarization with Pointer-Generator Networks. [`link`](https://arxiv.org/pdf/1704.04368.pdf)
* Deep Recurrent Generative Decoder for Abstractive Text Summarization. [`link`](https://aclweb.org/anthology/D17-1222)

### 11) Relation Extraction

* Distant Supervision for Relation Extraction via Piecewise Convolutional Neural Networks. [`link`](https://www.aclweb.org/anthology/D15-1203)
* Neural Relation Extraction with Multi-lingual Attention. [`link`](https://www.aclweb.org/anthology/P17-1004)
* FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. [`link`](https://aclweb.org/anthology/D18-1514)
* End-to-End Relation Extraction using LSTMs on Sequences and Tree Structures. [`link`](https://www.aclweb.org/anthology/P16-1105)

### 12) Recommender Systems

* Deep Neural Networks for YouTube Recommendations. [`link`](https://static.googleusercontent.com/media/research.google.com/zh-CN//pubs/archive/45530.pdf)
* Behavior Sequence Transformer for E-commerce Recommendation in Alibaba. [`link`](https://arxiv.org/pdf/1905.06874.pdf)
* MV-DSSM: A Multi-View Deep Learning Approach for Cross Domain User Modeling in Recommendation Systems. [`link`](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/frp1159-songA.pdf)

## 4. Essential Blog Posts

* How to learn natural language processing (comprehensive edition). [`link`](https://mp.weixin.qq.com/s/lJYp4hUZVsp-Uj-5NqoaYQ)
* The Illustrated Transformer. [`link`](https://jalammar.github.io/illustrated-transformer/)
* Attention and Memory in Deep Learning and NLP. [`link`](http://www.wildml.com/2016/01/attention-and-memory-in-deep-learning-and-nlp/)
* Modern Deep Learning Techniques Applied to Natural Language Processing. [`link`](https://nlpoverview.com/)
* BERT explained. [`link`](https://zhuanlan.zhihu.com/p/49271699)
* Unbelievable! LSTM and GRU have never been explained so clearly (animations + video). [`link`](https://blog.csdn.net/dqcfkyqdxym3f8rb0/article/details/82922386)
* Optimization methods in deep learning. [`link`](https://blog.csdn.net/u012328159/article/details/80311892)
* From language models to Seq2Seq: the Transformer relies entirely on masks. [`link`](https://spaces.ac.cn/archives/6933)
* Applying word2vec to Recommenders and Advertising. [`link`](http://mccormickml.com/2018/06/15/applying-word2vec-to-recommenders-and-advertising/)

## 5. Excellent Related GitHub Projects

* transformers. [`link`](https://github.com/huggingface/transformers)
* HanLP. [`link`](https://github.com/hankcs/HanLP)

## 6. Excellent Related Blogs

* [52nlp](http://www.52nlp.cn/)
* [Scientific Spaces (科学空间/信息时代)](https://kexue.fm/category/Big-Data)
* [Liu Jianping (Pinard)](https://www.cnblogs.com/pinard/)
* [Deep Learning from Scratch (零基础入门深度学习)](https://www.zybuluo.com/hanbingtao/note/433855)