图谱构建小组

Surveys

- 刘峤, 李杨, 段宏, 等. 知识图谱构建技术综述[J]. 计算机研究与发展, 2016, 53(3): 582-600. paper: http://crad.ict.ac.cn/CN/article/downloadArticleFile.do?attachType=PDF&id=3127
- 徐增林, 盛泳潘, 贺丽荣, 等. 知识图谱技术综述[J]. 电子科技大学学报, 2016, 45(4): 589-606. paper: http://www.xml-data.org/dzkj-nature/html/201645589.htm
- 李舟军, 范宇, 吴贤杰. 面向自然语言处理的预训练技术研究综述[J]. 计算机科学, 2020, 47(3): 170-181. paper: http://www.cnki.com.cn/Article/CJFDTotal-JSJA202003028.htm

Pre-trained Models

- Huang Z, Xu W, Yu K. Bidirectional LSTM-CRF Models for Sequence Tagging[J]. arXiv preprint arXiv:1508.01991, 2015. paper: https://arxiv.org/pdf/1508.01991.pdf
- Vaswani A, Shazeer N, Parmar N, et al. Attention Is All You Need[C]//NIPS. 2017. paper: https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
- Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding[J]. arXiv preprint arXiv:1810.04805, 2018. paper: https://arxiv.org/pdf/1810.04805.pdf
- Liu Y, Ott M, Goyal N, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach[J]. arXiv preprint arXiv:1907.11692, 2019. paper: https://arxiv.org/pdf/1907.11692.pdf
- Sun Y, Wang S, Li Y, et al. ERNIE: Enhanced Representation through Knowledge Integration[J]. arXiv preprint arXiv:1904.09223, 2019. paper: https://arxiv.org/pdf/1904.09223.pdf
- Zhang Z, Han X, Liu Z, et al. ERNIE: Enhanced Language Representation with Informative Entities[J]. arXiv preprint arXiv:1905.07129, 2019. paper: https://arxiv.org/pdf/1905.07129.pdf
- Sun Y, Wang S, Li Y K, et al. ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding[C]//AAAI. 2020: 8968-8975. paper: https://arxiv.org/pdf/1907.12412.pdf?source=post_page
- Cui Y, Che W, Liu T, et al. Pre-Training with Whole Word Masking for Chinese BERT[J]. arXiv preprint arXiv:1906.08101, 2019. paper: https://arxiv.org/pdf/1906.08101.pdf
- Jiao X, Yin Y, Shang L, et al. TinyBERT: Distilling BERT for Natural Language Understanding[J]. arXiv preprint arXiv:1909.10351, 2019. paper: https://arxiv.org/pdf/1909.10351.pdf
- Yang Z, Dai Z, Yang Y, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding[C]//NeurIPS. 2019. paper: https://papers.nips.cc/paper/2019/file/dc6a7e655d7e5840e66733e9ee67cc69-Paper.pdf
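
As a hands-on companion to the BERT-family papers listed above, the minimal sketch below loads a pre-trained checkpoint and encodes one sentence into contextual token embeddings. It assumes the third-party Hugging Face transformers library, PyTorch, and the public bert-base-chinese checkpoint, none of which are named in the list itself.

```python
# Minimal sketch (not from the reading list): load a BERT-family checkpoint and
# encode one sentence. Assumes `pip install torch transformers` and network
# access to download the public `bert-base-chinese` checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModel.from_pretrained("bert-base-chinese")
model.eval()

# Tokenize a Chinese sentence into WordPiece ids plus an attention mask.
inputs = tokenizer("知识图谱构建", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Contextual embeddings for each token: (batch_size, seq_len, hidden_size).
print(outputs.last_hidden_state.shape)
```

For RoBERTa or XLNet checkpoints the same two Auto* calls typically apply once the model name is swapped; the ERNIE releases from Baidu may require the authors' own tooling rather than this library.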