图谱构建小组 (Knowledge Graph Construction Group)

Revision as of 16:50, 10 November 2020

== Surveys ==

# 刘峤, 李杨, 段宏, et al. 知识图谱构建技术综述 (Knowledge graph construction techniques: a survey)[J]. 计算机研究与发展 (Journal of Computer Research and Development), 2016, 53(3): 582-600.[1]
# 徐增林, 盛泳潘, 贺丽荣, et al. 知识图谱技术综述 (Review on knowledge graph techniques)[J]. 电子科技大学学报 (Journal of University of Electronic Science and Technology of China), 2016, 45(4): 589-606.[2]
# 李舟军, 范宇, 吴贤杰. 面向自然语言处理的预训练技术研究综述 (A survey of pre-training techniques for natural language processing)[J]. 计算机科学 (Computer Science), 2020, 47(3): 170-181.[3]

== Pre-trained Models ==

# Huang Z, Xu W, Yu K. Bidirectional LSTM-CRF models for sequence tagging[J]. arXiv preprint arXiv:1508.01991, 2015. [https://arxiv.org/pdf/1508.01991.pdf]
# Vaswani A, Shazeer N, Parmar N, et al. Attention Is All You Need[C]//NIPS. 2017. [https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf] (its core scaled dot-product attention operation is sketched below this list)
# Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding[J]. arXiv preprint arXiv:1810.04805, 2018. [https://arxiv.org/pdf/1810.04805.pdf]
# Liu Y, Ott M, Goyal N, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach[J]. arXiv preprint arXiv:1907.11692, 2019. [https://arxiv.org/pdf/1907.11692.pdf]
# Sun Y, Wang S, Li Y, et al. ERNIE: Enhanced Representation through Knowledge Integration[J]. arXiv preprint arXiv:1904.09223, 2019. [https://arxiv.org/pdf/1904.09223.pdf]
# Zhang Z, Han X, Liu Z, et al. ERNIE: Enhanced Language Representation with Informative Entities[J]. arXiv preprint arXiv:1905.07129, 2019. [https://arxiv.org/pdf/1905.07129.pdf]
# Sun Y, Wang S, Li Y K, et al. ERNIE 2.0: A Continual Pre-Training Framework for Language Understanding[C]//AAAI. 2020: 8968-8975. [https://arxiv.org/pdf/1907.12412.pdf]
# Yang Z, Dai Z, Yang Y, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding[C]//NeurIPS. 2019. [https://papers.nips.cc/paper/2019/file/dc6a7e655d7e5840e66733e9ee67cc69-Paper.pdf]
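
The Transformer-family entries above (BERT, RoBERTa, ERNIE, XLNet) all build on the scaled dot-product attention of Vaswani et al. (entry 2): Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. The NumPy sketch below only illustrates that formula; the array shapes and toy inputs are assumptions made for the example, not taken from any cited paper.

<syntaxhighlight lang="python">
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    # Query-key similarity scores, scaled by sqrt(d_k) to keep softmax inputs moderate.
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

# Toy single-head example: 3 queries attending over 4 key/value positions, d_k = 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # -> (3, 8)
</syntaxhighlight>

In the full Transformer this operation runs once per head over learned projections of Q, K, and V; the sketch omits masking and multi-head splitting for brevity.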