图谱构建小组

== Surveys ==
# 刘峤, 李杨, 段宏,等. 知识图谱构建技术综述[J]. 计算机研究与发展, 2016, 53(3):582-600.[http://crad.ict.ac.cn/CN/article/downloadArticleFile.do?attachType=PDF&id=3127]
# 徐增林, 盛泳潘, 贺丽荣,等. 知识图谱技术综述[J]. 电子科技大学学报, 2016, 45(4):589-606.[http://www.xml-data.org/dzkj-nature/html/201645589.htm]
# Wang Q, Mao Z, Wang B, et al. Knowledge Graph Embedding: A Survey of Approaches and Applications[J]. IEEE Transactions on Knowledge and Data Engineering, 2017, 29(12):2724-2743. (A minimal scoring sketch for the translational models covered by this survey follows this list.)
# 李舟军, 范宇, 吴贤杰. 面向自然语言处理的预训练技术研究综述[J]. 计算机科学, 2020, 47(3):170-181.[http://www.cnki.com.cn/Article/CJFDTotal-JSJA202003028.htm]
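As a rough illustration of the translational-embedding idea surveyed by Wang et al. above (this is not code from any of the listed papers, and the 50-dimensional random vectors are purely a toy choice), the sketch below scores a (head, relation, tail) triple in the TransE style: entities and relations are embedded as vectors, a plausible triple should satisfy h + r ≈ t, so a smaller distance means a more plausible fact.

<syntaxhighlight lang="python">
import numpy as np

def transe_score(h, r, t, norm=1):
    """Distance-based TransE score for a (head, relation, tail) triple.

    Lower is better: training pushes h + r close to t for true triples
    and away from t for corrupted (negative) triples.
    """
    return np.linalg.norm(h + r - t, ord=norm)

# Toy example with random 50-dimensional embeddings (illustration only;
# a real model learns these vectors from the triples of a knowledge graph).
rng = np.random.default_rng(0)
h, r, t = rng.normal(size=(3, 50))
print(transe_score(h, r, t))  # L1 distance between h + r and t
</syntaxhighlight>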
 
== Pretrained Models ==
# Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding[J]. 2018.[https://arxiv.org/pdf/1810.04805.pdf] (A minimal usage sketch for loading these pretrained checkpoints follows this list.)
# Liu Y, Ott M, Goyal N, et al. RoBERTa: A Robustly Optimized BERT Pretraining Approach[J]. 2019.[https://arxiv.org/pdf/1907.11692.pdf]
# Yang Z, Dai Z, Yang Y, et al. XLNet: Generalized Autoregressive Pretraining for Language Understanding[J]. 2019.[https://papers.nips.cc/paper/2019/file/dc6a7e655d7e5840e66733e9ee67cc69-Paper.pdf]
# Vaswani A, Shazeer N, Parmar N, et al. Attention Is All You Need[J]. 2017.[https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf]
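The models listed above are normally consumed as pretrained encoders rather than trained from scratch. As a minimal usage sketch (not taken from the papers themselves; it assumes the Hugging Face transformers library and PyTorch are installed, and uses the public bert-base-uncased checkpoint), the code below loads a pretrained BERT model and extracts one contextual vector per sub-word token. A RoBERTa or XLNet checkpoint can be loaded the same way by changing the model name.

<syntaxhighlight lang="python">
import torch
from transformers import AutoModel, AutoTokenizer

# Public pretrained checkpoint; "roberta-base" or "xlnet-base-cased"
# work the same way through the Auto* classes.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()

inputs = tokenizer("Knowledge graphs store facts as (head, relation, tail) triples.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One contextual embedding per sub-word token: [batch, seq_len, hidden_size].
print(outputs.last_hidden_state.shape)
</syntaxhighlight>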
