TensorMSA


IMPORTANT NATURAL LANGUAGE PROCESSING (NLP) RESEARCH PAPERS OF 2018

By tmddno1@naver.com | April 22, 2019 | No Comments | Paper Study
  1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  2. Sequence Classification with Human Attention
  3. Phrase-Based & Neural Unsupervised Machine Translation
  4. What you can cram into a single vector: Probing sentence embeddings for linguistic properties
  5. SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference
  6. Deep contextualized word representations
  7. Meta-Learning for Low-Resource Neural Machine Translation
  8. Linguistically-Informed Self-Attention for Semantic Role Labeling
  9. A Hierarchical Multi-task Approach for Learning Embeddings from Semantic Tasks
  10. Know What You Don’t Know: Unanswerable Questions for SQuAD
  11. An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
  12. Universal Language Model Fine-tuning for Text Classification
  13. Improving Language Understanding by Generative Pre-Training
  14. Dissecting Contextual Word Embeddings: Architecture and Representation



