查看原文
其他

EMNLP2021主会议-656篇长文分类-附论文链接

刘聪NLP NLP工作站 2023-11-28
大家好,我是刘聪NLP。
EMNLP2021论文列表已经放出已久,我之前的一篇文章,也给出了PaperList,其中,主会接收了656篇长文191篇短文,Findings接收了305篇长文119篇短文
最近,笔者也是花了几个晚上的时间,把656篇长文过了一边,并将其进行了详细的归类划分(论文附带链接),主要包括:36篇QA系统(阅读理解、问答、检索)、17篇情感分析(方面级情感分析、篇章集情感分析、情绪分析)、42篇对话系统45篇信息抽取(关键词抽取、术语抽取、实体抽取、实体分类、关系抽取、事件抽取、观点抽取)、6篇事件检测68篇预训练语言模型应用(Transformer优化、语言模型下游应用、语言模型探索、分析等)、37篇数据集、任务及评估45篇机器翻译37篇多模态19篇摘要(对话摘要、多文档摘要、代码摘要)、51篇文本生成(段落生成、对话生成、复述、问题生成)、7篇文本风格改写13篇推理(因果推断、多跳推理、知识推理、常识推理)、21篇模型鲁棒性及对抗10篇模型压缩(模型优化、剪枝、蒸馏)、19篇小样本(元学习、零样本、低资源)、26篇知识表征6篇多语言12篇社会道德伦理偏见2篇虚假新闻检测、14篇指代、链指、消歧及对齐3篇ASR8篇数据增强2篇纠错22篇图相关15篇文本分类13篇NLP基础(分词、词性、语义理解、句法分析)、60篇其他。可以发现今年多模态的论文出了不少,感觉会是明年的大热,并且Best Long Paper就是一篇多模态的论文。
注意:有部分论文可能包含多个类别的内容,笔者仅凭自己理解进行了单分类。
文章篇幅较长、各位同学可以自行找到自己感兴趣的分类。赶紧去看论文吧!!!

由于部分论文还没有公布,因此暂无链接,在知乎文章中会持续更新。


摘要

1、MSˆ2:Multi-Document Summarization of Medical Studies

https://arxiv.org/abs/2104.06486

2、TimelineSummarization based on Event Graph Compression via Time-Aware Optimal Transport

https://underline.io/lecture/37289-timeline-summarization-based-on-event-graph-compression-via-time-aware-optimal-transport

3、CAST: EnhancingCode Summarization with Hierarchical Splitting and Reconstruction of AbstractSyntax Trees

https://arxiv.org/abs/2108.12987

4、Low-ResourceDialogue Summarization with Domain-Agnostic Multi-Source Pretraining

https://arxiv.org/abs/2109.04080

5、Aspect-ControllableOpinion Summarization

https://arxiv.org/abs/2109.03171

6、SgSum:TransformingMulti-document Summarization into Sub-graph Selection

https://arxiv.org/abs/2110.12645

7、ControllableNeural Dialogue Summarization with Personal Named Entity Planning

https://arxiv.org/abs/2109.13070

8、Learn to Copyfrom the Copying History: Correlational Copy Network for AbstractiveSummarization

9、QuestEval:Summarization Asks for Fact-based Evaluation

https://arxiv.org/abs/2103.12693

10、Fine-grainedFactual Consistency Assessment for Abstractive Summarization Models

11、ARMAN:Pre-training with Semantically Selecting and Reordering of Sentences forPersian Abstractive Summarization

https://arxiv.org/abs/2109.04098

12、Models andDatasets for Cross-Lingual Summarisation

https://underline.io/lecture/37790-models-and-datasets-for-cross-lingual-summarisation

13、SimpleConversational Data Augmentation for Semi-supervised Abstractive DialogueSummarization

https://www.cc.gatech.edu/~dyang888/docs/emnlp21_chen_coda.pdf

14、Learning OpinionSummarizers by Selecting Informative Reviews

https://arxiv.org/abs/2109.04325

15、Finding aBalanced Degree of Automation for Summary Evaluation

https://arxiv.org/abs/2109.11503

16、Decision-FocusedSummarization

https://arxiv.org/abs/2109.06896

17、CLIFF:Contrastive Learning for Improving Faithfulness and Factuality in AbstractiveSummarization

https://arxiv.org/abs/2109.09209

18、Enriching andControlling Global Semantics for Text Summarization

https://arxiv.org/abs/2109.10616

19、AUTOSUMM:Automatic Model Creation for Text Summarization


文本生成

1、Sentence-PermutedParagraph Generation

https://arxiv.org/abs/2104.07228

2、StructuralAdapters in Pretrained Language Models for AMR-to-text Generation

https://arxiv.org/abs/2103.09120

3、Mathematical WordProblem Generation from Commonsense Knowledge Graph and Equations

https://arxiv.org/abs/2010.06196

4、Extract, Denoiseand Enforce: Evaluating and Improving Concept Preservation for Text-to-TextGeneration

https://arxiv.org/abs/2104.08724

5、Learning toSelectively Learn for Weakly-supervised Paraphrase Generation

https://arxiv.org/abs/2109.12457

6、CoLV: ACollaborative Latent Variable Model for Knowledge-Grounded Dialogue Generation

7、A Three-StageLearning Framework for Low-Resource Knowledge-Grounded Dialogue Generation

https://arxiv.org/abs/2109.04096

8、NegatER:Unsupervised Discovery of Negatives in Commonsense Knowledge Bases

https://arxiv.org/abs/2011.07497

9、Evaluating theMorphosyntactic Well-formedness of Generated Texts

https://arxiv.org/abs/2103.16590

10、Automatic TextEvaluation through the Lens of Wasserstein Barycenters

https://arxiv.org/abs/2108.12463

11、More is Better:Enhancing Open-Domain Dialogue Generation via Multi-Source HeterogeneousKnowledge

12、ParaphraseGeneration: A Survey of the State of the Art

13、RevisitingPivot-Based Paraphrase Generation: Language Is Not the Only Optional Pivot

14、DiscoDVT:Generating Long Text with Discourse-Aware Discrete Variational Transformer

https://arxiv.org/abs/2110.05999

15、Coupling ContextModeling with Zero Pronoun Recovering for Document-Level Natural LanguageGeneration

16、ParallelRefinements for Lexically Constrained Text Generation with BART

https://arxiv.org/abs/2109.12487

17、Few-Shot TextGeneration with Natural Language Instructions

https://arxiv.org/abs/2012.11926

18、Structure-AugmentedKeyphrase Generation

https://underline.io/lecture/37549-structure-augmented-keyphrase-generation

19、Exposure Biasversus Self-Recovery: Are Distortions Really Incremental for AutoregressiveText Generation?

https://arxiv.org/abs/1905.10617

20、GeneratingSelf-Contained and Summary-Centric Question Answer Pairs via DifferentiableReward Imitation Learning

https://arxiv.org/abs/2109.04689

21、UnsupervisedParaphrasing with Pretrained Language Models

https://arxiv.org/abs/2010.12885

22、KnowledgeEnhanced Fine-Tuning for Better Handling Unseen Entities in Dialogue Generation

https://arxiv.org/abs/2109.05487

23、HeterogeneousGraph Neural Networks for Keyphrase Generation

https://arxiv.org/abs/2109.04703

24、LeveragingOrder-Free Tag Relations for Context-Aware Recommendation

https://www.zhuanzhi.ai/paper/573b75f47d411ab6c6a7f5722877993a

25、Adaptive Bridgebetween Training and Inference for Dialogue Generation

https://arxiv.org/abs/2110.11560

26、ConRPG: ParaphraseGeneration using Contexts as Regularizer

https://arxiv.org/abs/2109.00363

27、ImprovingSequence-to-Sequence Pre-training via Sequence Span Rewriting

https://arxiv.org/abs/2101.00416

28、Finding needlesin a haystack: Sampling Structurally-diverse Training Sets from Synthetic Datafor Compositional Generalization

https://arxiv.org/abs/2109.02575

29、Jointly Learningto Repair Code and Generate Commit Message

https://arxiv.org/abs/2109.12296

30、ReGen:Reinforcement Learning for Text and Knowledge Base Generation using PretrainedLanguage Models

https://arxiv.org/abs/2108.12472

31、GeneSis: AGenerative Approach to Substitutes in Context

https://www.researchgate.net/publication/355646366_GeneSis_A_Generative_Approach_to_Substitutes_in_Context

32、Data-to-textGeneration by Splicing Together Nearest Neighbors

https://arxiv.org/abs/2101.08248

33、IGA: AnIntent-Guided Authoring Assistant

https://arxiv.org/abs/2104.07000

34、Just Say No:Analyzing the Stance of Neural Dialogue Generation in Offensive Contexts

https://arxiv.org/abs/2108.11830

35、The Perils ofUsing Mechanical Turk to Evaluate Open-Ended Text Generation

https://arxiv.org/abs/2109.06835

36、Truth-ConditionalCaptions for Time Series Data

https://arxiv.org/abs/2110.01839

37、Studying wordorder through iterative shuffling

https://arxiv.org/abs/2109.04867

38、AESOP:Paraphrase Generation with Adaptive Syntactic Control

https://vnpeng.net/bibliography/sun2021aesop/

39、Math WordProblem Generation with Mathematical Consistency and Problem ContextConstraints

https://arxiv.org/abs/2109.04546

40、BuildingAdaptive Acceptability Classifiers for Neural NLG

41、"Was it"stated" or was it "claimed"?: How linguistic bias affectsgenerative language models

42、Refocusing onRelevance: Personalization in NLG

https://arxiv.org/abs/2109.05140

43、Building theDirected Semantic Graph for Coherent Long Text Generation

44、IterativeGNN-based Decoder for Question Generation

http://qizhang.info/paper/emnlp2021.3921_Paper.pdf

45、TextCounterfactuals via Latent Optimization and Shapley-Guided Search

https://arxiv.org/abs/2110.11589

46、AutomatedGeneration of Accurate & Fluent Medical X-ray Reports

https://arxiv.org/abs/2108.12126

47、ExploringMethods for Generating Feedback Comments for Writing Learning

https://underline.io/lecture/38704-exploring-methods-for-generating-feedback-comments-for-writing-learning

48、EARL:Informative Knowledge-Grounded Conversation Generation with Entity-AgnosticRepresentation Learning

49、Asking QuestionsLike Educational Experts: Automatically Generating Question-Answer Pairs onReal-World Examination Data

https://arxiv.org/abs/2109.05179

50、FiD-Ex:Improving Sequence-to-Sequence Models for Extractive Rationale Generation

https://arxiv.org/abs/2012.15482

51、Asking It All:Generating Contextualized Questions for any Semantic Role

https://arxiv.org/abs/2109.04832


机器翻译

1、Investigating theHelpfulness of Word-Level Quality Estimation for Post-Editing MachineTranslation Output

2、GFST:Gender-Filtered Self-Training for More Accurate Gender in Translation

https://www.amazon.science/publications/gfst-gender-filtered-self-training-for-more-accurate-gender-in-translation

3、RobustOpen-Vocabulary Translation from Visual Text Representations

https://arxiv.org/abs/2104.08211

4、Sparse Attentionwith Linear Units

https://arxiv.org/abs/2104.07012

5、"Wikily"Supervised Neural Translation Tailored to Cross-Lingual Tasks

https://arxiv.org/abs/2104.08384

6、RecurrentAttention for Neural Machine Translation

7、Zero-ShotCross-Lingual Transfer of Neural Machine Translation with MultilingualPretrained Encoders

https://arxiv.org/abs/2104.08757

8、Uncertainty-AwareBalancing for Multilingual and Multi-Domain Neural Machine Translation Training

https://arxiv.org/abs/2109.02284

9、EnliveningRedundant Heads in Multi-head Self-attention for Machine Translation

10、UniversalSimultaneous Machine Translation with Mixture-of-Experts Wait-k Policy

https://arxiv.org/abs/2109.05238

11、Don't Go FarOff: An Empirical Study on Neural Poetry Translation

https://arxiv.org/abs/2109.02972

12、mT6:Multilingual Pretrained Text-to-Text Transformer with Translation Pairs

https://arxiv.org/abs/2104.08692

13、Cross AttentionAugmented Transducer Networks for Simultaneous Translation

https://underline.io/lecture/38693-cross-attention-augmented-transducer-networks-for-simultaneous-translation

14、UnsupervisedNeural Machine Translation with Universal Grammar

15、EncouragingLexical Translation Consistency for Document-Level Neural Machine Translation

https://underline.io/lecture/37496-encouraging-lexical-translation-consistency-for-document-level-neural-machine-translation

16、Neural MachineTranslation Quality and Post-Editing Performance

https://arxiv.org/abs/2109.05016

17、ScheduledSampling Based on Decoding Steps for Neural Machine Translation

https://arxiv.org/abs/2108.12963

18、Learning toRewrite for Non-Autoregressive Neural Machine Translation

19、Towards Makingthe Most of Dialogue Characteristics for Neural Chat Translation

https://arxiv.org/abs/2109.00668

20、DistributionallyRobust Multilingual Machine Translation

https://arxiv.org/abs/2109.04020

21、Document Graphfor Neural Machine Translation

https://arxiv.org/abs/2012.03477

22、Graph Algorithmsfor Multiparallel Word Alignment

https://arxiv.org/abs/2109.06283

23、LanguageModeling, Lexical Translation, Reordering: The Training Process of NMT throughthe Lens of Classical SMT

https://arxiv.org/abs/2109.01396

24、MultilingualUnsupervised Neural Machine Translation with Denoising Adapters

https://arxiv.org/abs/2110.10472

25、Self-SupervisedQuality Estimation for Machine Translation

https://www.researchgate.net/publication/354219860_Self-Supervised_Quality_Estimation_for_Machine_Translation

26、Rethinking DataAugmentation for Low-Resource Neural Machine Translation: A Multi-Task LearningApproach

https://arxiv.org/abs/2109.03645

27、BERT, mBERT, orBiBERT? A Study on Contextualized Embeddings for Neural Machine Translation

https://arxiv.org/abs/2109.04588

28、One Source, TwoTargets: Challenges and Rewards of Dual Decoding

https://arxiv.org/abs/2109.10197

29、Classification-basedQuality Estimation: Small and Efficient Models for Real-world Applications

https://arxiv.org/abs/2109.08627

30、EfficientInference for Multilingual Neural Machine Translation

https://arxiv.org/abs/2109.06679

31、ControllingMachine Translation for Multiple Attributes with Additive Interventions

32、A GenerativeFramework for Simultaneous Machine Translation

33、Translation-basedSupervision for Policy Generation in Simultaneous Neural Machine Translation

34、AfroMT:Pretraining Strategies and Reproducible Benchmarks for Translation of 8 AfricanLanguages

https://arxiv.org/abs/2109.04715

35、A Large-ScaleStudy of Machine Translation in Turkic Languages

https://arxiv.org/abs/2109.04593

36、Cross-Attentionis All You Need: Adapting Pretrained Transformers for Machine Translation

https://arxiv.org/abs/2104.08771

37、GeneralisedUnsupervised Domain Adaptation of Neural Machine Translation with Cross-LingualData Selection

https://arxiv.org/abs/2109.04292

38、Rule-basedMorphological Inflection Improves Neural Terminology Translation

https://arxiv.org/abs/2109.04620

39、LearningKernel-Smoothed Machine Translation with Retrieved Examples

https://arxiv.org/abs/2109.09991

40、AligNART:Non-autoregressive Neural Machine Translation by Jointly Learning to EstimateAlignment and Translate

https://arxiv.org/abs/2109.06481

41、ImprovingMultilingual Translation by Representation and Gradient Regularization

https://arxiv.org/abs/2109.04778

42、MachineTranslation Decoding beyond Beam Search

https://arxiv.org/abs/2104.05336

43、ComparingFeature-Engineering and Feature-Learning Approaches for MultilingualTranslationese Classification

https://arxiv.org/abs/2109.07604

44、Multi-SentenceResampling: A Simple Approach to Alleviate Dataset Length Bias and Beam-SearchDegradation

https://arxiv.org/abs/2109.06253

45、ContrastiveConditioning for Assessing Disambiguation in MT: A Case Study of Distilled Bias

https://openreview.net/forum?id=RvO9DqoWI9V


多模态

1、Inflate andShrink:Enriching and Reducing Interactions for Fast Text-Image Retrieval

2、Multi-ModalOpen-Domain Dialogue

https://arxiv.org/abs/2010.01082

3、Adaptive ProposalGeneration Network for Temporal Sentence Localization in Videos

https://arxiv.org/abs/2109.06398

4、ProgressivelyGuide to Attend: An Iterative Alignment Framework for Temporal Sentence Grounding

https://arxiv.org/abs/2109.06400

5、R^3Net:Relation-embeddedRepresentation Reconstruction Network for Change Captioning

https://arxiv.org/abs/2110.10328

6、Unimodal andCrossmodal Refinement Network for Multimodal Sequence Fusion

7、CTAL:Pre-training Cross-modal Transformer for Audio-and-Language Representations

https://arxiv.org/abs/2109.00181

8、LayoutReader:Pre-training of Text and Layout for Reading Order Detection

https://arxiv.org/abs/2108.11591

9、On Pursuit ofDesigning Multi-modal Transformer for Video Grounding

https://arxiv.org/abs/2109.06085

10、ImprovingMultimodal fusion via Mutual Dependency Maximisation

https://arxiv.org/abs/2109.00922

11、Relation-awareVideo Reading Comprehension for Temporal Language Grounding

https://arxiv.org/abs/2110.05717

12、MultimodalPhased Transformer for Sentiment Analysis

13、Scalable FontReconstruction with Dual Latent Manifolds

https://arxiv.org/abs/2109.06627

14、Discovering theUnknown Knowns: Turning Implicit Knowledge in the Dataset into ExplicitTraining Examples for Visual Question Answering

https://www.semanticscholar.org/paper/Discovering-the-Unknown-Knowns%3A-Turning-Implicit-in-Kil-Zhang/11261a5a3fff2605a8a4d8dac2ff3a9734c56093

15、COVR: A Test-Bedfor Visually Grounded Compositional Generalization with Real Images

https://arxiv.org/abs/2109.10613

16、JointMulti-modal Aspect-Sentiment Analysis with Auxiliary Cross-modal RelationDetection

https://underline.io/lecture/37497-joint-multi-modal-aspect-sentiment-analysis-with-auxiliary-cross-modal-relation-detection

17、Broaden theVision: Geo-Diverse Visual Commonsense Reasoning

https://arxiv.org/abs/2109.06860

18、VisuallyGrounded Reasoning across Languages and Cultures

https://arxiv.org/abs/2109.13238

19、Region underDiscussion for visual dialog

https://githubmemory.com/repo/mmazuecos/Region-under-discussion-for-visual-dialog

20、Vision GuidedGenerative Pre-trained Language Models for Multimodal Abstractive Summarization

https://arxiv.org/abs/2109.02401

21、Natural LanguageVideo Localization with Learnable Moment Proposals

https://arxiv.org/abs/2109.10678

22、Point-of-InterestType Prediction using Text and Images

https://arxiv.org/abs/2109.00602

23、JournalisticGuidelines Aware News Image Captioning

https://arxiv.org/abs/2109.02865

24、Vision-and-Languageor Vision-for-Language? On Cross-Modal Influence in Multimodal Transformers

https://arxiv.org/abs/2109.04448

25、Visual News:Benchmark and Challenges in News Image Captioning

https://underline.io/lecture/37789-visual-news-benchmark-and-challenges-in-news-image-captioning

26、HintedBT:Augmenting Back-Translation with Quality and Transliteration Hints

https://arxiv.org/abs/2109.04443

27、WhyAct:Identifying Action Reasons in Lifestyle Vlogs

https://arxiv.org/abs/2109.02747

28、Hitting yourMARQ: Multimodal ARgument Quality Assessment in Long Debate Video

https://underline.io/lecture/37897-hitting-your-marq-multimodal-argument-quality-assessment-in-long-debate-video

29、Mind theContext: The Impact of Contextualization in Neural Module Networks forGrounding Visual Referring Expressions

https://www.amazon.science/publications/mind-the-context-the-impact-of-contextualization-in-neural-module-networks-for-grounding-visual-referring-expression

30、CrossVQA:Scalably Generating Benchmarks for Systematically Testing VQA Generalization

31、Weakly-SupervisedVisual-Retriever-Reader for Knowledge-based Question Answering

https://arxiv.org/abs/2109.04014

32、Iconary: APictionary-Based Game for Testing Multimodal Communication with Drawings andText

https://underline.io/lecture/38750-iconary-a-pictionary-based-game-for-testing-multimodal-communication-with-drawings-and-text

33、IntegratingVisuospatial, Linguistic, and Commonsense Structure into Story Visualization

https://arxiv.org/abs/2110.10834

34、VideoCLIP:Contrastive Pre-training for Zero-shot Video-Text Understanding

https://arxiv.org/abs/2109.14084

35、StreamHover:Livestream Transcript Summarization and Annotation

https://arxiv.org/abs/2109.05160

36、Text2Mol:Cross-Modal Molecule Retrieval with Natural Language Queries

https://underline.io/lecture/37985-text2mol-cross-modal-molecule-retrieval-with-natural-language-queries

37、NewsCLIPpings:Automatic Generation of Out-of-Context Multimodal Media

https://arxiv.org/abs/2104.05893


QA系统

1、Joint PassageRanking for Diverse Multi-Answer Retrieval

https://arxiv.org/abs/2104.08445

2、Cross-PolicyCompliance Detection via Question Answering

https://arxiv.org/abs/2109.03731

3、Matching-orientedEmbedding Quantization For Ad-hoc Retrieval

https://arxiv.org/abs/2104.07858

4、AnsweringOpen-Domain Questions of Varying Reasoning Steps from Text

https://arxiv.org/abs/2010.12527

5、Less is More:Pretrain a Strong Siamese Encoder for Dense Text Retrieval Using a Weak Decoder

https://arxiv.org/abs/2102.09206

6、InteractiveMachine Comprehension with Dynamic Knowledge Graphs

https://arxiv.org/abs/2109.00077

7、ContrastiveDomain Adaptation for Question Answering using Limited Text Corpora

https://arxiv.org/abs/2108.13854

8、ImprovingUnsupervised Question Answering via Summarization-Informed Question Generation

https://arxiv.org/abs/2109.07954

9、TransferNet: AnEffective and Transparent Framework for Multi-hop Question Answering overRelation Graph

https://arxiv.org/abs/2104.07302

10、Summarize-then-Answer:Generating Concise Explanations for Multi-hop Reading Comprehension

https://arxiv.org/abs/2109.06853

11、Will thisQuestion be Answered? Question Filtering via Answer Model Distillation forEfficient Question Answering

https://arxiv.org/abs/2109.07009

12、AdaptiveInformation Seeking for Open-Domain Question Answering

https://arxiv.org/abs/2109.06747

13、Distantly-SupervisedDense Retrieval Enables Open-Domain Question Answering without EvidenceAnnotation

https://arxiv.org/abs/2110.04889

14、ConnectingAttributions and QA Model Behavior on Realistic Counterfactuals

https://arxiv.org/abs/2104.04515

15、MultivalentEntailment Graphs for Question Answering

https://arxiv.org/abs/2104.07846

16、Learning withInstance Bundles for Reading Comprehension

https://arxiv.org/abs/2104.08735

17、Condenser: aPre-training Architecture for Dense Retrieval

https://arxiv.org/abs/2104.08253

18、FewshotQA: Asimple framework for few-shot learning of question answering tasks usingpre-trained text-to-text models

https://arxiv.org/abs/2109.01951

19、EnhancingMultiple-choice Machine Reading Comprehension by Punishing IllogicalInterpretations

20、Multi-stageTraining with Improved Negative Contrast for Neural Passage Retrieval

21、Synthetic DataAugmentation for Zero-Shot Cross-Lingual Question Answering

https://arxiv.org/abs/2010.12643

22、RocketQAv2: AJoint Training Method for Dense Passage Retrieval and Passage Re-ranking

https://arxiv.org/abs/2110.07367

23、StructuredContext and High-Coverage Grammar for Conversational Question Answering overKnowledge Graphs

https://arxiv.org/abs/2109.00269

24、Ultra-HighDimensional Sparse Representations with Binarization for Efficient TextRetrieval

https://arxiv.org/abs/2104.07198

25、IR like a SIR:Sense-enhanced Information Retrieval for Multiple Languages

https://underline.io/lecture/38901-ir-like-a-sir-sense-enhanced-information-retrieval-for-multiple-languages

26、ImprovingQuestion Answering Model Robustness with Synthetic Adversarial Data Generation

https://arxiv.org/abs/2104.08678

27、Phrase RetrievalLearns Passage Retrieval, Too

https://arxiv.org/abs/2109.08133

28、SituatedQA:Incorporating Extra-Linguistic Contexts into QA

https://arxiv.org/abs/2109.06157

29、Neural NaturalLogic Inference for Interpretable Question Answering

30、A SemanticFeature-Wise Transformation Relation Network for Automatic Short Answer Grading

http://nlpgrouppennstate.blogspot.com/2021/08/paper-by-zhaohui-li-accepted-in-emnlp.html

31、RankNAS:Efficient Neural Architecture Search by Pairwise Ranking

https://arxiv.org/abs/2109.07383

32、Entity-BasedKnowledge Conflicts in Question Answering

https://arxiv.org/abs/2109.05052

33、MitigatingFalse-Negative Contexts in Multi-document Question Answering with RetrievalMarginalization

https://arxiv.org/abs/2103.12235

34、EnhancingDocument Ranking with Task-adaptive Training and Segmented Token RecoveryMechanism

https://underline.io/lecture/38042-enhancing-document-ranking-with-task-adaptive-training-and-segmented-token-recovery-mechanism

35、SmoothingDialogue States for Open Conversational Machine Reading

https://arxiv.org/abs/2108.12599

36、TopicTransferable Table Question Answering

https://arxiv.org/abs/2109.07377


情感分析

1、Beta DistributionGuided Aspect-aware Graph for Aspect Category Sentiment Analysis with AffectiveKnowledge

2、To be Closer:Learning to Link up Aspects with Opinions

https://arxiv.org/abs/2109.08382

3、Perspective-takingand Pragmatics for Generating Empathetic Responses Focused on Emotion Causes

https://arxiv.org/abs/2109.08828

4、Solving AspectCategory Sentiment Analysis as a Text Generation Task

https://arxiv.org/abs/2110.07310

5、ImprovingMultimodal Fusion with Hierarchical Mutual Information Maximization forMultimodal Sentiment Analysis

https://arxiv.org/abs/2109.00412

6、PoweringComparative Classification with Sentiment Analysis via Domain AdaptiveKnowledge Transfer

https://arxiv.org/abs/2109.03819

7、DimensionalEmotion Detection from Categorical Emotion

https://arxiv.org/abs/1911.02499

8、Learning ImplicitSentiment in Aspect-based Sentiment Analysis with Supervised ContrastivePre-Training

9、Seeking Commonbut Distinguishing Difference, A Joint Aspect-based Sentiment Analysis Model

10、Aspect SentimentQuad Prediction as Paraphrase Generation

https://arxiv.org/abs/2110.00796

11、Cross-lingualAspect-based Sentiment Analysis with Aspect Term Code-Switching

12、TowardsLabel-Agnostic Emotion Embeddings

13、Few-Shot EmotionRecognition in Conversation with Sequential Prototypical Networks

https://arxiv.org/abs/2109.09366

14、CLASSIC:Continual and Contrastive Learning of Aspect Sentiment Classification Tasks

https://underline.io/lecture/37960-classic-continual-and-contrastive-learning-of-aspect-sentiment-classification-tasks

15、Not AllNegatives are Equal: Label-Aware Contrastive Loss for Fine-grained TextClassification

https://arxiv.org/abs/2109.05427

16、ImprovingFederated Learning for Aspect-based Sentiment Analysis via Topic Memories

https://underline.io/lecture/38051-improving-federated-learning-for-aspect-based-sentiment-analysis-via-topic-memories

17、ImplicitSentiment Analysis with Event-centered Text Representation

https://underline.io/lecture/38062-implicit-sentiment-analysis-with-event-centered-text-representation


预训练语言模型应用

1、Editing FactualKnowledge in Language Models

https://arxiv.org/abs/2104.08164

2、Pushing on TextReadability Assessment: A Transformer Meets Handcrafted Linguistic Features

https://arxiv.org/abs/2109.12258

3、What to Pre-Trainon? Efficient Intermediate Task Selection

https://arxiv.org/abs/2104.08247

4、TextDetoxification using Large Pre-trained Neural Models

https://arxiv.org/abs/2109.08914

5、Memory andKnowledge Augmented Language Models for Inferring Salience in Long-Form Stories

https://arxiv.org/abs/2109.03754

6、FinetuningPretrained Transformers into RNNs

https://arxiv.org/abs/2103.13076

7、A Simple andEffective Positional Encoding for Transformers

https://arxiv.org/abs/2104.08698

8、MATE: Multi-viewAttention for Table Transformer Efficiency

https://arxiv.org/abs/2109.04312

9、Raise a Child inLarge Language Model: Towards Effective and Generalizable Fine-tuning

https://arxiv.org/abs/2109.05687

10、GradTS: AGradient-Based Automatic Auxiliary Task Selection Method Based on TransformerNetworks

https://arxiv.org/abs/2109.05748

11、Allocating LargeVocabulary Capacity for Cross-lingual Language Model Pre-training

https://arxiv.org/abs/2109.07306

12、DILBERT:Customized Pre-Training for Domain Adaptation with Category Shift, with anApplication to Aspect Extraction

https://arxiv.org/abs/2109.00571

13、GAML-BERT:Improving BERT Early Exiting by Gradient Aligned Mutual Learning

14、The Power ofScale for Parameter-Efficient Prompt Tuning

https://arxiv.org/abs/2104.08691

15、Masked LanguageModeling and the Distributional Hypothesis: Order Word Matters Pre-training forLittle

https://arxiv.org/abs/2104.06644

16、TransPrompt:Towards an Automatic Transferable Prompting Framework for Few-shot TextClassification

17、Improving MathWord Problems with Pre-trained Knowledge and Hierarchical Reasoning

18、ERNIE-M:Enhanced Multilingual Representation by Aligning Cross-lingual Semantics withMonolingual Corpora

https://arxiv.org/abs/2012.15674

19、PermuteFormer:Efficient Relative Position Encoding for Long Sequences

https://arxiv.org/abs/2109.02377

20、What Changes CanLarge-scale Language Models Bring? Intensive Study on HyperCLOVA:Billions-scale Korean Generative Pretrained Transformers

https://arxiv.org/abs/2109.04650

21、GlobalExplainability of BERT-Based Evaluation Metrics by Disentangling alongLinguistic Factors

https://arxiv.org/abs/2110.04399

22、TransformerFeed-Forward Layers Are Key-Value Memories

https://arxiv.org/abs/2012.14913

23、What's in YourHead? Emergent Behaviour in Multi-Task Transformer Models

https://arxiv.org/abs/2104.06129

24、Fast, Effective,and Self-Supervised: Transforming Masked Language Models into Universal Lexicaland Sentence Encoders

https://arxiv.org/abs/2104.08027

25、Effects ofParameter Norm Growth During Transformer Training: Inductive Bias from GradientDescent

https://arxiv.org/abs/2010.09697

26、On the Influenceof Masking Policies in Intermediate Pre-training

https://arxiv.org/abs/2104.08840

27、DyLex: IncoporatingDynamic Lexicons into BERT for Sequence Labeling

https://arxiv.org/abs/2109.08818

28、Filling the Gapsin Ancient Akkadian Texts: A Masked Language Modelling Approach

https://arxiv.org/abs/2109.04513

29、RuleBERT:Teaching Soft Rules to Pre-Trained Language Models

https://arxiv.org/abs/2109.13006

30、CodeT5:Identifier-aware Unified Pre-trained Encoder-Decoder Models for CodeUnderstanding and Generation

https://arxiv.org/abs/2109.00859

31、BARThez: aSkilled Pretrained French Sequence-to-Sequence Model

https://arxiv.org/abs/2010.12321

32、MTAdam:Automatic Balancing of Multiple Training Loss Terms

https://arxiv.org/abs/2006.14683

33、How muchpretraining data do language models need to learn syntax?

https://arxiv.org/abs/2109.03160

34、Discretized IntegratedGradients for Explaining Language Models

https://arxiv.org/abs/2108.13654

35、The Devil is inthe Detail: Simple Tricks Improve Systematic Generalization of Transformers

https://arxiv.org/abs/2108.12284

36、Stepmothers aremean and academics are pretentious: What do pretrained language models learnabout you?

https://arxiv.org/abs/2109.10052

37、Putting Words inBERT's Mouth: Navigating Contextualized Vector Spaces with Pseudowords

https://arxiv.org/abs/2109.11491

38、Sorting throughthe noise: Testing robustness of information processing in pre-trained languagemodels

https://arxiv.org/abs/2109.12393

39、EfficientNearest Neighbor Language Models

https://arxiv.org/abs/2109.04212

40、Self-SupervisedDetection of Contextual Synonyms in a Multi-Class Setting: Phenotype AnnotationUse Case

https://arxiv.org/abs/2109.01935

41、Fast WordPieceTokenization

https://arxiv.org/abs/2012.15524

42、FrequencyEffects on Syntactic Rule Learning in Transformers

https://arxiv.org/abs/2109.07020

43、You shouldevaluate your language model on marginal likelihood over tokenisations

https://arxiv.org/abs/2109.02550

44、Exploring theRole of BERT Token Representations to Explain Sentence Probing Results

https://arxiv.org/abs/2104.01477

45、BeliefBank:Adding Memory to a Pre-Trained Language Model for a Systematic Notion of Belief

https://arxiv.org/abs/2109.14723

46、ContrastiveExplanations for Model Interpretability

https://arxiv.org/abs/2103.01378

47、TADPOLE: TaskADapted Pre-Training via AnOmaLy DEtection

https://docplayer.net/217601099-Tadpole-task-adapted-pre-training-via.html

48、Do Long-RangeLanguage Models Actually Use Long-Range Context?

https://arxiv.org/abs/2109.09115

49、ECONET:Effective Continual Pretraining of Language Models for Event Temporal Reasoning

https://arxiv.org/abs/2012.15283

50、FastIF: ScalableInfluence Functions for Efficient Model Interpretation and Debugging

https://arxiv.org/abs/2012.15781

51、Phrase-BERT:Improved Phrase Embeddings from BERT with an Application to Corpus Exploration

https://arxiv.org/abs/2109.06304

52、FlexibleGeneration of Natural Language Deductions

https://arxiv.org/abs/2104.08825

53、Muppet: MassiveMulti-task Representations with Pre-Finetuning

https://arxiv.org/abs/2101.11038

54、Surface FormCompetition: Why the Highest Probability Answer Isn’t Always Right

https://arxiv.org/abs/2104.08315

55、Navigating theKaleidoscope of COVID-19 Misinformation Using Deep Learning

https://arxiv.org/abs/2110.15703

56、Pre-train orAnnotate? Domain Adaptation with a Constrained Budget

https://arxiv.org/abs/2109.04711

57、ReasonBERT:Pre-trained to Reason with Distant Supervision

https://arxiv.org/abs/2109.04912

58、The Stem CellHypothesis: Dilemma behind Multi-Task Learning with Transformer Encoders

https://arxiv.org/abs/2109.06939

59、ControlledEvaluation of Grammatical Knowledge in Mandarin Chinese Language Models

https://arxiv.org/abs/2109.11058

60、Compression,Transduction, and Creation: A Unified Framework for Evaluating Natural LanguageGeneration

https://arxiv.org/abs/2109.06379

61、RethinkingDenoised Auto-Encoding in Language Pre-Training

62、IncorporatingResidual and Normalization Layers into Analysis of Masked Language Models

https://arxiv.org/abs/2109.07152

63、Back-Trainingexcels Self-Training at Unsupervised Domain Adaptation of Question Generationand Passage Retrieval

64、PerturbationCheckLists for Evaluating NLG Evaluation Metrics

https://arxiv.org/abs/2109.05771

65、RevisitingSelf-training for Few-shot Learning of Language Model

https://arxiv.org/abs/2110.01256

66、CATE: AContrastive Pre-trained Model for Metaphor Detection with Semi-supervisedLearning

https://underline.io/lecture/38080-cate-a-contrastive-pre-trained-model-for-metaphor-detection-with-semi-supervised-learning

67、APIRecX:Cross-Library API Recommendation via Pre-Trained Language Model

68、SPARQLingDatabase Queries from Intermediate Question Decompositions

https://arxiv.org/abs/2109.06162


数据集、任务及评估

1、DWUG: A largeResource of Diachronic Word Usage Graphs in Four Languages

https://arxiv.org/abs/2104.08540

2、MLEC-QA: AChinese Multi-Choice Biomedical Question Answering Dataset

3、YASO: A TargetedSentiment Analysis Evaluation Dataset for Open-Domain Reviews

https://arxiv.org/abs/2012.14541

4、IndoNLG:Benchmark and Resources for Evaluating Indonesian Natural Language Generation

https://arxiv.org/abs/2104.08200

5、I Wish I WouldHave Loved This One, But I Didn't -- A Multilingual Dataset for CounterfactualDetection in Product Reviews

https://arxiv.org/abs/2104.06893

6、CLIPScore: AReference-free Evaluation Metric for Image Captioning

https://arxiv.org/abs/2104.08718

7、$Q^2$: EvaluatingFactual Consistency in Knowledge-Grounded Dialogues via Question Generation andQuestion Answering

https://arxiv.org/abs/2104.08202

8、Document-LevelText Simplification: Dataset, Criteria and Baseline

https://arxiv.org/abs/2110.05071

9、A Large-ScaleDataset for Empathetic Response Generation

10、MeasuringSentence-Level and Aspect-Level (Un)certainty in Science Communications

https://arxiv.org/abs/2109.14776

11、English MachineReading Comprehension Datasets: A Survey

https://arxiv.org/abs/2101.10421

12、AM2iCo:Evaluating Word Meaning in Context across Low-Resource Languages withAdversarial Examples

https://arxiv.org/abs/2104.08639

13、How Much CoffeeWas Consumed During EMNLP 2019? Fermi Problems: A New Reasoning Challenge forAI

https://arxiv.org/abs/2110.14207

14、TranslatingHeaders of Tabular Data: A Pilot Study of Schema Translation

15、Graphine: ADataset for Graph-aware Terminology Definition Generation

https://arxiv.org/abs/2109.04018

16、CSDS: AFine-Grained Chinese Dataset for Customer Service Dialogue Summarization

https://arxiv.org/abs/2108.13139

17、DuRecDial 2.0: ABilingual Parallel Corpus for Conversational Recommendation

https://arxiv.org/abs/2109.08877

18、IndoNLI: ANatural Language Inference Dataset for Indonesian

https://arxiv.org/abs/2110.14566

19、MassiveSumm: avery large-scale, very multilingual, news summarisation dataset

20、Classificationof hierarchical text using geometric deep learning: the case of clinical trialscorpus

21、XTREME-R:Towards More Challenging and Nuanced Multilingual Evaluation

https://arxiv.org/abs/2104.07412

22、Agreeing toDisagree: Annotating Offensive Language Datasets with Annotators' Disagreement

https://arxiv.org/abs/2109.13563

23、SIMMC 2.0: ATask-oriented Dialog Dataset for Immersive Multimodal Conversations

https://arxiv.org/abs/2104.08667

24、Constructing aPsychometric Testbed for Fair Natural Language Processing

https://arxiv.org/abs/2007.12969

25、MindCraft: Theoryof Mind Modeling for Situated Dialogue in Collaborative Tasks

https://arxiv.org/abs/2109.06275

26、ConvAbuse: Data,Analysis, and Benchmarks for Nuanced Abuse Detection in Conversational AI

https://arxiv.org/abs/2109.09483

27、ESTER: A MachineReading Comprehension Dataset for Reasoning about Event Semantic Relations

https://arxiv.org/abs/2104.08350

28、ContrastiveOut-of-Distribution Detection for Pretrained Transformers

https://arxiv.org/abs/2104.08812

29、ExplainingAnswers with Entailment Trees

https://arxiv.org/abs/2104.08661

30、RICA: EvaluatingRobust Inference Capabilities Based on Commonsense Axioms

https://arxiv.org/abs/2005.00782

31、BiSECT: Learningto Split and Rephrase Sentences with Bitexts

https://arxiv.org/abs/2109.05006

32、DocumentingLarge Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus

https://arxiv.org/abs/2104.08758

33、Latent Hatred: ABenchmark for Understanding Implicit Hate Speech

https://arxiv.org/abs/2109.05322

34、BenchmarkingCommonsense Knowledge Base Population with an Effective Evaluation Dataset

https://arxiv.org/abs/2109.07679

35、WebSRC: ADataset for Web-Based Structural Reading Comprehension

https://arxiv.org/abs/2101.09465

36、WinoLogic: AZero-Shot Logic-based Diagnostic Dataset for Winograd Schema Challenge

37、FinQA: A Datasetof Numerical Reasoning over Financial Data

https://arxiv.org/abs/2109.00122


对话系统

1、ContextualizeKnowledge Bases with Transformer for End-to-end Task-Oriented Dialogue Systems

https://arxiv.org/abs/2010.05740

2、EfficientDialogue Complementary Policy Learning via Deep Q-network Policy and EpisodicMemory Policy

3、A Role-SelectedSharing Network for Joint Machine-Human Chatting Handoff and ServiceSatisfaction Analysis

https://arxiv.org/abs/2109.08412

4、Learning NeuralTemplates for Recommender Dialogue System

https://arxiv.org/abs/2109.12302

5、Neural PathHunter: Reducing Hallucination in Dialogue Systems via Path Grounding

https://arxiv.org/abs/2104.08455

6、Thinking Clearly,Talking Fast: Concept-Guided Non-Autoregressive Generation for Open-DomainDialogue Systems

https://arxiv.org/abs/2109.04084

7、CRFR: ImprovingConversational Recommender Systems via Flexible Fragments Reasoning onKnowledge Graphs

8、Proxy Indicatorsfor the Quality of Open-domain Dialogues

https://underline.io/lecture/37373-proxy-indicators-for-the-quality-of-open-domain-dialogues

9、GOLD: ImprovingOut-of-Scope Detection in Dialogues using Data Augmentation

https://arxiv.org/abs/2109.03079

10、MultiDoc2Dial:Modeling Dialogues Grounded in Multiple Documents

https://arxiv.org/abs/2109.12595

11、TowardsAutomatic Evaluation of Dialog Systems: A Model-Free Off-Policy EvaluationApproach

https://arxiv.org/abs/2102.10242

12、IntentionReasoning Network for Multi-Domain End-to-end Task-Oriented Dialogue

13、Code-switchedinspired losses for spoken dialog representations

https://arxiv.org/abs/2108.12465

14、Domain-LifelongLearning for Dialogue State Tracking via Knowledge Preservation Networks

15、Reference-CentricModels for Grounded Collaborative Dialogue

https://arxiv.org/abs/2109.05042

16、DifferentStrokes for Different Folks: Investigating Appropriate Further Pre-trainingApproaches for Diverse Dialogue Tasks

https://arxiv.org/abs/2109.06524

17、Graph BasedNetwork with Contextualized Representations of Turns in Dialogue

https://arxiv.org/abs/2109.04008

18、AutomaticallyExposing Problems with Neural Dialog Models

https://arxiv.org/abs/2109.06950

19、DetectingSpeaker Personas from Conversational Texts

https://arxiv.org/abs/2109.01330

20、End-to-EndConversational Search for Online Shopping with Utterance Transfer

https://arxiv.org/abs/2109.05460

21、Knowledge-AwareGraph-Enhanced GPT-2 for Dialogue State Tracking

https://arxiv.org/abs/2104.04466

22、Building andEvaluating Open-Domain Dialogue Corpora with Clarifying Questions

https://arxiv.org/abs/2109.05794

23、End-to-EndLearning of Flowchart Grounded Task-Oriented Dialogs

https://arxiv.org/abs/2109.07263

24、CR-Walker:Tree-Structured Graph Reasoning and Dialog Acts for ConversationalRecommendation

https://arxiv.org/abs/2010.10333

25、UnsupervisedConversation Disentanglement through Co-Training

https://arxiv.org/abs/2109.03199

26、Cross-lingualIntermediate Fine-tuning improves Dialogue State Tracking

https://arxiv.org/abs/2109.13620

27、GupShup:Summarizing Open-Domain Code-Switched Conversations

https://arxiv.org/abs/2104.08578

28、PRIDE:Predicting Relationships in Conversations

29、DIALKI:Knowledge Identification in Conversational Systems through Dialogue-DocumentContextualization

https://arxiv.org/abs/2109.04673

30、NDH-Full:Learning and Evaluating Navigational Agents on Full-Length Dialogue

https://underline.io/lecture/37949-ndh-full-learning-and-evaluating-navigational-agents-on-full-length-dialogue

31、Self-trainingImproves Pre-training for Few-shot Learning in Task-oriented Dialog Systems

https://arxiv.org/abs/2108.12589

32、Don't beContradicted with Anything! CI-ToD: Towards Benchmarking Consistency forTask-oriented Dialogue System

https://arxiv.org/abs/2109.11292

33、ContinualLearning in Task-Oriented Dialogue Systems

https://arxiv.org/abs/2012.15504

34、Zero-ShotDialogue State Tracking via Cross-Task Transfer

https://arxiv.org/abs/2109.04655

35、MRF-Chat:Improving Dialogue with Markov Random Fields

36、Dialogue StateTracking with a Language Model using Schema-Driven Prompting

https://arxiv.org/abs/2109.07506

37、TransferablePersona-Grounded Dialogues via Grounded Minimal Edits

https://arxiv.org/abs/2109.07713

38、A ScalableFramework for Learning From Implicit User Feedback to Improve Natural LanguageUnderstanding in Large-Scale Conversational AI Systems

https://arxiv.org/abs/2010.12251

39、ConvFiT:Conversational Fine-Tuning of Pretrained Language Models

https://arxiv.org/abs/2109.10126

40、UncertaintyMeasures in Neural Belief Tracking and the Effects on Dialogue PolicyPerformance

https://arxiv.org/abs/2109.04349

41、DialogueCSE:Dialogue-based Contrastive Learning of Sentence Embeddings

https://arxiv.org/abs/2109.12599

42、ConversationalMulti-Hop Reasoning with Neural Commonsense Knowledge and Symbolic Logic Rules

https://arxiv.org/abs/2109.08544


信息抽取

1、UnsupervisedKeyphrase Extraction by Jointly Modeling Local and Global Context

https://arxiv.org/abs/2109.07293

2、TDEER: AnEfficient Translating Decoding Schema for Joint Extraction of Entities andRelations

https://underline.io/lecture/37297-tdeer-an-efficient-translating-decoding-schema-for-joint-extraction-of-entities-and-relations

3、DistantlySupervised Relation Extraction using Multi-Layer Revision Network andConfidence-based Multi-Instance Learning

4、Extracting EventTemporal Relations via Hyperbolic Geometry

https://arxiv.org/abs/2109.05527

5、Exploring TaskDifficulty for Few-Shot Relation Extraction

https://arxiv.org/abs/2109.05473

6、ChemNER:Fine-Grained Chemistry Named Entity Recognition with Ontology-Guided DistantSupervision

7、A PartitionFilter Network for Joint Entity and Relation Extraction

https://arxiv.org/abs/2108.12202

8、TEBNER: DomainSpecific Named Entity Recognition with Type Expanded Boundary-aware Network

9、Document-levelEntity-based Extraction as Template Generation

https://arxiv.org/abs/2109.04901

10、Distantly-SupervisedNamed Entity Recognition with Noise-Robust Learning and Language ModelAugmented Self-Training

https://arxiv.org/abs/2109.05003

11、Knowing FalseNegatives: An Adversarial Training Method for Distantly Supervised RelationExtraction

https://arxiv.org/abs/2109.02099

12、Fine-grainedEntity Typing via Label Reasoning

https://arxiv.org/abs/2109.05744

13、Back to theBasics: A Quantitative Analysis of Statistical and Graph-Based Term WeightingSchemes for Keyword Extraction

https://arxiv.org/abs/2104.08028

14、A Novel GlobalFeature-Oriented Relational Triple Extraction Model based on Table Filling

https://arxiv.org/abs/2109.06705

15、An EmpiricalStudy on Multiple Information Sources for Zero-Shot Fine-Grained Entity Typing

16、ModularSelf-Supervision for Document-Level Relation Extraction

https://arxiv.org/abs/2109.05362

17、MapRE: AnEffective Semantic Mapping Approach for Low-resource Relation Extraction

https://arxiv.org/abs/2109.04108

18、ProgressiveAdversarial Learning for Bootstrapping: A Case Study on Entity Set Expansion

https://arxiv.org/abs/2109.12082

19、Uncovering MainCausalities for Long-tailed Information Extraction

https://arxiv.org/abs/2109.05213

20、Machine ReadingComprehension as Data Augmentation: A Case Study on Implicit Event ArgumentExtraction

21、CodRED: ACross-Document Relation Extraction Dataset for Acquiring Knowledge in the Wild

22、AttentionRank:Unsupervised Keyphrase Extraction using Self and Cross Attentions

23、ProgressiveSelf-Training with Discriminator for Aspect Term Extraction

https://underline.io/lecture/37731-progressive-self-training-with-discriminator-for-aspect-term-extraction

24、ImportanceEstimation from Multiple Perspectives for Keyphrase Extraction

https://arxiv.org/abs/2110.09749

25、LabelVerbalization and Entailment for Effective Zero and Few-Shot RelationExtraction

https://arxiv.org/abs/2109.03659

26、Maximal CliqueBased Non-Autoregressive Open Information Extraction

27、UnsupervisedRelation Extraction: A Variational Autoencoder Approach

https://underline.io/lecture/37828-unsupervised-relation-extraction-a-variational-autoencoder-approach

28、DataAugmentation for Cross-Domain Named Entity Recognition

https://arxiv.org/abs/2109.01758

29、Incorporatingmedical knowledge in BERT for clinical relation extraction

30、Focus on whatmatters: Applying Discourse Coherence Theory to Cross Document Coreference

https://arxiv.org/abs/2110.05362

31、Learning fromNoisy Labels for Entity-Centric Information Extraction

https://arxiv.org/abs/2104.08656

32、RAST:Domain-Robust Dialogue Rewriting as Sequence Tagging

33、Everything IsAll It Takes: A Multipronged Strategy for Zero-Shot Cross-Lingual InformationExtraction

https://arxiv.org/abs/2109.06798

34、CrosslingualTransfer Learning for Relation and Event Extraction via Word Category and ClassAlignments

https://www.vinai.io/publication-posts/crosslingual-transfer-learning-for-relation-and-event-extraction-via-word-category-and-class-alignments

35、GradientImitation Reinforcement Learning for Low Resource Relation Extraction

https://arxiv.org/abs/2109.06415

36、Few-Shot NamedEntity Recognition: An Empirical Baseline Study

https://arxiv.org/abs/2012.14978

37、Corpus-basedOpen-Domain Event Type Induction

https://arxiv.org/abs/2109.03322

38、PASTE: ATagging-Free Decoding Framework Using Pointer Networks for Aspect SentimentTriplet Extraction

https://arxiv.org/abs/2110.04794

39、PDALN:Progressive Domain Adaptation over a Pre-trained Model for Low-ResourceCross-Domain Named Entity Recognition

40、A Relation-OrientedClustering Method for Open Relation Extraction

https://arxiv.org/abs/2109.07205

41、Zero-ShotInformation Extraction as a Unified Text-to-Triple Translation

https://arxiv.org/abs/2109.11171

42、Learning LogicRules for Document-Level Relation Extraction

https://underline.io/lecture/38676-learning-logic-rules-for-document-level-relation-extraction

43、Entity RelationExtraction as Dependency Parsing in Visually Rich Documents

https://arxiv.org/abs/2110.09915

44、Synchronous DualNetwork with Cross-Type Attention for Joint Entity and Relation Extraction

45、ComparativeOpinion Quintuple Extraction from Product Reviews


事件检测

1、Treasures OutsideContexts: Improving Event Detection via Global Statistics

2、UncertainLocal-to-Global Networks for Document-Level Event Factuality Identification

3、Lifelong EventDetection with Knowledge Transfer

4、Integrating DeepEvent-Level and Script-Level Information for Script Event Prediction

http://www.bigdatalab.ac.cn/~jinxiaolong/publications/EMNLP2021BaiG.pdf

5、ModelingDocument-Level Context for Event Detection via Important Context Selection

https://www.vinai.io/publication-posts/modeling-document-level-context-for-event-detection-via-important-context-selection

6、Salience-AwareEvent Chain Modeling for Narrative Understanding

https://arxiv.org/abs/2109.10475


图相关

1、A Semantic FilterBased on Relations for Knowledge Graph Completion

2、The Future is notOne-dimensional: Complex Event Schema Induction by Graph Modeling for EventPrediction

https://arxiv.org/abs/2104.06344

3、TimeTraveler:Reinforcement Learning for Temporal Knowledge Graph Forecasting

https://arxiv.org/abs/2109.04101

4、EfficientMind-Map Generation via Sequence-to-Graph and Reinforced Graph Refinement

https://arxiv.org/abs/2109.02457

5、Logic-levelEvidence Retrieval and Graph-based Verification Network for Table-based FactVerification

https://arxiv.org/abs/2109.06480

6、ImprovingGraph-based Sentence Ordering with Iteratively Predicted Pairwise Orderings

https://arxiv.org/abs/2110.06446

7、GraphMR: GraphNeural Network for Mathematical Reasoning

8、Time-dependentEntity Embedding is not All You Need: A Re-evaluation of Temporal KnowledgeGraph Completion Models under a Unified Framework

9、Learning NeuralOrdinary Equations for Forecasting Future Links on Temporal Knowledge Graphs

10、Weakly-supervisedText Classification Based on Keyword Graph

https://arxiv.org/abs/2110.02591

11、A DifferentiableRelaxation of Graph Segmentation and Alignment for AMR Parsing

https://arxiv.org/abs/2010.12676

12、Event Graphbased Sentence Fusion

13、HierarchicalHeterogeneous Graph Representation Learning for Short Text Classification

14、Argument PairExtraction with Mutual Guidance and Inter-sentence Relation Graph

15、A Graph-BasedNeural Model for End-to-End Frame Semantic Parsing

https://arxiv.org/abs/2109.12319

16、Deep AttentionDiffusion Graph Neural Networks for Text Classification

17、SYSML:StYlometry with Structure and Multitask Learning: Implications for DarknetForum Migrant Analysis

https://arxiv.org/abs/2104.00764

18、Extend, don’trebuild: Phrasing conditional graph modification as autoregressive sequencelabelling

19、HittER:Hierarchical Transformers for Knowledge Graph Embeddings

https://arxiv.org/abs/2008.12813

20、Knowledge GraphRepresentation Learning using Ordinary Differential Equations

21、Aligning ActionsAcross Recipe Graphs

22、Open KnowledgeGraphs Canonicalization using Variational Autoencoders


文本分类

1、HierarchicalMulti-label Text Classification with Horizontal and Vertical CategoryCorrelations

2、Not JustClassification: Recognizing Implicit Discourse Relation on Joint Modeling ofClassification and Generation

3、EffectiveConvolutional Attention Network for Multi-label Clinical Document Classification

https://underline.io/lecture/37529-effective-convolutional-attention-network-for-multi-label-clinical-document-classification

4、A LanguageModel-based Generative Classifier for Sentence-level Discourse Parsing

https://underline.io/lecture/37587-a-language-model-based-generative-classifier-for-sentence-level-discourse-parsing

5、Coarse2Fine:Fine-grained Text Classification on Coarsely-grained Annotated Data

https://arxiv.org/abs/2109.10856

6、Detect andClassify – Joint Span Detection and Classification for Health Outcomes

https://arxiv.org/abs/2104.07789

7、Tribrid: StanceClassification with Neural Inconsistency Detection

https://arxiv.org/abs/2109.06508

8、Softmax Tree: AnAccurate, Fast Classifier When the Number of Classes Is Large

https://faculty.ucmerced.edu/mcarreira-perpinan/papers/emnlp21.pdf

9、Re-embeddingDifficult Samples via Mutual Information Constrained Semantically Oversamplingfor Imbalanced Text Classification

https://underline.io/lecture/38044-re-embedding-difficult-samples-via-mutual-information-constrained-semantically-oversampling-for-imbalanced-text-classification

10、Beyond Text:Incorporating Metadata and Label Structure for Multi-Label DocumentClassification using Heterogeneous Graphs

11、MultitaskSemi-Supervised Learning for Class-Imbalanced Discourse Classification

https://arxiv.org/abs/2101.00389

12、FLiText: AFaster and Lighter Semi-Supervised Text Classification with ConvolutionNetworks

https://arxiv.org/abs/2110.11869

13、SELFEXPLAIN: ASelf-Explaining Architecture for Neural Text Classifiers

https://arxiv.org/abs/2103.12279

14、ClassifyingDyads for Militarized Conflict Analysis

https://arxiv.org/abs/2109.12860

15、TEMP: TaxonomyExpansion with Dynamic Margin Loss through Taxonomy-Paths


NLP基础

1、Cross-RegisterProjection for Headline Part of Speech Tagging

https://arxiv.org/abs/2109.07483

2、A Fine-GrainedDomain Adaption Model for Joint Word Segmentation and POS Tagging

3、Segment, Mask,and Predict: Augmenting Chinese Word Segmentation with Self-Supervision

4、UnderstandingPolitics via Contextualized Discourse Processing

5、Debiasing Methodsin Natural Language Understanding Make Bias More Accessible

https://arxiv.org/abs/2109.04095

6、Total Recall: aCustomized Continual Learning Method for Neural Semantic Parsers

https://arxiv.org/abs/2109.05186

7、On the Benefit ofSyntactic Supervision for Cross-lingual Transfer in Semantic Role Labeling

https://underline.io/lecture/37569-on-the-benefit-of-syntactic-supervision-for-cross-lingual-transfer-in-semantic-role-labeling

8、Predictingemergent linguistic compositions through time: Syntactic frame extension viamultimodal chaining

https://arxiv.org/abs/2109.04652

9、ControllableSemantic Parsing via Retrieval Augmentation

https://arxiv.org/abs/2110.08458

10、ConstrainedLanguage Models Yield Few-Shot Semantic Parsers

https://arxiv.org/abs/2104.08768

11、Chinese OpinionRole Labeling with Corpus Translation: A Pivot Study

12、Chinese OpinionRole Labeling with Corpus Translation: A Pivot Study

13、Syntactically-InformedUnsupervised Paraphrasing with Non-Parallel Data


文本风格改写

1、It Capture STEL?A Modular, Similarity-based Linguistic Style Evaluation Framework

https://arxiv.org/abs/2109.04817

2、Learning forUnsupervised Text Style Transfer

https://arxiv.org/abs/2109.07812

3、Style Pooling:Automatic Text Style Obfuscation for Improved Classification Fairness

https://arxiv.org/abs/2109.04624

4、Generic resourcesare what you need: Style transfer tasks without task-specific parallel trainingdata

https://arxiv.org/abs/2109.04543

5、Evaluating theEvaluation Metrics for Style Transfer: A Case Study in Multilingual FormalityTransfer

https://arxiv.org/abs/2110.10668

6、CollaborativeLearning of Bidirectional Decoders for Unsupervised Text Style Transfer

https://underline.io/lecture/38017-collaborative-learning-of-bidirectional-decoders-for-unsupervised-text-style-transfer

7、Mind the Style ofText! Adversarial and Backdoor Attacks Based on Text Style Transfer

https://arxiv.org/abs/2110.07139


推理

1、Causal Directionof Data Collection Matters: Implications of Causal and Anticausal Learning forNLP

https://arxiv.org/abs/2110.03618

2、Bayesian TopicRegression for Causal Inference

https://arxiv.org/abs/2109.05317

3、Case-basedReasoning for Natural Language Queries over Knowledge Bases

https://arxiv.org/abs/2104.08762

4、UniKER: A UnifiedFramework for Combining Embedding and Definite Horn Rule Reasoning forKnowledge Graph Inference

https://grlplus.github.io/papers/84.pdf

5、Is Multi-HopReasoning Really Explainable? Towards Benchmarking Reasoning Interpretability

https://arxiv.org/abs/2104.06751

6、Diagnosing theFirst-Order Logical Reasoning Ability Through LogicNLI

7、GMH: A GeneralMulti-hop Reasoning Model for KG Completion

https://arxiv.org/abs/2010.07620

8、On the Challengesof Evaluating Compositional Explanations in Multi-Hop Inference: Relevance,Completeness, and Expert Ratings

https://arxiv.org/abs/2109.03334

9、ShortcuttedCommonsense: Data Spuriousness in Deep Learning of Commonsense Reasoning

https://github.com/nlx-group/Shortcutted-Commonsense-Reasoning

10、Wino-X:Multilingual Winograd Schemas for Commonsense Reasoning and CoreferenceResolution

11、Moral Stories:Situated Reasoning about Norms, Intents, Actions, and their Consequences

https://arxiv.org/abs/2012.15738

12、ExplaGraphs: AnExplanation Graph Generation Task for Structured Commonsense Reasoning

https://arxiv.org/abs/2104.07644

13、Think about it!Improving defeasible reasoning by first modeling the question scenario.

https://arxiv.org/abs/2110.12349


模型鲁棒性及对抗

1、CertifiedRobustness to Programmable Transformations in LSTMs

https://arxiv.org/abs/2102.07818

2、AdversarialMixing Policy for Relaxing Locally Linear Constraints in Mixup

https://arxiv.org/abs/2109.07177

3、Backdoor Attackson Pre-trained Models by Layerwise Weight Poisoning

https://arxiv.org/abs/2108.13888

4、Achieving ModelRobustness through Discrete Adversarial Training

https://arxiv.org/abs/2104.05062

5、Instance-adaptivetraining with noise-robust losses against noisy labels

https://underline.io/lecture/37479-instance-adaptive-training-with-noise-robust-losses-against-noisy-labels

6、Multi-granularityTextual Adversarial Attack with Behavior Cloning

https://arxiv.org/abs/2109.04367

7、ImprovingZero-Shot Cross-Lingual Transfer Learning via Robust Training

https://arxiv.org/abs/2104.08645

8、Evaluating theRobustness of Neural Language Models to Input Perturbations

https://arxiv.org/abs/2108.12237

9、RAP:Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLPModels

https://arxiv.org/abs/2110.07831

10、Profanity-AvoidingTraining Framework for Seq2seq Models with Certified Robustness

11、AdversarialAttack against Cross-lingual Knowledge Graph Alignment

12、AdversarialRegularization as Stackelberg Game: An Unrolled Optimization Approach

https://arxiv.org/abs/2104.04886

13、Can We Improve Model Robustness through Secondary Attribute Counterfactuals?

14、FAME: Feature-Based Adversarial Meta-Embeddings for Robust Input Representations

15、Adversarial Attacks on Knowledge Graph Embeddings via Instance Attribution Methods

16、A Strong Baseline for Query Efficient Attacks in a Black Box Setting

https://arxiv.org/abs/2109.04775

17、Gradient-based Adversarial Attacks against Text Transformers

https://arxiv.org/abs/2104.13733

18、Adversarial Scrubbing of Demographic Information for Text Classification

https://arxiv.org/abs/2109.08613

19、Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution

https://arxiv.org/abs/2108.12777

20、On the Transferability of Adversarial Attacks against Neural Text Classifier

https://arxiv.org/abs/2011.08558

21、Contrasting Human- and Machine-Generated Word-Level Adversarial Examples for Text Classification

https://arxiv.org/abs/2109.04385


Model Compression

1、Consistent Accelerated Inference via Confident Adaptive Transformers

https://arxiv.org/abs/2104.08803

2、Dynamic Knowledge Distillation for Pre-trained Language Models

https://arxiv.org/abs/2109.11295

3、Layer-wise Model Pruning based on Mutual Information

https://arxiv.org/abs/2108.12594

4、HRKD: Hierarchical Relational Knowledge Distillation for Cross-domain Language Model Compression

https://arxiv.org/abs/2110.08551

5、Distilling Linguistic Context for Language Model Compression

https://arxiv.org/abs/2109.08359

6、Understanding and Overcoming the Challenges of Efficient Transformer Quantization

https://arxiv.org/abs/2109.12948

7、Improving Stance Detection with Multi-Dataset Learning and Knowledge Distillation

8、Block Pruning For Faster Transformers

https://arxiv.org/abs/2109.04838

9、When Attention Meets Fast Recurrence: Training Language Models with Reduced Compute

https://arxiv.org/abs/2102.12459

10、Universal-KD: Attention-based Output-Grounded Intermediate Layer Knowledge Distillation


Few-shot Learning

1、Meta-LMTC: Meta-Learning for Large-Scale Multi-Label Text Classification

2、A Label-Aware BERT Attention Network for Zero-Shot Multi-Intent Detection in Spoken Language Understanding

3、MetaTS: Meta Teacher-Student Network for Multilingual Sequence Labeling with Minimal Supervision

https://www.amazon.science/publications/metats-meta-teacher-student-network-for-multilingual-sequence-labeling-with-minimal-supervision

4、Meta Distant Transfer Learning for Pre-trained Language Models

https://underline.io/lecture/37379-meta-distant-transfer-learning-for-pre-trained-language-models

5、Genre as Weak Supervision for Cross-lingual Dependency Parsing

https://arxiv.org/abs/2109.04733

6、Learning from Multiple Noisy Augmented Data Sets for Better Cross-Lingual Spoken Language Understanding

https://arxiv.org/abs/2109.01583

7、On the Relation between Syntactic Divergence and Zero-Shot Performance

https://arxiv.org/abs/2110.04644

8、MultiEURLEX - A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer

https://arxiv.org/abs/2109.00904

9、CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP

https://arxiv.org/abs/2104.08835

10、Learning with Different Amounts of Annotation: From Zero to Many Labels

https://arxiv.org/abs/2109.04408

11、Self-training with Few-shot Rationalization

https://arxiv.org/abs/2109.08259

12、Continual Few-Shot Learning for Text Classification

13、STraTA: Self-Training with Task Augmentation for Better Few-shot Learning

https://arxiv.org/abs/2109.06270

14、Semi-Supervised Exaggeration Detection of Health Science Press Releases

https://arxiv.org/abs/2108.13493

15、Diverse Distributions of Self-Supervised Tasks for Meta-Learning in NLP

16、Low-resource Taxonomy Enrichment with Pretrained Language Models

17、Word Reordering for Zero-shot Cross-lingual Structured Prediction

18、Towards Zero-Shot Knowledge Distillation for Natural Language Processing

https://arxiv.org/abs/2012.15495

19、Robust Retrieval Augmented Generation for Zero-shot Slot Filling

https://arxiv.org/abs/2108.13934


Knowledge Representation

1、Relational World Knowledge Representation in Contextual Language Models: A Review

https://arxiv.org/abs/2104.05837

2、SimCSE: Simple Contrastive Learning of Sentence Embeddings

https://arxiv.org/abs/2104.08821

3、Universal Sentence Representation Learning with Conditional Masked Language Model

https://arxiv.org/abs/2012.14388

4、BiQUE: Biquaternionic Embeddings of Knowledge Graphs

https://arxiv.org/abs/2109.14401

5、Language-agnostic Representation from Multilingual Sentence Encoders for Cross-lingual Similarity Estimation

6、Distilling Relation Embeddings from Pretrained Language Models

7、Learning grounded word meaning representations on similarity graphs

https://arxiv.org/abs/2109.03084

8、PAUSE: Positive and Annealed Unlabeled Sentence Embedding

https://arxiv.org/abs/2109.03155

9、Contextualized Query Embeddings for Conversational Search

https://arxiv.org/abs/2104.08707

10、Enhanced Language Representation with Label Knowledge for Span Extraction

11、Label-Enhanced Hierarchical Contextualized Representation for Sequential Metaphor Identification

https://underline.io/lecture/37720-label-enhanced-hierarchical-contextualized-representation-for-sequential-metaphor-identification

12、A Massively Multilingual Analysis of Cross-linguality in Shared Embedding Space

https://arxiv.org/abs/2109.06324

13、Contrastive Code Representation Learning

https://arxiv.org/abs/2007.04973

14、Disentangling Representations of Text by Masking Transformers

https://arxiv.org/abs/2104.07155

15、Cross-lingual Sentence Embedding using Multi-Task Learning

16、Narrative Embedding: Re-Contextualization Through Attention

17、All Bark and No Bite: Rogue Dimensions in Transformer Language Models Obscure Representational Quality

https://arxiv.org/abs/2109.04404

18、ValNorm Quantifies Semantics to Reveal Consistent Valence Biases Across Languages and Over Centuries

https://arxiv.org/abs/2006.03950

19、Comparing Text Representations: A Theory-Driven Approach

https://arxiv.org/abs/2109.07458

20、Pairwise Supervised Contrastive Learning of Sentence Representations

https://arxiv.org/abs/2109.05424

21、Analyzing the Surprising Variability in Word Embedding Stability Across Languages

https://arxiv.org/abs/2004.14876

22、A Unified Encoding of Structures in Transition Systems

23、OSCaR: Orthogonal Subspace Correction and Rectification of Biases in Word Embeddings

https://arxiv.org/abs/2007.00049

24、Mapping probability word problems to executable representations

25、Monitoring geometrical properties of word embeddings for detecting the emergence of new topics

26、SPECTRA: Sparse Structured Text Rationalization

https://arxiv.org/abs/2109.04552


Multilingual

1、UNKs Everywhere: Adapting Multilingual Language Models to New Scripts

https://arxiv.org/abs/2012.15562

2、Model Selection for Cross-lingual Transfer

https://arxiv.org/abs/2010.06127

3、Effective Fine-Tuning Methods for Cross-lingual Adaptation

4、The Impact of Positional Encodings on Multilingual Compression

https://arxiv.org/abs/2109.05388

5、A surprisal--duration trade-off across and within the world's languages

https://arxiv.org/abs/2109.15000

6、Role of Language Relatedness in Multilingual Fine-tuning of Language Models: A Case Study in Indo-Aryan Languages

https://arxiv.org/abs/2109.10534


Social and Ethical Bias

1、Using Sociolinguistic Variables to Reveal Changing Attitudes Towards Sexuality and Gender

https://arxiv.org/abs/2109.11061

2、Harms of Gender Exclusivity and Challenges in Non-Binary Representation in Language Technologies

https://arxiv.org/abs/2108.12084

3、Are Gender-Neutral Queries Really Gender-Neutral? Mitigating Gender Bias in Image Search

https://arxiv.org/abs/2109.05433

4、Identifying Morality Frames in Political Tweets using Relational Learning

https://arxiv.org/abs/2109.04535

5、Assessing the Reliability of Word Embedding Gender Bias Measures

https://arxiv.org/abs/2109.04732

6、Rumor Detection on Twitter with Claim-Guided Hierarchical Graph Attention Networks

https://arxiv.org/abs/2110.04522

7、Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources

https://arxiv.org/abs/2103.11320

8、Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models

https://arxiv.org/abs/2110.00672

9、Mitigating Language-Dependent Ethnic Bias in BERT

https://arxiv.org/abs/2109.05704

10、(Mis)alignment Between Stance Expressed in Social Media Data and Public Opinion Surveys

https://arxiv.org/abs/2109.01762

11、The World of an Octopus: How Reporting Bias Influences a Language Model's Perception of Color

https://arxiv.org/abs/2110.08182

12、How Does Counterfactually Augmented Data Impact Models for Social Computing Constructs?

https://arxiv.org/abs/2109.07022


Fake News Detection

1、STANKER: Stacking Network based on Level-grained Attention-masked BERT for Rumor Detection on Social Media

2、Artificial Text Detection via Examining the Topology of Attention Maps

https://arxiv.org/abs/2109.04825


Coreference, Entity Linking, Disambiguation, and Alignment

1、Low-Rank Subspaces for Unsupervised Entity Linking

https://arxiv.org/abs/2104.08737

2、Pseudo Zero Pronoun Resolution Improves Zero Anaphora Resolution

3、Event Coreference Data (Almost) for Free: Mining Hyperlinks from Online News

https://openreview.net/forum?id=485AXJD1fQ5

4、Time-aware Graph Neural Network for Entity Alignment between Temporal Knowledge Graphs

5、ActiveEA: Active Learning for Neural Entity Alignment

https://ielab.io/publications/bing-2021-al4ea

6、Moving on from OntoNotes: Coreference Resolution Model Transfer

https://arxiv.org/abs/2104.08457

7、Exophoric Pronoun Resolution in Dialogues with Topic Regularization

https://arxiv.org/abs/2109.04787

8、Conundrums in Event Coreference Resolution: Making Sense of the State of the Art

9、VeeAlign: Multifaceted Context Representation Using Dual Attention for Ontology Alignment

https://arxiv.org/abs/2010.11721

10、Robustness Evaluation of Entity Disambiguation Using Prior Probes: the Case of Entity Overshadowing

https://arxiv.org/abs/2108.10949

11、From Alignment to Assignment: Frustratingly Simple Unsupervised Entity Alignment

https://arxiv.org/abs/2109.02363

12、ConSeC: Word Sense Disambiguation as Continuous Sense Comprehension

https://underline.io/lecture/37804-consec-word-sense-disambiguation-as-continuous-sense-comprehension

13、QA-Align: Representing Cross-Text Content Overlap by Aligning Question-Answer Propositions

https://arxiv.org/abs/2109.12655

14、Connect-the-Dots: Bridging Semantics between Words and Definitions via Aligning Word Sense Inventories

https://arxiv.org/abs/2110.14091


ASR

1、Residual Adapters for Parameter-Efficient ASR Adaptation to Atypical and Accented Speech

https://arxiv.org/abs/2109.06952

2、A Unified Speaker Adaptation Approach for ASR

https://arxiv.org/abs/2110.08545

3、Sequential Randomized Smoothing for Adversarially Robust Speech Recognition


Data Augmentation

1、Text AutoAugment: Learning Compositional Augmentation Policy for Text Classification

https://arxiv.org/abs/2109.00523

2、Unsupervised Data Augmentation with Naive Augmentation and without Unlabeled Data

https://arxiv.org/abs/2010.11966

3、Efficient Multi-Task Auxiliary Learning: Selecting Auxiliary Data by Feature Similarity

https://underline.io/lecture/37521-efficient-multi-task-auxiliary-learning-selecting-auxiliary-data-by-feature-similarity

4、Learning Bill Similarity with Annotated and Augmented Corpora of Bills

https://arxiv.org/abs/2109.06527

5、Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models

https://arxiv.org/abs/2109.05793

6、HypMix: Hyperbolic Interpolative Data Augmentation

https://www.cc.gatech.edu/~dyang888/docs/emnlp21_hypermixup.pdf

7、Reinforced Counterfactual Data Augmentation for Dual Sentiment Classification

8、Data Augmentation with Hierarchical SQL-to-Question Generation for Cross-domain Text-to-SQL Parsing

https://arxiv.org/abs/2103.02227


Error Correction

1、LM-Critic: Language Models for Unsupervised Grammatical Error Correction

https://arxiv.org/abs/2109.06822

2、Multi-Class Grammatical Error Detection for Correction: A Tale of Two Systems

https://underline.io/lecture/37665-multi-class-grammatical-error-detection-for-correction-a-tale-of-two-systems


Others

1、Learning Constraints and Descriptive Segmentation for Subevent Detection

https://arxiv.org/abs/2109.06316

2、Types of Out-of-Distribution Texts and How to Detect Them

https://arxiv.org/abs/2109.06827

3、Local Word Discovery for Interactive Transcription

4、Definition Modelling for Appropriate Specificity

5、Set Generation Networks for End-to-End Knowledge Base Population

6、Cross-Domain Label-Adaptive Stance Detection

https://arxiv.org/abs/2104.07467

7、Idiosyncratic but not Arbitrary: Learning Idiolects in Online Registers Reveals Distinctive yet Consistent Individual Styles

https://deepai.org/publication/idiosyncratic-but-not-arbitrary-learning-idiolects-in-online-registers-reveals-distinctive-yet-consistent-individual-styles

8、A Bayesian Framework for Information-Theoretic Probing

https://arxiv.org/abs/2109.03853

9、SOM-NCSCM: An Efficient Neural Chinese Sentence Compression Model Enhanced with Self-Organizing Map

10、Measuring Association Between Labels and Free-Text Rationales

https://arxiv.org/abs/2010.12762

11、A Root of a Problem: Optimizing Single-Root Dependency Parsing

12、Is Everything in Order? A Simple Way to Order Sentences

https://arxiv.org/abs/2104.07064

13、Narrative Theory for Computational Narrative Understanding

https://www.semanticscholar.org/paper/Narrative-Theory-for-Computational-Narrative-Piper/44d2dc8e5d821c60c0adf531a55678ddf4658fcc

14、STaCK: Sentence Ordering with Temporal Commonsense Knowledge

https://arxiv.org/abs/2109.02247

15、Efficient-FedRec: Efficient Federated Learning Framework for Privacy-Preserving News Recommendation

https://arxiv.org/abs/2109.05446

16、Back to Square One: Artifact Detection, Training and Commonsense Disentanglement in the Winograd Schema

https://arxiv.org/abs/2104.08161

17、Efficient Sampling of Dependency Structures

https://arxiv.org/abs/2109.06521

18、Fine-grained Entity Typing without Knowledge Base

19、Fix-Filter-Fix: Intuitively Connect Any Models for Effective Bug Fixing

20、Minimal Supervision for Morphological Inflection

https://arxiv.org/abs/2104.08512

21、$k$Folden: $k$-Fold Ensemble for Out-Of-Distribution Detection

https://arxiv.org/abs/2108.12731

22、Revisiting Tri-training of Dependency Parsers

https://arxiv.org/abs/2109.08122

23、Knowledge Base Completion Meets Transfer Learning

https://arxiv.org/abs/2108.13073

24、Leveraging Capsule Routing to Associate Knowledge with Medical Literature Hierarchically

25、Wasserstein Selective Transfer Learning for Cross-domain Text Mining

26、Foreseeing the Benefits of Incidental Supervision

https://arxiv.org/abs/2006.05500

27、Neuralizing Regular Expressions for Slot Filling

https://faculty.sist.shanghaitech.edu.cn/faculty/tukw/emnlp21.pdf

28、Come hither or go away? Recognising pre-electoral coalition signals in the news

29、When is Wall a Pared and when a Muro? -- Extracting Rules Governing Lexical Selection

https://arxiv.org/abs/2109.06014

30、#HowYouTagTweets: Learning User Hashtagging Preferences via Personalized Topic Attention

31、Signed Coreference Resolution

32、Weakly supervised discourse segmentation for multiparty oral conversations

33、Neural Attention-Aware Hierarchical Topic Model

https://arxiv.org/abs/2110.07161

34、Rationales for Sequential Predictions

https://arxiv.org/abs/2109.06387

35、Active Learning by Acquiring Contrastive Examples

https://arxiv.org/abs/2109.03764

36、Aligning Multidimensional Worldviews and Discovering Ideological Differences

37、Revisiting the Uniform Information Density Hypothesis

https://arxiv.org/abs/2109.11635

38、Conditional Poisson Stochastic Beams

https://arxiv.org/abs/2109.11034

39、Inducing Transformer's Compositional Generalization Ability via Auxiliary Sequence Prediction Tasks

https://arxiv.org/abs/2109.15256

40、Improved Latent Tree Induction with Distant Supervision via Span Constraints

https://arxiv.org/abs/2109.05112

41、Semantic Novelty Detection in Natural Language Descriptions

42、How Do Neural Sequence Models Generalize? Local and Global Cues for Out-of-Distribution Prediction

https://underline.io/lecture/37865-how-do-neural-sequence-models-generalizequestion-local-and-global-cues-for-out-of-distribution-prediction

43、Synthetic Textual Features for the Large-Scale Detection of Basic-level Categories in English and Mandarin

44、Do Transformer Modifications Transfer Across Implementations and Applications?

45、Extracting Material Property Measurement Data from Scientific Articles

46、Paired Examples as Indirect Supervision in Latent Decision Models

https://arxiv.org/abs/2104.01759

47、Competency Problems: On Finding and Removing Artifacts in Language Data

https://arxiv.org/abs/2104.08646

48、Detecting Contact-Induced Semantic Shifts: What Can Embedding-Based Methods Do in Practice?

49、Jump-Starting Item Parameters for Adaptive Language Tests

50、Human Rationales as Attribution Priors for Explainable Stance Detection

https://underline.io/lecture/37967-human-rationales-as-attribution-priors-for-explainable-stance-detection

51、Linguistic Dependencies and Statistical Dependence

https://arxiv.org/abs/2104.08685

52、Sequential Cross-Document Coreference Resolution

https://arxiv.org/abs/2104.08413

53、Detecting Health Advice in Medical Research Literature

54、Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing

https://arxiv.org/abs/2110.15534

55、Evaluating Scholarly Impact: Towards Content-Aware Bibliometrics

56、Long-Range Modeling of Source Code Files with eWASH: Extended Window Access by Syntax Hierarchy

https://arxiv.org/abs/2109.08780

57、Modeling Disclosive Transparency in NLP Application Descriptions

https://arxiv.org/abs/2101.00433

58、MS-Mentions: Consistently Annotating Entity Mentions in Materials Science Procedural Text

59、Natural Language Processing Meets Quantum Physics: A Survey and Categorization

60、How to Leverage Multimodal EHR Data for Better Medical Predictions?


Compiling this list was no small effort, so please follow, share, and like. You are also welcome to follow my Zhihu account 「刘聪NLP」; if you have questions, feel free to add me on WeChat for a private chat.

Previous posts



EMNLP 2021 Conference PaperList

A Summary of Commonly Used Pre-trained Language Models (PTMs)

BlockShuffle: A Trick That Speeds Up Model Training by 20%

SF at EMNLP 2021: A Span Fine-tuning Method for Pre-trained Language Models

AEDA at EMNLP 2021: A Simpler Data Augmentation Technique for Text Classification

A Look Back at the BART Model

Better to teach someone to fish than to give them a fish

ACL 2021 Main Conference Papers: Summary and Classification

ACL 2021 Findings Papers: Summary and Classification
