Xipeng Qiu

Professor, School of Computer Science, Fudan University

 


Contact

Computer Building, No. 825, Zhangheng Road, Shanghai, China

    A more comprehensive publication list is available on Google Scholar.

    [2020]

  1. GenWiki: A Dataset of 1.3 Million Content-Sharing Text and Graphs for Unsupervised Graph-to-Text Generation, COLING, 2020. [BibTeX][PDF][Abstract]
    Zhijing Jin, Qipeng Guo, Xipeng Qiu, Zheng Zhang.
    BibTeX:
    @inproceedings{jin-etal-2020-genwiki,
      author = {Jin, Zhijing and Guo, Qipeng and Qiu, Xipeng and Zhang, Zheng},
      title = {GenWiki: A Dataset of 1.3 Million Content-Sharing Text and Graphs for Unsupervised Graph-to-Text Generation},
      booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
      year = {2020},
      pages = {2398--2409}, 
      url = {https://www.aclweb.org/anthology/2020.coling-main.217}
    }
    
    Abstract: Data collection for the knowledge graph-to-text generation is expensive. As a result, research on unsupervised models has emerged as an active field recently. However, most unsupervised models have to use non-parallel versions of existing small supervised datasets, which largely constrain their potential. In this paper, we propose a large-scale, general-domain dataset, GenWiki. Our unsupervised dataset has 1.3M text and graph examples, respectively. With a human-annotated test set, we provide this new benchmark dataset for future research on unsupervised text generation from knowledge graphs.
  2. CoLAKE: Contextualized Language and Knowledge Embedding, COLING, 2020. [BibTeX][PDF][Abstract]
    Tianxiang Sun, Yunfan Shao, Xipeng Qiu, Qipeng Guo, Yaru Hu, Xuanjing Huang, Zheng Zhang.
    BibTeX:
    @inproceedings{sun-etal-2020-colake,
      author = {Sun, Tianxiang and Shao, Yunfan and Qiu, Xipeng and Guo, Qipeng and Hu, Yaru and Huang, Xuanjing and Zhang, Zheng},
      title = {CoLAKE: Contextualized Language and Knowledge Embedding},
      booktitle = {Proceedings of the 28th International Conference on Computational Linguistics},
      year = {2020},
      pages = {3660--3670}, 
      url = {https://www.aclweb.org/anthology/2020.coling-main.327}
    }
    
    Abstract: With the emerging branch of incorporating factual knowledge into pre-trained language models such as BERT, most existing models consider shallow, static, and separately pre-trained entity embeddings, which limits the performance gains of these models. Few works explore the potential of deep contextualized knowledge representation when injecting knowledge. In this paper, we propose the Contextualized Language and Knowledge Embedding (CoLAKE), which jointly learns contextualized representation for both language and knowledge with the extended MLM objective. Instead of injecting only entity embeddings, CoLAKE extracts the knowledge context of an entity from large-scale knowledge bases. To handle the heterogeneity of knowledge context and language context, we integrate them in a unified data structure, word-knowledge graph (WK graph). CoLAKE is pre-trained on large-scale WK graphs with the modified Transformer encoder. We conduct experiments on knowledge-driven tasks, knowledge probing tasks, and language understanding tasks. Experimental results show that CoLAKE outperforms previous counterparts on most of the tasks. Besides, CoLAKE achieves surprisingly high performance on our synthetic task called word-knowledge graph completion, which shows the superiority of simultaneously contextualizing language and knowledge representation.
  3. Text Information Aggregation with Centrality Attention, SCIENCE CHINA Information Sciences (SCIS), 2020. [BibTeX][DOI][PDF]
    Jingjing Gong, Hang Yan, Yining Zheng, Qipeng Guo, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @article{gong2020text,
      author = {Jingjing Gong and Hang Yan and Yining Zheng and Qipeng Guo and Xipeng Qiu and Xuanjing Huang},
      title = {Text Information Aggregation with Centrality Attention},
      journal = {SCIENCE CHINA Information Sciences},
      year = {2020},
      doi = {10.1007/s11432-019-1519-6}
    }
    
  4. Syntax-Guided Text Generation via Graph Neural Network, SCIENCE CHINA Information Sciences (SCIS), 2020. [BibTeX][DOI][PDF]
    Qipeng Guo, Xipeng Qiu, Xiangyang Xue, Zheng Zhang.
    BibTeX:
    @article{guo2020syntax-guided,
      author = {Qipeng Guo and Xipeng Qiu and Xiangyang Xue and Zheng Zhang},
      title = {Syntax-Guided Text Generation via Graph Neural Network},
      journal = {SCIENCE CHINA Information Sciences},
      year = {2020},
      doi = {10.1007/s11432-019-2740-1}
    }
    
  5. CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems, EMNLP Findings, 2020. [BibTeX][PDF][Abstract]
    Yiran Chen, Pengfei Liu, Ming Zhong, Zi-Yi Dou, Danqing Wang, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{chen-etal-2020-cdevalsumm,
      author = {Chen, Yiran and Liu, Pengfei and Zhong, Ming and Dou, Zi-Yi and Wang, Danqing and Qiu, Xipeng and Huang, Xuanjing},
      title = {CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
      year = {2020},
      pages = {3679--3691}, 
      url = {https://www.aclweb.org/anthology/2020.findings-emnlp.329}
    }
    
    Abstract: Neural network-based models augmented with unsupervised pre-trained knowledge have achieved impressive performance on text summarization. However, most existing evaluation methods are limited to an in-domain setting, where summarizers are trained and evaluated on the same dataset. We argue that this approach can narrow our understanding of the generalization ability for different summarization systems. In this paper, we perform an in-depth analysis of characteristics of different datasets and investigate the performance of different summarization models under a cross-dataset setting, in which a summarizer trained on one corpus will be evaluated on a range of out-of-domain corpora. A comprehensive study of 11 representative summarization systems on 5 datasets from different domains reveals the effect of model architectures and generation ways (i.e. abstractive and extractive) on model generalization ability. Further, experimental results shed light on the limitations of existing summarizers. Brief introduction and supplementary code can be found in https://github.com/zide05/CDEvalSumm.
  6. BERT-ATTACK: Adversarial Attack Against BERT Using BERT, EMNLP, 2020. [BibTeX][PDF][Abstract]
    Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, Xipeng Qiu.
    BibTeX:
    @inproceedings{li-etal-2020-bert-attack,
      author = {Li, Linyang and Ma, Ruotian and Guo, Qipeng and Xue, Xiangyang and Qiu, Xipeng},
      title = {BERT-ATTACK: Adversarial Attack Against BERT Using BERT},
      booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2020},
      pages = {6193--6202}, 
      url = {https://www.aclweb.org/anthology/2020.emnlp-main.500}
    }
    
    Abstract: Adversarial attacks for discrete data (such as texts) have been proved significantly more challenging than continuous data (such as images) since it is difficult to generate adversarial samples with gradient-based methods. Current successful attack methods for texts usually adopt heuristic replacement strategies on the character or word level, which remains challenging to find the optimal solution in the massive space of possible combinations of replacements while preserving semantic consistency and language fluency. In this paper, we propose BERT-Attack, a high-quality and effective method to generate adversarial samples using pre-trained masked language models exemplified by BERT. We turn BERT against its fine-tuned models and other deep neural models in downstream tasks so that we can successfully mislead the target models to predict incorrectly. Our method outperforms state-of-the-art attack strategies in both success rate and perturb percentage, while the generated adversarial samples are fluent and semantically preserved. Also, the cost of calculation is low, thus possible for large-scale generations. The code is available at https://github.com/LinyangLee/BERT-Attack.
  7. Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information, EMNLP, 2020. [BibTeX][Abstract]
    Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, Lei Li.
    BibTeX:
    @inproceedings{lin2020mRASP,
      author = {Lin, Zehui and Pan, Xiao and Wang, Mingxuan and Qiu, Xipeng and Feng, Jiangtao and Zhou, Hao and Li, Lei},
      title = {Pre-training Multilingual Neural Machine Translation by Leveraging Alignment Information},
      booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
      year = {2020},
      pages = {2649--2663}, 
      url = {https://www.aclweb.org/anthology/2020.emnlp-main.210}
    }
    
    Abstract: We investigate the following question for machine translation (MT): can we develop a single universal MT model to serve as the common seed and obtain derivative and improved models on arbitrary language pairs? We propose mRASP, an approach to pre-train a universal multilingual neural machine translation model. Our key idea in mRASP is its novel technique of random aligned substitution, which brings words and phrases with similar meanings across multiple languages closer in the representation space. We pre-train a mRASP model on 32 language pairs jointly with only public datasets. The model is then fine-tuned on downstream language pairs to obtain specialized MT models. We carry out extensive experiments on 42 translation directions across diverse settings, including low-, medium-, and rich-resource, as well as transferring to exotic language pairs. Experimental results demonstrate that mRASP achieves significant performance improvement compared to directly training on those target pairs. It is the first time to verify that multiple low-resource language pairs can be utilized to improve rich-resource MT. Surprisingly, mRASP is even able to improve the translation quality on exotic languages that never occur in the pre-training corpus. Code, data, and pre-trained models are available at https://github.com/linzehui/mRASP.
  8. A Concise Model for Multi-Criteria Chinese Word Segmentation with Transformer Encoder, EMNLP Findings, 2020. [BibTeX][PDF][Abstract]
    Xipeng Qiu, Hengzhi Pei, Hang Yan, Xuanjing Huang.
    BibTeX:
    @inproceedings{qiu-etal-2020-concise,
      author = {Qiu, Xipeng and Pei, Hengzhi and Yan, Hang and Huang, Xuanjing},
      title = {A Concise Model for Multi-Criteria Chinese Word Segmentation with Transformer Encoder},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
      year = {2020},
      pages = {2887--2897}, 
      url = {https://www.aclweb.org/anthology/2020.findings-emnlp.260}
    }
    
    Abstract: Multi-criteria Chinese word segmentation (MCCWS) aims to exploit the relations among the multiple heterogeneous segmentation criteria and further improve the performance of each single criterion. Previous work usually regards MCCWS as different tasks, which are learned together under the multi-task learning framework. In this paper, we propose a concise but effective unified model for MCCWS, which is fully-shared for all the criteria. By leveraging the powerful ability of the Transformer encoder, the proposed unified model can segment Chinese text according to a unique criterion-token indicating the output criterion. Besides, the proposed unified model can segment both simplified and traditional Chinese and has an excellent transfer capability. Experiments on eight datasets with different criteria show that our model outperforms our single-criterion baseline model and other multi-criteria models. Source codes of this paper are available on Github.
  9. BERT for Monolingual and Cross-Lingual Reverse Dictionary, EMNLP Findings, 2020. [BibTeX][PDF][Abstract]
    Hang Yan, Xiaonan Li, Xipeng Qiu, Bocao Deng.
    BibTeX:
    @inproceedings{yan-etal-2020-bert,
      author = {Yan, Hang and Li, Xiaonan and Qiu, Xipeng and Deng, Bocao},
      title = {BERT for Monolingual and Cross-Lingual Reverse Dictionary},
      booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020},
      year = {2020},
      pages = {4329--4338}, 
      url = {https://www.aclweb.org/anthology/2020.findings-emnlp.388}
    }
    
    Abstract: Reverse dictionary is the task to find the proper target word given the word description. In this paper, we tried to incorporate BERT into this task. However, since BERT is based on the byte-pair-encoding (BPE) subword encoding, it is nontrivial to make BERT generate a word given the description. We propose a simple but effective method to make BERT generate the target word for this specific task. Besides, the cross-lingual reverse dictionary is the task to find the proper target word described in another language. Previous models have to keep two different word embeddings and learn to align these embeddings. Nevertheless, by using the Multilingual BERT (mBERT), we can efficiently conduct the cross-lingual reverse dictionary with one subword embedding, and the alignment between languages is not necessary. More importantly, mBERT can achieve remarkable cross-lingual reverse dictionary performance even without the parallel corpus, which means it can conduct the cross-lingual reverse dictionary with only corresponding monolingual data. Code is publicly available at https://github.com/yhcc/BertForRD.git.
  10. A Graph-based Model for Joint Chinese Word Segmentation and Dependency Parsing, Transactions of the Association for Computational Linguistics (TACL), Vol. 8, pp. 78–92, 2020. [BibTeX][DOI][PDF]
    Hang Yan, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @article{yan2020graph,
      author = {Yan, Hang and Qiu, Xipeng and Huang, Xuanjing},
      title = {A Graph-based Model for Joint Chinese Word Segmentation and Dependency Parsing},
      journal = {Transactions of the Association for Computational Linguistics},
      year = {2020},
      volume = {8},
      pages = {78--92},
      doi = {10.1162/tacl_a_00301}
    }
    
  11. Pre-trained Models for Natural Language Processing: A Survey, SCIENCE CHINA Technological Sciences (SCTS), Science China Press, 2020. [BibTeX][DOI][PDF]
    Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang.
    BibTeX:
    @article{qiu2020:scts-ptms,
      author = {Xipeng Qiu and Tianxiang Sun and Yige Xu and Yunfan Shao and Ning Dai and Xuanjing Huang},
      title = {Pre-trained Models for Natural Language Processing: A Survey},
      journal = {SCIENCE CHINA Technological Sciences},
      publisher = {Science China Press},
      year = {2020},
      doi = {10.1007/s11431-020-1647-3}
    }
    
  12. Chinese word segmentation via BiLSTM+Semi-CRF with relay node, JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY (JCST), Vol. 35(5), pp. 1115–1126, September 2020. [BibTeX][DOI][PDF]
    Nuo Qun, Hang Yan, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @article{yan2020jcst-semicrf,
      author = {Nuo Qun and Hang Yan and Xipeng Qiu and Xuanjing Huang},
      title = {Chinese word segmentation via BiLSTM+Semi-CRF with relay node},
      journal = {JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY},
      year = {2020},
      volume = {35},
      number = {5},
      pages = {1115--1126},
      doi = {10.1007/s11390-020-9576-4}
    }
    
  13. FLAT: Chinese NER Using Flat-Lattice Transformer, ACL, 2020. [BibTeX][PDF][Code]
    Xiaonan Li, Hang Yan, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{li-etal-2020-flat,
      author = {Li, Xiaonan and Yan, Hang and Qiu, Xipeng and Huang, Xuanjing},
      title = {FLAT: Chinese NER Using Flat-Lattice Transformer},
      booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
      year = {2020},
      pages = {6836--6842}, 
      url = {https://www.aclweb.org/anthology/2020.acl-main.611}
    }
    
  14. Improving Image Captioning with Better Use of Caption, ACL, 2020. [BibTeX][PDF][Code][Abstract]
    Zhan Shi, Xu Zhou, Xipeng Qiu, Xiaodan Zhu.
    BibTeX:
    @inproceedings{shi-etal-2020-improving,
      author = {Shi, Zhan and Zhou, Xu and Qiu, Xipeng and Zhu, Xiaodan},
      title = {Improving Image Captioning with Better Use of Caption},
      booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
      year = {2020},
      pages = {7454--7464}, 
      url = {https://www.aclweb.org/anthology/2020.acl-main.664}
    }
    
    Abstract: Image captioning is a multimodal problem that has drawn extensive attention in both the natural language processing and computer vision community. In this paper, we present a novel image captioning architecture to better explore semantics available in captions and leverage that to enhance both image representation and caption generation. Our models first construct caption-guided visual relationship graphs that introduce beneficial inductive bias using weakly supervised multi-instance learning. The representation is then enhanced with neighbouring and contextual nodes with their textual and visual features. During generation, the model further incorporates visual relationships using multi-task learning for jointly predicting word and object/predicate tag sequences. We perform extensive experiments on the MSCOCO dataset, showing that the proposed framework significantly outperforms the baselines, resulting in the state-of-the-art performance under a wide range of evaluation metrics. The code of our paper has been made publicly available.
  15. Heterogeneous Graph Neural Networks for Extractive Document Summarization, ACL, 2020. [BibTeX][PDF][Code]
    Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{wang-etal-2020-heterogeneous,
      author = {Wang, Danqing and Liu, Pengfei and Zheng, Yining and Qiu, Xipeng and Huang, Xuanjing},
      title = {Heterogeneous Graph Neural Networks for Extractive Document Summarization},
      booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
      year = {2020},
      pages = {6209--6219}, 
      url = {https://www.aclweb.org/anthology/2020.acl-main.553}
    }
    
  16. Extractive Summarization as Text Matching, ACL, 2020. [BibTeX][PDF][Code]
    Ming Zhong, Pengfei Liu, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{zhong-etal-2020-extractive,
      author = {Zhong, Ming and Liu, Pengfei and Chen, Yiran and Wang, Danqing and Qiu, Xipeng and Huang, Xuanjing},
      title = {Extractive Summarization as Text Matching},
      booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
      year = {2020},
      pages = {6197--6208}, 
      url = {https://www.aclweb.org/anthology/2020.acl-main.552}
    }
    
  17. Multi-Scale Self-Attention for Text Classification, AAAI, 2020. [BibTeX][PDF]
    Qipeng Guo, Xipeng Qiu, Pengfei Liu, Xiangyang Xue, Zheng Zhang.
    BibTeX:
    @inproceedings{guo2020multiscale,
      author = {Qipeng Guo and Xipeng Qiu and Pengfei Liu and Xiangyang Xue and Zheng Zhang},
      title = {Multi-Scale Self-Attention for Text Classification},
      booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
      year = {2020}, 
      url = {https://arxiv.org/abs/1912.00544}
    }
    
  18. Learning Sparse Sharing Architectures for Multiple Tasks, AAAI, 2020. [BibTeX][PDF][Code]
    Tianxiang Sun, Yunfan Shao, Xiaonan Li, Pengfei Liu, Hang Yan, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{sun2020sparsing,
      author = {Tianxiang Sun and Yunfan Shao and Xiaonan Li and Pengfei Liu and Hang Yan and Xipeng Qiu and Xuanjing Huang},
      title = {Learning Sparse Sharing Architectures for Multiple Tasks},
      booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
      year = {2020}
    }
    

    [2019]

  19. How to Fine-Tune BERT for Text Classification?, CCL (Best Paper Award), 2019. [BibTeX][PDF]
    Chi Sun, Xipeng Qiu, Yige Xu, Xuanjing Huang.
    BibTeX:
    @inproceedings{sun2019finetune,
      author = {Chi Sun and Xipeng Qiu and Yige Xu and Xuanjing Huang},
      title = {How to Fine-Tune BERT for Text Classification?},
      booktitle = {Proceedings of China National Conference on Computational Linguistics},
      year = {2019},
      pages = {194--206}, 
      url = {https://arxiv.org/abs/1905.05583}
    }
    
  20. Star-Transformer, NAACL, 2019. [BibTeX][PDF]
    Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Xiangyang Xue, Zheng Zhang.
    BibTeX:
    @inproceedings{guo2019star,
      author = {Guo, Qipeng and Qiu, Xipeng and Liu, Pengfei and Shao, Yunfan and Xue, Xiangyang and Zhang, Zheng},
      title = {Star-Transformer},
      booktitle = {Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies},
      year = {2019},
      pages = {1315--1325}, 
      url = {https://www.aclweb.org/anthology/N19-1133}
    }
    
  21. VCWE: Visual Character-Enhanced Word Embeddings, NAACL, 2019. [BibTeX][PDF]
    Chi Sun, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{sun2019vcwe,
      author = {Sun, Chi and Qiu, Xipeng and Huang, Xuanjing},
      title = {VCWE: Visual Character-Enhanced Word Embeddings},
      booktitle = {Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
      year = {2019},
      pages = {2710--2719}, 
      url = {https://www.aclweb.org/anthology/N19-1277}
    }
    
  22. Style Transformer: Unpaired Text Style Transfer without Disentangled Latent Representation, ACL, 2019. [BibTeX][PDF][Code]
    Ning Dai, Jianze Liang, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{dai2019style,
      author = {Ning Dai and Jianze Liang and Xipeng Qiu and Xuanjing Huang},
      title = {Style Transformer: Unpaired Text Style Transfer without Disentangled Latent Representation},
      booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
      year = {2019},
      pages = {5997--6007}, 
      url = {https://www.aclweb.org/anthology/P19-1601/}
    }
    
  23. Searching for Effective Neural Extractive Summarization: What Works and What's Next, ACL, 2019. [BibTeX][PDF][Code]
    Ming Zhong, Pengfei Liu, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{zhong2019sum,
      author = {Ming Zhong and Pengfei Liu and Xipeng Qiu and Xuanjing Huang},
      title = {Searching for Effective Neural Extractive Summarization: What Works and What's Next},
      booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
      year = {2019},
      pages = {1049--1058}, 
      url = {https://www.aclweb.org/anthology/P19-1100/}
    }
    
  24. Switch-LSTMs for Multi-Criteria Chinese Word Segmentation, AAAI, 2019. [BibTeX][PDF]
    Jingjing Gong, Xinchi Chen, Tao Gui, Xipeng Qiu.
    BibTeX:
    @inproceedings{gong2019switch,
      author = {Jingjing Gong and Xinchi Chen and Tao Gui and Xipeng Qiu},
      title = {Switch-LSTMs for Multi-Criteria Chinese Word Segmentation},
      booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
      year = {2019},
      pages = {6457--6464}, 
      url = {https://arxiv.org/abs/1812.08033}
    }
    
  25. Learning Multi-Task Communication with Message Passing for Sequence Learning, AAAI, 2019. [BibTeX][PDF]
    Pengfei Liu, Jie Fu, Yue Dong, Xipeng Qiu, Jackie Chi Kit Cheung.
    BibTeX:
    @inproceedings{liu2019multi,
      author = {Pengfei Liu and Jie Fu and Yue Dong and Xipeng Qiu and Jackie Chi Kit Cheung},
      title = {Learning Multi-Task Communication with Message Passing for Sequence Learning},
      booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
      year = {2019},
      pages = {4360--4367}, 
      url = {https://aaai.org/ojs/index.php/AAAI/article/view/4346}
    }
    
  26. Low-rank and Locality Constrained Self-Attention for Sequence Modeling, IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), Vol. 27(12), pp. 2213–2222, December 2019. [BibTeX][DOI]
    Qipeng Guo, Xipeng Qiu, Xiangyang Xue, Zheng Zhang.
    BibTeX:
    @article{guo2019low,
      author = {Guo, Qipeng and Qiu, Xipeng and Xue, Xiangyang and Zhang, Zheng},
      title = {Low-rank and Locality Constrained Self-Attention for Sequence Modeling},
      journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing},
      year = {2019},
      volume = {27},
      number = {12},
      pages = {2213--2222},
      doi = {10.1109/TASLP.2019.2944078}
    }
    
  27. Sequence Labeling with Deep Gated Dual Path CNN, IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), Vol. 27(12), pp. 2326–2335, December 2019. [BibTeX][DOI]
    Lujun Zhao, Xipeng Qiu, Qi Zhang, Xuanjing Huang.
    BibTeX:
    @article{zhao2019sequence,
      author = {Zhao, Lujun and Qiu, Xipeng and Zhang, Qi and Huang, Xuanjing},
      title = {Sequence Labeling with Deep Gated Dual Path CNN},
      journal = {IEEE/ACM Transactions on Audio, Speech, and Language Processing},
      year = {2019},
      volume = {27},
      number = {12},
      pages = {2326--2335},
      doi = {10.1109/TASLP.2019.2944563}
    }
    

    [2018]

  28. Information Aggregation via Dynamic Routing for Sequence Encoding, COLING, 2018. [BibTeX][PDF]
    Jingjing Gong, Xipeng Qiu, Shaojing Wang, Xuanjing Huang.
    BibTeX:
    @inproceedings{gong2018information,
      author = {Jingjing Gong and Xipeng Qiu and Shaojing Wang and Xuanjing Huang},
      title = {Information Aggregation via Dynamic Routing for Sequence Encoding},
      booktitle = {Proceedings of the 27th International Conference on Computational Linguistics},
      year = {2018},
      url = {https://arxiv.org/abs/1806.01501}
    }
    
  29. Convolutional Interaction Network for Natural Language Inference, EMNLP, 2018. [BibTeX][PDF]
    Jingjing Gong, Xipeng Qiu, Xinchi Chen, Dong Liang, Xuanjing Huang.
    BibTeX:
    @inproceedings{gong2018convolutional,
      author = {Gong, Jingjing and Qiu, Xipeng and Chen, Xinchi and Liang, Dong and Huang, Xuanjing},
      title = {Convolutional Interaction Network for Natural Language Inference},
      booktitle = {Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing},
      year = {2018},
      pages = {1576--1585},
      url = {http://aclweb.org/anthology/D18-1186}
    }
    
  30. Same Representation, Different Attentions: Shareable Sentence Representation Learning from Multiple Tasks, IJCAI, 2018. [BibTeX][PDF]
    Renjie Zheng, Junkun Chen, Xipeng Qiu.
    BibTeX:
    @inproceedings{zheng2018same,
      author = {Zheng, Renjie and Chen, Junkun and Qiu, Xipeng},
      title = {Same Representation, Different Attentions: Shareable Sentence Representation Learning from Multiple Tasks},
      booktitle = {Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence},
      year = {2018},
      url = {https://arxiv.org/abs/1804.08139}
    }
    
  31. Toward Diverse Text Generation with Inverse Reinforcement Learning, IJCAI, 2018. [BibTeX][PDF]
    Zhan Shi, Xinchi Chen, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{shi2018towards,
      author = {Shi, Zhan and Chen, Xinchi and Qiu, Xipeng and Huang, Xuanjing},
      title = {Towards Diverse Text Generation with Inverse Reinforcement Learning},
      booktitle = {Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence},
      year = {2018},
      url = {https://arxiv.org/abs/1804.11258}
    }
    
  32. Incorporating Discriminator in Sentence Generation: a Gibbs Sampling Method, AAAI, 2018. [BibTeX][PDF]
    Jinyue Su, Jiacheng Xu, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{su2018incorporating,
      author = {Su, Jinyue and Xu, Jiacheng and Qiu, Xipeng and Huang, Xuanjing},
      title = {Incorporating Discriminator in Sentence Generation: a Gibbs Sampling Method},
      booktitle = {Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence},
      year = {2018},
      url = {https://arxiv.org/abs/1802.08970}
    }
    
  33. Meta Multi-Task Learning for Sequence Modeling, AAAI, 2018. [BibTeX][PDF]
    Junkun Chen, Xipeng Qiu, Pengfei Liu, Xuanjing Huang.
    BibTeX:
    @inproceedings{chen2018meta,
      author = {Chen, Junkun and Qiu, Xipeng and Liu, Pengfei and Huang, Xuanjing},
      title = {Meta Multi-Task Learning for Sequence Modeling},
      booktitle = {Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence},
      year = {2018},
      url = {https://arxiv.org/abs/1802.08969}
    }
    

    [2017]

  34. Adaptive Semantic Compositionality for Sentence Modelling, IJCAI, 2017. [BibTeX][PDF]
    Pengfei Liu, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{liu2017adaptive,
      author = {Pengfei Liu and Xipeng Qiu and Xuanjing Huang},
      title = {Adaptive Semantic Compositionality for Sentence Modelling},
      booktitle = {Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17},
      year = {2017},
      pages = {4061--4067},
      url = {https://www.ijcai.org/proceedings/2017/0567.pdf}
    }
    
  35. A Feature-Enriched Neural Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging, IJCAI, 2017. [BibTeX][PDF]
    Xinchi Chen, Xipeng Qiu, Xuanjing Huang.
    BibTeX:
    @inproceedings{chen2017feature,
      author = {Xinchi Chen and Xipeng Qiu and Xuanjing Huang},
      title = {A Feature-Enriched Neural Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging},
      booktitle = {Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17},
      year = {2017},
      pages = {3960--3966},
      url = {https://www.ijcai.org/proceedings/2017/0553.pdf}
    }
    
  36. Knowledge Graph Representation with Jointly Structural and Textual Encoding, IJCAI, 2017. [BibTeX][PDF]
    Jiacheng Xu, Xipeng Qiu, Kan Chen, Xuanjing Huang.
    BibTeX:
    @inproceedings{xu2017knowledge,
      author = {Jiacheng Xu and Xipeng Qiu and Kan Chen and Xuanjing Huang},
      title = {Knowledge Graph Representation with Jointly Structural and Textual Encoding},
      booktitle = {Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence},
      year = {2017},
      pages = {1318--1324},
      url = {https://www.ijcai.org/proceedings/2017/0183.pdf}
    }
    
  77. Dynamic Compositional Neural Networks over Tree Structure, IJCAI, 2017. [BibTeX] [DOI][PDF]
    Pengfei Liu, Xipeng Qiu, Xuanjing Huang.
  78. BibTeX:
    @inproceedings{liu2017dynamic,
      author = {Pengfei Liu and Xipeng Qiu and Xuanjing Huang},
      title = {Dynamic Compositional Neural Networks over Tree Structure},
      booktitle = {Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence},
      year = {2017},
      pages = {4054--4060},
      url = {https://www.ijcai.org/proceedings/2017/0566.pdf},
      doi = {https://doi.org/10.24963/ijcai.2017/566}
    }
    
  79. Adversarial Multi-task Learning for Text Classification, ACL, 2017. [BibTeX][PDF]
    Pengfei Liu, Xipeng Qiu, Xuanjing Huang.
  80. BibTeX:
    @inproceedings{liu2017adversarial,
      author = {Pengfei Liu and Xipeng Qiu and Xuanjing Huang},
      title = {Adversarial Multi-task Learning for Text Classification},
      booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
      year = {2017},
      pages = {1--10},
      url = {http://aclweb.org/anthology/P/P17/P17-1001.pdf}
    }
    
  81. Adversarial Multi-Criteria Learning for Chinese Word Segmentation, ACL (Outstanding Paper Award), 2017. [BibTeX][PDF]
    Xinchi Chen, Zhan Shi, Xipeng Qiu, Xuanjing Huang.
  82. BibTeX:
    @inproceedings{chen2017adversarial,
      author = {Xinchi Chen and Zhan Shi and Xipeng Qiu and Xuanjing Huang},
      title = {Adversarial Multi-Criteria Learning for Chinese Word Segmentation},
      booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
      year = {2017},
      pages = {1193--1203},
      url = {http://aclweb.org/anthology/P/P17/P17-1110.pdf}
    }
    
  83. Idiom-Aware Compositional Distributed Semantics, EMNLP, 2017. [BibTeX][PDF]
    Pengfei Liu, Kaiyu Qian, Xipeng Qiu, Xuanjing Huang.
  84. BibTeX:
    @inproceedings{liu2017idiom,
      author = {Liu, Pengfei and Qian, Kaiyu and Qiu, Xipeng and Huang, Xuanjing},
      title = {Idiom-Aware Compositional Distributed Semantics},
      booktitle = {Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing},
      year = {2017},
      pages = {1215--1224},
      url = {http://www.aclweb.org/anthology/D17-1125}
    }
    

    [2016]

  85. Cached Long Short-Term Memory Neural Networks for Document-Level Sentiment Classification, EMNLP, 2016. [BibTeX][PDF]
    Jiacheng Xu, Danlu Chen, Xipeng Qiu, Xuanjing Huang.
  86. BibTeX:
    @inproceedings{xu2016cached,
      author = {Jiacheng Xu and Danlu Chen and Xipeng Qiu and Xuanjing Huang},
      title = {Cached Long Short-Term Memory Neural Networks for Document-Level Sentiment Classification},
      booktitle = {Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing},
      year = {2016},
      url = {https://aclweb.org/anthology/D16-1172}
    }
    
  87. Analyzing Linguistic Knowledge in Sequential Model of Sentence, EMNLP, 2016. [BibTeX][PDF]
    Peng Qian, Xipeng Qiu, Xuanjing Huang.
  88. BibTeX:
    @inproceedings{qian2016analyzing,
      author = {Peng Qian and Xipeng Qiu and Xuanjing Huang},
      title = {Analyzing Linguistic Knowledge in Sequential Model of Sentence},
      booktitle = {Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing},
      year = {2016},
      url = {https://aclweb.org/anthology/D16-1079}
    }
    
  89. Modelling Interaction of Sentence Pair with Coupled-LSTMs, EMNLP, 2016. [BibTeX][PDF]
    Pengfei Liu, Xipeng Qiu, Yaqian Zhou, Jifan Chen, Xuanjing Huang.
  90. BibTeX:
    @inproceedings{liu2016modelling,
      author = {Liu, Pengfei and Qiu, Xipeng and Zhou, Yaqian and Chen, Jifan and Huang, Xuanjing},
      title = {Modelling Interaction of Sentence Pair with Coupled-LSTMs},
      booktitle = {Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing},
      year = {2016},
      url = {https://aclweb.org/anthology/D16-1176}
    }
    
  91. Deep Multi-Task Learning with Shared Memory, EMNLP, 2016. [BibTeX][PDF]
    Pengfei Liu, Xipeng Qiu, Xuanjing Huang.
  92. BibTeX:
    @inproceedings{liu2016deep-multitask,
      author = {Liu, Pengfei and Qiu, Xipeng and Huang, Xuanjing},
      title = {Deep Multi-Task Learning with Shared Memory},
      booktitle = {Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing},
      year = {2016},
      url = {https://aclweb.org/anthology/D16-1012}
    }
    
  93. Bridging LSTM Architecture and the Neural Dynamics during Reading, IJCAI, 2016. [BibTeX][PDF]
    Peng Qian, Xipeng Qiu, Xuanjing Huang.
  94. BibTeX:
    @inproceedings{qian2016bridge,
      author = {Peng Qian and Xipeng Qiu and Xuanjing Huang},
      title = {Bridging LSTM Architecture and the Neural Dynamics during Reading},
      booktitle = {Proceedings of International Joint Conference on Artificial Intelligence},
      year = {2016},
      url = {https://arxiv.org/abs/1604.06635}
    }
    
  95. Recurrent Neural Network for Text Classification with Multi-Task Learning, IJCAI, 2016. [BibTeX][PDF]
    Pengfei Liu, Xipeng Qiu, Xuanjing Huang.
  96. BibTeX:
    @inproceedings{liu2016recurrent,
      author = {Pengfei Liu and Xipeng Qiu and Xuanjing Huang},
      title = {Recurrent Neural Network for Text Classification with Multi-Task Learning},
      booktitle = {Proceedings of International Joint Conference on Artificial Intelligence},
      year = {2016},
      url = {https://arxiv.org/abs/1605.05101}
    }
    
  97. Investigating Language Universal and Specific in Word Embedding, ACL, 2016. [BibTeX][PDF]
    Peng Qian, Xipeng Qiu, Xuanjing Huang.
  98. BibTeX:
    @inproceedings{qian2016investigating,
      author = {Peng Qian and Xipeng Qiu and Xuanjing Huang},
      title = {Investigating Language Universal and Specific in Word Embedding},
      booktitle = {Proceedings of Annual Meeting of the Association for Computational Linguistics},
      year = {2016},
      url = {http://aclweb.org/anthology/P/P16/P16-1140.pdf}
    }
    
  99. A New Psychometric-inspired Evaluation Metric for Chinese Word Segmentation, ACL, 2016. [BibTeX][PDF]
    Peng Qian, Xipeng Qiu, Xuanjing Huang.
  100. BibTeX:
    @inproceedings{qian2016new,
      author = {Peng Qian and Xipeng Qiu and Xuanjing Huang},
      title = {A New Psychometric-inspired Evaluation Metric for Chinese Word Segmentation},
      booktitle = {Proceedings of Annual Meeting of the Association for Computational Linguistics},
      year = {2016},
      url = {http://aclweb.org/anthology/P/P16/P16-1206.pdf}
    }
    
  101. Deep Fusion LSTMs for Text Semantic Matching, ACL, 2016. [BibTeX][PDF]
    Pengfei Liu, Xipeng Qiu, Jifan Chen, Xuanjing Huang.
  102. BibTeX:
    @inproceedings{liu2016deep,
      author = {Pengfei Liu and Xipeng Qiu and Jifan Chen and Xuanjing Huang},
      title = {Deep Fusion LSTMs for Text Semantic Matching},
      booktitle = {Proceedings of Annual Meeting of the Association for Computational Linguistics},
      year = {2016},
      url = {http://aclweb.org/anthology/P/P16/P16-1098.pdf}
    }
    
  103. Implicit Discourse Relation Detection via a Deep Architecture with Gated Relevance Network, ACL, 2016. [BibTeX][PDF]
    Jifan Chen, Qi Zhang, Pengfei Liu, Xipeng Qiu, Xuanjing Huang.
  104. BibTeX:
    @inproceedings{chen2016implicit,
      author = {Jifan Chen and Qi Zhang and Pengfei Liu and Xipeng Qiu and Xuanjing Huang},
      title = {Implicit Discourse Relation Detection via a Deep Architecture with Gated Relevance Network},
      booktitle = {Proceedings of Annual Meeting of the Association for Computational Linguistics},
      year = {2016},
      url = {http://aclweb.org/anthology/P/P16/P16-1163.pdf}
    }
    

    [2015]

  105. Multi-Timescale Long Short-Term Memory Neural Network for Modelling Sentences and Documents, EMNLP, 2015. [BibTeX][PDF]
    Pengfei Liu, Xipeng Qiu, Xinchi Chen, Shiyu Wu, Xuanjing Huang.
  106. BibTeX:
    @inproceedings{liu2015multitimescale,
      author = {Pengfei Liu and Xipeng Qiu and Xinchi Chen and Shiyu Wu and Xuanjing Huang},
      title = {Multi-Timescale Long Short-Term Memory Neural Network for Modelling Sentences and Documents},
      booktitle = {Proceedings of the Conference on Empirical Methods in Natural Language Processing},
      year = {2015},
      url = {http://www.aclweb.org/anthology/D/D15/D15-1280.pdf}
    }
    
  107. Transition-based Dependency Parsing Using Two Heterogeneous Gated Recursive Neural Networks, EMNLP, 2015. [BibTeX][PDF]
    Xinchi Chen, Yaqian Zhou, Chenxi Zhu, Xipeng Qiu, Xuanjing Huang.
  108. BibTeX:
    @inproceedings{chen2015transition,
      author = {Xinchi Chen and Yaqian Zhou and Chenxi Zhu and Xipeng Qiu and Xuanjing Huang},
      title = {Transition-based Dependency Parsing Using Two Heterogeneous Gated Recursive Neural Networks},
      booktitle = {Proceedings of the Conference on Empirical Methods in Natural Language Processing},
      year = {2015},
      url = {http://www.aclweb.org/anthology/D/D15/D15-1215.pdf}
    }
    
  109. Sentence Modeling with Gated Recursive Neural Network, EMNLP, 2015. [BibTeX][PDF]
    Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Shiyu Wu, Xuanjing Huang.
  110. BibTeX:
    @inproceedings{chen2015sentence,
      author = {Xinchi Chen and Xipeng Qiu and Chenxi Zhu and Shiyu Wu and Xuanjing Huang},
      title = {Sentence Modeling with Gated Recursive Neural Network},
      booktitle = {Proceedings of the Conference on Empirical Methods in Natural Language Processing},
      year = {2015},
      url = {http://www.aclweb.org/anthology/D/D15/D15-1092.pdf}
    }
    
  111. Long Short-Term Memory Neural Networks for Chinese Word Segmentation, EMNLP, 2015. [BibTeX][PDF]
    Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Pengfei Liu, Xuanjing Huang.
  112. BibTeX:
    @inproceedings{chen2015long,
      author = {Xinchi Chen and Xipeng Qiu and Chenxi Zhu and Pengfei Liu and Xuanjing Huang},
      title = {Long Short-Term Memory Neural Networks for Chinese Word Segmentation},
      booktitle = {Proceedings of the Conference on Empirical Methods in Natural Language Processing},
      year = {2015},
      url = {http://www.aclweb.org/anthology/D/D15/D15-1141.pdf}
    }
    
  113. Convolutional Neural Tensor Network Architecture for Community-based Question Answering, IJCAI, 2015. [BibTeX][PDF]
    Xipeng Qiu, Xuanjing Huang.
  114. BibTeX:
    @inproceedings{qiu2015convolutional,
      author = {Xipeng Qiu and Xuanjing Huang},
      title = {Convolutional Neural Tensor Network Architecture for Community-based Question Answering},
      booktitle = {Proceedings of International Joint Conference on Artificial Intelligence},
      year = {2015},
      url = {http://ijcai.org/papers15/Papers/IJCAI15-188.pdf}
    }
    
  115. Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model, IJCAI, 2015. [BibTeX][PDF]
    Pengfei Liu, Xipeng Qiu, Xuanjing Huang.
  116. BibTeX:
    @inproceedings{liu2015learning,
      author = {Pengfei Liu and Xipeng Qiu and Xuanjing Huang},
      title = {Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model},
      booktitle = {Proceedings of International Joint Conference on Artificial Intelligence},
      year = {2015},
      url = {http://ijcai.org/papers15/Papers/IJCAI15-185.pdf}
    }
    
  117. A Re-Ranking Model For Dependency Parser With Recursive Convolutional Neural Network, ACL, 2015. [BibTeX][PDF]
    Chenxi Zhu, Xipeng Qiu, Xinchi Chen, Xuanjing Huang.
  118. BibTeX:
    @inproceedings{zhu2015reranking,
      author = {Chenxi Zhu and Xipeng Qiu and Xinchi Chen and Xuanjing Huang},
      title = {A Re-Ranking Model For Dependency Parser With Recursive Convolutional Neural Network},
      booktitle = {Proceedings of Annual Meeting of the Association for Computational Linguistics},
      year = {2015},
      url = {http://www.aclweb.org/anthology/P/P15/P15-1112.pdf}
    }
    
  119. Gated Recursive Neural Network For Chinese Word Segmentation, ACL, 2015. [BibTeX][PDF]
    Xinchi Chen, Xipeng Qiu, Chenxi Zhu, Xuanjing Huang.
  120. BibTeX:
    @inproceedings{chen2015gated,
      author = {Xinchi Chen and Xipeng Qiu and Chenxi Zhu and Xuanjing Huang},
      title = {Gated Recursive Neural Network For Chinese Word Segmentation},
      booktitle = {Proceedings of Annual Meeting of the Association for Computational Linguistics},
      year = {2015},
      url = {http://www.aclweb.org/anthology/P/P15/P15-1168.pdf}
    }
    

    [2014 and before]

  121. Automatic Corpus Expansion for Chinese Word Segmentation by Exploiting the Redundancy of Web Information, COLING, 2014. [BibTeX][PDF]
    Xipeng Qiu, ChaoChao Huang, Xuanjing Huang.
  122. BibTeX:
    @inproceedings{qiu2014automatic,
      author = {Qiu, Xipeng and Huang, ChaoChao and Huang, Xuanjing},
      title = {Automatic Corpus Expansion for Chinese Word Segmentation by Exploiting the Redundancy of Web Information},
      booktitle = {Proceedings of the 25th International Conference on Computational Linguistics},
      year = {2014},
      pages = {1154--1164},
      url = {http://anthology.aclweb.org/C/C14/C14-1109.pdf}
    }
    
  123. Learning Topical Translation Model for Microblog Hashtag Suggestion, IJCAI, 2013. [BibTeX]
    Zhuoye Ding, Xipeng Qiu, Qi Zhang, Xuanjing Huang.
  124. BibTeX:
    @inproceedings{ding2013learning,
      author = {Ding, Zhuoye and Qiu, Xipeng and Zhang, Qi and Huang, Xuanjing},
      title = {Learning Topical Translation Model for Microblog Hashtag Suggestion},
      booktitle = {Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence},
      year = {2013}
    }
    
  125. Joint Chinese Word Segmentation and POS Tagging on Heterogeneous Annotated Corpora with Multiple Task Learning, EMNLP, 2013. [BibTeX][PDF]
    Xipeng Qiu, Jiayi Zhao, Xuanjing Huang.
  126. BibTeX:
    @inproceedings{qiu2013joint,
      author = {Qiu, Xipeng and Zhao, Jiayi and Huang, Xuanjing},
      title = {Joint Chinese Word Segmentation and POS Tagging on Heterogeneous Annotated Corpora with Multiple Task Learning},
      booktitle = {Proceedings of the Conference on Empirical Methods in Natural Language Processing},
      year = {2013},
      pages = {658--668},
      url = {http://www.aclweb.org/anthology/D/D13/D13-1062.pdf}
    }
    
  127. FudanNLP: A Toolkit for Chinese Natural Language Processing, ACL, 2013. [BibTeX]
    Xipeng Qiu, Qi Zhang, Xuanjing Huang.
  128. BibTeX:
    @inproceedings{Qiu:2013,
      author = {Xipeng Qiu and Qi Zhang and Xuanjing Huang},
      title = {FudanNLP: A Toolkit for Chinese Natural Language Processing},
      booktitle = {Proceedings of Annual Meeting of the Association for Computational Linguistics},
      year = {2013}
    }
    
  129. Recognizing Inference in Texts with Markov Logic Networks, ACM Transactions on Asian Language Information Processing (TALIP), Vol. 11(4), pp. 15:1-15:23, 2012. [BibTeX]
    Xipeng Qiu, Ling Cao, Zhao Liu, Xuanjing Huang.
  130. BibTeX:
    @article{Qiu:2012:RIT:2382593.2382597,
      author = {Qiu, Xipeng and Cao, Ling and Liu, Zhao and Huang, Xuanjing},
      title = {Recognizing Inference in Texts with Markov Logic Networks},
      journal = {ACM Transactions on Asian Language Information Processing},
      year = {2012},
      volume = {11},
      number = {4},
      pages = {15:1--15:23}
    }
    
  131. Part-of-Speech Tagging for Chinese-English Mixed Texts with Dynamic Features, EMNLP-CONLL, 2012. [BibTeX][PDF]
    Jiayi Zhao, Xipeng Qiu, Shu Zhang, Feng Ji, Xuanjing Huang.
  132. BibTeX:
    @inproceedings{zhao2012partofspeech,
      author = {Zhao, Jiayi and Qiu, Xipeng and Zhang, Shu and Ji, Feng and Huang, Xuanjing},
      title = {Part-of-Speech Tagging for Chinese-English Mixed Texts with Dynamic Features},
      booktitle = {Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning},
      year = {2012},
      url = {http://www.aclweb.org/anthology/D/D12/D12-1126}
    }
    
  133. Joint Segmentation and Tagging with Coupled Sequences Labeling, COLING, 2012. [BibTeX][PDF]
    Xipeng Qiu, Feng Ji, Jiayi Zhao, Xuanjing Huang.
  134. BibTeX:
    @inproceedings{Qiu:2012,
      author = {Qiu, Xipeng and Ji, Feng and Zhao, Jiayi and Huang, Xuanjing},
      title = {Joint Segmentation and Tagging with Coupled Sequences Labeling},
      booktitle = {Proceedings of International Conference on Computational Linguistics},
      year = {2012},
      pages = {951--964},
      url = {http://www.aclweb.org/anthology/C12-2093}
    }
    
  135. Hierarchical Text Classification with Latent Concepts, ACL-HLT, 2011. [BibTeX]
    Xipeng Qiu, Xuanjing Huang, Zhao Liu, Jinlong Zhou.
  136. BibTeX:
    @inproceedings{qiu2011hierarchical,
      author = {Xipeng Qiu and Xuanjing Huang and Zhao Liu and Jinlong Zhou},
      title = {Hierarchical Text Classification with Latent Concepts},
      booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies},
      year = {2011}
    }
    
  137. An Effective Feature Selection Method for Text Categorization, PAKDD, 2011. [BibTeX]
    Xipeng Qiu, Jinlong Zhou, Xuanjing Huang.
  138. BibTeX:
    @inproceedings{Qiu:2011a,
      author = {Xipeng Qiu and Jinlong Zhou and Xuanjing Huang},
      title = {An Effective Feature Selection Method for Text Categorization},
      booktitle = {Proceedings of the 15th Pacific-Asia Conference on Knowledge Discovery and Data Mining},
      year = {2011}
    }
    
  139. Detecting Hedge Cues and their Scopes with Average Perceptron, CONLL, 2010. [BibTeX][PDF]
    Feng Ji, Xipeng Qiu, Xuanjing Huang.
  140. BibTeX:
    @inproceedings{ji2010detecting,
      author = {Feng Ji and Xipeng Qiu and Xuanjing Huang},
      title = {Detecting Hedge Cues and their Scopes with Average Perceptron},
      booktitle = {Proceedings of the Fourteenth Conference on Computational Natural Language Learning},
      year = {2010}, 
      url = {https://www.aclweb.org/anthology/W10-3005/}
    }
    
  141. Info-Margin Maximization for Feature Extraction, Pattern Recognition Letters (PRL), Vol. 30, pp. 1516-1522, 2009. [BibTeX]
    Xipeng Qiu, Lide Wu.
  142. BibTeX:
    @article{qiu2009infomargin,
      author = {Xipeng Qiu and Lide Wu},
      title = {Info-Margin Maximization for Feature Extraction},
      journal = {Pattern Recognition Letters},
      year = {2009},
      volume = {30},
      pages = {1516--1522}
    }
    
  143. Hierarchical Multi-Class Text Categorization with Global Margin Maximization, ACL-IJCNLP, 2009. [BibTeX]
    Xipeng Qiu, Wenjun Gao, Xuanjing Huang.
  144. BibTeX:
    @inproceedings{Qiu:2009,
      author = {Qiu, Xipeng and Gao, Wenjun and Huang, Xuanjing},
      title = {Hierarchical Multi-Class Text Categorization with Global Margin Maximization},
      booktitle = {Proceedings of the Joint Conference of the Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP},
      year = {2009},
      pages = {165--168}
    }
    
  145. Two-dimensional nearest neighbor discriminant analysis, Neurocomputing (NeuCom), Vol. 70(13-15), pp. 2572-2575, 2007. [BibTeX]
    Xipeng Qiu, Lide Wu.
  146. BibTeX:
    @article{qiu2007two,
      author = {Xipeng Qiu and Lide Wu},
      title = {Two-dimensional nearest neighbor discriminant analysis},
      journal = {Neurocomputing},
      year = {2007},
      volume = {70},
      number = {13-15},
      pages = {2572--2575}
    }
    
  147. Stepwise Nearest Neighbor Discriminant Analysis, IJCAI, 2005. [BibTeX][PDF]
    Xipeng Qiu, Lide Wu.
  148. BibTeX:
    @inproceedings{qiu2005stepwise,
      author = {Xipeng Qiu and Lide Wu},
      title = {Stepwise Nearest Neighbor Discriminant Analysis},
      booktitle = {Proceedings of the International Joint Conference on Artificial Intelligence},
      year = {2005},
      pages = {829--834}
    }
    
  149. Face Recognition by Stepwise Nonparametric Margin Maximum Criterion, ICCV, 2005. [BibTeX][PDF]
    Xipeng Qiu, Lide Wu.
  150. BibTeX:
    @inproceedings{Qiu:2005c,
      author = {Xipeng Qiu and Lide Wu},
      title = {Face Recognition by Stepwise Nonparametric Margin Maximum Criterion},
      booktitle = {Proceedings of the IEEE International Conference on Computer Vision},
      year = {2005},
      pages = {1567--1572}
    }