Kaili Huang, Thejas Venkatesh, Uma Dingankar, and 9 more authors
In Advances in Information Retrieval: 47th European Conference on Information Retrieval, ECIR 2025, Lucca, Italy, April 6-10, 2025, Proceedings, Part IV
We study how to serve retrieval models, particularly late-interaction retrievers like ColBERT, to many concurrent users under a small budget, where the index may not fit in memory. We present ColBERT-serve, a serving system that applies a memory-mapping strategy to the ColBERT index, reducing RAM usage by 90% and permitting deployment on cheap servers, and that incorporates a multi-stage architecture with hybrid scoring, reducing ColBERT’s query latency and supporting many concurrent queries in parallel.
@inproceedings{10.1007/978-3-031-88717-8_3,
  selected  = {true},
  author    = {Huang, Kaili and Venkatesh, Thejas and Dingankar, Uma and Mallia, Antonio and Campos, Daniel and Jiao, Jian and Potts, Christopher and Zaharia, Matei and Boahen, Kwabena and Khattab, Omar and Sarup, Saarthak and Santhanam, Keshav},
  title     = {ColBERT-Serve: Efficient Multi-stage Memory-Mapped Scoring},
  booktitle = {Advances in Information Retrieval: 47th European Conference on Information Retrieval, ECIR 2025, Lucca, Italy, April 6-10, 2025, Proceedings, Part IV},
  year      = {2025},
  isbn      = {978-3-031-88716-1},
  publisher = {Springer-Verlag},
  address   = {Berlin, Heidelberg},
  url       = {https://doi.org/10.1007/978-3-031-88717-8_3},
  doi       = {10.1007/978-3-031-88717-8_3},
  pages     = {21--30},
  numpages  = {10},
  keywords  = {Information Retrieval, ColBERT, Efficiency},
  location  = {Lucca, Italy}
}
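To illustrate the flavor of the memory-mapping strategy described above, here is a minimal sketch using numpy.memmap. It is not the ColBERT-serve implementation; the file name, dtype, and index dimensions are assumptions. Mapping the embedding file read-only commits no RAM up front, so only the pages touched while scoring a candidate are faulted into memory by the OS page cache:

# Minimal sketch of memory-mapped late-interaction scoring.
# NOT the ColBERT-serve code: the file name, dtype, and shape
# (num_embeddings x dim) are illustrative assumptions.
import numpy as np

NUM_EMBEDDINGS = 500_000_000   # hypothetical index size (token embeddings)
DIM = 128                      # a common ColBERT embedding dimension

# mode="r" maps the file read-only; nothing is loaded eagerly.
index = np.memmap("index_embeddings.f16", dtype=np.float16,
                  mode="r", shape=(NUM_EMBEDDINGS, DIM))

def maxsim_score(query_emb, start, end):
    # Slicing the memmap faults in only the pages holding rows
    # [start:end); the OS decides what stays resident in RAM.
    doc_emb = np.asarray(index[start:end], dtype=np.float32)  # (d_tokens, DIM)
    sim = query_emb @ doc_emb.T              # (q_tokens, d_tokens)
    return float(sim.max(axis=1).sum())      # ColBERT-style MaxSim

Because residency is managed by the page cache rather than by loading the whole array, peak RAM tracks the working set of recently scored documents rather than the full index.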
2024
TASLP
Overview of the Ninth Dialog System Technology Challenge: DSTC9
Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D’Haro, and 35 more authors
IEEE/ACM Transactions on Audio, Speech, and Language Processing, Jul 2024
@article{10595468,
  author   = {Gunasekara, Chulaka and Kim, Seokhwan and D'Haro, Luis Fernando and Rastogi, Abhinav and Chen, Yun-Nung and Eric, Mihail and Hedayatnia, Behnam and Gopalakrishnan, Karthik and Liu, Yang and Huang, Chao-Wei and Hakkani-Tür, Dilek and Li, Jinchao and Zhu, Qi and Luo, Lingxiao and Liden, Lars and Huang, Kaili and Shayandeh, Shahin and Liang, Runze and Peng, Baolin and Zhang, Zheng and Shukla, Swadheen and Huang, Minlie and Gao, Jianfeng and Mehri, Shikib and Feng, Yulan and Gordon, Carla and Alavi, Seyed Hossein and Traum, David and Eskenazi, Maxine and Beirami, Ahmad and Cho, Eunjoon and Crook, Paul A. and De, Ankita and Geramifard, Alborz and Kottur, Satwik and Moon, Seungwhan and Poddar, Shivani and Subba, Rajen},
  title    = {Overview of the Ninth Dialog System Technology Challenge: DSTC9},
  journal  = {IEEE/ACM Transactions on Audio, Speech, and Language Processing},
  year     = {2024},
  month    = jul,
  volume   = {32},
  pages    = {4066--4076},
  keywords = {Task analysis;Measurement;Oral communication;Accuracy;Proposals;Mathematical models;Speech processing;Artificial intelligence;computational intelligence;chatbots;computational linguistics;human-machine systems;human computer interaction;natural language processing;natural language generation;personal voice assistants},
  doi      = {10.1109/TASLP.2024.3426331}
}
2021
AAAI
Multi-domain Task-oriented Dialog Challenge II at DSTC9
Jinchao Li, Qi Zhu, Lingxiao Luo, and 10 more authors
In AAAI-2021 Dialog System Technology Challenge 9 Workshop, Feb 2021
This paper provides an overview of the “Multi-Domain Task Completion Dialog Challenge II” track at the 9th Dialog System Technology Challenge (DSTC9). The track introduces two tasks. The first is end-to-end multi-domain task completion, which focuses on building end-to-end task-completion dialog systems. The second is cross-lingual dialog state tracking, which seeks to build a tracker for the target language using source-language resources. We describe the task settings, baselines, evaluation methods, and submission results for both tasks.
2020
ACL
KdConv: A Chinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation
Hao Zhou, Chujie Zheng, Kaili Huang, and 2 more authors
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), Jul 2020
Research on knowledge-driven conversational systems is largely limited by the lack of dialog data consisting of multi-turn conversations on multiple topics with knowledge annotations. In this paper, we propose a Chinese multi-domain knowledge-driven conversation dataset, KdConv, which grounds the topics of multi-turn conversations in knowledge graphs. Our corpus contains 4.5K conversations from three domains (film, music, and travel) and 86K utterances, with an average of 19.0 turns per conversation. These conversations contain in-depth discussions on related topics and natural transitions between multiple topics. To facilitate further research on this corpus, we provide several benchmark models. Comparative results show that the models can be enhanced by introducing background knowledge, yet there is still large room for leveraging knowledge to model multi-turn conversations. Results also show obvious performance differences between domains, indicating that transfer learning and domain adaptation are worth exploring further. The corpus and benchmark models are publicly available.
@inproceedings{zhou-etal-2020-kdconv,
  selected  = {true},
  title     = {{K}d{C}onv: A {C}hinese Multi-domain Dialogue Dataset Towards Multi-turn Knowledge-driven Conversation},
  author    = {Zhou, Hao and Zheng, Chujie and Huang, Kaili and Huang, Minlie and Zhu, Xiaoyan},
  booktitle = {Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL)},
  year      = {2020},
  month     = jul,
  address   = {Online},
  publisher = {Association for Computational Linguistics},
  url       = {https://www.aclweb.org/anthology/2020.acl-main.635},
  doi       = {10.18653/v1/2020.acl-main.635},
  pages     = {7098--7108}
}
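For a concrete picture of what a knowledge-grounded, multi-turn record can look like, here is a hypothetical sketch in Python; the field names and example content are illustrative assumptions, not KdConv's actual schema (and the real data is in Chinese):

# Hypothetical record layout for a knowledge-grounded dialogue; field
# names and content are illustrative, not KdConv's released format.
example = {
    "domain": "film",
    "messages": [
        {"utterance": "Have you seen that 1994 prison drama?",
         "knowledge": []},                      # turns may be ungrounded
        {"utterance": "Yes, Frank Darabont directed it.",
         "knowledge": [                         # (head, relation, tail)
             ("The Shawshank Redemption", "director", "Frank Darabont"),
         ]},
    ],
}

def avg_turns(records):
    # Sanity check against the reported statistics: 86K utterances over
    # 4.5K conversations gives roughly 19.1 turns per conversation,
    # consistent with the stated average of 19.0.
    return sum(len(r["messages"]) for r in records) / len(records)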
TACL
CrossWOZ: A Large-Scale Chinese Cross-Domain Task-Oriented Dialogue Dataset
Qi Zhu, Kaili Huang, Zheng Zhang, and 2 more authors
Transactions of the Association for Computational Linguistics (TACL), Jun 2020
To advance multi-domain (cross-domain) dialogue modeling and alleviate the shortage of Chinese task-oriented datasets, we propose CrossWOZ, the first large-scale Chinese Cross-Domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances across 5 domains: hotel, restaurant, attraction, metro, and taxi. Moreover, the corpus contains rich annotations of dialogue states and dialogue acts on both the user and system sides. About 60% of the dialogues have cross-domain user goals that exhibit inter-domain dependencies and encourage natural transitions across domains in conversation. We also provide a user simulator and several benchmark models for pipelined task-oriented dialogue systems, which will help researchers compare and evaluate their models on this corpus. The large size and rich annotation of CrossWOZ make it suitable for investigating a variety of tasks in cross-domain dialogue modeling, such as dialogue state tracking, policy learning, and user simulation.
@article{a-zhu-etal-2020-crosswoz,
  selected  = {true},
  title     = {{C}ross{WOZ}: A Large-Scale {C}hinese Cross-Domain Task-Oriented Dialogue Dataset},
  author    = {Zhu, Qi and Huang, Kaili and Zhang, Zheng and Zhu, Xiaoyan and Huang, Minlie},
  journal   = {Transactions of the Association for Computational Linguistics (TACL)},
  volume    = {8},
  year      = {2020},
  month     = jun,
  address   = {Cambridge, MA},
  publisher = {MIT Press},
  url       = {https://aclanthology.org/2020.tacl-1.19},
  doi       = {10.1162/tacl_a_00314},
  pages     = {281--295}
}
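To make the dialogue-state annotation concrete, here is a generic sketch of the domain-slot-value representation standard in task-oriented dialogue state tracking; the slot names and values are hypothetical, not CrossWOZ's exact schema (whose annotations are in Chinese):

# Generic domain-slot-value dialogue state, as used in task-oriented DST.
# Slot names and values below are hypothetical placeholders.
gold_state = {
    "hotel": {"price_range": "cheap", "rating": "4+"},
    "attraction": {"name": "Forbidden City"},  # cross-domain goal: an
}                                              # attraction tied to the hotel

pred_state = {
    "hotel": {"price_range": "cheap", "rating": "4+"},
    "attraction": {"name": "Forbidden City"},
}

def joint_goal_correct(pred, gold):
    # Joint goal accuracy counts a turn as correct only when every
    # domain-slot-value pair matches the gold annotation exactly.
    return pred == gold

assert joint_goal_correct(pred_state, gold_state)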
NLPCC
A Large-Scale Chinese Short-Text Conversation Dataset
Yida Wang, Pei Ke, Yinhe Zheng, and 4 more authors
In Natural Language Processing and Chinese Computing (NLPCC). Best Student Paper Award, Oct 2020
Advances in neural dialogue generation models have shown promising results for modeling short-text conversations. However, training such models usually requires a large-scale, high-quality dialogue corpus, which is hard to come by. In this paper, we present LCCC, a large-scale cleaned Chinese conversation dataset with a base version (6.8 million dialogues) and a large version (12.0 million dialogues). The quality of the dataset is ensured by a rigorous data-cleaning pipeline, built on a set of rules and a classifier trained on 110K manually annotated dialogue pairs. We also release pre-trained dialogue models trained on LCCC-base and LCCC-large, respectively. The cleaned dataset and the pre-trained models will facilitate research on short-text conversation modeling. All models and datasets are available at https://github.com/thu-coai/CDial-GPT.
@inproceedings{wang2020chinese,
  selected  = {true},
  title     = {A Large-Scale Chinese Short-Text Conversation Dataset},
  author    = {Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
  booktitle = {Natural Language Processing and Chinese Computing (NLPCC)},
  year      = {2020},
  month     = oct,
  doi       = {10.1007/978-3-030-60450-9_8}
}
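As a rough illustration of the rule-plus-classifier cleaning pipeline the abstract describes, here is a minimal sketch; the specific rules, the threshold, and the quality_classifier argument are hypothetical stand-ins, and the released pipeline at the repository above differs in its details:

import re

URL_RE = re.compile(r"https?://\S+")

def passes_rules(dialogue):
    # Hypothetical rule-based filters: drop dialogues containing empty,
    # over-long, or URL-bearing turns. The real pipeline's rules differ.
    for utterance in dialogue:
        if not utterance.strip() or len(utterance) > 200 or URL_RE.search(utterance):
            return False
    return True

def clean(dialogues, quality_classifier, threshold=0.5):
    # Rules first, then a learned quality score: the abstract describes a
    # classifier trained on 110K manually annotated dialogue pairs.
    return [d for d in dialogues
            if passes_rules(d) and quality_classifier(d) >= threshold]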