Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Trainer

1 minute read

Published:

Study notes on Trainer.

LLMBox

7 minute read

Published:

Study notes on LLMBox.

Accelerate

2 minute read

Published:

Study notes on Accelerate.

Scrapy

less than 1 minute read

Published:

Study notes on web crawlers.

Configuring a VPN on Linux

less than 1 minute read

Published:

Configuring a VPN on a laptop is convenient: all you need is a proxy account (an "airplane") and a piece of configuration software (an "airport"). On a Linux server without a graphical interface, however, it is hard to operate that software through an interactive UI. This post presents a general way to configure a VPN on a Linux server.
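As a minimal sketch of the idea (assuming a command-line proxy client such as Clash is already running on the server and exposing a local HTTP proxy; the address 127.0.0.1:7890 below is a hypothetical default, not something the post confirms), the tunnel can be verified entirely without a GUI:

```python
# Minimal sketch: verify a local proxy on a headless Linux server.
# Assumption: a CLI proxy client (e.g., Clash) is already running and
# listening as an HTTP proxy on 127.0.0.1:7890 (hypothetical address).
import urllib.request

PROXY = "http://127.0.0.1:7890"

# Route both HTTP and HTTPS traffic through the local proxy.
handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(handler)

# Fetch a page through the proxy; success means the tunnel is working.
with opener.open("https://www.google.com", timeout=10) as resp:
    print("proxy OK, HTTP status:", resp.status)
```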

PyTorch

10 minute read

Published:

Study notes on PyTorch.

Git Notes

less than 1 minute read

Published:

Git command-line commands.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

portfolio

publications

MCP: Self-supervised Pre-training for Personalized Chatbots with Multi-level Contrastive Sampling

Published in Findings of EMNLP, 2022

EMNLP 2022 received 3,964 submissions and accepted 828 main conference papers (20.91%) and 548 Findings papers (13.85%), for an overall acceptance rate of 34.76%.

Recommended citation: Zhaoheng Huang, Zhicheng Dou, Yutao Zhu, and Zhengyi Ma. 2022. MCP: Self-supervised Pre-training for Personalized Chatbots with Multi-level Contrastive Sampling. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 1030–1042, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. https://aclanthology.org/2022.findings-emnlp.73/

CAGS: Context-Aware Document Ranking with Contrastive Graph Sampling

Published in TKDE, 2024

TKDE 2024 published 12 issues with a total of 666 papers.

Recommended citation: Z. Huang, Z. Dou, Y. Zhu, and J.-R. Wen. CAGS: Context-Aware Document Ranking with Contrastive Graph Sampling. In IEEE Transactions on Knowledge and Data Engineering, doi: 10.1109/TKDE.2024.3491996. https://ieeexplore.ieee.org/document/10742917

One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models

Published in AAAI, 2025

AAAI 2025 received 12,957 submissions (not including desk-rejected papers) and accepted 3,032 of them for presentation at the conference, an acceptance rate of 23.4%.

Recommended citation: Zhu, Y., Huang, Z., Dou, Z., & Wen, J.-R. (2025). One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 26166-26174. https://doi.org/10.1609/aaai.v39i24.34813 https://ojs.aaai.org/index.php/AAAI/article/view/34813

reading

Reading Format

less than 1 minute read

Published:

Reading Time: 2023/07/11

Reading Format

less than 1 minute read

Published:

Reading Time: 2023/07/31

LLM Survey

less than 1 minute read

Published:

Reading Time: 2023/08/03

talks

teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.