This is the Beta version of our User Manual and Datasets.
Feel free to contact Yisong if you have any questions about using our system (miaoyisong [AT] gmail.com).
This is possibly the first open-source system for conversational recommendation released in recent years.
We hope you find it helpful! 😛
Please cite our paper if you use our code or datasets.
Table of Contents:
EAR System -- User Manual and Datasets (Beta Version)
1. System Overview
1.1 Dependencies
2. Datasets
3. Estimation Stage -- Factorization Machine Model
4. Action Stage & Reflection Stage -- User Simulator
4.1 Action Stage Training and Evaluation
4.2 Ablation Study
4.3 Baselines
5. Licence and Patent
TODO: Yisong: Need to draw an overview figure of our system here.
This system is developed in Python and PyTorch. It is composed of two components:
Note: Because Yelp (enumerated questions) and LastFM (binary questions) use different settings, their code is stored in separate directories with minor differences. However, the command lines below can be used interchangeably.
Run the following code for a quick check that all required packages are ready.
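A minimal check along these lines (the package list here is our assumption of a typical PyTorch setup, not the repository's exact requirements):

```python
# Quick dependency check: reports which of the expected packages are missing.
# The package names below are an assumed typical setup, not the exact list.
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    required = ["torch", "numpy", "sklearn"]
    missing = missing_packages(required)
    if missing:
        print("Missing packages:", ", ".join(missing))
    else:
        print("All packages ready.")
```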
Our data is currently hosted on Google Drive; please contact Yisong (miaoyisong [AT] gmail.com) if you cannot download it.
All functions of our FM model can be selected by changing the parameters in
FM_old_train.py as shown below:
The command to run the experiment:
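For illustration, here is a sketch of how such command-line parameters could be wired up with argparse; the flag names, choices, and defaults are our assumptions, not the actual flags of FM_old_train.py:

```python
# Illustrative parameter interface in the style of FM_old_train.py.
# Flag names (--dataset, --lr, --command) and defaults are assumptions.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="FM training (sketch)")
    parser.add_argument("--dataset", choices=["yelp", "lastfm"], default="lastfm",
                        help="which dataset directory to use")
    parser.add_argument("--lr", type=float, default=0.01,
                        help="learning rate")
    parser.add_argument("--command", type=int, default=8,
                        help="selects which FM function/variant to run")
    return parser

if __name__ == "__main__":
    # Example invocation: override the dataset and learning rate.
    args = build_parser().parse_args(["--dataset", "yelp", "--lr", "0.02"])
    print(args.dataset, args.lr, args.command)
```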
Both the Action Stage and the Reflection Stage are implemented in the user simulator.
The simulator follows our Multi-Round Conversational Recommendation Scenario and can easily be customised for your own use.
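A minimal sketch of such a multi-round loop, with illustrative class and method names that do not match the repository's actual API: the simulated user answers binary (LastFM-style) attribute questions truthfully and accepts a recommendation only if it contains the target item.

```python
# Minimal multi-round conversational recommendation simulator (illustrative;
# names are assumptions and do not match the repository's actual API).
class UserSimulator:
    def __init__(self, target_item, item_attributes):
        self.target = target_item
        self.attrs = item_attributes[target_item]  # ground-truth attributes

    def answer(self, attribute):
        """Answer a binary attribute question truthfully (LastFM-style)."""
        return attribute in self.attrs

    def accept(self, recommended_items):
        """Accept iff the target item appears in the recommendation list."""
        return self.target in recommended_items

def run_episode(sim, agent, max_turns=15):
    """Alternate ask/recommend actions until success or the turn budget ends."""
    for turn in range(1, max_turns + 1):
        action, payload = agent.act()
        if action == "ask":
            agent.observe(payload, sim.answer(payload))
        elif action == "recommend" and sim.accept(payload):
            return turn  # success at this turn
    return None  # the user quits: conversation budget exhausted
```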
We provide a user-friendly interface: you only need to change the parameters below in
run.py to run all experiments.
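As an illustration of this style of interface, a hypothetical configuration block with fail-fast validation; the keys and values are our assumptions, not run.py's real parameters:

```python
# Hypothetical experiment configuration for a run.py-style entry point.
# Keys and values are assumptions about typical switches, not the real ones.
EXPERIMENT = {
    "mode": "train",      # "train" or "eval"
    "strategy": "EAR",    # full model, or an ablation/baseline name
    "dataset": "lastfm",  # "lastfm" (binary) or "yelp" (enumerated)
    "max_turn": 15,       # conversation turn budget
}

def validate(cfg):
    """Fail fast on obviously inconsistent settings."""
    assert cfg["mode"] in {"train", "eval"}
    assert cfg["dataset"] in {"lastfm", "yelp"}
    assert cfg["max_turn"] > 0
    return cfg

if __name__ == "__main__":
    print(validate(EXPERIMENT))
```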
To see how the system works:
You will see the printed results in this format:
All baselines can be easily implemented in our user simulator.
The training and evaluation of this model can also be done following a simple
The Abs-Greedy algorithm can be easily realized in our system: it is equivalent to the Recommending Only option plus the Update mechanism.
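The recommend-and-update loop can be sketched as follows, with the update step simplified to ruling out the rejected item (the actual system updates the underlying model instead); the function names are illustrative:

```python
# Sketch of Abs-Greedy: recommend the top-scored item every turn, and on
# rejection update the candidate set by removing it. This simplification of
# the update step is ours; names are illustrative, not the repository's.
def abs_greedy(scores, is_target, max_turns=15):
    """scores: dict item -> score; is_target: item -> bool.
    Returns the turn of the successful recommendation, or None on failure."""
    remaining = dict(scores)  # work on a copy; keep the caller's scores intact
    for turn in range(1, max_turns + 1):
        best = max(remaining, key=remaining.get)
        if is_target(best):
            return turn        # success at this turn
        remaining.pop(best)    # "update": rule out the rejected item
    return None                # failed within the turn budget
```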