This is the official codebase of the paper Pre-Trained Embeddings For Enhancing Multi-Hop Reasoning.
These experiments build on the SalesForce MultiHopKG repository, which contains the code for the paper Multi-Hop Knowledge Graph Reasoning with Reward Shaping.
To only train the KGE embedding model (ComplEx or ConvE) run the following command:
./experiment-emb.sh configs/<dataset>-<emb_model>.sh --train <gpu-ID>
To train MultiHopKG using the pre-trained embeddings from ConvE or ComplEx, run:
./experiment-rs.sh configs/<dataset>-<pre-trained_model>.sh --train <gpu-ID>
To train MultiHopKG using only the pre-trained embeddings from ConvE or ComplEx, run:
./experiment.sh configs/<dataset>.sh --train <gpu-ID>
To select which pre-trained model to use, edit the config file associated with the dataset (e.g., `configs/fb15K-237.sh` for FB15K-237) and set the `pretrained` argument to `conve` or `complex`.
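For instance, the relevant line of the config file might look like this (the exact variable name and syntax are assumed from the description above, not taken from the repository):

```shell
# configs/fb15K-237.sh (illustrative excerpt)
# Select the pre-trained embedding model: "conve" or "complex".
pretrained="conve"
```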
By default, the pre-trained embeddings are part of the learnable parameters of the model. To freeze the pre-trained embeddings and exclude them from the learnable parameters, add the `--freeze` argument at the end of your command.
To evaluate an already trained model, replace the `--train` flag with the `--inference` flag in the above commands.
To save the search paths during inference, add the `--save_beam_search_paths` flag.
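Putting the pieces together, an inference run combining these flags might look like the following (the dataset, pre-trained model, and GPU ID are illustrative, following the naming patterns above):

```shell
# Illustrative invocation: evaluate a ConvE-pretrained MultiHopKG model on FB15K-237
# on GPU 0, with frozen embeddings, saving the beam-search paths.
./experiment-rs.sh configs/fb15K-237-conve.sh --inference 0 --freeze --save_beam_search_paths
```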
If you use this work, please cite our paper:
@inproceedings{drance2023pre,
  title={Pre-Trained Embeddings for Enhancing Multi-Hop Reasoning},
  author={Dranc{\'e}, Martin and Mougin, Fleur and Zemmari, Akka and Diallo, Gayo},
  booktitle={International Joint Conference on Artificial Intelligence 2023 Workshop on Knowledge-Based Compositional Generalization},
  year={2023}
}