56 changes: 43 additions & 13 deletions README.md
```bash
uv run data/master_dataset_and_filter.py
```
which will download all the necessary data and create a file `data/master_dataset.ftr` containing the full dataset of ~3.59 million sequences, along with a file `data/filtered_dataset.txt` containing the same subset of sequences as above. A rendered version of this code is provided at `notebooks/marimo_master_dataset_and_filter.ipynb`.


To run the data curation process as a notebook, a marimo notebook file can be found at `notebooks/marimo_master_dataset_and_filter.py`.

This notebook can be opened/run with the following command:
```bash
uvx marimo edit notebooks/marimo_master_dataset_and_filter.py
```

All of the data processing files use uv to manage dependencies, so all required libraries are installed automatically when you run the commands above. See the [uv documentation](https://docs.astral.sh/uv/guides/scripts/) for more information on how to run uv scripts and the [marimo documentation](https://docs.marimo.io/) for more information on how to run marimo notebooks.


### Training
To train the DNA-Diffusion model, we provide a basic config file for training the diffusion model on the same subset of chromatin accessible regions described in the data section above.

To train the model, call:
uv run train.py
```

This runs the model with our predefined config file `configs/train/default.yaml`, which is set to train the model for a minimum of 2000 epochs. The training script saves model checkpoints for the two lowest validation loss values in the `checkpoints/` directory. The path to the chosen checkpoint will need to be updated in the sampling config file for sequence generation, as described in the Model Checkpoint section below.
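The "keep the two best checkpoints" behavior described above can be sketched as follows; this is a simplified illustration with invented names, not the actual training-script code:

```python
import heapq

def best_checkpoints(val_losses, k=2):
    """Return the (loss, epoch) pairs with the k lowest validation losses,
    mirroring the idea of keeping checkpoints only for the best epochs."""
    return heapq.nsmallest(k, ((loss, epoch) for epoch, loss in val_losses.items()))

# Made-up validation losses per epoch: only epochs 4 and 5 would keep checkpoints.
losses = {1: 0.90, 2: 0.75, 3: 0.80, 4: 0.60, 5: 0.65}
print(best_checkpoints(losses))  # [(0.6, 4), (0.65, 5)]
```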

We also provide a base config for debugging that will use a single sequence for training. You can override the default training script to use this debugging config by calling:

```bash
uv run train.py -cn train_debug
```
### Model Checkpoint
We have uploaded the model checkpoint to [HuggingFace](https://huggingface.co/ssenan/DNA-Diffusion). Below we provide an example script that handles downloading the model checkpoint and loading it for sequence generation.

If you would like to use a model checkpoint generated from the training script above, update `checkpoint_path` in the config file `configs/sampling/default.yaml` to point to the location of the checkpoint. By default this is set to `checkpoints/model.safetensors`, so the checkpoint must be saved at that location. Both `pt` and `safetensors` formats are supported. An example of overriding the checkpoint path from the command line is described in the sequence generation section below.
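One way the dual-format support can work is a simple dispatch on the file extension. A minimal sketch, with stand-in loader callables rather than the project's real `pt`/`safetensors` loading code:

```python
from pathlib import Path

def pick_loader(checkpoint_path, loaders):
    """Select a loader callable from the checkpoint file extension.

    `loaders` maps an extension to a function; a real setup would pass
    torch- and safetensors-based loaders here instead of these stand-ins.
    """
    suffix = Path(checkpoint_path).suffix
    if suffix not in loaders:
        raise ValueError(f"unsupported checkpoint format: {suffix}")
    return loaders[suffix]

# Stand-in loaders used purely for illustration.
loaders = {
    ".pt": lambda path: f"torch-load {path}",
    ".safetensors": lambda path: f"safetensors-load {path}",
}
load = pick_loader("checkpoints/model.safetensors", loaders)
print(load("checkpoints/model.safetensors"))  # safetensors-load checkpoints/model.safetensors
```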

### Sequence Generation

#### Generate using Hugging Face Checkpoint

We provide a basic config file for generating sequences using the diffusion model, producing 1000 sequences per cell type. To generate sequences using the trained model, you can run the following command:

```bash
uv run sample_hf.py
```

Base generation uses a guidance scale of 1.0; however, this can be tuned in the sampling config or overridden from the command line, for example:

```bash
uv run sample_hf.py sampling.guidance_scale=7.0 sampling.number_of_samples=1 sampling.sample_batch_size=1
```
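Assuming the usual classifier-free guidance formulation, the guidance scale blends the unconditional and conditional model outputs, with a scale of 1.0 reducing to the conditional prediction alone. A minimal numeric sketch of the standard formula, not code from this repository:

```python
def cfg_combine(uncond, cond, guidance_scale):
    """Classifier-free guidance: move the prediction away from the
    unconditional output toward (and past) the conditional one.
    A scale of 1.0 returns the conditional prediction unchanged."""
    return [u + guidance_scale * (c - u) for u, c in zip(uncond, cond)]

uncond = [0.0, 0.5]   # made-up unconditional model outputs
cond = [0.25, 0.5]    # made-up conditional model outputs
print(cfg_combine(uncond, cond, 1.0))  # [0.25, 0.5] -> identical to cond
print(cfg_combine(uncond, cond, 4.0))  # [1.0, 0.5] -> conditioning amplified
```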

Both examples above will generate sequences for all cell types in the dataset. If you would like to generate sequences for a specific cell type, you can do so by specifying the `data.cell_types` parameter on the command line. For example, to generate a sequence for the K562 cell type, you can run:

```bash
uv run sample_hf.py data.cell_types=K562 sampling.number_of_samples=1 sampling.sample_batch_size=1
```
or for both K562 and GM12878 cell types, you can run:

```bash
uv run sample_hf.py 'data.cell_types="K562,GM12878"' sampling.number_of_samples=1 sampling.sample_batch_size=1
```
Cell types can be specified as a comma-separated string or as a list.
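Accepting both forms can be sketched as a small normalization step; this is an illustration of the accepted formats, not the repository's actual parsing code:

```python
def normalize_cell_types(spec):
    """Accept "K562,GM12878", ["K562", "GM12878"], or None (meaning all
    cell types, as in the config) and return a list of names (or None)."""
    if spec is None:
        return None
    if isinstance(spec, str):
        return [name.strip() for name in spec.split(",") if name.strip()]
    return list(spec)

print(normalize_cell_types("K562,GM12878"))  # ['K562', 'GM12878']
print(normalize_cell_types(["K562"]))        # ['K562']
print(normalize_cell_types(None))            # None
```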

#### Generate using Local Checkpoint

If you would prefer to download the model checkpoint from Hugging Face and use it directly, you can run the following command to download the model and save it in the checkpoint directory:
```bash
wget https://huggingface.co/ssenan/DNA-Diffusion/resolve/main/model.safetensors -O checkpoints/model.safetensors
```

Then you can run the sampling script with the following command:

```bash
uv run sample.py
```

If you would like to override the checkpoint path from the command line, you can do so with the following command (replacing `checkpoints/model.pt` with the path to your model checkpoint):
```bash
uv run sample.py sampling.checkpoint_path=checkpoints/model.pt
```

## Examples

### Training Notebook
Both examples were run on Google Colab using a T4 GPU.

DNA-Diffusion is designed to be flexible and can be adapted to your own data. To use your own data, you will need to follow these steps:

* Prepare your data in the same format as the DHS Index dataset. The data should be a tab-separated text file containing at least the following columns:
* `chr`: the chromosome of the regulatory element (e.g. chr1, chr2, etc.)
* `sequence`: the DNA sequence of the regulatory element
* `TAG`: the cell type of the regulatory element (e.g. K562, hESCT0, HepG2, GM12878, etc.)

* Additional metadata columns (e.g. `start`, `end`, continuous accessibility) are allowed but not required.

* It's expected that your sequences are 200bp long; however, the model can be adapted to other sequence lengths via the dataloading code at `src/dnadiffusion/data/dataloader.py`. You can change the `sequence_length` parameter in the `load_data` function to the desired length, but keep in mind that the original model was trained on 200bp sequences, so results may not be as good at a different length.
* The model is designed to work with discrete class labels for the cell types, stored in the `TAG` column of the dataset, so you will need to ensure your data follows the same format. If you have continuous labels, you can binarize them into discrete classes using a threshold or another method.
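A minimal sketch of preparing a compliant tab-separated file, including thresholding a continuous accessibility score into a discrete `TAG` label; the helper names and the threshold value are invented for this example:

```python
import csv
import io

def binarize_tag(score, threshold, cell_type):
    """Turn a continuous accessibility score into a discrete TAG label:
    the cell type when the score clears the threshold, otherwise None."""
    return cell_type if score >= threshold else None

# Rows in the expected (chr, sequence, TAG) layout; 0.9 and 0.5 are made-up values.
candidates = [("chr1", "ACGT" * 50, binarize_tag(0.9, 0.5, "K562"))]
rows = [row for row in candidates if row[2] is not None]  # drop unlabeled rows

buffer = io.StringIO()  # stands in for the real data file on disk
writer = csv.writer(buffer, delimiter="\t", lineterminator="\n")
writer.writerow(["chr", "sequence", "TAG"])
writer.writerows(rows)

header = buffer.getvalue().splitlines()[0]
print(header)  # the tab-separated column header: chr, sequence, TAG
```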

The data loading config can be found at `configs/data/default.yaml`, and you can override the default data loading config by passing the `data` parameter to the command line. For example, to use a custom data file, you can run:

```bash
uv run train.py data.data_path=path/to/your/data.txt data.load_saved_data=False
```

It is important to set `data.load_saved_data=False` so that cached data is not used and the dataset is regenerated from the provided data file, ensuring the model is trained on your own data. This will overwrite the default pkl file, so if you would like to keep the original data, set `data.saved_data_path` to a different path. For example:

```bash
uv run train.py data.data_path=path/to/your/data.txt data.load_saved_data=False data.saved_data_path=path/to/your/saved_data.pkl
```
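The `load_saved_data`/`saved_data_path` behavior follows a common load-or-rebuild cache pattern, sketched below with `pickle`; this is an illustration of the described semantics, not the project's actual dataloader:

```python
import os
import pickle
import tempfile

def load_or_build(saved_data_path, load_saved_data, build):
    """Reuse the cached dataset when allowed, otherwise rebuild and cache.

    load_saved_data=False (or a missing cache file) forces a rebuild from
    the raw data and overwrites saved_data_path.
    """
    if load_saved_data and os.path.exists(saved_data_path):
        with open(saved_data_path, "rb") as f:
            return pickle.load(f)
    data = build()
    with open(saved_data_path, "wb") as f:
        pickle.dump(data, f)
    return data

with tempfile.TemporaryDirectory() as tmp:
    cache = os.path.join(tmp, "encode_data.pkl")
    # First call: no cache yet, so the dataset is built and written out.
    first = load_or_build(cache, load_saved_data=True, build=lambda: {"source": "raw"})
    # Second call: the cache exists and is reused; build() is not invoked.
    second = load_or_build(cache, load_saved_data=True, build=lambda: {"source": "rebuilt"})

print(first, second)  # both come from the first build
```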


## Contributors ✨

Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
6 changes: 6 additions & 0 deletions configs/data/sampling.yaml
@@ -0,0 +1,6 @@
_target_: src.dnadiffusion.data.dataloader.get_dataset_for_sampling
data_path: "data/K562_hESCT0_HepG2_GM12878_12k_sequences_per_group.txt"
saved_data_path: "data/encode_data.pkl"
load_saved_data: True
debug: False
cell_types: null # null means all cell types, or specify like ["K562", "HepG2"]
2 changes: 1 addition & 1 deletion configs/sample.yaml
@@ -1,5 +1,5 @@
 defaults:
   - model: unet
-  - data: default
+  - data: sampling
   - diffusion: default
   - sampling: default
2 changes: 1 addition & 1 deletion configs/sample_hf.yaml
@@ -1,5 +1,5 @@
 defaults:
   - model: unet_pretrained
-  - data: default
+  - data: sampling
   - diffusion: default
   - sampling: default
3 changes: 2 additions & 1 deletion configs/sampling/default.yaml
@@ -1,4 +1,5 @@
-checkpoint_path: "model.safetensors"
+checkpoint_path: "checkpoints/model.safetensors"
+# checkpoint_path: "checkpoints/DNA-Diffusion.pt"
 sample_batch_size: 10
 number_of_samples: 1000
 guidance_scale: 1.0