
Commit b139873

feat(sequence generation): add ability to generate specific cell types (#359)
* feat(docs/sampling): add ability to generate for specific cell types
* feat(examples): provide example of CLI override of specific cell type generation
* chore(docs): update docs to cover new dataset entries
1 parent fd1e574 commit b139873

11 files changed: 424 additions & 123 deletions


README.md

Lines changed: 43 additions & 13 deletions
````diff
@@ -71,17 +71,6 @@ uv run data/master_dataset_and_filter.py
 ```
 which will download all the necessary data and create a file `data/master_dataset.ftr` containing the full ~3.59 million dataset and a file `data/filtered_dataset.txt` containing the same subset of sequences as above. A rendered version of this code is provided at `notebooks/marimo_master_dataset_and_filter.ipynb`.
 
-
-To run data curation process as a notebook, a marimo notebook file can be found at `notebooks/marimo_master_dataset_and_filter.py`.
-
-This notebook can be opened/run with the following command:
-```bash
-uvx marimo edit notebooks/marimo_master_dataset_and_filter.py
-```
-
-All of the data processing files make use of uv to manage dependencies and so all libraries are installed when you run the above commands. See [uv documentation](https://docs.astral.sh/uv/guides/scripts/) for more information on how to run uv scripts and [marimo documentation](https://docs.marimo.io/) for more information on how to run marimo notebooks.
-
-
 ### Training
 To train the DNA-Diffusion model, we provide a basic config file for training the diffusion model on the same subset of chromatin accessible regions described in the data section above.
````

````diff
@@ -91,6 +80,8 @@ To train the model call:
 uv run train.py
 ```
 
+This runs the model with our predefined config file `configs/train/default.yaml`, which is set to train the model for a minimum of 2000 epochs. The training script will save model checkpoints for the two lowest validation loss values in the `checkpoints/` directory. The path to the chosen checkpoint will need to be updated in the sampling config file for sequence generation, as described in the Model Checkpoint section below.
+
 We also provide a base config for debugging that will use a single sequence for training. You can override the default training script to use this debugging config by calling:
 
 ```bash
````

````diff
@@ -100,7 +91,12 @@ uv run train.py -cn train_debug
 ### Model Checkpoint
 We have uploaded the model checkpoint to [HuggingFace](https://huggingface.co/ssenan/DNA-Diffusion). Below we provide an example script that handles downloading the model checkpoint and loading it for sequence generation.
 
+If you would like to use a model checkpoint generated from the training script above, update `checkpoint_path` in the config file `configs/sampling/default.yaml` to point to the location of that checkpoint. By default this is set to `checkpoints/model.safetensors`, so the checkpoint will need to be saved at that location. Both `pt` and `safetensors` formats are supported. An example of overriding the checkpoint path from the command line is given in the sequence generation section below.
+
 ### Sequence Generation
+
+#### Generate using Hugging Face Checkpoint
+
 We provide a basic config file for generating sequences with the diffusion model, producing 1000 sequences per cell type. To generate sequences using the trained model, you can run the following command:
 
 ```bash
````

````diff
@@ -119,6 +115,20 @@ Base generation utilizes a guidance scale 1.0, however this can be tuned within
 uv run sample_hf.py sampling.guidance_scale=7.0 sampling.number_of_samples=1 sampling.sample_batch_size=1
 ```
 
+Both of the above examples generate sequences for all cell types in the dataset. If you would like to generate sequences for a specific cell type, you can do so by specifying the `data.cell_types` parameter on the command line. For example, to generate a sequence for the K562 cell type, run:
+
+```bash
+uv run sample_hf.py data.cell_types=K562 sampling.number_of_samples=1 sampling.sample_batch_size=1
+```
+
+or, for both the K562 and GM12878 cell types:
+
+```bash
+uv run sample_hf.py 'data.cell_types="K562,GM12878"' sampling.number_of_samples=1 sampling.sample_batch_size=1
+```
+
+Cell types can be specified as a comma separated string or as a list.
+
+#### Generate using Local Checkpoint
+
 If you would prefer to download the model checkpoint from Hugging Face and use it directly, you can run the following command to download the model and save it in the checkpoint directory:
 ```bash
 wget https://huggingface.co/ssenan/DNA-Diffusion/resolve/main/model.safetensors -O checkpoints/model.safetensors
````

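For reference, the cell type restriction above maps to the new `cell_types` entry this commit adds to `configs/data/sampling.yaml`. A minimal sketch of the two forms (the commented list value is illustrative):

```yaml
# configs/data/sampling.yaml (excerpt)
# null means generate for all cell types in the dataset
cell_types: null
# or restrict generation to a subset, e.g.:
# cell_types: ["K562", "GM12878"]
```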
````diff
@@ -129,6 +139,11 @@ Then you can run the sampling script with the following command:
 uv run sample.py
 ```
 
+If you would like to override the checkpoint path from the command line, you can do so with the following command (replacing `checkpoints/model.pt` with the path to your model checkpoint):
+```bash
+uv run sample.py sampling.checkpoint_path=checkpoints/model.pt
+```
+
 ## Examples
 
 ### Training Notebook
````

````diff
@@ -152,14 +167,29 @@ Both examples were run on Google Colab using a T4 GPU.
 
 DNA-Diffusion is designed to be flexible and can be adapted to your own data. To use your own data, you will need to follow these steps:
 
-* Prepare your data in the same format as the DHS Index dataset. The data should be a tab separated text file with the following columns:
+* Prepare your data in the same format as the DHS Index dataset. The data should be a tab separated text file containing at least the following columns:
   * `chr`: the chromosome of the regulatory element (e.g. chr1, chr2, etc.)
   * `sequence`: the DNA sequence of the regulatory element
   * `TAG`: the cell type of the regulatory element (e.g. K562, hESCT0, HepG2, GM12878, etc.)
 
-* It's expected that your sequences are 200bp long, however the model can be adapted to work with different sequence lengths by the dataloading code at `src/dnadiffusion/data/dataloader.py`. You can change the `sequence_length` parameter in the function `load_data` to the desired length, but keep in mind that the original model is trained on 200bp sequences and so the results may not be as good if you use a different length.
+  Additional metadata columns such as start, end, and continuous accessibility are allowed but not required.
+
+* Your sequences are expected to be 200bp long; however, the model can be adapted to other sequence lengths through the data loading code at `src/dnadiffusion/data/dataloader.py`. Change the `sequence_length` parameter in the `load_data` function to the desired length, but keep in mind that the original model was trained on 200bp sequences, so the results may not be as good with a different length.
 * The model is designed to work with discrete class labels for the cell types, so you will need to ensure that your data is in the same format. If you have continuous labels, you can binarize them into discrete classes using a threshold or some other method. This value is contained within the `TAG` column of the dataset.
 
+The data loading config can be found at `configs/data/default.yaml`, and you can override it by passing `data` parameters on the command line. For example, to use a custom data file, run:
+
+```bash
+uv run train.py data.data_path=path/to/your/data.txt data.load_saved_data=False
+```
+
+It is important to set `data.load_saved_data=False` so that cached data is not used and the dataset is instead regenerated from the provided data file, ensuring the model is trained on your own data. This will overwrite the default pkl file, so if you would like to keep the original data, set `data.saved_data_path` to a different path. For example:
+
+```bash
+uv run train.py data.data_path=path/to/your/data.txt data.load_saved_data=False data.saved_data_path=path/to/your/saved_data.pkl
+```
+
````

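To make the expected custom-data format concrete, the steps above can be sketched in a short script. The file name `my_custom_data.txt`, the records, and the 0.5 threshold are all illustrative assumptions, not part of the repository: the script writes a tab separated file with the required `chr`, `sequence`, and `TAG` columns, binarizing a continuous accessibility score into discrete cell type labels via a simple threshold.

```python
import csv

# Illustrative records: (chromosome, 200bp sequence, continuous accessibility, cell type).
# Real data would come from your own peak calls or a DHS-style index.
records = [
    ("chr1", "ACGT" * 50, 0.92, "K562"),
    ("chr2", "TTGA" * 50, 0.15, "K562"),
    ("chr3", "GGCA" * 50, 0.71, "GM12878"),
]

THRESHOLD = 0.5  # illustrative cutoff for calling a region accessible in a cell type

# Write the tab separated file to pass as data.data_path.
with open("my_custom_data.txt", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerow(["chr", "sequence", "TAG"])
    for chrom, seq, accessibility, cell_type in records:
        # Binarize: keep only regions passing the threshold, so TAG carries
        # a discrete cell type class label as the model expects.
        if accessibility >= THRESHOLD:
            writer.writerow([chrom, seq, cell_type])
```

The resulting file could then be supplied as `data.data_path=my_custom_data.txt` together with `data.load_saved_data=False`, as described above.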
## Contributors ✨

Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):

configs/data/sampling.yaml

Lines changed: 6 additions & 0 deletions
```diff
@@ -0,0 +1,6 @@
+_target_: src.dnadiffusion.data.dataloader.get_dataset_for_sampling
+data_path: "data/K562_hESCT0_HepG2_GM12878_12k_sequences_per_group.txt"
+saved_data_path: "data/encode_data.pkl"
+load_saved_data: True
+debug: False
+cell_types: null # null means all cell types, or specify like ["K562", "HepG2"]
```

configs/sample.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,5 +1,5 @@
 defaults:
 - model: unet
-- data: default
+- data: sampling
 - diffusion: default
 - sampling: default
```

configs/sample_hf.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,5 +1,5 @@
 defaults:
 - model: unet_pretrained
-- data: default
+- data: sampling
 - diffusion: default
 - sampling: default_hf
```

configs/sampling/default.yaml

Lines changed: 2 additions & 1 deletion
```diff
@@ -1,4 +1,5 @@
-checkpoint_path: "model.safetensors"
+checkpoint_path: "checkpoints/model.safetensors"
+# checkpoint_path: "checkpoints/DNA-Diffusion.pt"
 sample_batch_size: 10
 number_of_samples: 1000
 guidance_scale: 1.0
```
