feat(sequence generation): add ability to generate specific cell types (#359)
* feat(docs/sampling): add ability to generate for specific cell types
* feat(examples): provide example of CLI override of specific cell type generation
* chore(docs): update docs to cover new dataset entries
README.md (43 additions, 13 deletions)
```bash
uv run data/master_dataset_and_filter.py
```

which will download all the necessary data and create a file `data/master_dataset.ftr` containing the full dataset of ~3.59 million sequences and a file `data/filtered_dataset.txt` containing the same subset of sequences as above. A rendered version of this code is provided at `notebooks/marimo_master_dataset_and_filter.ipynb`.
To run the data curation process as a notebook, a marimo notebook file is provided at `notebooks/marimo_master_dataset_and_filter.py`, which can be opened and run with marimo.

All of the data processing files use uv to manage dependencies, so all required libraries are installed automatically when you run the above commands. See the [uv documentation](https://docs.astral.sh/uv/guides/scripts/) for more information on how to run uv scripts and the [marimo documentation](https://docs.marimo.io/) for more information on how to run marimo notebooks.
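To illustrate how uv manages per-script dependencies, here is a minimal sketch of a uv-runnable script (a toy example, not a file from this repository; the filename and function are hypothetical). The comment block at the top is PEP 723 inline script metadata, which uv reads to set up the environment before executing:

```python
# A minimal uv-runnable script. The "# /// script" comment block is PEP 723
# inline metadata, which uv reads to install dependencies before executing.
# This toy example uses only the standard library, so the list is empty.
#
# Run with: uv run count_bases.py
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
from collections import Counter


def count_bases(sequence: str) -> dict[str, int]:
    """Count occurrences of each base in a DNA sequence."""
    return dict(Counter(sequence.upper()))


if __name__ == "__main__":
    print(count_bases("ACGTAC"))
```

A real data processing script would list its actual dependencies (e.g. pandas) in the `dependencies` array, and uv would install them on first run.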
### Training
To train the DNA-Diffusion model, we provide a basic config file for training the diffusion model on the same subset of chromatin accessible regions described in the data section above.
To train the model, call:

```bash
uv run train.py
```
This runs the model with our predefined config file `configs/train/default.yaml`, which is set to train the model for a minimum of 2000 epochs. The training script will save model checkpoints for the two lowest validation loss values in the `checkpoints/` directory. The path to this checkpoint will need to be updated in the sampling config file for sequence generation, as described in the Model Checkpoint section below.
We also provide a base config for debugging that will use a single sequence for training. You can override the default training script to use this debugging config by calling:

```bash
uv run train.py -cn train_debug
```
### Model Checkpoint
We have uploaded the model checkpoint to [HuggingFace](https://huggingface.co/ssenan/DNA-Diffusion). Below we provide an example script that handles downloading the model checkpoint and loading it for sequence generation.
If you would like to use a model checkpoint generated from the training script above, update the `checkpoint_path` in the config file `configs/sampling/default.yaml` to point to the location of the model checkpoint. By default this is set to `checkpoints/model.safetensors`, so the checkpoint must be saved at that location. Both `pt` and `safetensors` formats are supported. An example of overriding the checkpoint path from the command line is described in the sequence generation section below.
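Since both formats are supported, a sampling script can dispatch on the file extension. The sketch below shows one way to do that (an illustrative helper, not the repository's own code); the actual loading calls are shown as comments because they require `torch` and `safetensors` to be installed:

```python
from pathlib import Path


def checkpoint_format(checkpoint_path: str) -> str:
    """Pick a loader based on the checkpoint file extension
    (illustrative sketch; the repository's dispatch logic may differ)."""
    suffix = Path(checkpoint_path).suffix
    if suffix == ".safetensors":
        # state_dict = safetensors.torch.load_file(checkpoint_path)
        return "safetensors"
    if suffix in {".pt", ".pth"}:
        # state_dict = torch.load(checkpoint_path, map_location="cpu")
        return "pt"
    raise ValueError(f"Unsupported checkpoint format: {suffix}")
```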
### Sequence Generation
#### Generate using Hugging Face Checkpoint
We provide a basic config file for generating sequences with the diffusion model, producing 1000 sequences per cell type. To generate sequences using the trained model, you can run the following command:
```bash
uv run sample_hf.py
```

Base generation uses a guidance scale of 1.0; however, this can be tuned within the config or overridden from the command line. For example:

```bash
uv run sample_hf.py sampling.guidance_scale=7.0 sampling.number_of_samples=1 sampling.sample_batch_size=1
```
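To give intuition for what the guidance scale does, here is a generic sketch of classifier-free guidance (the standard technique this parameter controls in diffusion models). This is an illustrative function, not DNA-Diffusion's actual implementation; the names and list-based representation are assumptions:

```python
def apply_guidance(eps_uncond: list[float], eps_cond: list[float],
                   guidance_scale: float) -> list[float]:
    """Classifier-free guidance: push the model prediction away from the
    unconditional output and toward the cell-type-conditioned output.
    A scale of 1.0 returns the conditional prediction unchanged; larger
    scales amplify the conditional signal."""
    return [u + guidance_scale * (c - u) for u, c in zip(eps_uncond, eps_cond)]
```

Higher scales (such as 7.0 in the override above) make generations adhere more strongly to the requested cell type, typically at the cost of diversity.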
Both of the above examples will generate sequences for all cell types in the dataset. If you would like to generate sequences for a specific cell type, you can do so by specifying the `data.cell_types` parameter on the command line. For example, to generate a sequence for the K562 cell type, you can run:
```bash
uv run sample_hf.py data.cell_types=K562 sampling.number_of_samples=1 sampling.sample_batch_size=1
```
or for both K562 and GM12878 cell types, you can run:
```bash
uv run sample_hf.py 'data.cell_types="K562,GM12878"' sampling.number_of_samples=1 sampling.sample_batch_size=1
```
Cell types can be specified as a comma-separated string or as a list.
#### Generate using Local Checkpoint
If you would prefer to download the model checkpoint from Hugging Face and use it directly, you can run the following command to download the model and save it in the checkpoint directory:
Then you can run the sampling script with the following command:

```bash
uv run sample.py
```
If you would like to override the checkpoint path from the command line, you can do so with the following command (replacing `checkpoints/model.pt` with the path to your model checkpoint):
```bash
uv run sample.py sampling.checkpoint_path=checkpoints/model.pt
```
## Examples
### Training Notebook
Both examples were run on Google Colab using a T4 GPU.
DNA-Diffusion is designed to be flexible and can be adapted to your own data. To use your own data, you will need to follow these steps:
* Prepare your data in the same format as the DHS Index dataset. The data should be a tab-separated text file containing at least the following columns:
  * `chr`: the chromosome of the regulatory element (e.g. chr1, chr2, etc.)
  * `sequence`: the DNA sequence of the regulatory element
  * `TAG`: the cell type of the regulatory element (e.g. K562, hESCT0, HepG2, GM12878, etc.)
  Additional metadata columns, such as start, end, and continuous accessibility, are allowed but not required.
* It's expected that your sequences are 200bp long; however, the model can be adapted to different sequence lengths by modifying the data loading code at `src/dnadiffusion/data/dataloader.py`. You can change the `sequence_length` parameter in the `load_data` function to the desired length, but keep in mind that the original model was trained on 200bp sequences, so results may not be as good with a different length.
* The model is designed to work with discrete class labels for the cell types, so you will need to ensure that your data is in the same format. If you have continuous labels, you can binarize them into discrete classes using a threshold or some other method. This value is contained within the `TAG` column of the dataset.
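The expected file shape and the TAG binarization step can be sketched as follows. This is a toy example: the `binarize_tag` helper, its threshold, and the `low_signal` fallback label are illustrative assumptions, not part of the repository; only the `chr`/`sequence`/`TAG` columns come from the format described above:

```python
import csv
import io


def binarize_tag(accessibility: float, cell_type: str,
                 threshold: float = 0.5) -> str:
    """Toy binarization: assign the cell type tag only when a continuous
    accessibility score clears a threshold (threshold is illustrative)."""
    return cell_type if accessibility >= threshold else "low_signal"


# Two toy 200bp regions in the expected column layout.
rows = [
    {"chr": "chr1", "sequence": "ACGT" * 50, "TAG": binarize_tag(0.9, "K562")},
    {"chr": "chr2", "sequence": "TTGA" * 50, "TAG": binarize_tag(0.1, "K562")},
]

# Write the tab-separated file the dataloader expects.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["chr", "sequence", "TAG"],
                        delimiter="\t")
writer.writeheader()
writer.writerows(rows)
tsv_text = buffer.getvalue()
```

In practice you would write `tsv_text` to a file and point `data.data_path` at it, as shown in the data loading config example below.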
The data loading config can be found at `configs/data/default.yaml`, and you can override the default data loading config by passing the `data` parameter to the command line. For example, to use a custom data file, you can run:
```bash
uv run train.py data.data_path=path/to/your/data.txt data.load_saved_data=False
```
It is important to set `data.load_saved_data=False` so that cached data is not used and the dataset is instead regenerated from the provided data file, ensuring the model is trained on your own data. Regeneration will overwrite the default pkl file, so if you would like to keep the original data, set `data.saved_data_path` to a different path. For example:
```bash
uv run train.py data.data_path=path/to/your/data.txt data.load_saved_data=False data.saved_data_path=path/to/your/saved_data.pkl
```
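The caching behavior these flags control follows a common load-or-regenerate pattern, sketched below. This is an illustrative stand-in, not the repository's actual dataloader; the function name, dict payload, and file handling are assumptions:

```python
import pickle
from pathlib import Path


def load_dataset(data_path: Path, saved_data_path: Path,
                 load_saved_data: bool) -> dict:
    """Return the preprocessed dataset from the pkl cache when allowed,
    otherwise regenerate it from the raw file and overwrite the cache.
    Illustrative sketch of the load_saved_data / saved_data_path behavior."""
    if load_saved_data and saved_data_path.exists():
        return pickle.loads(saved_data_path.read_bytes())
    # Stand-in for the real preprocessing of the raw tab-separated file.
    processed = {"source": str(data_path), "records": []}
    saved_data_path.write_bytes(pickle.dumps(processed))
    return processed
```

With `load_saved_data=False` the cache is always rebuilt from `data_path` (and overwritten); with `load_saved_data=True` an existing cache is reused as-is, which is why a stale cache would silently ignore your new data file.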
## Contributors ✨
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):