Deformation-Recovery Diffusion Model (DRDM):
Instance Deformation for Image Manipulation and Synthesis

University of Oxford
*/†Indicates Equal Contribution / Corresponding Author
Preprint

Abstract

In medical imaging, diffusion models have shown great potential for synthetic image generation. However, these models often lack an interpretable connection between the generated and existing images and can hallucinate anatomically implausible content. To address these challenges, we propose a novel diffusion-based generative model built on deformation diffusion and recovery. This model, named the Deformation-Recovery Diffusion Model (DRDM), diverges from traditional score/intensity- and latent-feature-based approaches by modelling morphological change through deformation fields rather than direct image synthesis. This is achieved by introducing a topology-preserving deformation field generation method, which randomly samples and integrates a set of multi-scale Deformation Vector Fields (DVFs). DRDM is trained to recover unreasonable deformation components, thereby restoring each randomly deformed image to a realistic distribution. These innovations enable the generation of diverse and anatomically plausible deformations, enhancing data augmentation and synthesis for downstream tasks such as few-shot learning and image registration. Experimental results on cardiac MRI and pulmonary CT show that DRDM creates diverse, large (deformation scale over 10% of image size), and high-quality (folding ratio below 1%) deformation fields. Further experimental results on downstream tasks, 2D image segmentation and 3D image registration, show significant improvements resulting from DRDM, demonstrating the potential of our model to advance image manipulation and synthesis in medical imaging and beyond.


Examples of deformation diffusion-and-recovery in pulmonary CT scans (Green/Blue: Original/Deformed shape)

Algorithm 1: Training DRDM

Input: Training set of source-domain images Dsrc ⊂ ℝ^{H×W×D}
Output: DRDM weights θ

1. Initialize the DRDM parameters θ;
2. While the loss ℒdiff has not converged:

// randomly sample the data
2.1. Sample the original images: I0 ∈ Dsrc;
2.2. Sample time steps: t ∼ U(0, T) ∩ ℤ;
2.3. Sample random DVFs ψt and DDFs ψt:1;
// compute the prediction and the loss
2.4. Deform original images from I0 to It;
2.5. Use DRDM Dθ to estimate the recovering deformation ψ̂t;
2.6. Update θ with a gradient-descent step on ℒdiff;

3. Return model weights θ.
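The training loop above can be sketched in a few lines of numpy. This is a minimal 2D toy version under several stated assumptions: the multi-scale DVF sampling is stand-in nearest-neighbour-upsampled Gaussian noise, the DDF is accumulated additively rather than by true composition, warping is nearest-neighbour, and `model` is a placeholder for the DRDM network; none of these specifics are claimed to match the actual implementation.

```python
import numpy as np

def sample_dvf(shape, scale, mag, rng):
    """Stand-in for one random multi-scale DVF: coarse Gaussian noise,
    nearest-neighbour upsampled to full resolution (2, H, W)."""
    h, w = shape[0] // scale, shape[1] // scale
    coarse = rng.standard_normal((2, h, w)) * mag
    return np.kron(coarse, np.ones((scale, scale)))

def warp(img, ddf):
    """Pull-back warp with nearest-neighbour sampling: out(x) = img(x + ddf(x))."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    sy = np.clip(np.rint(ys + ddf[0]).astype(int), 0, H - 1)
    sx = np.clip(np.rint(xs + ddf[1]).astype(int), 0, W - 1)
    return img[sy, sx]

def training_step(I0, model, T, rng):
    """One DRDM training iteration (steps 2.1-2.6), toy 2D version."""
    t = int(rng.integers(1, T + 1))                   # 2.2 sample a time step
    psi = [sample_dvf(I0.shape, 4, 0.5, rng) for _ in range(t)]
    psi_t1 = np.sum(psi, axis=0)                      # 2.3 DDF (additive approximation)
    It = warp(I0, psi_t1)                             # 2.4 deform I0 -> It
    psi_hat = model(It, t)                            # 2.5 predicted recovery step
    return np.mean((psi_hat + psi[-1]) ** 2)          # 2.6 L2 loss: psi_hat should undo psi_t
```

A zero-predicting `model` gives a loss equal to the mean squared magnitude of the last DVF step, which is the natural sanity check for this skeleton.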
Algorithm 2: Instance Deformation via DRDM

Input: Images for deformation I0 ∈ ℝ^{H×W×D}
Output: Generated DDF φ

1. Import the DRDM parameters θ from Algorithm 1;
2. Set the deformation level T' ≤ T;
// deformation diffusion process
3. Sample a random DDF ψT':1;
4. Set the initial DDF for deformation recovery: φ ← ψT':1;
5. Deform original images from I0 to IT';
6. Set the initial image for deformation recovery: I ← IT';
// deformation recovery process
7. For t = T', T'-1, ..., 1:
7.1. Use DRDM Dθ to estimate the recovering deformation ψ̂t;
7.2. Update the deformation: φ ← ψ̂t ∘ φ;
7.3. Deform original images from I0 to It-1;
8. Return the generated deformation φ.
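The recovery loop of Algorithm 2 can be sketched as follows. This is a hedged 2D numpy skeleton: `compose` implements one plausible pull-back composition convention, (ψ ∘ φ)(x) = ψ(x) + φ(x + ψ(x)), warping is nearest-neighbour, and `model` again stands in for the trained DRDM; the true composition and interpolation scheme may differ.

```python
import numpy as np

def warp_channels(field, ddf):
    """Warp each channel of a (C, H, W) array by displacement field ddf (2, H, W),
    nearest-neighbour pull-back: out(:, x) = field(:, x + ddf(x))."""
    H, W = field.shape[1:]
    ys, xs = np.mgrid[0:H, 0:W]
    sy = np.clip(np.rint(ys + ddf[0]).astype(int), 0, H - 1)
    sx = np.clip(np.rint(xs + ddf[1]).astype(int), 0, W - 1)
    return field[:, sy, sx]

def compose(psi, phi):
    """Assumed composition convention: (psi o phi)(x) = psi(x) + phi(x + psi(x))."""
    return psi + warp_channels(phi, psi)

def generate_ddf(I0, model, phi_init, T_prime):
    """Algorithm 2, steps 4-8: recover from the random DDF phi_init step by step."""
    phi = phi_init.copy()                      # step 4: initial DDF psi_{T':1}
    for t in range(T_prime, 0, -1):            # step 7
        It = warp_channels(I0[None], phi)[0]   # current deformed image I_t
        psi_hat = model(It, t)                 # 7.1 estimated recovery step
        phi = compose(psi_hat, phi)            # 7.2 update the DDF
    return phi                                 # step 8
```

With a zero-predicting model, the returned DDF is exactly the initial random DDF, since composing with a zero step is the identity under this convention.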
Algorithm 3: Data Augmentation (for segmentation) via DRDM

Input: Images and labels Dtgt ⊂ ℝ^{H×W×D} × ℝ^{H×W×D×C}
Output: Deformed images and labels Daug ⊂ ℝ^{H×W×D} × ℝ^{H×W×D×C}

1. Import the DRDM parameters θ from Algorithm 1;
2. Set a set of deformation levels ℑ ⊂ ℤ+ ∩ [1, T];
3. Initialize the output set Daug ← ∅;
// Sample a pair of image and label
4. ForEach (I0, L0) ∈ Dtgt:

// Sample a deformation level number
4.1. ForEach T' ∈ ℑ:

4.1.1. Generate DDF φ using Algorithm 2;
4.1.2. Deform the sampled image: φ(I0);
4.1.3. Deform the sampled label: φ(L0);
4.1.4. Append the deformed image and label into the output set: Daug ← Daug ∪ {(φ(I0), φ(L0))};

5. Return the output set Daug.
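The key property of Algorithm 3 is that the image and its label are warped by the same DDF, so the augmented pair stays anatomically consistent. A minimal 2D numpy sketch, assuming nearest-neighbour warping (which keeps integer label maps categorical) and taking the per-level DDFs as given inputs rather than regenerating them with Algorithm 2:

```python
import numpy as np

def warp(arr, ddf):
    """Nearest-neighbour pull-back warp; label-preserving for integer masks."""
    H, W = arr.shape
    ys, xs = np.mgrid[0:H, 0:W]
    sy = np.clip(np.rint(ys + ddf[0]).astype(int), 0, H - 1)
    sx = np.clip(np.rint(xs + ddf[1]).astype(int), 0, W - 1)
    return arr[sy, sx]

def augment(dataset, ddfs):
    """Algorithm 3: warp every (image, label) pair with every generated DDF."""
    aug = []
    for I0, L0 in dataset:                              # step 4
        for phi in ddfs:                                # 4.1: one DDF per level T'
            aug.append((warp(I0, phi), warp(L0, phi)))  # 4.1.2-4.1.4, same phi for both
    return aug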
Algorithm 4: Data Synthesis (for registration) via DRDM

Input: Images Dtgt ⊂ ℝ^{H×W×D}
Output: Paired images & DDF Dsyn ⊂ ℝ^{H×W×D} × ℝ^{H×W×D} × ℝ^{H×W×D×3}

1. Import the DRDM parameters θ from Algorithm 1;
2. Set a set of deformation-level pairs ℑ ⊂ ℤ+ × ℤ+;
3. Initialize the output set Dsyn ← ∅;
// Sample an image
4. ForEach I0 ∈ Dtgt:

// Sample deformation level numbers
4.1. ForEach (T'aug, T'syn) ∈ ℑ:

// Create the moving image
4.1.1. Generate DDF φaug based on (I0,T'aug) using Algorithm 2;
4.1.2. Deform the sampled image: Imv ← φaug(I0);
// Create the fixed image and DDF
4.1.3. Generate DDF φsyn based on (Imv,T'syn) using Algorithm 2;
4.1.4. Deform the sampled image: Ifx ← φsyn ∘ φaug(I0);
4.1.5. Append the deformed images and the DDF: Dsyn ← Dsyn ∪ {(Imv, Ifx, φsyn)};

5. Return the output set Dsyn.
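Algorithm 4 can be sketched the same way. Under the pull-back convention used here, deforming Imv by φsyn realises φsyn ∘ φaug applied to I0 (up to nearest-neighbour rounding), so each triplet carries a known ground-truth DDF between its moving and fixed images. The `gen_ddf` callable is a hypothetical stand-in for Algorithm 2:

```python
import numpy as np

def warp(img, ddf):
    """Nearest-neighbour pull-back warp: out(x) = img(x + ddf(x))."""
    H, W = img.shape
    ys, xs = np.mgrid[0:H, 0:W]
    sy = np.clip(np.rint(ys + ddf[0]).astype(int), 0, H - 1)
    sx = np.clip(np.rint(xs + ddf[1]).astype(int), 0, W - 1)
    return img[sy, sx]

def synthesise(images, level_pairs, gen_ddf, rng):
    """Algorithm 4: build (moving, fixed, ground-truth DDF) triplets."""
    out = []
    for I0 in images:                              # step 4
        for T_aug, T_syn in level_pairs:           # 4.1
            phi_aug = gen_ddf(I0, T_aug, rng)      # 4.1.1 augmentation DDF
            I_mv = warp(I0, phi_aug)               # 4.1.2 moving image
            phi_syn = gen_ddf(I_mv, T_syn, rng)    # 4.1.3 synthesis DDF
            I_fx = warp(I_mv, phi_syn)             # 4.1.4 fixed image
            out.append((I_mv, I_fx, phi_syn))      # 4.1.5 supervised pair + DDF
    return out
```

Only φsyn is stored: φaug merely diversifies the moving image, while φsyn is the registration target a downstream model is trained to predict.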

BibTeX


    @article{zheng2024deformation,
      title={Deformation-Recovery Diffusion Model (DRDM): Instance Deformation for Image Manipulation and Synthesis},
      author={Zheng, Jian-Qing and Mo, Yuanhan and Sun, Yang and Li, Jiahua and Wu, Fuping and Wang, Ziyang and Vincent, Tonia and Papie{\.z}, Bart{\l}omiej W},
      journal={arXiv preprint arXiv:2407.07295},
      doi = {10.48550/arXiv.2407.07295},
      url = {https://doi.org/10.48550/arXiv.2407.07295},
      keywords = {Image Synthesis, Generative Model, Data Augmentation, Segmentation, Registration},
      year={2024}
    }