You can deal with vanishing or exploding gradients when training deep generative models (GANs, VAEs) by following the steps below:
- Gradient Clipping: Cap gradients to a maximum norm.
- Proper Weight Initialization: Use initialization techniques like Xavier or He initialization.
- Batch Normalization: Normalize layer inputs to stabilize training.
- Adaptive Optimizers: Use optimizers like Adam or RMSprop to handle gradient scaling.
- Smaller Learning Rates: Reduce the learning rate to control gradient updates.
- Spectral Normalization (GANs): Normalize weights to constrain the Lipschitz constant.
- Skip Connections (VAEs): Use residual connections to mitigate gradient issues.
Here are code references for the steps above:
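
Gradient clipping (a minimal PyTorch sketch; the model, optimizer, and loss are placeholders, and the `clip_grad_norm_` call is the relevant step):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784))  # placeholder generator
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def training_step(batch):
    optimizer.zero_grad()
    loss = model(batch).pow(2).mean()  # placeholder loss for illustration
    loss.backward()
    # Rescale all gradients so their combined L2 norm never exceeds 1.0.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()
```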
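
Weight initialization (a PyTorch sketch; the small decoder is a placeholder network):

```python
import torch.nn as nn

def init_weights(module):
    # Xavier (Glorot) suits tanh/sigmoid layers; He (Kaiming) suits ReLU layers.
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        nn.init.zeros_(module.bias)
    elif isinstance(module, nn.Conv2d):
        nn.init.kaiming_normal_(module.weight, nonlinearity="relu")

decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))  # placeholder
decoder.apply(init_weights)  # applied recursively to every submodule
```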
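
Batch normalization (a sketch of a small encoder/discriminator stack with BatchNorm after each linear layer; layer sizes are arbitrary):

```python
import torch.nn as nn

encoder = nn.Sequential(
    nn.Linear(784, 512),
    nn.BatchNorm1d(512),  # normalizes activations across the batch
    nn.ReLU(),
    nn.Linear(512, 256),
    nn.BatchNorm1d(256),
    nn.ReLU(),
    nn.Linear(256, 64),
)
```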
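
Adaptive optimizers (a sketch; the generator and discriminator are placeholders, and `betas=(0.5, 0.999)` is a commonly used GAN setting rather than a universal rule):

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(100, 784), nn.Tanh())  # placeholder G
discriminator = nn.Sequential(nn.Linear(784, 1))           # placeholder D

# Adam adapts the step size per parameter, which helps when gradient scales vary.
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
# RMSprop is another adaptive choice, often seen in WGAN-style critics.
d_opt = torch.optim.RMSprop(discriminator.parameters(), lr=5e-5)
```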
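
Smaller learning rates (a sketch; the network is a placeholder, and the exact values are starting points to tune, not prescriptions):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 784), nn.Tanh())  # placeholder network
# A conservative learning rate keeps individual parameter updates small.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
# Optionally decay it further so late-stage updates stay stable
# (call scheduler.step() once per epoch in the training loop).
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
```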
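
Spectral normalization for GAN discriminators (a sketch assuming 3-channel 32x32 inputs; layer widths are illustrative):

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Wrapping each layer keeps its largest singular value near 1,
# which bounds the discriminator's Lipschitz constant.
discriminator = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    spectral_norm(nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1)),
    nn.LeakyReLU(0.2),
    nn.Flatten(),
    spectral_norm(nn.Linear(128 * 8 * 8, 1)),  # 8x8 spatial size from 32x32 inputs
)
```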
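
Skip connections in a VAE encoder/decoder (a sketch of a generic residual block; channel count and kernel size are arbitrary):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """The identity path (x + ...) gives gradients a direct route
    past the convolutional layers, reducing vanishing-gradient issues."""

    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # skip connection
```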
By applying the techniques above, you can manage vanishing or exploding gradients when training deep generative models such as GANs and VAEs.