Use Gradient Clipping, Layer Normalization, or Batch Normalization to stabilize LSTM training and prevent exploding gradients.
Here is a minimal TensorFlow/Keras sketch you can refer to; the vocabulary size, embedding dimension, and layer widths below are illustrative placeholders that you should adjust to your own dataset:

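```python
import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

# Illustrative hyperparameters -- adjust to your own text dataset
vocab_size = 10000   # assumed vocabulary size
embed_dim = 128      # assumed embedding dimension

model = models.Sequential([
    layers.Embedding(input_dim=vocab_size, output_dim=embed_dim),

    # Batch Normalization: normalizes the embedded inputs to control variance
    layers.BatchNormalization(),

    # First LSTM returns the full sequence so the next LSTM can be stacked on top
    layers.LSTM(64, return_sequences=True),

    # Layer Normalization: stabilizes the LSTM activations before the next layer
    layers.LayerNormalization(),

    # Second LSTM returns only the final hidden state
    layers.LSTM(32),
    layers.LayerNormalization(),

    layers.Dense(1, activation="sigmoid"),
])

# Gradient clipping (clipnorm) combined with a smaller learning rate
optimizer = optimizers.Adam(learning_rate=1e-4, clipnorm=1.0)
# Alternatively, clip each gradient element: optimizers.Adam(learning_rate=1e-4, clipvalue=0.5)

model.compile(optimizer=optimizer, loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10, batch_size=64)
```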
In the above code, we are using the following approaches:
- Gradient Clipping (clipnorm or clipvalue): Caps gradient norms or individual values so a single step cannot produce extreme weight updates (a manual clipping sketch follows this list).
- Layer Normalization (LayerNormalization): Stabilizes the LSTM activations before they are fed to the next layer.
- Batch Normalization (BatchNormalization): Normalizes the embedded inputs across the batch to control their variance.
- Smaller Learning Rate: Keeps individual weight updates small so they do not destabilize training.
- return_sequences=True in stacked LSTMs: Ensures each subsequent LSTM layer receives the full output sequence, which helps distribute gradient flow in deeper networks.
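For reference, gradient clipping can also be applied by hand in a custom training loop. The following is a hypothetical sketch using tf.clip_by_global_norm; the `model`, `loss_fn`, and `optimizer` arguments are assumed to come from a setup like the snippet above:

```python
import tensorflow as tf

@tf.function
def train_step(x_batch, y_batch, model, loss_fn, optimizer, clip_norm=1.0):
    """One training step with explicit global-norm gradient clipping."""
    with tf.GradientTape() as tape:
        predictions = model(x_batch, training=True)
        loss = loss_fn(y_batch, predictions)
    grads = tape.gradient(loss, model.trainable_variables)
    # Rescale all gradients jointly if their global norm exceeds clip_norm
    grads, _ = tf.clip_by_global_norm(grads, clip_norm)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```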
Hence, applying gradient clipping and normalization techniques effectively prevents exploding gradients when training LSTM models on text sequences.