Diffusers documentation

Using Diffusers for reinforcement learning

Support for one RL model and its related pipelines is included in the experimental source of Diffusers. More models and examples are coming soon!

Diffuser Value-guided Planning

You can run the model from the paper Planning with Diffusion for Flexible Behavior Synthesis with Diffusers. The script is located in the RL examples folder of the repository.

Or, run this example in Colab.

class diffusers.experimental.ValueGuidedRLPipeline

( value_function: UNet1DModel, unet: UNet1DModel, scheduler: DDPMScheduler, env )

Parameters

  • value_function (UNet1DModel) — A specialized UNet for fine-tuning trajectories based on reward.
  • unet (UNet1DModel) — U-Net architecture to denoise the encoded trajectories.
  • scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded trajectories. The default for this application is DDPMScheduler.
  • env — An environment following the OpenAI gym API to act in. For now, only Hopper has pretrained models.
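
In practice these components are loaded together from a pretrained checkpoint rather than constructed by hand. Below is a minimal loading sketch in the spirit of the RL examples script; the checkpoint name and the d4rl dependency are assumptions taken from the Hopper setup, so verify them against the examples folder:

```python
import d4rl  # noqa: F401  # registers the D4RL environments with gym
import gym

from diffusers.experimental import ValueGuidedRLPipeline

# An environment following the OpenAI gym API; Hopper is currently
# the only task with pretrained models.
env = gym.make("hopper-medium-v2")

# from_pretrained is inherited from DiffusionPipeline and loads the
# value_function, unet, and scheduler components together.
pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32",  # assumed checkpoint name
    env=env,
)
```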

Pipeline for sampling actions from a diffusion model trained to predict sequences of states. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all pipelines (such as downloading or saving, or running on a particular device).

Original implementation inspired by this repository: https://github.com/jannerm/diffuser.
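
A sketch of a planning loop with a loaded pipeline follows. The call signature (an observation plus keyword arguments such as planning_horizon) reflects the experimental pipeline at the time of writing; treat the keyword names as assumptions and consult the RL examples script for the exact interface:

```python
obs = env.reset()
total_reward = 0.0

for _ in range(100):
    # Denoise a batch of candidate trajectories conditioned on the current
    # observation, rank them with the value function, and take the first
    # (de-normalized) action of the highest-value plan.
    action = pipeline(obs, planning_horizon=32)
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        break

print(f"Total reward: {total_reward}")
```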