Sulphur 2 · Hugging Face
Open-weights release · 2026-05-03

Sulphur 2 —
Open-Source 9B Video Model

A community-distributed text-to-video and image-to-video model built on LTX 2.3. Run cinematic generation locally with open weights, distill LoRAs, and ComfyUI workflows.

Parameters
9B
Base
LTX 2.3
Modalities
T2V · I2V
Weights
Open

Advanced video generation, fully local

Four pillars define how Sulphur 2 plugs into a modern open-source video stack — every component is downloadable, inspectable, and replaceable.

T2V

Text-to-Video

Generate continuous motion from a natural-language prompt. Sulphur 2 inherits LTX 2.3’s temporal consistency stack, so subjects, lighting, and camera motion stay stable across the full clip.

I2V

Image-to-Video

Animate a single still frame into a coherent shot. Useful for turning concept art, character renders, or storyboards into living reference footage without re-keying every frame.

OSS

Open Weights

The full 9B-parameter base model is published on Hugging Face under SulphurAI/Sulphur-2-base, alongside distill LoRAs that reduce sampling steps for faster iteration on local hardware.

UI

ComfyUI Ready

Ships with official ComfyUI workflows and a prompt enhancer. Drag the JSON into ComfyUI, load the weights, and you have a working text-to-video and image-to-video pipeline.

Technical overview

Sulphur 2 sits on the LTX 2.3 video diffusion stack and adds community fine-tuning on roughly 125,000 video clips. The release bundle is self-contained: base safetensors, distillation LoRAs for faster sampling, ComfyUI workflows, and a prompt enhancer that pre-processes prompts before they hit the generator.
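The exact behaviour of the bundled prompt enhancer is defined by the release itself; as a rough illustration of what this kind of pre-processing step does, here is a minimal sketch (the function name and the specific style cues are invented for illustration, not taken from the release):

```python
def enhance_prompt(prompt: str) -> str:
    """Illustrative prompt pre-processor: appends cinematic style cues
    to a bare prompt before it reaches the video generator."""
    style_cues = [
        "cinematic lighting",
        "smooth camera motion",
        "high detail",
    ]
    # Only append cues the user has not already written themselves.
    missing = [cue for cue in style_cues if cue not in prompt.lower()]
    if not missing:
        return prompt
    return f"{prompt.rstrip('. ')}. {', '.join(missing)}."

print(enhance_prompt("a fox running through snow"))
```

A real enhancer may rewrite the prompt with a language model rather than append fixed cues, but the pipeline position is the same: it sits between the user's text and the generator.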

The result is a model that behaves like LTX 2.3 in tooling but ships ready to run, with reproducible defaults and a single canonical Hugging Face repository for downloads.

Parameters
9B
Base architecture
LTX 2.3 video diffusion
Modalities
Text-to-Video, Image-to-Video
Training clips
~125,000
Release
Sulphur-2-base, 2026-05-03
License model
Open weights (see model card)
Recommended VRAM
24–32 GB
Bundled assets
Distill LoRAs, ComfyUI workflows, prompt enhancer
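The 24–32 GB recommendation is roughly consistent with back-of-the-envelope arithmetic for a 9B-parameter model held in bf16 (the activation and auxiliary-model overhead is an assumption for illustration, not a published spec):

```python
params = 9e9             # 9B parameters
bytes_per_param = 2      # bf16/fp16: 2 bytes per weight
weights_gib = params * bytes_per_param / 1024**3

print(f"weights alone: {weights_gib:.1f} GiB")
# On top of the weights, the VAE, text encoder, and activation memory
# for the latent video volume push peak usage higher, which is why a
# 24 GB card is close to the practical floor for the full base model.
```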

What people are building

Open-weights video generation unlocks use cases that closed APIs cover unevenly: local iteration, private inputs, and customisable pipelines. Common patterns seen in the Sulphur 2 community include:

Independent creators

Storyboard a concept film without a render farm. Iterate on cinematic looks in minutes on a single workstation GPU.

Short-form social media

Generate stylised b-roll for vertical video, reaction overlays, and animated thumbnails directly from a written brief.

Game cinematics

Draft cutscene shots and environment fly-throughs before committing engine time. I2V keeps art direction in the loop using existing keyframes.

Concept-art animation

Turn static character renders, environment paintings, or moodboards into short motion studies for pitch decks and pre-production.

Educational explainers

Produce supporting visuals for tutorials, lecture clips, and explainer videos when stock footage does not match the subject closely enough.

Visual research

Probe how a 9B open-weights video model handles motion priors, camera behaviour, and prompt adherence relative to closed APIs.

Research foundation

Sulphur 2 builds on the LTX line of video diffusion models, which combine spatiotemporal transformer blocks with a learned motion prior. The Sulphur 2 release adds an additional fine-tuning stage on roughly 125,000 video clips, then packages distillation LoRAs so the same base can run at lower step counts for faster iteration.
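The practical effect of step-count distillation is easy to quantify, since sampling cost scales roughly linearly with the number of denoising steps. The step counts and per-step cost below are illustrative assumptions, not published defaults for Sulphur 2:

```python
base_steps = 40         # assumed sampling steps for the base model
distilled_steps = 8     # assumed steps with a distill LoRA applied
seconds_per_step = 1.5  # assumed per-step cost on a workstation GPU

base_time = base_steps * seconds_per_step
distilled_time = distilled_steps * seconds_per_step
print(f"{base_time:.0f}s -> {distilled_time:.0f}s per clip "
      f"({base_time / distilled_time:.0f}x faster)")
```

Under these assumptions a distill LoRA turns a one-minute clip generation into a ~12-second one, which is what makes rapid local iteration feasible.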

  • Maintainer
    SulphurAI · FusionCow

    Single-maintainer community release. Updates land on the Hugging Face repository.

  • Distribution
    Hugging Face · SulphurAI/Sulphur-2-base

    Weights, distill LoRAs, ComfyUI workflows, and prompt enhancer published in a single repository.

  • Community traction
    ~158k monthly downloads (May 2026)

    Places Sulphur 2 among the most-downloaded open-weights video models on Hugging Face at release.

Frequently asked questions

Quick answers about the model, hardware, and how to run it locally.

What is Sulphur 2?

Sulphur 2 (model id SulphurAI/Sulphur-2-base) is a community-distributed 9B-parameter video generation model built on top of LTX 2.3. It supports both text-to-video and image-to-video and was released on 2026-05-03 with open weights, distill LoRAs, ComfyUI workflows, and a prompt enhancer.

How is it different from LTX 2.3?

LTX 2.3 is the underlying base model. Sulphur 2 is a community release built on that base, additionally trained on roughly 125,000 video clips and distributed as a self-contained bundle (weights + LoRAs + ComfyUI workflows). If you already run LTX 2.3 locally, Sulphur 2 plugs into the same workflow with minimal changes.

What hardware do I need to run Sulphur 2?

A discrete GPU with 24–32 GB of VRAM runs the base safetensors cleanly. 32 GB+ is recommended for higher resolutions or longer clips. Apple Silicon and lower-VRAM cards can still run distilled or quantised variants via ComfyUI, but expect longer per-clip times.
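Why quantised variants fit smaller cards comes down to bytes per weight. A rough comparison for the 9B parameter count (whether the release actually ships 8-bit or 4-bit variants is something to verify on the model card):

```python
params = 9e9  # 9B parameters

footprints = {}
for label, bits in [("bf16", 16), ("int8", 8), ("nf4", 4)]:
    # Weight storage only; runtime activations add to this.
    footprints[label] = params * bits / 8 / 1024**3

for label, gib in footprints.items():
    print(f"{label:>5}: ~{gib:.1f} GiB for the weights")
```

A 4-bit variant brings the weights under ~5 GiB, which is how lower-VRAM cards and Apple Silicon can run the model at all, at the cost of quality and speed.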

Is the model free to download?

Yes. The weights, LoRAs, and ComfyUI workflows are published on Hugging Face at SulphurAI/Sulphur-2-base and can be downloaded without payment. Always check the current model card for the up-to-date license and usage terms.

Can I use the generated videos commercially?

Commercial use depends on the model card license and the upstream LTX 2.3 terms in effect at the time you download. Review both before publishing outputs in a commercial product. This site does not provide legal advice.

How do I run Sulphur 2 in ComfyUI?

Install a recent ComfyUI build with video-diffusion nodes, download the Sulphur 2 weights and the bundled workflow JSON from Hugging Face, drop the JSON into ComfyUI, point the loader nodes at the downloaded weights, then queue the prompt. The prompt enhancer is included as part of the workflow.
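Assuming the standard huggingface_hub API and a conventional ComfyUI directory layout (the target directory below is an assumption; check where your ComfyUI install expects checkpoints), the download step could look like:

```python
from pathlib import Path

REPO_ID = "SulphurAI/Sulphur-2-base"  # repository named on the model page

def bundle_dir(comfy_root: str) -> Path:
    """Directory where ComfyUI conventionally looks for checkpoint weights."""
    return Path(comfy_root).expanduser() / "models" / "checkpoints" / "sulphur-2"

def download_bundle(comfy_root: str) -> Path:
    """Fetch weights, distill LoRAs, and workflow JSON in one snapshot."""
    from huggingface_hub import snapshot_download  # pip install huggingface_hub
    target = bundle_dir(comfy_root)
    snapshot_download(repo_id=REPO_ID, local_dir=str(target))
    return target

# download_bundle("~/ComfyUI")  # then drag the workflow JSON into ComfyUI
```

From there the bundled workflow JSON wires the loader nodes; you only need to point them at the downloaded files if the paths differ.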

Does it support image-to-video as well as text-to-video?

Yes. The base release ships with both pipelines. Text-to-video takes a prompt only; image-to-video conditions the generation on a starting frame so the output preserves composition, character identity, and styling from the source image.

Where is the official model page?

The canonical source is the Hugging Face repository SulphurAI/Sulphur-2-base. This site (sulphur2.online) is an independent informational reference and links out to that repository for downloads and the latest model card.

Try Sulphur 2

The canonical source for weights and workflows is the official Hugging Face repository.