Hunyuan Video 1.5
Editor's Choice: Tencent's 8.3B-parameter open-source video model and the current permissiveness leader.
Verdict
If you want one honest answer to "best uncensored AI video generator," Hunyuan Video 1.5 is it. Released in late 2025, the 8.3B-parameter open model is less filtered at the base level than Wan 2.2 and significantly cheaper to run (roughly 14 GB of VRAM for the Q8 GGUF build), and Reddit consensus through early 2026 has consistently rated its motion and anatomy among the best in the open-source tier.
What makes Hunyuan 1.5 special is what Tencent didn't do: they didn't train out anatomical concepts the way competing closed models have. The result is that a permissive LoRA is almost optional rather than mandatory. r/unstable_diffusion users report getting usable NSFW output with base prompts that Wan 2.2 would fight them on. Image-to-video is the mode most people use, and it holds character identity noticeably better than Wan across multi-second clips.
The catch is ComfyUI. There is no one-click web app for Hunyuan 1.5 (yet); you're installing ComfyUI, downloading the GGUF weights from Hugging Face, and wiring up the default workflow. Expect about two hours for a first install on a 4070 Ti or better, or fifteen minutes on a fresh Runpod image. Once set up, you're generating 5-second clips at 720p in 60-180 seconds per generation, depending on quantization.
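The install steps above can be sketched as a short script. This is a dry run that only prints each command (drop the `run` wrapper to execute for real), and the Hugging Face repo id is a placeholder, not a real upload — check Hugging Face and Civitai for the actual GGUF release before running:

```shell
#!/bin/sh
# Dry-run sketch of a first-time Hunyuan Video 1.5 setup in ComfyUI.
set -eu

COMFY_DIR="${COMFY_DIR:-$HOME/ComfyUI}"
MODEL_REPO="TODO/hunyuan-video-1.5-gguf"   # placeholder repo id, not real
QUANT="Q8_0"                               # Q8 ~14 GB VRAM; use Q5 for ~10 GB

run() {
  # Print each step instead of executing it; remove this wrapper to run.
  echo "+ $*"
}

# 1. Install ComfyUI and its Python dependencies
run git clone https://github.com/comfyanonymous/ComfyUI "$COMFY_DIR"
run pip install -r "$COMFY_DIR/requirements.txt"

# 2. Fetch only the quantized weights you need into ComfyUI's model folder
run huggingface-cli download "$MODEL_REPO" --include "*$QUANT*.gguf" \
    --local-dir "$COMFY_DIR/models/diffusion_models"

# 3. Launch, then load the default Hunyuan workflow in the browser UI
run python "$COMFY_DIR/main.py"
```

The Q8/Q5 split is the usual GGUF trade: smaller quantizations fit tighter VRAM budgets at some quality cost.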
Pros
- Most permissive base model on this list — no LoRA required for many NSFW prompts
- Lower VRAM requirement than Wan 2.2 (14 GB for Q8, 10 GB for Q5)
- Strong image-to-video identity preservation
- Completely free — weights on Hugging Face, no subscription
- Active LoRA ecosystem on Civitai (realism, anime, specific styles)
Cons
- Requires a ComfyUI install — no hosted web UI from Tencent
- Still needs a discrete GPU or Colab Pro for comfortable use
- LoRA ecosystem is younger than Wan's
Pricing
- Free: Open-source weights, unlimited self-hosted use
- Cloud GPU (optional): ~$0.25-0.60/hour on Runpod or Vast.ai
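Combining the review's figures — $0.25-0.60/hour rental and 60-180 seconds per generation — gives a rough per-clip cost range on rented hardware:

```shell
# Back-of-envelope cloud cost per 5-second clip, using the figures above.
awk 'BEGIN {
  low  = 0.25 *  60 / 3600   # cheapest hourly rate, fastest generation
  high = 0.60 * 180 / 3600   # priciest hourly rate, slowest generation
  printf "$%.4f to $%.3f per clip\n", low, high
}'
# -> $0.0042 to $0.030 per clip
```

Even the worst case is about three cents per clip, so hourly rental beats per-generation credit pricing by a wide margin if you batch your sessions.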
Best For: Enthusiasts who want the most permissive open-source video quality in 2026 and don't mind a ComfyUI setup afternoon.
