CASE STUDY
AI CHARACTER CONSISTENCY
Character consistency in AI video: from LoRA training to narrative control
This case study presents a practical approach to character consistency in AI-generated video using a custom-trained LoRA and a controlled generation pipeline in ComfyUI.
The workflow was developed in stages, prioritizing character identity and lighting control. A fixed base image defined the visual style, while reference images of a real person were used to train a custom LoRA that anchored identity. Once consistency was achieved in still images, a Qwen-based workflow generated the first and last frames to ensure temporal coherence. Any identity drift in those frames was then corrected by reprocessing them through a Wan image workflow that applied the identity LoRA.
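The reprocessing step depends on deciding which frames have drifted from the reference identity. A minimal sketch of such a check, assuming a face-embedding model is available: `embed_face`, the 3-D toy vectors, and the `threshold` value below are all illustrative stand-ins, not part of the original workflow, which does not name a specific drift metric.

```python
"""Sketch: flagging identity drift in generated anchor frames.

Illustrative only -- a real pipeline would compare embeddings from a
face-recognition network; here the embeddings are toy vectors so the
drift check itself is runnable.
"""
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def frames_to_reprocess(ref_embedding, frame_embeddings, threshold=0.35):
    """Return indices of frames whose embedding has drifted too far from
    the reference identity; these would be re-run through the identity
    LoRA image workflow before being used as video anchors.
    The threshold is an assumed, illustrative value."""
    return [
        i for i, emb in enumerate(frame_embeddings)
        if cosine(ref_embedding, emb) < threshold
    ]

# Toy example with 3-D "embeddings".
ref = [1.0, 0.0, 0.0]
frames = [[0.9, 0.1, 0.0],   # close to reference -> keep
          [0.0, 1.0, 0.0]]   # drifted -> reprocess
print(frames_to_reprocess(ref, frames))  # -> [1]
```

In practice the threshold would be tuned against the specific embedding model; the structure of the check stays the same.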
The corrected frames then served as anchors in a Wan 2.1 first-and-last-frame video workflow, supported by an additional LoRA that reinforced the character's personality.
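Applying the identity LoRA inside a ComfyUI graph amounts to splicing a `LoraLoader` node between the checkpoint loader and its downstream consumers. The following is a minimal, hedged sketch of that rewiring against ComfyUI's API-format workflow JSON; the node IDs, file names, and strength values are illustrative, not taken from the actual case-study graph.

```python
"""Sketch: injecting an identity LoRA into a ComfyUI API-format workflow.

Assumes the standard ComfyUI JSON graph (numbered nodes, inputs given as
[source_node_id, output_index]) and the built-in LoraLoader node, whose
outputs are MODEL (index 0) and CLIP (index 1).
"""

def inject_lora(workflow, lora_name, strength=1.0):
    """Insert a LoraLoader after the checkpoint loader and rewire every
    node that consumed the checkpoint's MODEL/CLIP outputs."""
    ckpt_id = next(
        nid for nid, node in workflow.items()
        if node["class_type"] == "CheckpointLoaderSimple"
    )
    lora_id = str(max(int(nid) for nid in workflow) + 1)
    workflow[lora_id] = {
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": lora_name,      # identity LoRA file (illustrative name)
            "strength_model": strength,
            "strength_clip": strength,
            "model": [ckpt_id, 0],
            "clip": [ckpt_id, 1],
        },
    }
    # Rewire only MODEL (0) and CLIP (1) links; leave VAE (2) untouched.
    for nid, node in workflow.items():
        if nid == lora_id:
            continue
        for key, val in node["inputs"].items():
            if isinstance(val, list) and val and val[0] == ckpt_id and val[1] in (0, 1):
                node["inputs"][key] = [lora_id, val[1]]
    return lora_id

# Minimal toy graph: checkpoint -> sampler.
wf = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "base_model.safetensors"}},
    "2": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "seed": 42}},
}
new_id = inject_lora(wf, "identity_lora.safetensors", strength=0.9)
```

The same splice works whether the graph targets still images (the Wan image correction pass) or the first-and-last-frame video pass, since both consume MODEL/CLIP streams in the same way.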
The final output was generated from this controlled setup, resulting in a video with stable identity, lighting, and character presence.