If you trained a LoRA on LTX Video 2 and upgraded to LTX 2.3, your weights stopped working. This is not a configuration error. The models are architecturally incompatible, and there is no migration path — you have to retrain. This guide explains what changed, why it matters, and how to retrain efficiently using Grix LoRA Trainer.

What Changed Between LTX 2 and LTX 2.3

The core issue is the VAE — the Variational Autoencoder that encodes video frames into latent space and decodes them back into pixels. Lightricks completely rebuilt the VAE for LTX 2.3, training it on higher-quality data with a redesigned architecture.

A LoRA does not store video frames. It stores adjustments to the model's weights, expressed in the internal latent space that the VAE defines. When the VAE changes, the latent space changes with it. An LTX 2 LoRA encodes your training data in LTX 2's latent coordinate system. When you run that LoRA through LTX 2.3's different latent space, the adjustments point in the wrong directions, and the model produces incoherent output or ignores the LoRA entirely.

There is no numerical conversion. You need to re-encode your training data into LTX 2.3's latent space, which means running the LTX 2.3 VAE over your source videos and training fresh weights from scratch.
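To make the coordinate-system argument concrete, here is a toy pure-Python sketch. This is not real LTX or VAE code: the 2x2 matrices and the rotation are invented for illustration. The point is that a delta expressed in one encoder's latent basis can be orthogonal to, and therefore useless in, a different encoder's basis.

```python
# Toy illustration (not real LTX code): two "VAEs" modeled as 2x2 linear
# encoders. A LoRA delta expressed in VAE-A's latent basis points the wrong
# way when interpreted in VAE-B's latent basis.

def encode(M, v):
    """Apply a 2x2 matrix M (a stand-in for a VAE encoder) to vector v."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def cosine(a, b):
    """Cosine similarity between two 2D vectors: 1.0 = same direction."""
    dot = a[0] * b[0] + a[1] * b[1]
    na = (a[0] ** 2 + a[1] ** 2) ** 0.5
    nb = (b[0] ** 2 + b[1] ** 2) ** 0.5
    return dot / (na * nb)

VAE_A = [[1.0, 0.0], [0.0, 1.0]]   # "LTX 2" latent basis (identity, for clarity)
VAE_B = [[0.0, -1.0], [1.0, 0.0]]  # "LTX 2.3" basis: a 90-degree rotation

pixel_direction = [1.0, 0.0]       # the adjustment the LoRA should express

latent_in_A = encode(VAE_A, pixel_direction)  # what the old LoRA stored
latent_in_B = encode(VAE_B, pixel_direction)  # what the new model expects

# The stored delta is orthogonal to the direction the new basis expects:
print(cosine(latent_in_A, latent_in_B))  # 0.0 -> the old delta is useless here
```

Real VAEs are nonlinear and high-dimensional, which is exactly why no numerical conversion between the two latent spaces exists.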

What You Need to Retrain

The same training dataset that worked for LTX 2 will work for LTX 2.3. The data itself does not change — only the encoding pipeline. If you still have your source videos or images, you have everything you need.
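If your old dataset is sitting in a folder, a few lines of stdlib Python can confirm what you have before uploading. The folder path and the extension lists below are assumptions for illustration; check which formats Grix actually accepts.

```python
from pathlib import Path

# Assumed common formats; verify against what the Grix uploader accepts.
VIDEO_EXTS = {".mp4", ".mov", ".webm"}
IMAGE_EXTS = {".png", ".jpg", ".jpeg"}

def inventory(folder):
    """Count reusable source clips and images in an LTX 2-era dataset folder."""
    files = list(Path(folder).rglob("*"))
    clips = sum(1 for p in files if p.suffix.lower() in VIDEO_EXTS)
    images = sum(1 for p in files if p.suffix.lower() in IMAGE_EXTS)
    return clips, images

# Hypothetical path; point this at your own dataset directory.
clips, images = inventory("datasets/my_ltx2_lora")
print(f"{clips} clips (10 to 50 recommended for motion/character), "
      f"{images} images (20 to 50 for style/identity)")
```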

Training parameters do differ between versions: LTX 2.3 is more sample-efficient, so datasets that needed around 80 clips under LTX 2 often train well with 30 to 50, and recipe defaults for rank, learning rate, and step count are tuned separately for 2.3.

Migrating with Grix LoRA Trainer: No-Code Workflow

The Grix LoRA Trainer handles LTX 2.3 training through a 4-step wizard. You do not need to configure training parameters manually — the recipe system pre-sets rank, learning rate, step count, and resolution based on your use case.

To migrate an LTX 2 LoRA to 2.3:

  1. Recipe selection: Choose the recipe type that matches your original training goal. If you trained a character LoRA in LTX 2, choose the Character recipe; if you trained a style LoRA, choose Style. The recipe sets all parameters automatically for LTX 2.3.
  2. Dataset upload: Upload your original source videos or images. Grix handles captioning automatically using an integrated vision model, so you do not need to caption clips manually. Upload 10 to 50 clips for motion/character training, or 20 to 50 images for style and identity training.
  3. Configuration review: The Grix AI sidekick explains each training setting in plain English. You can see the rank, step count, and estimated training time before launching. No parameter tuning is required for standard use cases.
  4. Launch and download: Training runs on fal.ai GPU infrastructure. When complete, you receive a .safetensors file and a trigger word. The output is compatible with any LTX 2.3 inference endpoint.
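Once the .safetensors file downloads, you can sanity-check it without any ML framework. The safetensors container begins with an 8-byte little-endian header length followed by a JSON header of tensor names, dtypes, and shapes, so a stdlib-only reader can list what the trainer produced. The filename below is hypothetical.

```python
import json
import os
import struct

def lora_tensor_summary(path):
    """Read only the .safetensors header (no tensor data) and map names to shapes.

    Format: 8-byte little-endian header length, then a JSON header mapping
    tensor names to {"dtype", "shape", "data_offsets"}.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return {name: meta["shape"]
            for name, meta in header.items()
            if name != "__metadata__"}  # skip the optional metadata entry

path = "my_character_ltx23.safetensors"  # hypothetical filename from Grix
if os.path.exists(path):
    for name, shape in lora_tensor_summary(path).items():
        print(name, shape)
```

Seeing the expected LoRA tensor names and ranks here is a quick confirmation that the download is intact before you move on to inference testing.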

Testing the Retrained LoRA

After training, test the LoRA in the Grix LoRA Studio before integrating it into a production pipeline. The Studio lets you run inference directly in the browser — paste your LoRA URL or upload the .safetensors file, enter a prompt with your trigger word, and generate a test clip. No separate inference setup required.

Evaluation criteria for a successful migration:

  - The trigger word reliably invokes the trained subject or style.
  - Output quality matches or exceeds what your original LTX 2 LoRA produced.
  - Generations vary across prompts and seeds rather than replicating the training clips.

If the LoRA is weak, increase training steps by 20 percent and retrain. If it overfits (every generation looks identical to your training data), reduce steps or lower the rank to 16.
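The two failure modes above map to a simple rule of thumb, sketched here. The step and rank numbers in the example calls are illustrative, not Grix defaults.

```python
def adjust(steps, rank, result):
    """Rule-of-thumb retrain settings based on a test generation.

    result: "weak" (subject barely appears), "overfit" (every clip mirrors
    the training data), or "good".
    """
    if result == "weak":
        return int(steps * 1.2), rank   # +20% steps, same rank
    if result == "overfit":
        return steps, min(rank, 16)     # cap rank at 16 (or reduce steps)
    return steps, rank                  # good: keep the trained LoRA as-is

print(adjust(2000, 32, "weak"))     # (2400, 32)
print(adjust(2000, 32, "overfit"))  # (2000, 16)
```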

IC-LoRA in LTX 2.3

LTX 2.3 added IC-LoRA support — Identity-Consistent LoRA training that uses an input video as a visual reference during inference. This is a new capability not available in LTX 2. If you are training a character or face LoRA, IC-LoRA gives significantly better consistency than standard LoRA because the model conditions on visual identity at inference time, not just during training.

The Grix LoRA Trainer Face recipe uses IC-LoRA configuration by default. When generating with an IC-LoRA, you provide both a text prompt and a reference image or video clip. The model maintains visual consistency with the reference throughout the generation.

Timeline Comparison: Manual vs. Grix No-Code

To set expectations: a standard LTX 2.3 LoRA retraining run takes approximately 30 to 60 minutes of wall-clock time on the fal.ai GPU infrastructure, depending on dataset size and step count. Hands-on time in Grix is under 15 minutes: uploading files, reviewing settings, and launching the job. The rest is unattended compute time.

For context, the equivalent API-level workflow (local dataset preparation, caption generation, config file editing, running the training script, monitoring loss) typically takes 2 to 4 hours for an experienced practitioner working on a familiar model, and significantly longer for a new model version where parameter choices have not been validated.

Frequently Asked Questions

Can I use my LTX 2 LoRA as a starting point for LTX 2.3 training?

No. The weight structure is incompatible. You cannot initialize LTX 2.3 LoRA training from LTX 2 LoRA weights. You must train from the LTX 2.3 base model.

Do I need the same number of training clips I used for LTX 2?

Not necessarily. LTX 2.3 is generally more sample-efficient than LTX 2 for standard LoRA types. If you used 80 clips in LTX 2, try 30 to 50 in 2.3 first. You can always add more if the result is weak.

Will WaveSpeedAI LTX 2 LoRAs work in LTX 2.3?

No. The VAE incompatibility applies to all LTX 2 LoRAs regardless of which platform trained them — WaveSpeedAI, ComfyUI, local training scripts, or any other pipeline. Any LTX 2 LoRA must be retrained for 2.3.

How long does retraining take with Grix?

Approximately 30 to 60 minutes of compute time depending on dataset size. Hands-on setup time is under 15 minutes. Start a trial at grixai.com/try.

Is the Grix LoRA Trainer only for LTX 2.3?

LTX 2.3 is the current supported model. Wan and Hunyuan support are on the roadmap. See the full Grix LoRA Trainer page for current model support.