Hi all,
This thread will cover Hoon Kim’s and my workshop on AI-powered virtual production. We’re looking forward to showing people the power of this combination!
I’ll put some intro links here. Our doc system at docs.lightcraft.pro has an LLM front end (currently Claude), so you can ask it natural-language questions and it will generate full answers based on our docs & tutorials!
Jetset intro:
Blender workflow intro:
The workshop will cover an approach we think combines the best of the virtual production and AI worlds:
- Using a quickly modeled ‘block out’ 3D scene to provide consistent scale and framing reference
- Shooting greenscreen shots with Jetset, Lightcraft’s iOS virtual production app
- Rendering a ‘clean’ background pass with the rough 3D model (but correct camera tracking)
- Restyling the first frame of that clean background pass in AI (using Midjourney’s Retexture in this case, but many tools can do this)
- Using that first restyled frame to drive a video-to-video model (Runway Gen-3 in this case) to regenerate the background pass
- Re-lighting and compositing the foreground greenscreen pass in Beeble.ai to best match the newly created background.
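For the ‘clean’ background render step, here’s a minimal sketch of what that looks like as a Blender Python script, assuming a scene already set up with the tracked camera from Jetset as the active camera. The frame range, frame rate, and output path are placeholder assumptions, not part of the workflow itself:

```python
# Sketch: render the rough "block out" scene as a clean background pass
# from inside Blender, using the tracked camera imported from Jetset.
# Assumes the scene and active camera are already configured.
import bpy

scene = bpy.context.scene

# Render over the full tracked shot
scene.frame_start = 1
scene.frame_end = 240  # assumption: a 10 s shot at 24 fps

# EXR preserves dynamic range, which helps the later AI restyling step
scene.render.image_settings.file_format = 'OPEN_EXR'
scene.render.filepath = '//renders/background_pass_'  # hypothetical output folder

# Render the animation to the filepath above
bpy.ops.render.render(animation=True)
```

You’d run this from Blender’s scripting tab (or `blender -b scene.blend -P render_pass.py` from the command line) rather than a standalone Python interpreter.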
I think people will be shocked at how simple this process can be while still maintaining control over the results. Please let us know what you’d like us to cover in this thread!