In my AI workflows, I like to reduce randomness to a minimum. For years, I’ve relied on a mood generator - a hybrid process using image-to-image and ControlNet - to guide both geometry and light. Until recently, this was possible only with SDXL, because FLUX simply wasn’t delivering the consistency I needed. But after months of testing, I had a breakthrough: FLUX is now my new standard.

The process is straightforward but powerful:
1_ Load an image for geometry control
2_ Write the prompt
3_ Load a second image to control the light

The outcome? Consistently nailing the reference mood, every single time. This clip shows how pairing a mood-generating workflow with a style reference can take architectural visualization - and creative AI work in general - to the next level.
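The post doesn’t share the actual ComfyUI graph, but for anyone who wants to experiment with the same three-step idea outside ComfyUI, here is a minimal sketch in diffusers. It assumes a Canny FLUX ControlNet for the geometry image and FLUX Redux for the light/mood image - both are assumptions, not the author’s confirmed setup, and the file names are placeholders:

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline, FluxPriorReduxPipeline
from diffusers.utils import load_image

# Step 1: geometry control. A Canny ControlNet for FLUX.1-dev is assumed here;
# any FLUX ControlNet (depth, canny, ...) slots in the same way.
controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Canny", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

# Step 3: light/mood reference. Redux encodes the second image into
# conditioning embeddings that steer the overall look of the output.
redux = FluxPriorReduxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Redux-dev", torch_dtype=torch.bfloat16
).to("cuda")

geometry_ref = load_image("geometry_ref.png")  # placeholder file names
light_ref = load_image("light_ref.png")

# Produces prompt_embeds / pooled_prompt_embeds derived from the light image.
redux_out = redux(light_ref)

# Note: in this minimal form the Redux embeddings stand in for the written
# prompt (step 2); blending a text prompt with the image reference would mean
# loading text encoders into the Redux prior pipeline.
image = pipe(
    **redux_out,
    control_image=geometry_ref,
    controlnet_conditioning_scale=0.6,  # how strongly geometry is enforced
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("mood_result.png")
```

This only gestures at the idea - a production workflow like the one in the clip would likely tune the conditioning scales per image and stage the process across multiple passes.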
Have you thought about sharing or selling some of your workflows? And I agree: a multistage process that builds the image up in steps is essential for amazing results.
God mode 🚀
Thanks for sharing, Maurizio, Flux is moving so fast
👌
wow!
Very interesting, but I don't understand how you force FLUX to replicate the lighting in your workflow. IPAdapter? IC-Light? Redux? I'm looking for this kind of trick without using Redux or IPAdapter, which are a bit slow on my RTX 4090. Very curious to see it working if you want to share it somewhere :-) Thanks