My First LoRA Training — And How It Changed the Way I Think About AI
This was my first LoRA training.
I didn’t approach it purely as a technical experiment. I approached it as a question:
If I compress my own visual language into a diffusion model, what remains mine?
There’s a common assumption around AI image generation — that once the system runs, human agency fades. That the machine “creates” and the artist merely triggers.
What I discovered was the opposite.
The more I understood the system, the more deliberate my authorship became.
Starting With a Coherent Visual Language
I selected nine hand-drawn portraits from my own series.
They shared a clear structural logic:
• Large abstract color-wash backgrounds
• Soft, pencil-like rendering
• High-contrast topographic contour lines
• Controlled but expressive color palettes
I wasn’t aiming for dataset diversity.
I was aiming for stylistic consistency.
If the internal language of the series was strong enough, the model would not learn subjects — it would learn structure.
Building and Training Locally
The training ran locally on:
• Apple M1 Max (64GB RAM)
• ComfyUI
• Z-Image base model
• Musubi Tuner for LoRA training
Final training configuration:
• 1000 steps
• Learning rate: 0.0001
• LoRA rank: 16
• fp16 precision
• ~53 hours training time
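For reference, the configuration above can be collected into a small sketch that also derives the average per-step cost from the reported totals. The dictionary keys are illustrative; Musubi Tuner's actual configuration format may differ.

```python
# Hypothetical summary of the training run described above.
# Key names are illustrative, not Musubi Tuner's real config schema.
config = {
    "steps": 1000,
    "learning_rate": 1e-4,
    "lora_rank": 16,
    "precision": "fp16",
}

# ~53 hours for 1000 steps on the M1 Max works out to roughly:
total_hours = 53
seconds_per_step = total_hours * 3600 / config["steps"]
print(f"{seconds_per_step:.0f} s/step")  # ≈191 s per step
```

That per-step cost is why each configuration change was deliberate: a failed run costs days, not minutes.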
It required environment tuning, precision adjustments, and model compatibility fixes. Nothing about it was one-click.
But that technical depth was important.
It made the process intentional.
Designing the Dataset to Teach Style, Not Content
All image captions were nearly identical.
Only the subject label changed: woman / man / child.
Everything else remained stable.
This forced the model to extract:
• Line behavior
• Color layering
• Facial abstraction
• Background logic
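The captioning strategy can be sketched as a fixed stylistic template in which only the subject label varies. The wording below is illustrative, not the exact captions used in the dataset:

```python
# Sketch of the caption strategy: one stable template, one variable slot.
# The phrasing is illustrative; only the structure mirrors the approach.
TEMPLATE = (
    "portrait of a {subject}, soft pencil-like rendering, "
    "high-contrast topographic contour lines, "
    "large abstract color-wash background"
)

subjects = ["woman", "man", "child"]
captions = [TEMPLATE.format(subject=s) for s in subjects]

for c in captions:
    print(c)
```

Because every caption shares the same style vocabulary, the model cannot attribute the style to any one subject and is pushed to encode it as structure instead.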
The LoRA was named: sportrait-laz
And when I first applied it during generation, I wasn’t looking for replication.
I was looking for structural resonance.
The First Outputs
At LoRA strength values around 1.2–1.3, the influence became visible.
The generated images began to show:
• Dominant pencil-like line work
• Contour mapping along facial planes
• Layered atmospheric backgrounds
It wasn’t copying specific drawings.
It was reconstructing tendencies embedded across the series.
That moment changed how I saw the system.
It wasn’t autonomous.
It was reactive to the structure I had curated.
Where Agency Actually Lives
The real work started after training.
Control emerged through:
• LoRA strength (how dominant the trained style is)
• CFG (how strongly the model follows prompts)
• Prompt structure
• Iterative comparison

Lower CFG + moderate LoRA created openness and variation.
Higher CFG + stronger LoRA sharpened structure, sometimes too much.
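The balancing act between CFG and LoRA strength amounts to a small comparison grid: every CFG value paired with every strength value, each combination generated and evaluated side by side. A minimal sketch, with illustrative values (the text only confirms that LoRA strengths around 1.2–1.3 made the style visible):

```python
# Sketch of a CFG × LoRA-strength comparison grid.
# The specific values are assumptions for illustration.
from itertools import product

cfg_values = [3.0, 5.0, 7.0]        # lower → more openness and variation
lora_strengths = [1.0, 1.2, 1.3]    # higher → trained style dominates

grid = list(product(cfg_values, lora_strengths))
for cfg, strength in grid:
    print(f"cfg={cfg}, lora_strength={strength}")
```

Nine images per prompt is enough to see where the style holds and where it over-sharpens, without an unmanageable number of runs.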
Prompt phrasing became a compositional tool:
• “line-driven portrait”
• “drawing-first composition”
• “high-contrast luminous contour lines”
Excluding:
• photorealistic
• smooth shading
• airbrushed
prevented stylistic drift.
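The prompt logic above can be sketched as a pair of strings: style terms in the positive prompt, drift-inducing terms pushed into the negative prompt. The phrases come from the lists above; the helper function itself is hypothetical:

```python
# Sketch of the prompt construction: style terms in, drift terms out.
# build_prompts is a hypothetical helper, not part of any real API.
def build_prompts(subject: str) -> tuple[str, str]:
    positive = ", ".join([
        f"{subject} portrait",
        "line-driven portrait",
        "drawing-first composition",
        "high-contrast luminous contour lines",
    ])
    negative = ", ".join([
        "photorealistic",
        "smooth shading",
        "airbrushed",
    ])
    return positive, negative

pos, neg = build_prompts("woman")
print(pos)
print(neg)
```

Treating the negative prompt as a standing constraint, rather than retyping it per run, is what kept the outputs from drifting back toward the base model's photographic defaults.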
The workflow became:
Generate → Evaluate → Adjust → Repeat.
This was not passive triggering.
It was art direction.
Sculpting the Image Space
Once line dominance stabilized, I explored chromatic control:
• Blue-dominant backgrounds
• Muted violet atmospheres
• Desaturated tonal fields
• Adjusted brightness levels
Small textual changes led to significant visual shifts.
Some variations failed.
Some revealed unexpected potential.
Iteration wasn’t correction — it was refinement.
What Changed in My Understanding of AI
Before this project, AI felt like a generator.
After this training, it felt like a structured environment.
Agency did not disappear. It relocated.
It exists in:
• Dataset curation
• Caption strategy
• Parameter balancing
• Iterative selection
• Visual comparison
• Constraint setting
The model doesn’t replace authorship.
It responds to it.
The outputs reflect the coherence — or incoherence — of the input logic.
What This First Training Taught Me
Style can be compressed into a model if the dataset is coherent.
Parameter control is a creative instrument.
Iteration is where authorship becomes visible.
The system amplifies structural decisions.
Human agency is not diminished — it becomes procedural.
This was my first LoRA training.
It didn’t make AI feel more powerful.
It made artistic control feel more explicit.
And that changed how I think about working with these systems entirely.
If you’re interested in a deeper breakdown of the technical setup, parameter decisions, visual comparisons, and the full art direction process behind this project, I’ve documented everything in detail in the essay published in the Writings section of my website. The essay traces the training configuration, iteration logic, and selection process step by step.
The work is still ongoing: I’m currently expanding the portrait series, creating additional drawings to strengthen dataset consistency, and refining the LoRA through further training runs to improve stylistic coherence and facial variation. This isn’t a finished system — it’s an evolving visual language.