AI METHODOLOGY
AI is Borges' Library of Babel. Every book written and every book that betrayed itself unwritten. Together they are not wisdom — they are its blander, more persistent obstacle. To find art one does not master the infinite.
One refuses it. This word. Not that word. Where babble was — silence. The noise that was there before and is no longer, and whose absence is the whole point. Only this. Only now. Only once.
Welcome to my methodology.
AI IMAGE METHODOLOGY: MULTI-REFERENCE MODEL
AI VIDEO METHODOLOGY: VIDEO GENERATION MODEL
DIRECTORIAL METHODOLOGY
AI IMAGE METHODOLOGY: MULTI-REFERENCE MODEL
THE ORCHESTRATION PROMPT
The multi-reference model requires a prompt that binds all elements together and specifies how each one should be used. To arrive at this prompt, I execute several steps — notably an Exchange Prompt, the AI Distillation Process, and the Approximative Generations.
The Exchange Prompt is produced when I submit a visual instruction plate to a professional LLM account for analysis (see Plates, below) — translating directorial intention into precise language. AI converts visual thinking into text with an accuracy that manual description cannot match. However, in this stage I build the full technical register using my own knowledge of the desired light orientation, temperature, lens, focus plane, camera, film, and colour palette. Each element is added by explicit command, never incurring style theft.
The AI Distillation Process is a far more intensive informational step. I engage several paid LLMs simultaneously and subject them to a proprietary system of questions, among which I pursue Keller's Brand Equity Pyramid, the Prism method, and the brand's strategic dreams, nightmares, obstacles, threats, and opportunities. Once I have the responses, I distill the information through a second series of highly precise questions, eliminating any hallucinations generated in the first pass. Once the AI Distillation is accomplished, I visually confirm every assertion made by the LLMs — a personal data curation process to eliminate any residual hallucination.
The Approximative Generations are prior generations used as progressive references — guiding the model toward the intended result in accord with the orchestration prompt. A strong finalist generation, properly identified and precisely corrected in text, becomes the final instrument through which the work is achieved.
This process matters because it generates a vocabulary precisely calibrated to the brand, useful both for producing content within the brand's canon and for making me conscious of its particularities. I do not subscribe to automated or generalised processes because they tend to prevent genuine awareness of the work. I use AI to deepen my consciousness of the brand and of my own decisions regarding the campaign.
THE DRAWING
The drawing can be produced in two ways: by hand or on a graphics tablet. A hand-drawn sketch has the advantage of being an unquestionable personal creative reference — I own the drawing and it can only have been made by me. This anchors copyright ownership to the work. I can equally produce a drawing on a computer to accelerate production, then print it, date it, and sign it — establishing from the outset that I have placed my intention on an artistic commitment.

THE DEPTH MAP
Another essential drawing is the depth map. Rather than indicating light as in a pencil or digital sketch, depth is indicated through a greyscale map where the lightest areas are closest and the darkest are farthest. Executed by hand on the computer, it establishes a clear intentionality of depth without passing through 3D modelling or extracting it from another photographer's work — both of which would compromise the originality of the process. If there were to be a wall or a landscape behind the subject, this is not a matter of chance. It is a consensual decision.
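The greyscale convention above can be sketched in a few lines of Python. This is a purely illustrative toy, not part of the plate process itself: a bright foreground "subject" block against a dark far wall, where 255 (lightest) is closest to the camera and 0 (darkest) is farthest.

```python
# Illustrative sketch of the depth-map convention: near = light, far = dark.
# The grid dimensions and values are invented for demonstration only.

WIDTH, HEIGHT = 16, 8

def depth_value(x, y):
    """A far wall everywhere, with a bright 'subject' block in the centre."""
    in_subject = 5 <= x < 11 and 2 <= y < 7
    return 230 if in_subject else 40   # closest areas lightest, farthest darkest

depth_map = [[depth_value(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]

# Render as ASCII shades so the spatial intent is visible without an image viewer.
shades = " .:#@"
for row in depth_map:
    print("".join(shades[min(v * len(shades) // 256, len(shades) - 1)] for v in row))
```

In a real plate, the same logic is drawn by hand on the computer rather than computed; the sketch only makes the greyscale grammar explicit.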

THE PLATES
All information bundles are organised into plates. There is a character plate, a texture plate, an environment/space plate, an exchange plate, and a limited construction/minor correction plate. This is perhaps the most modifiable part of the process — generation is produced by feeding these plates to the model, and if, for example, the decision is made to change the character's clothing, that element is changed independently of all others.
CHARACTER PLATE
We generate the AI character that corresponds to the ideal casting, proxy actor, or chosen presenter for whom we have rights — produced full body in frontal, three-quarter, profile, and back views. We also produce a grid for the face showing frontal, profile, three-quarter, and overhead angle views, giving us the jawline. The character is dressed in the clothing required for the piece. If the clothing is made from a special material, this is specified in the text prompt.

TEXTURE PLATE
The texture plate is one of the most important elements in the fashion world because it allows us to inform the model of particular materials. Textures can equally be environmental. Macro photography of the material is used to achieve better model adaptation.

ENVIRONMENT OR SPACE PLATE
This tells us where we are. It can include a small map or a terrain elevation, but its primary function is to indicate the exact environment in which the character exists. It carries strong texture information, but its most important contribution is spatial: how much air exists between each element? Are we in an enclosed space or an open desert? What is the sky like? Are we near a cliff edge, or inside a house in the desert? Perhaps we decide we are in an abstract space and show the model a reality that we then radically crop, so it is visible only in the reflection of a mirror or through a small window. The light entering through a window is not the same when there is snow outside. A profusion of detail in this method is essential to arrive at genuinely particular generations.

LIMITED CONSTRUCTION OR CORRECTION PLATE
The limited construction plate instructs the model to add specific elements to a sketch and alter it. Not every image requires an intensive plate process, and not every image requires every plate. For singular images it is simpler to use a process called red pen — placing a sketch at the centre and using arrows to indicate elements added to the composition. If the character is not being repeated and this is only a scene modification, this plate establishes a record of that correction or minor construction.
THE EXCHANGE PLATE
This plate details elements such as camera focus position, camera height relative to the ground shown in a profile elevation, and the position of the light or sun relative to the subject. It should be noted that, as of today, no image generation model can read this plate with full accuracy. However, Claude can describe the intentions of the exchange plate in precise language. This plate also contains the depth map as well as the pencil or digital drawing. It is important because it allows us to arrive at a very precise instruction for exactly what we want — something that would otherwise be extremely difficult to express in pure words. When asked to create a prompt, the LLM hierarchises it with a more ideal syntax, or even in JSON.
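As a purely hypothetical illustration of what an LLM-hierarchised exchange prompt might look like once serialised to JSON, the sketch below structures camera, light, and reference information. Every field name and value here is my own assumption for demonstration, not the schema of any actual image model or of the plates themselves.

```python
import json

# Hypothetical structured prompt distilled from an exchange plate.
# All keys, units, and file names are illustrative assumptions.
exchange_prompt = {
    "camera": {
        "height_above_ground_m": 1.4,   # read from the profile elevation
        "focus_plane": "subject's eyes",
        "lens_mm": 85,
    },
    "light": {
        "source": "sun",
        "position_relative_to_subject": "back-left, 30 degrees above horizon",
        "temperature_k": 5600,
    },
    "references": {
        "depth_map": "plate_depth.png",   # greyscale: lightest = closest
        "drawing": "plate_sketch.png",
    },
}

print(json.dumps(exchange_prompt, indent=2))
```

The value of such a structure is hierarchy: each directorial decision occupies an explicit, addressable slot instead of dissolving into a paragraph of prose.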


MULTIPLE VALIDATION PROCESS
Drawings, depth maps, and plates are bundles of documented choice — made before the machine is consulted, traceable after the work is done. They can be mine alone. They can belong to a crew.
The make-up artist's vision of a face. The costume designer's understanding of how a fabric moves. The cinematographer's instinct about where the light should die. Each of these becomes a plate. Each plate enters the generation. The more precisely a human intention is documented, the more precisely it survives the process.
Curation and guidance remain irreplaceable — not as control, but as authorship. This method is as intimate as a single director working alone, and as open as a full production that decides, together, what the image must become.
AI VIDEO METHODOLOGY: VIDEO GENERATION MODEL
SPATIOTEMPORAL PROMPT — IMAGE IN
Once the desired image has been achieved, it can be used alone as a starting point for video generation. We instruct something to begin happening to the subject of that image — an event that introduces no variation in the forms already present. For example, the image may be set in a desert and one may decide it begins to snow. What the video model will not do with precision is replace the desert we see with one in which snow has already fallen. The video model takes the image as a strict starting point.
Regarding emotions: if the character changes emotional state and we want to leave open the degree of interpretation, exploring how far or in what direction the change can go, it is better not to create an OUT image. Any event we want to explore requires that openness. I say openness because, without an OUT image, the model may hallucinate more. Restrictive as it may seem, it is therefore better to plan the micro-emotions.
SPATIOTEMPORAL PROMPT — IMAGE IN & OUT
When we have a definitive image created through the plate process, we can create another moment assisted by the final image. This image becomes a strict matrix telling the model to reproduce it exactly but vary a specific aspect in space-time.
If the character moves to another space and we have that other image, we can directly create the character in that other space and use that image as the OUT image.
If the character experiences emotions, ideally we should be able to show an IN and OUT image with the level of emotional register change we are requesting. The prompt also requires genuine actor direction.
SPATIOTEMPORAL VIDEO PROMPT — IMAGE IN & OUT
Once we have our IN and OUT images, we produce a prompt describing what happens between those two images. Here we have eliminated the model's capacity to vary what occurs because we have defined an output point. Our prompt must therefore be in perfect coordination with what image A and image B narrate. The video model will not invent a snowing desert if there is no falling snow in the final image.
If the model receives a character in one space and then in another, the spatiotemporal prompt explains how the character has moved from one point to the other. These prompts can be written explaining what happens second by second. If the character must act, the performance must be present in those IN and OUT frames.
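A second-by-second spatiotemporal prompt of this kind might be structured as in the hypothetical sketch below. The file names, fields, and timeline are all invented for illustration; the one real constraint the sketch demonstrates is that no step may contradict what the IN and OUT frames narrate.

```python
# Hypothetical IN/OUT spatiotemporal prompt, described second by second.
# File names and field names are illustrative assumptions only.
spatiotemporal_prompt = {
    "image_in": "frame_desert_dry.png",      # strict starting point
    "image_out": "frame_desert_snowing.png", # defined output point
    "timeline": [
        {"second": 0, "action": "character stands still; first flakes enter frame"},
        {"second": 1, "action": "snowfall thickens; character looks up"},
        {"second": 2, "action": "character's expression settles into the OUT frame"},
    ],
}

# Every timeline step must be supported by the two frames: the model is not
# asked to invent snow, because falling snow is present in the OUT image.
for step in spatiotemporal_prompt["timeline"]:
    print(step["second"], "-", step["action"])
```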


UPSCALING
The images I create are generated directly in 4K. However, video models still work in Full HD, so the process requires upscaling through Topaz. This upscaling is executed once a first selection of generated videos has been made.
EDITING & SOUND
With the 4K content, I assemble a 4K edit in DaVinci Resolve Studio. This goes hand in hand with a spotting session, in which I identify the sonic opportunities within the work. I then source sounds from a sound bank with long-term rights or use those provided by the production. I position the sounds first, then execute the panning and mixing.
COLOUR GRADING
Also in DaVinci Resolve, I execute a colour correction pass, since generations tend to carry very strong contrast. Topaz helps by providing a greater number of pixels, but on top of that process a colour curation pass is still needed to refine the final product.
DELIVERY AND REVISIONS
Creating a controlled simulated audiovisual production is substantial work if seriously attempted. Fortunately I have a process that accommodates changes through its modular elements. The ideal process begins with a simple hand-drawn storyboard. Revisions are inevitable, necessary, and negotiable.
I have chosen to share my process because many professionals mistakenly believe they can generate results as many times as necessary until something good emerges. Or worse, they curate images in an automated way, having an LLM judge their images against parameters before they themselves have looked. This is not serious work. Careful work requires a high degree of volition: a rejection of the mediocrity that all models tend toward, and a high degree of restraint. To make this difference visible and legally sound, I deliver each modular part of my process together with the final work.
DIRECTORIAL METHODOLOGY
MY PERSONAL WORK
I originally worked as a storyboard artist, animatic director, and roughman. AI arrived and changed the dynamic of things. It turns out I am also a director: I direct modest mid-length action and science fiction films. I am a storyboard artist because I understand cinema cameras, acting coaching, lighting, and filming myself when necessary. I know how to produce clean sound, foley, or sound FX, among the many other things an independent director must know how to do. To this day, storyboarding has silently benefited from all of these aspects of myself. And yet AI compels everybody to move forward, to take the place one can actually take and hold.
I leave the door open to those who wish to give me the opportunity to direct or to create images according to my own judgment. The advantage of giving me that opportunity is having AI work that is publishable: because I have a working system of my own, a document for every step of the way, and a process in which everything is altered before and after each generation, the resulting audiovisual work is mine.
In that case, there is a plate that comes before all other plates.
THE INTENTIONS PLATE
Before creating a single serious drawing, I create the characters in the most basic form possible and execute a storyboard with three things per image:
The scheme of the image in its most abstract form.
The scheme of the movement within the image.
A small note that says what I feel.
This plate does not feed any LLM. But I need it to know what I am actually doing — and it is the strongest proof that everything else, however complex, came after.
This is a secret. But since you have read this far, I will tell you. In reality, everything — all of the above — comes after the small note.