How to Build an Automated AI Creative Pipeline with Python, ComfyUI, Flux, LoRA, & Adobe Illustrator

Introduction

Building a reliable pipeline that blends Flux‑powered ComfyUI with your custom LoRA model doesn’t have to feel overwhelming. In this post, you’ll learn how to set up everything in VS Code—from workspace layout and virtual environments to scripting, vectorization, packaging, and optimization. Follow these 13 steps, and you’ll have a fully wired system ready for voice or text invocation by your AI agent.

We’ll build a tightly integrated, end-to-end clipart-set pipeline in VS Code: a ComfyUI-flux flow powered by your custom LoRA generates the raw PNGs, a JSX script then vectorizes them in Illustrator and exports AI, EPS, SVG, and high-res PNG versions, and a small Python module batch-renames each file to your naming scheme and bundles everything, illustrations plus a Help file, into a neatly packaged ZIP. All the code lives under C:/AI_Agent/, dependencies are isolated in two virtual environments, and the entire workflow can be kicked off with a single Python call, typed or spoken, so you get a polished graphics set with zero manual tedium.

High‑Level Overview

  1. Workspace setup
  2. Virtual environments
  3. Dependencies
  4. GPU & CUDA check
  5. Clone ComfyUI‑flux
  6. Install your LoRA
  7. Build & save the Flux flow
  8. Helper module update
  9. Scaffold other Python modules
  10. JSX vectorization script
  11. End‑to‑end pipeline script
  12. Test & debug
  13. Optimize, log & document

Detailed Build Plan

1. Set up your VS Code workspace

  • Create a folder at C:/AI_Agent/ and open it in VS Code.
  • Inside, add these subfolders (you can also script their creation, as sketched after this list):
    • scripts/ – for your JSX automation files
    • modules/ – for Python helper modules
    • flows/ – to store saved Flux flow JSONs
    • models/ – for LLM checkpoints and LoRA files
    • outputs/ – for raw, vector, renamed, and zipped results
  • In .vscode/settings.json, set python.defaultInterpreterPath to the Python interpreter of your main virtual environment (ai_agent_env, created in step 2).
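
If you prefer to script the scaffolding rather than create the folders by hand, a minimal sketch using only the standard library (folder names taken from the list above) could look like this:

from pathlib import Path

BASE = Path("C:/AI_Agent")
for sub in ["scripts", "modules", "flows", "models", "outputs"]:
    # parents=True also creates C:/AI_Agent itself if it doesn't exist yet
    (BASE / sub).mkdir(parents=True, exist_ok=True)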

2. Create & activate your virtual environments

cd C:/AI_Agent
python -m venv ai_agent_env
python -m venv deepseek_env
.\ai_agent_env\Scripts\Activate.ps1

Switch to deepseek_env when you work on code_generator.py.

3. Declare & install your dependencies

  • In requirements.txt, list at least:

torch
transformers
bitsandbytes
langchain
openai-whisper
SpeechRecognition
sounddevice
comfyui-cli
requests
comtypes
pillow
python-docx
python-pptx

  • Then, in each venv, run:

pip install -r requirements.txt

4. Verify your GPU & CUDA setup

Run in Python or terminal:

import torch
print(torch.cuda.is_available(), torch.cuda.get_device_name(0))

Or simply:

nvidia-smi

Confirm your RTX 5080 (16 GB VRAM) is up and running.
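
If you want more detail than a bare True/False, this optional check also prints the CUDA build your PyTorch was compiled against and the card’s total VRAM (all standard PyTorch calls):

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"CUDA build: {torch.version.cuda}")
    print(f"Total VRAM: {props.total_memory / 1024**3:.1f} GB")
else:
    print("CUDA not available - check your NVIDIA driver and PyTorch build")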

5. Clone & configure the Flux branch of ComfyUI

cd C:/AI_Agent
git clone --branch flux https://github.com/comfyanonymous/ComfyUI.git ComfyUI-flux
cd ComfyUI-flux
pip install -r requirements.txt

If you spot requirements-flux.txt, install it too.

6. Install your trained LoRA

Copy your .safetensors or .pt file into:

C:/AI_Agent/ComfyUI-flux/models/Lora/

7. Build & save your Flux flow

  1. Launch ComfyUI‑flux:

cd C:/AI_Agent/ComfyUI-flux
python main.py


  2. In the UI, assemble a flow that:
    • Loads your base checkpoint
    • Applies your LoRA node with the desired weight
    • Sends output to your text‑to‑image sampler
  3. Save it as, for example, flows/clipart_with_lora.json.

8. Update your ComfyUI helper module

In modules/comfyui_module.py, add:

from comfyui import Engine
import os

def generate_images(prompts: list[str], output_dir: str,
                    flow_path: str, lora_name: str, lora_weight: float = 0.8):
    os.makedirs(output_dir, exist_ok=True)
    engine = Engine(
        comfyui_dir="C:/AI_Agent/ComfyUI-flux",
        flow=flow_path,
        loras={lora_name: lora_weight}
    )
    for i, prompt in enumerate(prompts, start=1):
        img = engine.generate(prompt=prompt)
        img.save(os.path.join(output_dir, f"raw_{i}.png"))
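
Before wiring up the full pipeline, it can be worth smoke-testing this helper on its own. The flow path and LoRA name below are just the example values reused in step 11; swap in your own:

from modules.comfyui_module import generate_images

generate_images(
    prompts=["Cute cat icon"],
    output_dir="C:/AI_Agent/outputs/smoke_test",
    flow_path="flows/clipart_with_lora.json",
    lora_name="my_trained_lora",
    lora_weight=0.8,
)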

9. Scaffold your remaining Python modules

  • modules/illustrator_module.py → vectorize_and_export(input_png, output_dir) (calls your JSX via COM; see the sketch after step 10).
  • modules/file_ops.py → rename_outputs(folder, prefix) to batch‑rename vector exports.
  • modules/package_module.py → package_with_help(folder, zip_path, help_file) to copy the help file and zip everything (minimal sketches of this and rename_outputs follow this list).
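
The last two helpers are plain Python, so minimal sketches are easy to write down. Treat the bodies below as illustrative assumptions rather than the final implementation; they assume the vector exports only differ by extension and that a flat ZIP layout is acceptable:

import os
import shutil
import zipfile

def rename_outputs(folder: str, prefix: str):
    # Rename every export to <prefix>_<n>.<ext>, numbering files per extension.
    counters = {}
    for name in sorted(os.listdir(folder)):
        ext = os.path.splitext(name)[1].lower()
        counters[ext] = counters.get(ext, 0) + 1
        new_name = f"{prefix}_{counters[ext]}{ext}"
        os.rename(os.path.join(folder, name), os.path.join(folder, new_name))

def package_with_help(folder: str, zip_path: str, help_file: str) -> str:
    # Copy the Help file into the folder, then zip the whole folder flat.
    shutil.copy(help_file, folder)
    if not zip_path.lower().endswith(".zip"):
        zip_path += ".zip"
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in os.listdir(folder):
            zf.write(os.path.join(folder, name), arcname=name)
    return zip_path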

10. Write your JSX vectorization script

In scripts/vectorize_and_export.jsx, make sure it will:

  • Open the PNG
  • Image‑trace and expand
  • Save as .ai and .eps, then export .svg and high‑res .png
  • Close the document without saving further changes
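
On the Python side, modules/illustrator_module.py (from step 9) has to pass the input and output paths to this JSX and run it. One way to do that, assuming Illustrator is installed and exposes its COM automation interface (comtypes is already in requirements.txt), is to prepend two variable declarations to the script text and run it with Illustrator’s DoJavaScript call; your JSX then reads inputPNG and outputDir instead of hard-coded paths. Check the Illustrator scripting reference for your version before relying on this sketch:

import os
import comtypes.client

JSX_PATH = r"C:\AI_Agent\scripts\vectorize_and_export.jsx"

def vectorize_and_export(input_png: str, output_dir: str):
    os.makedirs(output_dir, exist_ok=True)
    # Connect to (or launch) Illustrator via COM.
    app = comtypes.client.CreateObject("Illustrator.Application")
    # Forward slashes keep ExtendScript string literals free of escaping issues.
    header = (
        f'var inputPNG = "{input_png.replace(os.sep, "/")}";\n'
        f'var outputDir = "{output_dir.replace(os.sep, "/")}";\n'
    )
    with open(JSX_PATH, encoding="utf-8") as f:
        script = header + f.read()
    app.DoJavaScript(script)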

11. Compose the end‑to‑end pipeline

Create ai_agent_general.py in your workspace root:

from modules.comfyui_module import generate_images
from modules.illustrator_module import vectorize_and_export
from modules.file_ops import rename_outputs
from modules.package_module import package_with_help
import glob, os

def process_clipart_set(prompts, workdir, help_file, prefix):
    raw_dir    = os.path.join(workdir, "raw")
    vector_dir = os.path.join(workdir, "vector")

    # 1. AI → raw PNGs
    generate_images(prompts, raw_dir,
                    flow_path="flows/clipart_with_lora.json",
                    lora_name="my_trained_lora",
                    lora_weight=0.8)

    # 2. PNG → vector exports
    for png in glob.glob(f"{raw_dir}/*.png"):
        vectorize_and_export(png, vector_dir)

    # 3. Batch rename
    rename_outputs(vector_dir, prefix)

    # 4. Zip with Help file
    zip_path = os.path.join(workdir, prefix)
    return package_with_help(vector_dir, zip_path, help_file)

if __name__ == "__main__":
    prompts   = ["Cute cat icon", "Smiling sun icon", ...]
    workdir   = r"C:\AI_Agent\outputs\my_icon_set"
    help_file = r"C:\AI_Agent\Help.pdf"
    prefix    = "my_icon_set"
    print("Created:", process_clipart_set(prompts, workdir, help_file, prefix))

12. Test & debug in VS Code

  • Add a launch.json entry for ai_agent_general.py.
  • Hit F5 and watch your folders fill up:
    • outputs/raw/ → LoRA‑enhanced PNGs
    • outputs/vector/ → vectorized AI/EPS/SVG/PNG exports
    • Final ZIP in outputs/

13. Optimize, log & document

  • Tweak your JSX trace settings for crisper vectors.
  • If VRAM spikes, lower the generation resolution or batch size, or quantize your models to INT8.
  • Wrap each stage in try/except blocks and use Python’s logging for clear diagnostics (a minimal sketch follows this list).
  • Update your README.md with these exact steps and note when to switch between ai_agent_env and deepseek_env.
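
For the logging bullet, a minimal sketch using only the standard library (stage names and the log path are placeholders) might look like this; in ai_agent_general.py you could then wrap each call, e.g. run_stage("generate", generate_images, prompts, raw_dir, ...):

import logging

logging.basicConfig(
    filename="C:/AI_Agent/outputs/pipeline.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def run_stage(name, func, *args, **kwargs):
    # Run one pipeline stage, logging success or the full traceback on failure.
    logging.info("Starting stage: %s", name)
    try:
        result = func(*args, **kwargs)
        logging.info("Finished stage: %s", name)
        return result
    except Exception:
        logging.exception("Stage failed: %s", name)
        raise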

Conclusion

You’ve now laid the groundwork for a smooth, Flux‑powered clipart pipeline in VS Code. It scales from simple icon sets to large batches and hooks right into your AI agent for voice or text control. Dive in, tweak those settings, and let your creativity flow—your next batch of vector clipart is just a prompt away.