Project 2: Images Created Using the ComfyUI Interface

glegrady
Posts: 227
Joined: Wed Sep 22, 2010 12:26 pm

Project 2: Images Created Using the ComfyUI Interface

Post by glegrady » Sun Jan 04, 2026 6:40 pm


Due dates:
Project 1: January 29, 2026
Project 2: February 5, 2026
Project 3: February 12, 2026

Project Details: Create a series of images that demonstrate an understanding of the options available in the ComfyUI interface. As part of the work, discuss how you arrived at your results. We will do new images each week. For future works, just add a new reply.

Tips:
Organize Your Models: Keep your models, VAEs, and other resources well-organized in the models directory. This makes it easier to find and load them into your workflows.

Save Your Workflows: Save your node setups as JSON files. This allows you to reuse and share complex workflows without having to recreate them each time. (A short example of re-running a saved workflow follows these tips.)

Experiment with Settings: Don’t hesitate to try different configurations in your nodes. Adjust parameters like strength, guidance scale, and others to see how they affect your results.
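Following up on the "Save Your Workflows" tip, here is a minimal sketch of re-queuing a saved workflow through ComfyUI's local HTTP API. It assumes the default server address (127.0.0.1:8188) and a workflow exported in API format; the file name, node id, and the commented-out seed tweak are placeholder assumptions.

```python
# Minimal sketch: re-running a workflow that was exported in ComfyUI's
# API format. Assumes the default local server at http://127.0.0.1:8188;
# "my_workflow_api.json" is a placeholder file name.
import json
import urllib.request

with open("my_workflow_api.json", "r") as f:
    workflow = json.load(f)

# Optional: tweak a parameter before re-queuing. The node id "3" and the
# "seed" input are assumptions that depend on your exported graph.
# workflow["3"]["inputs"]["seed"] = 12345

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # response includes the queued prompt id
```

The regular (non-API) workflow export can also simply be dragged back onto the ComfyUI canvas to restore the graph.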
George Legrady
legrady@mat.ucsb.edu

zixuan241
Posts: 14
Joined: Wed Oct 01, 2025 2:41 pm

Re: Project 2: Images Created Using the ComfyUI Interface

Post by zixuan241 » Thu Jan 29, 2026 11:12 am

Practice 1
This image explores the transformation of familiar food objects into soft felt-like materials, blurring the boundary between edible textures and handcrafted textiles.
By combining realistic lighting with woolen surfaces, the work invites viewers to reconsider material perception and tactile expectation.

ComfyUI_02499_.png
ComfyUI_02496_.png
ComfyUI_02495_.png
Screenshot 2026-01-29 11.08.53.png

zixuan241
Posts: 14
Joined: Wed Oct 01, 2025 2:41 pm

Re: Project 2: Images Created Using the ComfyUI Interface

Post by zixuan241 » Tue Feb 03, 2026 10:53 am

These images were achieved through iterative refinement of model selection, prompt structure, and ControlNet parameters. A realistic checkpoint was chosen to support cinematic lighting and material continuity, while overly illustrative models were excluded to avoid narrative or painterly effects. Prompt language was progressively simplified to emphasize atmosphere, silence, and a single human presence rather than explicit storytelling.

ControlNet was used selectively to stabilize spatial composition without over-constraining the scene. Edge-based guidance was applied at moderate strength and limited to the early stages of generation, allowing structural coherence while preserving organic lighting and texture. Sampling parameters were kept intentionally conservative to maintain visual continuity and avoid stylization. Through repeated testing and small parameter adjustments, the workflow converged toward quiet, film-like scenes where the environment carries the primary emotional weight and the human figure remains understated.

Workflow Parameter Summary
Model
Checkpoint: SD1.5 / DreamShaper
Chosen for its balance between realism and cinematic stylization.
Avoids pixel-art or illustrative bias while maintaining soft lighting transitions.

Prompt Structure
Positive Prompt Focus:
1. Atmospheric environment
2. Single, solitary human figure
3. Cinematic lighting and silence
4. Film still realism (explicitly excluding illustration)

Negative Prompt Constraints:
1. Illustration, watercolor, painting
2. Narrative or storytelling scenes
3. Groups or crowds
4. Anime, cartoon, stylized aesthetics
5. Text and watermark artifacts

Sampling Parameters (KSampler)
Sampler: DPM++ 2M
Scheduler: Normal
Steps: 32
CFG Scale: 4.5
Denoise: 1.0

Resolution
Latent Size: 768 × 512

ControlNet Configuration
Preprocessor: PyraCanny
Used to extract structural edges while suppressing fine texture noise.
ControlNet Model: control_v11p_sd15_canny
Strength: 0.6
Start Percent: 0.0
End Percent: 0.6
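For reference, here is a hedged sketch of how the settings above might appear in an API-format export of this workflow, written as a Python dict. The node ids, the link references such as ["4", 0], and the seed are placeholders; the checkpoint loader, prompt-encoding, and preprocessor nodes are omitted for brevity.

```python
# Sketch of the summary above as ComfyUI API-format nodes (Python dict form).
# Node ids and link references are placeholders; only the latent, ControlNet
# application, and sampler nodes are shown.
workflow_excerpt = {
    "5": {
        "class_type": "EmptyLatentImage",
        "inputs": {"width": 768, "height": 512, "batch_size": 1},
    },
    "10": {
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "strength": 0.6,
            "start_percent": 0.0,
            "end_percent": 0.6,        # edge guidance released after ~60% of the steps
            "positive": ["6", 0],      # placeholder: positive CLIPTextEncode
            "negative": ["7", 0],      # placeholder: negative CLIPTextEncode
            "control_net": ["11", 0],  # placeholder: control_v11p_sd15_canny loader
            "image": ["12", 0],        # placeholder: PyraCanny edge map
        },
    },
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "seed": 0,                 # placeholder; no seed is listed in the post
            "steps": 32,
            "cfg": 4.5,
            "sampler_name": "dpmpp_2m",
            "scheduler": "normal",
            "denoise": 1.0,
            "model": ["4", 0],         # placeholder: DreamShaper (SD1.5) checkpoint loader
            "positive": ["10", 0],     # conditioning after ControlNet is applied
            "negative": ["10", 1],
            "latent_image": ["5", 0],
        },
    },
}
```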
Screenshot 2026-02-03 10.40.44.png
Screenshot of Workflow
ComfyUI_02523_.png
ComfyUI_02527_.png
ComfyUI_02526_.png

glegrady
Posts: 227
Joined: Wed Sep 22, 2010 12:26 pm

Re: Project 2: Images Created Using the ComfyUI Interface

Post by glegrady » Tue Feb 03, 2026 5:51 pm

MOTIVATION: This ComfyUI workflow uses two input images to influence the outcome of a new image. I chose two unrelated images, one of a truck and one of a beach scene, to test how they would work together.

METHODOLOGY: Both images are at high resolution. They are unrelated, and the idea was to test how they would interact. As it turns out, IMAGE BLEND is a basic blend function rather than a sophisticated integrator of the two images (a minimal sketch of this kind of blend appears below). I will need to see whether there are other ways to integrate multiple images.
IMG_1884.JPG
IMG_1887.JPG
two_inputs_influence_output_00016_.png
I also tried it with two related, previously generated images and the text prompt "Tree on a truck in a lab".
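As a point of comparison, here is a minimal sketch (using Pillow, outside ComfyUI) of what a fixed-ratio blend does, assuming IMAGE BLEND performs a simple per-pixel mix in its basic mode; the file names and blend factor are placeholders.

```python
# Minimal sketch of a fixed-ratio per-pixel blend, assumed to approximate
# what the IMAGE BLEND node does in its basic mode. File names are placeholders.
from PIL import Image

a = Image.open("truck.jpg").convert("RGB")
b = Image.open("beach.jpg").convert("RGB").resize(a.size)

# Each output pixel is (1 - alpha) * a + alpha * b; no structure or style
# from either image is carried over beyond this direct mix.
blended = Image.blend(a, b, alpha=0.5)
blended.save("blend_test.png")
```

This is why a plain blend tends to read as a double exposure rather than an integration of the two scenes.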

EVALUATION / ANALYSIS: The results are not interesting enough. I am aiming for either
1) Style transfer, a neural-network technique in which the style of one image is applied to the content of the other (a minimal sketch follows below; see https://github.com/Westlake-AGI-Lab/Awe ... ion-Models and Wikipedia: https://en.wikipedia.org/wiki/Neural_style_transfer)

2) or an approach in which the two images combine their influence to generate a new one.
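For reference on option 1, here is a minimal sketch of classic (Gatys-style) neural style transfer as described in the Wikipedia article above, implemented outside ComfyUI with PyTorch and torchvision. The layer indices, loss weights, step count, and file names are illustrative assumptions, and VGG input normalization is omitted for brevity.

```python
# Minimal sketch of classic neural style transfer: optimize an image so that
# its VGG features match the content image while its Gram matrices match the
# style image. Conceptual reference only, not part of the ComfyUI workflow.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load(path, size=512):
    tf = transforms.Compose([transforms.Resize((size, size)), transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load("truck.jpg")   # placeholder file names
style = load("beach.jpg")

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv layers commonly used for style
CONTENT_LAYER = 21                  # a deeper conv layer used for content

def features(x):
    feats, out = {}, x
    for i, layer in enumerate(vgg):
        out = layer(out)
        if i in STYLE_LAYERS or i == CONTENT_LAYER:
            feats[i] = out
    return feats

def gram(f):
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

target_c = features(content)[CONTENT_LAYER].detach()
target_s = {i: gram(f).detach() for i, f in features(style).items() if i in STYLE_LAYERS}

img = content.clone().requires_grad_(True)   # start from the content image
opt = torch.optim.Adam([img], lr=0.02)

for step in range(300):
    feats = features(img)
    content_loss = F.mse_loss(feats[CONTENT_LAYER], target_c)
    style_loss = sum(F.mse_loss(gram(feats[i]), target_s[i]) for i in STYLE_LAYERS)
    loss = content_loss + 1e5 * style_loss   # weighting is illustrative
    opt.zero_grad()
    loss.backward()
    opt.step()
    img.data.clamp_(0, 1)

transforms.ToPILImage()(img.detach().squeeze(0).cpu()).save("style_transfer_test.png")
```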
Attachments
Screenshot 2026-02-03 at 5.19.14 PM.png
Screenshot 2026-02-03 at 5.47.41 PM.png
George Legrady
legrady@mat.ucsb.edu
