[RFC]: OpenAI Image Edit API Interface for ComfyUI #510

@lishunyang12

Description

Motivation.

While ComfyUI supports local inpainting and OpenAI text-to-image, it lacks a native, well-standardized way to leverage OpenAI's cloud-based image editing. Users currently have to write custom scripts or rely on fragmented third-party nodes. Adding this enables:

  • High-quality inpainting for users without powerful local GPUs.
  • Consistency with the existing OpenAI T2I implementation.
  • Hybrid workflows: local Stable Diffusion for base generation, with DALL-E/GPT-Image for final "magic" edits.

Proposed Change.

Implement a standardized node and backend interface for the OpenAI v1/images/edits endpoint. This will allow users to perform prompt-based image editing (inpainting and outpainting) with DALL-E 2 or GPT-Image models directly within ComfyUI workflows.
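As a sketch of what the node's backend call might assemble (the function name and shape here are hypothetical, not part of any existing ComfyUI or proposed API), the edit endpoint takes a multipart form with the image, an optional mask whose transparent pixels mark the region to repaint, and the edit prompt:

```python
import io

# Official OpenAI image-edit endpoint
OPENAI_EDIT_URL = "https://api.openai.com/v1/images/edits"

def build_edit_request(prompt, image_bytes, mask_bytes=None,
                       model="gpt-image-1", size="1024x1024", n=1):
    """Assemble (files, data) for a multipart POST to v1/images/edits.

    `image_bytes` is the PNG to edit; `mask_bytes` (optional) is a PNG
    whose transparent area marks where the model should repaint.
    The return value is suitable for e.g.
    requests.post(OPENAI_EDIT_URL, headers=auth, files=files, data=data).
    """
    if not prompt:
        raise ValueError("prompt is required for an image edit")
    files = {"image": ("image.png", io.BytesIO(image_bytes), "image/png")}
    if mask_bytes is not None:
        files["mask"] = ("mask.png", io.BytesIO(mask_bytes), "image/png")
    data = {"prompt": prompt, "model": model, "size": size, "n": str(n)}
    return files, data
```

A ComfyUI node wrapping this would encode the incoming IMAGE/MASK tensors to PNG, perform the request, and decode the base64 images in the response back into tensors for downstream nodes.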

Feedback Period.

No response

CC List.

@david6666666 @dougbtv

Any Other Things.

No response

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
