karolist | 36 days ago | on: Qwen-Image: Crafting with native text rendering
Do you mean https://github.com/pollockjj/ComfyUI-MultiGPU? One GPU would do the computation, but the others could pool in for VRAM expansion, right? (I've not used this node.)
AuryGlenz | 36 days ago
Nah, that won't gain you much (if anything?) over just doing the layer swaps in RAM. You can put the text encoder on the second card, but you can also just keep it in RAM without much downside.
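If it helps to see what "keep the text encoder in RAM" means outside ComfyUI, here's a rough PyTorch sketch. This is not the MultiGPU node's API and the module handles are made up; it just shows the device placement: the text encoder runs once per prompt, so it can live on the CPU (or a second card) while the diffusion model stays on the primary GPU.

    import torch

    def place_models(diffusion_model: torch.nn.Module,
                     text_encoder: torch.nn.Module,
                     encoder_device: str = "cpu") -> None:
        # Heavy model runs every denoising step -> keep it on the main GPU.
        diffusion_model.to("cuda:0")
        # Encoder runs once per prompt -> CPU (or "cuda:1") is fine.
        text_encoder.to(encoder_device)

    @torch.no_grad()
    def encode_prompt(text_encoder: torch.nn.Module,
                      tokens: torch.Tensor,
                      encoder_device: str = "cpu") -> torch.Tensor:
        # Encode wherever the encoder lives, then move the (small)
        # embedding over to the GPU that does the sampling.
        emb = text_encoder(tokens.to(encoder_device))
        return emb.to("cuda:0")

The embedding transferred per prompt is tiny compared to the weights, which is why parking the encoder off-GPU costs so little.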