[–] [email protected] 15 points 1 year ago (1 children)

Jesus Christ, who called in the tech bro cavalry? Get a fucking life, losers. You're not artists, and nobody is proud of you for doing the artistic equivalent of commissioning an artist (which you should be doing instead of stealing their art and mashing it into a shitty approximation of art).

[–] [email protected] 8 points 1 year ago (1 children)

It's like photography, or photography + Photoshop for some workflows. There's a low barrier to entry.

Would you say the same thing to someone proud of how their tracing came out?

[–] [email protected] 6 points 1 year ago (2 children)

These are not comparable to AI image generation.

Even tracing has more artistic input than typing "artist name cool thing I like lighting trending on artstation" into a text box.

[–] [email protected] 7 points 1 year ago (1 children)

So about the same as a photograph, then.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

Aside from the fact that your comment applies to photography as well, I think it's fair to point out that image generation can also be a complex pipeline rather than a simple prompt.

I use ComfyUI on my own hardware and frequently include steps for ControlNet, depth maps, Canny edge detection, segmentation, LoRAs, and more. The text prompts, both positive and negative, are the least important parts of my workflow, personally.

Hell, sometimes I use my own photos as one of the dozens of inputs to the workflow, so in a sense photography was included too.
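
For a rough sense of what that kind of pipeline looks like outside of ComfyUI's node graph, here's a minimal sketch using the Hugging Face diffusers library: a Canny ControlNet, an optional LoRA, and one of your own photos as the structural input. The model IDs, file paths, prompts, and parameters are illustrative assumptions, not the exact workflow described above.

```python
# Illustrative sketch only: a ControlNet-guided text-to-image pipeline in
# Hugging Face diffusers, loosely analogous to a ComfyUI workflow.
# Model IDs, paths, and parameters are assumptions, not a specific setup.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load a Canny-edge ControlNet and a base Stable Diffusion checkpoint
# (assumes a CUDA GPU is available for fp16 inference).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Optionally load a style LoRA (path is hypothetical).
pipe.load_lora_weights("path/to/style_lora.safetensors")

# Use one of your own photos as the structural input: extract Canny edges
# so the ControlNet conditions generation on the photo's composition.
photo = Image.open("my_photo.jpg").convert("RGB")
edges = cv2.Canny(np.array(photo), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# The positive and negative prompts are just two of several inputs here.
result = pipe(
    prompt="a moody landscape, dramatic lighting",
    negative_prompt="blurry, low quality",
    image=control_image,
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
result.save("output.png")
```

The point being that the text prompt is only one conditioning input among several: the edge map from the source photo, the LoRA weights, and the sampler settings shape the result at least as much.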