The most complicated comfyUI-whatever is worth less to me than a child’s drawing of their parents because the child’s drawing is communicating love while the generated one is communicating nothing.
How can you say what the output of that workflow communicates or doesn’t communicate without seeing it?
The lot of them can’t even explain their own work—at best they can explain their comfyUI workflow because that’s the thing they actually put effort into.
That statement is unsubstantiated. Without knowing the creator of that workflow, I venture the following proposition: if the creator put hours of effort into constructing it so that the AI would produce just the right output, then they clearly had a vision of what they were going for. And if they tried to get a detail just right, then that detail must have meaning to them, or else they wouldn’t bother.
I see another issue with the statement “The lot of them can’t even explain their own work”. Do you think every stroke of the brush has a meaning for a painter? Is every note carefully chosen in a piece of music? Or is it rather a case of “doing what feels right at the moment”? I ask because I don’t see the difference between playing a few chord progressions on the piano and seeing what fits best, and letting AI generate a few outputs and seeing what fits best.
How can you say what the output of that workflow communicates or doesn’t communicate without seeing it?
I’ve seen plenty.
Is every note carefully chosen in a piece of music?
Are you… being serious?
Look, I’ve been a musician longer than I’ve been any other kind of artist, and yes, I pick all of my notes. That’s the fun part, actually. There is a lot of deliberation over where they should go.
This is what I mean about you people not understanding the artistic process. Music is a language. People in a jam session are speaking words and phrases to each other. There are grammar rules to this language that work one way and not another.
If you’re using an LLM, then your jam partners aren’t speaking to you, they’re speaking to a robot. You may as well not even be there. And uh… I dunno, that just seems really fucking lonely.
There is a lot of deliberation over where they should go.
If you say that, then I’ll have to believe you. My standpoint on this is more akin to your next statement,
Music is a language. People in a jam session are speaking words and phrases to each other. There are grammar rules
That means that once the piece of music turns out to be in, say, G major, then there is little (I even dare say no) deliberation on whether you’re playing a G natural or a G sharp. There may be deliberation in the phrases you play or in the general direction the piece you’re playing will go. There may even be deliberation in not playing a G natural from time to time. But by default you don’t think about these things. You just play.
This is what I’m comparing to using an AI tool. One doesn’t think about every single note every single time, just like one doesn’t think about every single filler word that’s only there for grammatical purposes in writing prose, and just like one doesn’t think about every single color channel of every single pixel when creating a digital painting. So why does using an AI tool invalidate the art? Because there is no deliberation over every single tiny aspect of the output?
If you’re using an LLM, then your jam partners aren’t speaking to you, they’re speaking to a robot.
This is just plain wrong. It’s like saying you cannot be a musician if your instrument is MIDI. Or you can’t be a singer if you’re using an artificial voicebox. The LLM still produces what its user intends it to produce. The LLM doesn’t produce output on its own. It is operated by the user. Just like a guitar is operated by its player.
The LLM still produces what its user intends it to produce.
The lot of you people always front this idea, figuratively, that the way the robot works is it connects to the user’s brain and then downloads the picture they want to produce, and then simply displays it for them. The act of writing a prompt is merely the interface by which this brain-to-brain connection happens.
I don’t know how else to say this: that is fucking ridiculous. It’s so mind-meltingly stupid, I think you’re lying to me.
I have to ask: is AI not supposed to revolutionize the working world? You don’t have to bother yourself with the particulars of writing a good text summary anymore, but also, you’re somehow in full control of what it does? You no longer need to be a trained artist with a good understanding of color theory to produce great works—this is the great democratization of art—but also, the colors chosen were naturally the ones you would have chosen anyway; you are a big, smart boy, after all.
just like one doesn’t think about every single filler word that’s only there for grammatical purposes in writing
If you had ever written anything worth talking about, you’d recognize that your filler words, or their absence, add a lot of color to your writing.
I think you let the robot write filler words for you because you don’t actually know what they mean. You’ve never thought about them.
This is a phenomenal example because, when I write comments, this one even, I reread them 4, 5, 6, 7 times checking for syntax and grammar errors, brevity, tone, voice, whether I’m being too aggressive, whether I’m not being aggressive enough; but you imagine the filler is just busy work keeping you from jerking off a seventh time.
You seem not to understand that, if you think this filler is so beneath caring about, I’d prefer you just cut it out entirely. Speak like a caveman for all I care. Why would I want you to generate a bunch of filler words when neither you nor I have any idea what they’re doing there?
I’m not even really talking about AI anymore: why are you writing filler words in your responses? Stop. Like, seriously. Cut that shit out. I’ll slap you with the ruler, I swear.