Photography, digital painting, 3D rendering -- these all went through a phase of being panned as "not real art", but each was eventually accepted and each turned out to have its own kind of merit. It will be the same for AI tools.
I'll be blunt: all of those images look comically generic and extremely "AI".
> Photography, digital painting, 3D rendering
Those are not the same as AI. Using AI is akin to standing beside a great pianist and whispering into his ear that you want "something sad and slow" and then waiting for him to play your request. You might keep giving him prompts, but that's all you're doing. In time, you might be called a "collaborator", but your involvement starts at the bare minimum and you have to justify that you're anything more -- the pianist doesn't, because the pianist is the one making the music.
You could record the song and do further work on the recording, or improvise along on your own instrument. But just taking the raw output again and again is simply getting a response to your prompt again and again.
The prompts themselves are actually more artistic, as they venture into surrealist poetry and prose, but the images are almost always much less interesting artistically than the prompts would suggest.
> I'll be blunt, all of those images look comically generic and extremely "AI".
Ok, now I know you're watching through hate goggles. Fortunately, not everyone will bring those to the party.
> Using AI is akin... [goes on to describe a clueless iterative prompting process that wouldn't get within a mile of the front page]
You've really outed yourself here. If you think it's all just iterative prompting, you are about 3 years behind the tools and workflows that allow the level of quality and consistency you see in the best AI work.
I scrolled through and...have to agree with their impression. I'm confused as to what you thought was being demonstrated by the images on https://civitai.com/images of all things, since it's all very high-concept/low-intentionality, to put it nicely. Did you mix it up with a different link?
My litmus test is to simply lie. It weeds out the people hating AI simply because they know or think it is AI. If you link directly to an AI site they're already going to say they hate it or that it all "looks like AI slop". You won't get anywhere trying to meet them at a middle ground because they simply aren't interested in any kind of a middle ground.
Which is exactly the opposite of what the artists claim to want. But god is it hilarious following the anti-AI artists on Twitter who end up having to apologize for liking an AI-generated artwork pretty much as a daily occurrence. I just grab my popcorn and enjoy the show.
Every passing day the technologies making all of this possible get a little bit better, and today is the worst they will ever be. They'll point to today's imperfections or flaws as evidence of something being AI-generated, and those imperfections will be trained out with fine-tuning or LoRA models until there is no longer any way to tell.
E: A lot of them also don't realize that besides text-to-image there is image-to-image for more control over composition, as well as ControlNet for controlling poses, and more LoRA models than you can imagine for controlling the style (see the sketch below). Their imagination is limited strictly to text-to-image prompts with no human input afterwards.
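For anyone curious what that looks like in practice, here's a minimal sketch using the Hugging Face diffusers library. The model IDs, the LoRA file and the parameter values are placeholders I'm assuming for illustration, not anyone's actual workflow:

```python
# Minimal sketch: pose-controlled generation with a style LoRA on top,
# using Hugging Face diffusers. Model IDs, the LoRA file and parameter
# values are illustrative assumptions, not a specific real workflow.
import torch
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)
from diffusers.utils import load_image

# A pose map extracted beforehand (e.g. with an OpenPose preprocessor).
pose_image = load_image("pose_reference.png")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Layer a style LoRA over the base model (hypothetical local file).
pipe.load_lora_weights("./loras", weight_name="watercolor_style.safetensors")

image = pipe(
    "portrait of a violinist, loose watercolor",
    image=pose_image,            # the pose map constrains the composition
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("out.png")
```

The point being: the pose, the style and the sampling are all separate dials, not one text box.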
AI is a tool not much different than Photoshop was back when "digital artists aren't real artists" was the argument. And in case anyone has forgotten: "You can't Ctrl+Z real art".
Ask any fractal artist the names they were called for "adjusting a few settings" in Apophysis.
E2: We need more tests such as this. The vast majority of people can't identify AI-generated images nearly as well as they think they can - even people familiar with AI who "know what to look for".
> Respondents who felt confident about their answers had worse results than those who weren’t so sure
> Survey respondents who believed they answered most questions correctly had worse results than those with doubts. Over 78% of respondents who thought their score is very likely to be high got less than half of the answers right. In comparison, those who were most pessimistic did significantly better, with the majority of them scoring above the average.
You still make these. You sit down and form the art.
When you use AI you don't make anything, you ask someone else to make it, i.e. you've commissioned it. It doesn't really matter if I sit down for a portrait and describe in excruciating detail what I want, I'm still not a painter.
It doesn't even matter, in my eyes, how good or how shit the art is. It can be the best art ever, but the only reason art, as a whole, has value is because of the human aspect.
Picasso famously said he spent his childhood learning how to paint professionally, and then spent the rest of his life learning how to paint like a child. And I think that really encapsulates the meaning of art. It's not so much about the end product, it's about the author's intention to get there. Anybody can paint like a child; very few have the inclination and inspiration to think of doing it.
You can see this a lot in contemporary art. People say it looks really easy. Sure, it looks easy now, because you've already seen it and didn't come up with it. The coming up with it part is the art, not the thing.
When I make 3D art I instruct a lot of things: how the renderer is configured, lighting details, the various systems that need to be tweaked to get the final render to look good.
Using the AI tool chains, you'd start with a generation from text or image input, then adjust various settings (model, render steps, sampler, LoRAs), then run a generative upscaling pass, plus ControlNets to extract and apply depth, pose, outlines and so on -- a rough sketch of one such chain follows below. A colourful mix of systems and config, not unlike working 3D tool chains.
It's also not unusual to mix and match: handcrafted geometry with projection-mapped generated textures, and then a final pass in Photoshop or what have you.
Typing "awesome art piece" into ChatGPT is like rendering a donut.
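To make that concrete, here's a rough sketch of one link in such a chain -- an image-to-image refinement of a rough render followed by a generative upscaling pass, using Hugging Face diffusers. The model IDs, prompt and parameter values are just assumptions for illustration:

```python
# Rough sketch: image-to-image refinement of a rough render, then a
# generative upscaling pass, using Hugging Face diffusers. Model IDs,
# prompt and parameter values are illustrative assumptions only.
import torch
from diffusers import (
    StableDiffusionImg2ImgPipeline,
    StableDiffusionUpscalePipeline,
)
from diffusers.utils import load_image

# e.g. a blockout render from a 3D tool, or an earlier generation.
init_image = load_image("blockout_render.png")

# Image-to-image: the init image fixes the composition; `strength`
# controls how far the model is allowed to drift from it.
img2img = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
refined = img2img(
    prompt="overgrown brutalist courtyard, overcast light",
    image=init_image,
    strength=0.55,
    guidance_scale=7.0,
    num_inference_steps=40,
).images[0]

# Generative upscaling pass on the refined result.
upscaler = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")
final = upscaler(
    prompt="overgrown brutalist courtyard, overcast light",
    image=refined,
).images[0]
final.save("final.png")
```

None of that is typing "awesome art piece" into a text box -- it's configuration, iteration and compositing, the same shape of work as a 3D pipeline.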
> You still make these. You sit down and form the art.
When you use a camera you don't make anything. You press a button and the camera makes it. You haven't even described it.
When you use photoshop you don't make anything. You press buttons and the software just draws the pixels for you. It doesn't make you a painter.
When you use 3D rendering software you don't make anything. You tell the computer about the scene and the computer makes it. You've barely commissioned it.
Sorry, I don't think it's the same, because making concrete specifications -- modifying pixels, doing 3D art, forming a shot -- is something you do.
It's the difference between making a house with wood and making a house by telling someone to make a house. One is making a house, one isn't.
The problem with AI is that it's natural language. So there's no skill there, you're describing something, you're commissioning it. When I do photoshop, I'm not describing anything, I'm modifying pixels. When I do 3D modeling, I'm not describing anything, I'm doing modeling.
You can say that those more formal specifications are the same as a description. But they're not. Because then why aren't the business folks programmers? Why aren't the people who come up with the requirements software engineers? Why are YOU the engineer and not them?
Because you made it formally, they just described it. So you're the engineer, they're the business analysts.
Also, as a side note, it's not at all reductive to say people who use AI just describe what they want. That is literally, actually, what they do. There's no more secret sauce than that - that is where the process begins and ends. If that makes it seem really uninspired then that's a clue, not an indicator that my reasoning is broken.
You can get into prompt engineering and whatever, I don't care. You can be a prompt engineer then, but not an artist. To me it seems plainly obvious nobody has any trouble applying this to everyone else, but suddenly when it's AI it's like everyone's prior human experience evaporates and they're saying novel things.
Right, it can require describing and refining over and over. I still don't think that means you did the thing. Otherwise, the business analysts who have to constantly describe requirements would be software engineers, but they're not.
Not that that isn't a skill in and of itself. I just don't think it's a skill of creation. What you're creating is the description, not the product.
You are creating the product, but you have to work through an opaque layer, and through trial and error you try to reach your original vision. No different from an amateur painting a picture.
The better you get the closer you can get to your original vision.
If I were trying to convince people that AI art is interesting and creative then I would not choose to highlight the site dedicated to strip-mining the creativity of non-AI artists, to produce models which regurgitate their ideas ad infinitum.
Not to mention the extremely suspicious checkpoints that produce imagery of extremely young women. Or in other words, women with extremely child-like features, presented in ways kids should not be presented.
How familiar are you with what is possible and how much human effort goes towards achieving it?
https://civitai.com/images