How much AI is too much? Are you allowed to use Photoshop to create your digital art? Almost every tool in it is now powered by AI in some way (some a lot more than others). Can you use its auto-fill button? What percent of the image can you use it for?
Can you generate something with AI and then manually edit it in Photoshop? How much manual editing is required before it's not considered AI anymore?
My point is, AI is another tool in the toolbox; it can be used well or poorly. How much is too much? Just like back in the day, using Photoshop wasn't allowed, until it was.
This isn't a "gotcha" experiment, it's a real question. And the problem with "I know it when I see it" is that right now people are biasing towards "if it's good it must be AI" and accusing legit artists and writers of being AI.
It happened to me. I spent 10 minutes writing a reddit comment. I researched it, I sourced it. It had sections with headlines, bullets, and even em dashes. 100% written by me.
As soon as I posted it, it was downvoted and I got PMs saying "don't post this AI slop!".
The problem is the AI has been trained on well executed material, and when you execute well, you look like an AI.
I mean the art community has always been contentious. Artists fight over everything, copying style or composition, claiming people are tracing others' works. It's honestly not new, same thing in new clothes.
Clair Obscur: Expedition 33 (recently game of the year) got caught leaving a small amount of placeholder AI content in their game and everyone lost their minds.
I’m sure reasonable artists agree with you, but many today do not. Two distinctions matter here.
The first is that all “AI” is not equal. It’s specifically generative AI that most take issue with, mostly due to questionable ethics in training. Image editors have employed techniques marketed as “AI” for many years that are mostly or entirely unrelated to modern generative AI.
The second is that whether something is “AI art” is a spectrum, not binary. On one end you have creations in which generative AI played no role and on the other you have images that were generated off of nothing but a prompt or vague scribbles. In the middle you have things like images where the artist traced over an AI image or used bits and pieces of generated imagery. Probably the closest shorthand for where an image lands on the spectrum is to what degree the creator engaged their artistic skills.
A great many digital artists would be happy to use Photoshop 7/CS1/CS2, all long predating generative AI, if those ran on modern operating systems. Some prefer modern simplistic (and without AI) tools like Paint Tool SAI.
That might be part of the equation, but for many it’s a strong gut reaction to having the work of themselves and others taken without consent, turned into the visual equivalent of pink slime and press-formed into other shapes, and sold as a service. It just feels wrong. Even I get a little squeamish thinking about it, and I only do art in a minor/hobby capacity — it’s something I’ve put time into, but it doesn’t pay my bills.
If training were purely ethical, the creative community probably still wouldn’t love generative AI, but it probably wouldn’t hate it nearly as much either. It’s the cavalier attitude towards violation of consent for the sake of profit that really seals the deal.
For many of us, even if drawing that line exactly is debatable, a prompt-generated image, where the "artist" didn't interact with any of the pixels, is across the line for "too much AI".
It can definitely take creativity and fortitude to get an AI model to draw what you want it to. But if you worked at a fantasy publishing house and commissioned a cover painting, it might take a fair amount of work for you to get the artist to create something in line with what you envisioned. But you wouldn't get artistic credit for the resultant painting; the artist would! If AI is creating the piece, it is the artist; and you're merely the commissioner of the work.
> But if you worked at a fantasy publishing house and commissioned a cover painting, it might take a fair amount of work for you to get the artist to create something in line with what you envisioned.
If you do this infrequently, you're a commissioner of work.
If you do it daily, in-house, for your own products... you might just have the title "Art Director."
And the best Art Directors today almost all have a background in creating art themselves, in some fashion. I suspect that will remain true in the AI world as well, at least for the foreseeable future.
This argument resonates with me - but it's the same argument that has been made and artists have ignored or put up the same (unconvincing, in my opinion) arguments against the whole time. As you pointed out, this same discussion has been had every step of the way with digital art - from things like photoshop, to the tools that have been gradually introduced inside of photoshop and similar, to even things like brush packs, painting over kitbashes, etc. The traditionalist viewpoint holds strong, until the people arguing blink and realize everyone else eventually stopped caring and did what worked best for them.
At this point, I believe it's not a matter of intellectual honesty or actually disagreeing with any of it - it's just about outcomes. They don't want to see their work devalued, their sources of income drying up. It's an understandable fear. No one who enjoys the work they do enjoys the prospect of potentially having to change careers to keep making a living. Hell, most people that don't enjoy what they do have no desire to have to try and find a new career.
But humans are selfish. The same artists who are worried about technology taking their job will lavish praise on technology in other areas that have eliminated jobs, with my recurring example being how happy they are to no longer have to pay a web dev to build them a portfolio site, when they can instead just go to Squarespace and pay a fraction of the cost. No one laments how there are basically no independent web designers building small sites anymore - it's just not a viable career. It's all been consolidated into shops working for big clients or pumping out themes for Wordpress, Squarespace, and Shopify. And of course, there are countless examples of this throughout history.
I'm not sure AI is going to be the great job destroyer we fear it is. I'm not sure it isn't, either. So I get it. This has a chance to force an issue on a massive scale that usually is much more limited in blast radius.
But to answer the question - I don't think it actually matters to them what the line is from any sort of rational perspective. It will move and shift based on the conversation to wherever they think it needs to be to protect themselves.
The main argument artists use isn’t that it is taking their job. The problem is that it was trained on their work without their consent and without compensation. This is fundamentally different from a WordPress or Squarespace, and arguably different from models trained on open source software only.
You can’t copyright the result of a prompt, and I believe you can’t trace over a copyrighted work and claim it as your own either, so I’d say tracing over an AI generated image would not fly either. But IANAL, so the details remain to be fleshed out. This would also probably break down if one uses a model that is not trained on any copyrighted data.
AI generated images themselves can't be copyrighted, but if you modify them they can be considered copyrightable. That's the current landscape, though it's a pretty new legal standard, so we'll see how it plays out.
It's not that deep, if someone thinks it's AI it loses value to them. If you're able to utilize AI tools in a way that doesn't make the output look like AI to the average person you'll be fine.
Eventually no one will be able to really tell the difference and all of this will go away (though likely at the expense of more people's livelihoods).
I see that as being partially true. There will be people like Walter Keane who take the art of others and state it is their own work. [0] AI will assist those individuals.
AI will have a home on people's desktops for those that accept it.
Art that has value will not be AI art. Artists like Margaret Keane will continue to be viewed as exceptional along with their works. [1]
Personally, I view AI art as lacking passion and an attempt to short circuit the path to profit / greed. I wish to not fund that circuit.
A lot of prolific artists historically have passed their own work off to apprentices (for hundreds of years), and there's probably not much different here.
This frees the lead artist up for more conceptual work. AI can potentially carry on this tradition, and at less of an expense.
The truth of many great artists is that they needed to produce a lot more work than they'd have liked to make a living for themselves.
And as I learn about such individuals taking credit for others' work I have nothing but disgust for them. This statement also applies to STEM.
Ivan Pavlov's assistants realized the dogs were salivating. Ronald Hare figured out how to mass produce penicillin and saved thousands of lives, but got no Nobel recognition for his triumph where those that got the prize failed. [0]
I personally will never fund AI art / AI artists. There is no personality in such works, only slop. Profit is not why artists create the art they do, and if it is, their works lack soul.
I think you're being too idealistic in these stances, it's just not how it's ever worked.
> And as I learn about such individuals taking credit for others' work I have nothing but disgust for them
Historically, this wasn’t considered "taking credit." Leonardo da Vinci’s studio, for example, involved apprentices completing significant portions of works under him. Buyers were aware they were commissioning a studio work, not a single pair of hands. Koons or Hirst do it today, and it's no secret... it's how prolific artists have operated for centuries; they just can't keep up with the demand for their work solo.
The real soloists, like Van Gogh for example, were mostly obscure while they were alive... didn't really have any patrons, and didn't produce enough work to live off of.
To call the destitute failures "true artists" and someone like da Vinci "soulless" is pointless, because we don't decide these things, history does.
> Ronald Hare figured out how to mass produce penicillin and saved thousands of lives, but got no Nobel recognition for his triumph where those that got the prize failed.
I don't think this is true? Hare found success from trying to replicate Fleming's results. Fleming, Florey and Chain were jointly awarded the Nobel for the original discovery.
> I personally will never fund AI art / AI artists. There is no personality in such works, only slop
There will come a point in the near future where you won't be able to tell the difference. I lived through the same argument with Photoshop in photography.
Well I think we're just describing taste and craft. AI tools will get better, more granular, and become better integrated into the actual workflows of people over time. A good tool shouldn't take over my sense for taste and craft.
It's a good thing people are pushing back against the slop if we want there to be any incentives for AI tools to not be geared towards helping make slop.
It's funny you chose that as your example, because there are very strict definitions of when day becomes night. I think what you were looking for was "when does someone become bald" or "when does an acorn become a tree".
There is a reason those are classic philosophical questions. Because they highlight the fact that while it is easy to identify the ends of the spectrum, it's impossible to find the midpoint, because everyone has a different lived experience.
Anyone remember the game Leisure Suit Larry? To get the full 18+ experience, you had to answer five trivia questions that only adults should know. But it turns out smart teens who like trivia knew most of them too (and you could just ask mom and dad, they had no clue why you were asking which President appeared on Laugh In).
Also, hilariously, a lot of those questions require a trip to Wikipedia (or a game guide) today. A lot of them reference bits of 1960s/1970s pop culture which are no longer common knowledge.
And is the only state with no drought right now. Although the way they figure it is a bit biased -- it's based on how much water there is compared to historical values, so it's easier to be "drought free" if you've been in a drought for a while.
Yeah hey but for real. The news is focused on California droughts all the time, but my part of flyover country is very, very dry. Like ponds that have never been empty are dry, sort of thing. It's getting bad. . . And we grow all your food.
Between this and all the political nonsense that's happening right now, I feel like a passenger that's noticed the car is out of control while the driver is still opening his beer.
California actually produces the most food of any state. :) But I know what you mean, the water is just as critical in the middle of the country as it is on the edges. Water is critical everywhere, and this problem is just going to get worse and worse.
California leads in the value of goods sold, because it produces a lot of relatively expensive agricultural products like almonds, avocados, tomatoes, etc. Additionally, it’s a larger state, so it naturally will inflate the totals. If you look at food staples, and at the amount produced by square mile, the Midwest is definitely the main food producer of the US.
A 1 square mile state that produced nothing but wheat would beat any other state in terms of “amount of staples produced per square mile,” but it wouldn’t be able to sustain a population. That’s not a useful metric.
A related problem is other parents. Even if you want to let your kid be free, you can't, because nosey neighbors will report you to the police.
It happened to us. We let our kids stay in the car, during COVID, while we quickly shopped.
After we got home, the cops showed up and told us someone reported us for neglecting our kids. My wife just kept asking "did we do anything illegal?" and finally they admitted that no, we didn't. They just said
"it just doesn't look good, with all the crime out there".
I said "what crime?". Then they had to admit that crime is way down over the last 30 years, and is especially low in our area.
They eventually left, but it has a chilling effect on letting our kids be kids for fear that it will happen again.
> What were you thinking? What was going through your heads? I'm genuinely curious.
I know two people who voted for him.
Person one has voted Democrat her whole life. Has worked for the Democratic Party. Has a son who was a Democratic elected official. But she lives in Texas, and watches too much local news, and believed that murderous immigrants were pouring over the border, guns blazing, taking out innocent American citizens daily at the beach and grocery store. So she voted for him because she believed only he could stop this from happening.
Person two is a wealthy white boomer. His business already runs in America. He actually has an advanced degree in economics. He believed that the tariffs would only be used surgically by smart people to protect American business. He is not personally affected by any of the racist policies or any of the other shenanigans. So he voted for him because he liked the protectionist and tax cut policies.
He regrets his vote. I haven't spoken to her in months because she stopped talking to me when I kept showing her that her "facts" were made up.
Many of your examples came from people who were funded by Universities in the 80s, which was basically the VC of the time. And in the 90s, a lot of the core committers of those projects were already working at VC funded companies.
Back then it was very normal to get VC funding and then hire the core committers of your most important open source software and pay them to keep working on it. I worked at Sendmail in the 90s and we had Sendmail committers (obviously) but also BSD core devs and linux core devs on staff. We also had IETF members on staff.
You can be a super productive Python coder without any clue how assembly works. Vibe coding is just one more level of abstraction.
Just like how we still need assembly and C programmers for the most critical use cases, we'll still need Python and Golang programmers for things that need to be more efficient than what was vibe coded.
But do you really need your $whatever to be super efficient, or is it good enough if it just works?
Humans writing code are also nondeterministic. When you vibe code you're basically a product owner / manager. Vibe coding isn't a higher level programming language, it's an abstraction over a software engineer / engineering team.
That's not what determinism means though. A human coding something, irrespective of whether the code is right or wrong, is deterministic. We have a well defined cause and effect pathway. If I write bad code, I will have a bug - deterministic. If I write good code, my code compiles - still deterministic. If the coder is sick, he can't write code - deterministic again. You can determine the cause from the effect.
Every behavior in the physical World has a cause and effect chain.
On the other hand, you cannot determine why an LLM hallucinated. There is no way to retrace the path taken from input parameters to generated output. At least as of now. Maybe that will change in the future, when we have tools that can retrace the path taken.
You misunderstand. A coder will write different code for the same problem each time unless they have the solution 100% memorised. And even then, a huge number of factors can influence whether they remember 100% of the memorised code or opt for different variations.
People are inherently nondeterministic.
The code they (and AI) write, once written, executes deterministically.
> A coder will write... or opt for different variations.
Agreed.
> People are inherently nondeterministic.
We are getting into the realm of philosophy here. I, for one, believe in the idea of living organisms having no free will (or limited will, to be more precise; one could even go so far as to say "dependent will"). So one can philosophically explain that people are deterministic, via concepts of Karma and rebirth. Of course none of this can be proven. So your argument can be true too.
> The code they (and AI) writes, once written, executes deterministically.
Yes. Execution is deterministic. I am however talking only about determinism in terms of being able to know the entire path: input to output. Not just the output's characteristics (which are always going to be deterministic). It is the path from input to output that is not deterministic, due to the presence of a black box - the model.
I mostly agree with you, but I see what afro88 is saying as well.
If you consider a human programmer as a "black box", in the sense that you feed it a set of inputs—the problem that needs to be solved, vague requirements, etc.—and expect a functioning program as output that solves the problem, then that process is similarly nondeterministic as an LLM. Ensuring that the process is reliable in both scenarios boils down to creating detailed specifications, removing ambiguity, and iterating on the product until the acceptance tests pass.
Where I think there is a disconnect is that humans are far more capable at producing reliable software given a fuzzy set of inputs. First of all, they have an understanding of human psychology, and can actually reason about semantics in ways that a pattern matching and token generation tool cannot. And in the best case scenario of experienced programmers, they have an intuitive grasp of the problem domain, and know how to resolve ambiguities in meatspace. LLMs at their current stage can at best approximate these capabilities by integrating with other systems and data sources, so their nondeterminism is a much bigger problem. We can hope that the technology will continue to improve, as it clearly has in the past few years, but that progress is not guaranteed.
Agree with most of what you say. The only reason I say humans are different from LLMs when it comes to being a "black box" is because you can probe humans. For instance, I can ask a human to explain how he/she came to the conclusion and retrace the path taken to come to said conclusion from known inputs. And this can also be correlated with say brainwave imaging by mapping thoughts to neurons being triggered in that portion of the brain. So you can have a fairly accurate understanding of the path taken. I cannot probe the LLM however. At least not with the tools we have today.
> Where I think there is a disconnect is that humans are far more capable at producing reliable software given a fuzzy set of inputs.
Yes true. Another thought that comes to my mind is I feel it might also have to do with us recognizing other humans as not as alien to us as LLMs are. So there is an inherent trust deficit when it comes to LLMs vs when it comes to humans. Inherent trust in human beings, despite being less capable, is what makes the difference. In everything else we inherently want proper determinism and trust is built on that. I am more forgiving if a child computes 2 + 1 = 4, and will find it in me to correct the child. I won't consider it a defect. But if a calculator computes 2 + 1 = 4 even once, I would immediately discard it and never trust it again.
> We can hope that the technology will continue to improve, as it clearly has in the past few years, but that progress is not guaranteed.
Perhaps there is no need to actually understand assembly, but if you don't understand certain basic concepts actually deploying any software you wrote to production would be a lottery with some rather poor prizes. Regardless of how "productive" you were.
Somebody needs to understand, to the standard of "well enough".
The investors who paid for the CEO who hired your project manager to hire you to figure that out, didn't.
I think in this analogy, vibe coders are project managers, who may indeed still benefit from understanding computers, but when they don't the odds aren't anywhere near as poor as a lottery. Ignorance still blows up in people's faces. I'd say the analogy here with humans would be a stereotypical PHB who can't tell what support the dev needs to do their job and then puts them on a PIP the moment any unclear requirement blows up in anyone's face.
Use tool calling. Create a simple tool that can perform only the allowed calls/queries. Then teach the LLM what the tool can do. Allow it to call the tool without human input.
Then it will only stop when it wants to do something the tool can't do. You can then either add that capability to the tool, or allow that one time action.
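To make that concrete, here's a minimal sketch in Python of what baking the guardrails into the tool can look like. The `run_query` tool, the `app.db` path, and the `dispatch` shape are all hypothetical, invented for illustration rather than taken from any particular framework; the point is just that the allow/refuse decision happens in plain code outside the model:

```python
# Minimal sketch of "guardrails live in the tool". The tool name, the
# app.db path, and the dispatch shape are illustrative assumptions.
import sqlite3

def run_query(sql: str) -> str:
    """The only tool exposed to the LLM: read-only SQL queries."""
    if not sql.lstrip().upper().startswith("SELECT"):
        # Refuse instead of executing; the model sees this message and
        # can stop, or you extend the tool later for the one-off case.
        return "REFUSED: only SELECT statements are permitted."
    # Open the database read-only as a second layer of defence, so even
    # something smuggled past the prefix check can't write anything.
    conn = sqlite3.connect("file:app.db?mode=ro", uri=True)
    try:
        return repr(conn.execute(sql).fetchall())
    finally:
        conn.close()

def dispatch(tool_name: str, args: dict) -> str:
    """Route a tool call from the model; unknown tools are rejected."""
    tools = {"run_query": lambda a: run_query(a["sql"])}
    if tool_name not in tools:
        return f"REFUSED: unknown tool {tool_name!r}"
    return tools[tool_name](args)
```

The LLM only ever sees the tool's string results, so it can run unattended: anything outside the allowlist comes back as a refusal rather than an action.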
This is the answer, and this strategy can be used on lots of otherwise unsafe activities - put a tool between the LLM and the service you want to use, and bake the guardrails into the tool (or make them configurable)
Well, be careful. You might think that a restricted shell is the answer, but restricted shells are still too difficult to constrain. But if you over-constrain the tools then the LLMs won't be that useful. Whatever middle ground you find may well have injection vulnerabilities if you're not careful.