>> - Put a strawberry in the left eye socket.
>> - Put a blackberry in the right eye socket.
>> All five of the edits are implemented correctly
This is a GREAT example of the (not so) subtle mistakes AI will make in image generation, or code creation, or your future knee surgery. The model placed the specified items in the eye sockets based on the viewer's left/right; when we talk relative in this scenario, we usually (always?) mean from the perspective of the target or "owner". Doctors make this mistake too (they typically mark the correct side with a sharpie while the patient is still alert), but I'd be more concerned if we're "outsourcing" decision-making without adequate oversight.
There's a classic, well-illustrated book, _How to Keep Your Volkswagen Alive_, which spends a whole page at the beginning building up a reference frame for working on the vehicle. Up is sky, down is ground, front is always the vehicle's front, left is always the vehicle's left.
Sounds a bit silly to write it out, but the diagram did a great job removing ambiguity when you expect someone to be lying on the ground in a tight place, looking backwards, upside down.
Also feels important to note that in the theatre there are stage-right and stage-left: jargon meant to disambiguate, even though you have to already know the jargon to understand it.
It doesn't affect your point, but since the IAU are insane, exoplanets technically aren't planets, and Jupiter is the largest planet in the universe.
Seems like you're making a judgment based on your own experience, but as another commenter pointed out, it was wrong. There are plenty of us out there who would confirm it, because people are too flawed to trust. Humans double- and triple-check, especially under higher-stakes conditions (surgery).
Heck, humans are so flawed, they'll put the things in the wrong eye socket even knowing full well exactly where they should go - something a computer literally couldn't do.
“People are too flawed to trust”? You’ve lost the plot. People are trusted to perform complex tasks every single minute of every single day, and they overwhelmingly perform those tasks with minimal errors.
Extremely talented, studied, hard-working humans perform complex tasks all the time, and never with a 100% win rate over all time.
In other examples, almost every single person has had the experience of saying, "turn right", "oh I meant left sorry, I knew it was right too, I don't know why I said left". Even the most sophisticated humans have made this error. A computer would never.
Humans are deeply flawed: even after pre-selection, they require expensive training to perform complex tasks at a never-perfect success rate.
Intelligence in my book includes error correction. Questioning possible mistakes is part of wisdom.
So the understanding that AI and HI are different entities altogether, with only a subset of communication protocols between them, will become more and more obvious, as some comments here are already implicitly suggesting.
If the instructions were actually specific, e.g. "Put a blackberry in its right eye socket", then yes, most humans would know what that meant. But the instructions were not that specific: "in the right eye socket".
If you asked me right now what the biggest known planet was, I'd think Jupiter. I'd assume you were talking about our solar system ("known" here implying there might be more planets out in the distant reaches).
> when we talk relative in this scenario we usually (always?) mean from the perspective of the target or "owner".
I dunno... I feel pretty confident 99% of people would do the same thing, and put the strawberry in the eye socket to our left, the viewer's.
You really have to be trained explicitly to put yourself in the subject's shoes, and very few people are. To me, the model is correctly following the instructions as most people would mean them.
And it's not even incorrect. "The left x" is linguistically ambiguous. If you say "the left flower", it's obviously the flower to our left. So when you say "the left eye socket", the eye socket to our left is a valid interpretation. If they had said "their" or "its" left eye socket, then it's more arguable that it must be from the subject's side. But that's not the case in this example.
I don't know if that's so much a mistake as it is ambiguity though? To me, using the viewer's perspective in this case seems totally reasonable.
Does it still use the viewer's perspective if the prompt specifies "Put a strawberry in the _patient's left eye_"? If it does, then you're onto something. Otherwise I completely disagree with this.
I think "the left eye" in this particular case (a photo of a skull made of pancake batter) is still very slightly ambiguous. "The skull's left eye" would not be.
“The right socket” can only be interpreted one way when talking about a body, just like you only have one right hand, despite the fact that it is on my left when I'm looking at you.
If you are facing a wall-plate with two power sockets on it side by side and you are telling someone to plug something in, which one would be "the right socket", and which would be "the left socket"?
If above the wall-plate is a photo of a person and you are telling someone to draw a tattoo on the photo, which is "the right arm" and which is "the left arm"?
ETA: and if I were telling someone which socket to plug something into, it would absolutely be from the perspective of the person doing the plugging, not from inside the wall.
"Right hand" is practically a bigram that has more meaning, since handedness is such a common topic.
Also, context matters: if you're talking to someone, you would say "right shoulder" for _their_ right, since you know they're an observer with a different vantage point. Talking about a scene in a photo, "the right shoulder" to me would more often mean the right portion of the photo, even if it was the person's left shoulder.
That was a big problem when I was toying around with the original Nano Banana. I always prompted from the perspective of the (imaginary) camera, and yet NB often interpreted that as the target's perspective, giving no way to select the opposite side. Since the selected side is generally the one closer to the camera, my usual workaround was to ask for the side farther from the camera. And yet even that was not perfect.
https://minimaxir.com/2025/11/nano-banana-prompts/#hello-nan...