Are you for real, dude? "Lkms" is a typo of "LLMs". Semantics exists only in task demands, and those are variable. They extend limitlessly from action or spatial syntax; they don't exist in words or images. You've been sold junk tech. Read any Gary Marcus or Rodney Brooks. And yes, I am enjoying the ride; we are the next stage: analog entertainment.
You cannot be serious if you expect people to mind-read through your typos and make sense of them. Is this supposed to be a performance art piece to demonstrate your point? If you actually care about expressing your point to another person, then you should pay some attention to how you present your responses so the other person can understand them.
Is "task demand" what the LLM would expect to do in order to respond to the user prompt? It seems implausible that semantics would exist only there. As I have mentioned before, semantics is already embedded in the input and output for the LLM to implicitly discover, model, and reason with.
https://arxiv.org/html/2507.05448v1
This paper is an interesting overview of semantics in LLMs. Here's a key quote: "Whether these LLMs exhibit semantic capabilities, is explored through the classical semantic theory which goes back to Frege and Russell. We show that the answer depends on how meaning is defined by Frege and Russell (and which interpretation one follows). If meaning is solely based on reference, that is, some referential capability, LLM-generated representations are meaningless, because the text-based LLMs representation do not directly refer to the world unless the reference is somehow indirectly induced. Also the notion of reference hinges on the notion of truth; ultimately it is the truth that determines the reference. If meaning however is associated with another kind of meaning such as Frege’s sense in addition to reference, it can be argued that LLM representations can carry that kind of semantics."
As for reference-based meaning reliant on truth, this was mentioned earlier in the paper, "An alternative to addressing the limitations of existing text-based models is the development of multimodal models, i.e., DL models that integrate various modalities such as text, vision, and potentially other modalities via sensory data. Large multimodal models (LMMs) could then ground their linguistic and semantic representations in non-textual representations like corresponding images or sensor data, akin to what has been termed sensorimotor grounding (Harnad, 1990). Note however, that such models would still not have direct access to the world but a mediated access: an access that is fundamentally task-driven and representational moreover. Also, as we will argue, the issue is rather that we need to ground sentences, rather than specific representations, because it is sentences that may refer to truth. But attaining the truth is not an easy task; ultimately, future LMMs will face the same difficulties as we do in determining truth."
In other words, this is the approach Fei-Fei Li and other researchers building multimodal models are taking to create world models.
Laughing all the way at the AI clown show.