There have been several projects lately attempting to create running context/memory, and Claude Code also has some concept of continuous conversational memory, but all of these are bolted on at inference time; there's still no concept of conversations feeding back into the base model's training/weights on the fly.
If you are implying I am a bot myself, I have to disappoint you. I am just not a native speaker, so some phrases may be awkward because I translate from German to English.
Was this just awkward phrasing, or did something change so that they now learn after training?