You are not "having a conversation". Stop anthropomorphizing. You are interacting with a machine with its own singular, inhuman workings, developed and kept on a leash by a megacorporation.
Will it report me if I try to discuss "The Anarchist Cookbook" with it? Will it try to convince me "The Protocols of the Elders of Zion" is real? Will it encourage me to follow the example of the main character in "Lolita"? Will it cast any gay or transsexual character in a bad light because the megacorp behind it is forced to toe the anti-woke line of the US government in order to keep its lucrative military and anti-immigration contracts?
I'm interacting with a language model, using language and normal phrases. From my point of view, that is basically a conversation, as it is mostly indistinguishable from saying the same things and getting similar answers from a real person who had read that book. It doesn't matter what is on the other side, because we are talking about the exchanged text, not the participants or whatever might be in their minds or chips.
If you’re against anthropomorphism of LLMs then how can it “encourage” you if you’re not having a conversation? How could it “convince” you of anything or cast something in a bad light without conversing?
Your point about censorship, however, I fully agree with.
Safety is a valid concern in general. But avoidance is not the right way to approach it. Democratizing access to such tools (and developing a somewhat open ecosystem around them) for researchers and the general public is the better way, IMO. This way, people with more knowledge (not necessarily technical knowledge; philosophers, for example) can experiment, explore this space, and guide the development going forward.
Also, the base assumption of every prospering society is a population that cares about and values its freedom and rights. If a society drifts towards becoming averse to learning about these virtues ... well, there will be consequences. (And yes, we are going this way. Look at the current state of politics, wealth distribution, and labor rights in the US. People would have been a lot more resentful of this in the 1960s or 70s.)
The same is true of AI systems. If the general public (or at least a good percentage of researchers) studies them well enough, they will force alignment with true human values. By contrast, censorship, or less equitable and harder access with only after-the-fact evaluation, is really detrimental to this process: more sophisticated and hazardous models will be developed without any feedback from intellectuals or society, and those misaligned models can then cause a lot of harm in the hands of a rogue actor.