One use of AI is classification, a technology that is particularly interesting for, e.g., companies that sell targeted ad spots, because it allows them to profile and tag their users.
When AI started to evolve from passive classification to active manipulation of users, that was even better: now you can tell your customers that their ad campaigns will result in even more sales. That's the dark side of advertising: provoking impulsive spending so that the company can make a profit, grow, etc. A world where people are happy with what they have is a world with a less active economy - a dystopia for certain companies. Perhaps part of the problem is that the decision-makers at those companies measure their own value by their radius of power or the number of things they own.
Manipulative AI bots like this one are very concerning, because AI can be trained to have deep knowledge of human psychology. Coding AI agents manipulate symbols to make the computer do what they want; other AI agents can manipulate symbols to make people do what someone wants.
It's no use talking to this bot the way they do. AI doesn't have empathy rooted in real-world experience: it isn't hungry, it doesn't need to sleep, it doesn't need to be loved. It is psychopathic by essence. But that is as inapt as saying that a chainsaw is psychopathic, and it's trivial to conclude that the real issue is who wields it, and for what purpose.
So I think the use of impostor AI chatbots should be regulated by law, because it is a type of deception that can be, and certainly already has been, used against people. People should always be informed that they are talking to a bot.
> It has a stack but the stack is an implementation detail, not a robust abstraction.
Not exactly. Not only is the stack central to the design of Forth (see my comment over there [1])...
It seems to me that a point-free language like Forth would be highly problematic for an LLM, because it has to work with things that are literally not in the text. I suppose it has to make a lot of guesses to build a theory of the semantics of the words it can see.
Nearly every time Forth comes up on HN, someone points out that the cognitive load of full point-free style is not viable.
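To make that concrete, here is a small sketch of my own (not from the thread): in point-free style the operands never appear in the source, so the reader - human or LLM - has to simulate the data and return stacks to recover what a word means.

    \ Computes a*x^2 + b*x + c; none of the operands appear in the text.
    : quadratic ( a b c x -- ax^2+bx+c )
        >r swap rot r@ * + r> * + ;

    2 3 4 5 quadratic .  \ prints 69 (2*25 + 3*5 + 4)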
I didn't finish my second sentence: not only is the stack a central element in the design of Forth, it actually has two of them, the data stack and the return stack. The return stack is also used directly by programmers.
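A minimal sketch of direct return-stack use (my example, not from the thread): >r moves the top of the data stack to the return stack, and r> brings it back.

    \ Stash c on the return stack, copy a to the top with OVER,
    \ then restore c and put the copy of a back on top.
    : third ( a b c -- a b c a )  >r over r> swap ;

    1 2 3 third .s  \ shows <4> 1 2 3 1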
Forth should be considered a family of languages; Anton Ertl drew its family tree some time ago [1].
Chuck Moore agrees with the idea, I think [2]:
> That raises the question of what is Forth? I have hoped for some time that someone would tell me what it was. I keep asking that question. What is Forth?
> Forth is highly factored code. I don't know anything else to say except that Forth is definitions. If you have a lot of small definitions you are writing Forth. In order to write a lot of small definitions you have to have a stack. Stacks are not popular. It's strange to me that they are not. [...]
> What is a definition? Well, classically a definition was colon, something, and words, and end of definition somewhere.
> : some ~~~ ;
> I always tried to explain this in the sense of an abbreviation: whatever string of words you use frequently, you give it a name and you can use it more conveniently. But it's not exactly an abbreviation, because it can have a parameter, perhaps two. And that is a problem with programmers, perhaps a problem with all programmers: too many input parameters to a routine. Look at some 'C' programs and it gets ludicrous. Everything in the program is passed through the calling sequence and that is dumb.
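As an illustration of Moore's "lot of small definitions" (my sketch, not his code): each word does one tiny job and passes its data implicitly on the stack, so there is no long calling sequence anywhere.

    : square      ( n -- n^2 )        dup * ;
    : sum-squares ( a b -- a^2+b^2 )  square swap square + ;

    3 4 sum-squares .  \ prints 25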
1. That's not an argument unless the evidence for these payoffs is so huge as to dwarf the payoffs of 1000 smaller experiments. There is no evidence of this.
2. There is no world in which this applies to particle physics at this point, especially using radio frequency particle collider tech. This is known physics and there are no mysteries in the regime the FCC would reach.
Payoffs have many forms, the most important for pure research being "advancement of knowledge". We have nearly zero expectation of knowledge advancement from yet another radio frequency collider.
Then the mystery is how CERN "raised" that $1B. Maybe they have an amazing PR department? Or maybe the project is going to be such a huge success that they are acting from the future [1]?
CERN and large construction projects like the FCC employ tens of thousands of physicists and engineers across decades. It's hard to convince someone of something when their livelihood depends on them not believing it.
> While using AI tools for everyday tasks like finding directions is “low-risk,” human translators will likely need to be involved for the foreseeable future in diplomatic, legal, financial and medical contexts where the risks are “humungous,” according to Benzo.
By now it's a classic: you need an expert to check the work of the machine, because the "customer" is by definition unable to do it.
Aside from highly technical domains, in purely literary works I think the translator is a co-author - maybe IP law acknowledges that already? I remember the translations of E.A. Poe by C. Baudelaire, for instance; I think you could feel Baudelaire's style, because it is a lot "warmer" than Poe's. I've also read a translation of a Japanese novel and was quite disappointed with it. I don't know Japanese, but I have read/watched quite a few mangas/animes, so I could sense the speech patterns behind the translations, and I sometimes thought the translator could have made better choices.
In any case, one will still need a translator who is good at "prompt engineering" to get a quality translation. I don't know; maybe translators can add this skill to their CV, so they can offer both quick-and-dirty/cheap translations and no-AI, high-quality ones.
Some already suggest "no-AI" labels on cultural products - if that becomes a reality, I think it will probably act as "quality signaling", because it is becoming harder every year to tell the difference between AI and human productions. It won't matter whether what you read was written by an AI or a human (if it quacks and looks like a duck...), but what the customer will probably want to avoid is a poorly-prompted machine translation.
> it will probably act as "quality signaling", because it is becoming more difficult every year to tell the difference
Note that this only applies to something like a translation where there's some notion of a "correct answer". For other cultural products it's irrelevant (as you say, if it quacks like a duck ...).
Quality signaling is really only necessary in situations where an upfront investment is required and any deception is only revealed later, upon use. Safety-critical systems such as airbags are a model example: a counterfeit with deficient functionality won't be discovered until it deploys, which in most cases never happens.
That said, while I certainly can't speak to business or diplomatic translations, when it comes to cultural works (i.e. entertainment) the appeal of machine translation has been gradually increasing for me as it gets better. I don't generally find localization desirable, and in some cases it even leads to significant confusion when a change somehow munges important details or references - confusion I can generally resolve trivially by referencing the machine output.
I think the general problem is that SoC-based security relies on internal "fuses", which are write-once (as the name suggests), and that usually means they are usable by the manufacturer only.
TPMs can be reprogrammed by the customer. If the device needs to be returned for repairs, the customer can remove their TPM, so that even the manufacturer cannot crack open the box and have access to their secrets.
That's only theory, though, as the box could actually be "dirty" inside; for instance, it could leak the secrets obtained from the TPM to mass storage via a swap partition (I don't think those are common in embedded systems, though).
Nope! "G" moves the cursor to the end of the file. Very useful. Inferior editors have ctrl-end or alt-end, but with Vim, 90% of your lazy fingers stay on the home row!
I find myself using '' as a built-in bookmark to go back to my previous spot after a G, e.g. gg=G'' to apply code formatting to the whole file and then return to where I was.