Agreed, but in the case of the lie detector, it seems it's a matter of interpretation. In the case of LLMs, what is it? Is it a matter of saying "It's a next-word calculator that uses stats, matrices and vectors to predict output" instead of "Reasoning simulation made using a neural network"? Is there a better name? I'd say it's "A static neural network that outputs a stream of words after having consumed textual input, and that can be used to simulate, with a high level of accuracy, the internal monologue of a person who would be thinking about and reasoning on the input". Whatever it is, it's not reasoning, but it's not a parrot either.
A lot of people confuse access to information with being smart, because for humans the two correlate well: usually the smart people are those who know a lot of facts and can easily manipulate them on demand, and the dumb people are those who cannot. LLMs have the unique capability of being both very knowledgeable (as in, able to easily access vast quantities of information, way beyond the capabilities of any human, PhD or not) and very dumb, in a way a kindergarten kid wouldn't be. It totally confuses all our heuristics.
The most reasonable assumption is that the CEO is using dishonest rhetoric to upsell the LLM, instead of taking your approach and assuming the CEO is confused about the LLM's capability.
There are savvy people who know when to say "don't tell me that information", because then it is never a lie, simply "I was not aware".
I mean, if I were promised a "never-have-to-work-ever-again" amount of money in exchange for doing what I'd love to do anyway, on something I believe actually works, and tolerating the CEO publicly proclaiming some exaggerated bullshit about it (when nobody asks my opinion of it anyway), I'd probably take it.
I mean vested stock doesn't necessarily need to go up in value. In some places the stock comp is higher than the actual pay, so even if you sell on the day of vesting you're doubling your income. So the investment viability of stock comp doesn't matter too much, it only matters how much of it you receive.
Well, since OpenAI is private, the employees are probably getting stock options, not RSUs, and those shares might be hard to sell even if they do exercise them.
The right tool might be Spiking Neural Networks, due to their sparse activation, event-driven computation, and temporal coding. This all depends on how good neuromorphic chips get.
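To make those three properties concrete, here's a minimal sketch (illustrative only, not any real neuromorphic API; the function name `lif_run` and its parameters are made up) of a leaky integrate-and-fire neuron: the membrane potential leaks each step, integrates input, and produces output only on threshold crossings, so computation is sparse and event-driven, and the spike *times* themselves carry information (temporal coding).

```python
def lif_run(inputs, leak=0.9, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron over a list of input currents.

    Returns the time steps at which the neuron spiked.
    """
    v = 0.0        # membrane potential
    spikes = []    # spike events: the only "output" the neuron emits
    for t, i_in in enumerate(inputs):
        v = leak * v + i_in      # leaky integration of input
        if v >= threshold:       # threshold crossing -> discrete spike event
            spikes.append(t)     # spike timing is the information carrier
            v = 0.0              # reset after firing
    return spikes

print(lif_run([0.5, 0.5, 0.0, 0.6, 0.6]))  # spikes only when charge accumulates
```

Note how most time steps produce no output at all; on neuromorphic hardware, downstream neurons would only do work when a spike actually arrives.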
Would depend on the implementation, I think. You could probably write one like that, but as usual there will be trade-offs. I suspect it would also matter if you're targeting booleans or numbers.
The typical, obvious example of how these things work in Prolog is list concatenation. If you only supply variables, it'll start outputting lists of increasing length with placeholder variables. It's a somewhat simple engine for traversing a problem space and testing constraints, using backtracking to step through it. Some implementations let you explicitly change the search strategy.
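A rough Python analogue of one concrete case (a hypothetical generator, not Prolog itself): when the third argument of `append/3` is a known list and the first two are variables, backtracking enumerates every split on demand, one solution per choice point.

```python
def append_splits(z):
    """Yield every (x, y) such that x + y == z, mimicking how Prolog's
    append(X, Y, Z) backtracks through the splits of a concrete Z."""
    for i in range(len(z) + 1):
        yield z[:i], z[i:]   # each choice point commits to one split

print(list(append_splits([1, 2, 3])))
```

With all three arguments unbound, real Prolog instead enumerates an infinite stream of ever-longer list skeletons with fresh placeholder variables, which a finite Python list can't capture; this sketch only shows the bounded case.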
I place a post-it note over each paragraph with a few words: motivation: xyz, challenge: xyz, SOTA, approach: xyz.
I read to forget because my words are much easier to skim than someone else’s.