Neither NVIDIA nor the OpenClaw bros care about security at this point. NVIDIA of course wants to fuel the hype train and will proudly point to this, adding 0.1% security on top of 2000% insecurity. Most bros won't even mind; they produce insecure crap at light speed and never look back. It's probably just there to trick gullible non-tech corps into this junk.
Terrible historical practices and immoral productions shouldn't be the sole reason for such a dismissal. Every industry had its fair share of terrible things. Sometimes we learn to do better. There are also enough ongoing things to be worried about.
Hollywood should implode, and hopefully the art form will be resurrected for the better. But for me the primary reason is that they don't live up to what they are supposed to do: create good art.
The greed is the same across all players and industries. I don't see why Hollywood's situation is any different. They just failed to adapt to new conditions.
I agree that the good stuff is just a result of shotgunning: for every great movie, there are 100 forgettable ones. But we have a habit of concentrating power in one place, so I have no clue how it could be otherwise. Sure, YouTube is an alternative with a myriad of independent creators, but it has produced totally different outcomes.
In capitalism, capital holders are in charge of what gets resources.
If you want to make something non-trivial, you need investment from a capital holder. Even indie movies spend tons of effort on funding, investment, and cash flow just to make something happen.
The death of Hollywood does not change this reality; it simply changes which capital holders you must seek patronage from.
Hollywood was run by sex pests and morons, sure, but so are the rest of the capital holders we have. Big movies will still be beholden to the rich, because the rich are the ones who have capital.
The shape of the system hasn't changed, only the names of the people at the top.
And that's where it went off the rails into lala land. 'a' can have all kinds of distinct meanings. How are you going to make that work? It's hopeless.
a) it's a bullet point
b) in a+b, 'a' is a variable
c) in "apple", 'a' stands for the sound "aaah"
d) in "ape", 'a' stands for the sound "ay"
e) in 0xa, 'a' means the value 10
f) an "a" on my test paper means I did well on it
g) grade "a" on a box of bolts means I bought the good ones
h) in "Achtung", it's a German 'a'
I didn't need 8 different Unicode characters. And so on.
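To make the point concrete, here's a minimal Python sketch (the look-alike characters are just examples I picked, not anything from the discussion above). Every meaning in that list rides on the single codepoint U+0061, with context supplying the semantics, while Unicode separately encodes near-identical characters that differ only in declared meaning, which is exactly the redundancy I'm questioning:

    import unicodedata

    # One codepoint covers the bullet point, the variable, the vowel
    # sounds, the hex digit and the grade; meaning comes from context.
    print(hex(ord("a")))  # 0x61 in every one of those roles

    # Unicode nonetheless encodes look-alikes that differ only in
    # their declared semantics:
    for ch in ["a", "\u0430", "\U0001d44e", "\uff41"]:
        # Latin, Cyrillic, math italic, fullwidth
        print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")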
Your trolling has really hit rock bottom. All of this already works fine, millions of times each day. Just once a week it fails because someone messed up. Not an issue.
I showed that there is no need for semantic information about the glyphs. It would be more compelling to demonstrate a need for semantic information than to just assert it.
In this case LLMs were obviously used to dress the code up as more legitimate, adding human- or project-relevant noise. It's social engineering, but with the tedious bits left to an LLM. The sophisticated part is the obscurity of the whole process, not the code.
Really weird how you compare contributing to a commercial, market-leading company with open-source licensing. How does that even make sense? If you want to give your knowledge or labor away for free, go ahead. Google will not notice you either way.
What just came to mind is that the current main selling point of AI is coder productivity. Some anecdotal experiences from a small agile team:
With 1-week sprints, our PO sometimes had trouble preparing enough work for the next sprint. With 4-week sprints, we often ended up pulling tickets from the next sprint. There was often a mismatch in pace. (Quite funny: by the time we had found a balance, management ordered all teams to use the same sprint length. They couldn't deal with all the asynchronous, overlapping sprint starts and ends. They chose to forfeit our productivity for theirs.)
So productivity isn't all about coders; it's also about owners/managers/shareholders supplying work. That work is largely about communicating with the various parties involved and researching use cases and features in a very specific context. LLMs can help with parts of it, but at some point there will be a flood of excessive, unverified, generic reports, and LLMs condensing them again with all their inaccuracies, until managers and owners drown in a fuzzy mess of LLM bureaucracy. Nuance and importance will get lost in the excess.
We often had rather large stories that consisted of just a small set of bullet points, because we had already communicated everything in person and they were merely reminders of the most important stuff. What matters here is that this reflected the team's agency over how we solve things. An LLM can currently provide nothing like that, as they are always excessive and try to add "helpful" details. They simply cannot pick up on social norms and agreements, and prompting them correctly is, in my opinion, very hard or too time-consuming.
LLM-assisted coding, or vibe coding, is all the hype. But I have the feeling the big realization will set in once all the supporting processes are cluttered with AI noise, the peers who used to collaborate have grown detached, and social conflicts and misunderstandings escalate.