Hacker News: cedws's comments

OpenAI does identity verification too. I don't know if they use Persona, but they should all be considered equally invasive.

They use Persona for their new "Trusted Access for Cyber" (https://chatgpt.com/cyber), at least according to the FAQ.

This is the beginning of AI clouds, in my estimation. Cloud services provide the lock-in they need and support the push to offer higher-level services on top of the models. It just makes sense: they'll never recoup the costs on inference alone.

The EU is in favour of this kind of rubbish, as is the UK. We need to kick these idiots out of power.

And replace them with other idiots who also support this rubbish? There isn't anyone running on a platform of sanity and decency.

It seems the fundamental problem with democracy is that all the wrong people seek power, and the most qualified shy away from it.

Representative democracy.

Other forms exist. Don’t throw the baby out with the bath water.

https://www.aboutswitzerland.eda.admin.ch/en/political-syste...


Anybody running GLM 5.1 locally or within their company? What kind of hardware do you need to achieve performance similar to Claude?

Gen Z has a very doomeristic view towards CS careers now due to social media influencers prematurely dismissing CS/software paths as a dead end or impossible to break into because of AI.

I know the job market can suck for graduates right now, but I do believe studying CS can still lead to decent paying careers. There's always going to be demand for people who understand code, who can break down complex problems and bring a problem solving mindset. LLMs don't solve everything.

The drop in CS students ironically may create a vacuum that allows us employed engineers to demand even higher compensation.


I feel like CS is just correcting back to what it was.

Even back when I was in college (graduated 2017), I noticed there was this clear bifurcation among the students. A lot of the students at that time did it because you could score a great job after college, but a smaller cohort just loved the game. And even back then, loads of students from the former group washed out, or graduated and then took other jobs.

It's no different today, except that the group that did it for money is washing out before they even get to college, because they fear that AI will take their jobs, while the latter group is still here and able to do more and more with AI.

It's a truly wild time to be alive in this industry. Half of us are seeing the doom and gloom of AI and the other half are seeing the "next age" happen right before our eyes.

And I'll be honest I kinda feel sad for the folks that take the negative view of AI right now. Cause I'm having more fun than I've ever had before in this industry.


It's kind of always been a wild time to be alive in this industry.

You covered people washing out of CS degrees and people getting degrees and then not, ultimately, doing something in the CS field.

But what you see in our field that you don't see as often, elsewhere (or -- at all, depending on regulations) is people who ... (1) washed out of the degree because it was competing with their lucrative career as a software developer (or -- more rarely, successful entrepreneur), (2) got a degree in textual biblical studies and had a long career in software development[0] or, you know, other unrelated degree, (3) none of the above, even sometimes incomplete High School education (also[0]).

I've been hiring developers for almost 30 years now, at a variety of employers: one global multinational telecom, one "we make a lot of the products other companies pass off as their own work" IoT/small shop, and a couple of video conference/remote-enabling service shops. There are far more degrees out there today than there were 30 years ago. My experience, however, is that the necessity of a degree at the companies I've worked for has gone down. I suspect that's because I worked for the giant multinational first, and all of the rest have been startups or smaller/younger shops (typically 5-10 devs, but no more than ~20 at peak). The giant multinational, though, during my 17 years, went from (early on) "or equivalent experience" while rarely hiring someone without a degree for most IT positions, to routinely interviewing and hiring people without regard for their degree (and focusing on "code you've written" over whiteboard exercises, too), while still generally favoring candidates with one. At the best shop I've worked for, it was an even mix of "none", "some", "unrelated", "+bootcamp", and "CS degrees", and it was filled with extremely competent, well-paid developers.

It's a whole lot harder to get the experience required to have "equivalent experience" without university/internships/the like, but getting the degree without any relevant work experience along the way isn't a good way to go, either.

Around the late 90s (until the bust), and then again a few years later, everyone was pushing kids into CS degrees, and the most "interesting" aspect to many of those kids was the starting/long-term earnings against the cost of the 4-year degree. And while, personally, I think "anyone can do it", not everyone will find it enjoyable the way I do.

I'm starting to believe that last part is rarer than I'd assumed, with my 18-year-old son mostly disliking his introductory computer programming class in High School[1]. I don't push "what I do" on my kids, just like my Dad didn't, but I expose them to it whenever I can (like my Dad -- kind of -- didn't). And I'll never forget when their Mom looked over at my screen and said, "So ... is that what you do all day?", and I beamed "Yes", because it really is the most interesting thing in the world to me, and she said, "Wow ... I think I'd kill myself."

[0] Ok, so that's a specific example of someone I know.

[1] Ultimately coming around at the end when his assignment was "make something you want to make."


> I do believe studying CS can still lead to decent paying careers.

Yes, for countries like India.

With AI, outsourcing becomes much more effective.


Can you elaborate on your thesis as to why? It seems to me that, with raw code being less of a bottleneck, things like understanding the spec, polishing, and doing the fuzzy work around the edges become all the more important. These were never strengths of outsourcing. In fact, I think the importance of those parts is a big reason the profession as a whole wasn't simply outsourced, despite the compelling economic reasons for it.

See my other comment in this thread.

Isn't it the other way around: AI replacing outsourcing? AI can do the implementation work, but you still need a human to specify what has to be done, give architecture guidance, and check and accept the resulting work (or reject it, with notes on what to fix). AI coding is basically outsourcing to AI.

This is the paradox. But because AI makes outsourcing jobs easier, those workers need to compete, and so they will be able to do those specification jobs and quality control jobs as well.

The paradox is that the quality of AI output is directly proportional to your expertise and understanding, while the trust/belief/confidence in their effectiveness is inversely proportional.

It will still pay to develop the core understanding. It is just that the world can remain irrational longer than you can remain solvent :).


I was rich even before I came into this field; my family owns lots of agricultural land. I came to this field out of passion for it and was never really motivated by money.

Thing is, AI is taking outsourced jobs in India at a much faster rate than elsewhere.

The latest round of Oracle layoffs mostly hit workers in India.


Awful take, in my humble opinion. Outsourced vibe-coders as a concept is beyond scary.

It's not doomerism. I've seen this happen at companies.

I was talking to a guy who wanted uptime monitoring. He told an executive, who called UptimeRobot, but then another guy rolled out his own uptime monitor using AI in 60 minutes, deployed it along with centralized logging, and it costs the company only a $5 VPS.

And honestly it works just as well. I've seen companies refusing to pay for external tools and building leaner versions using AI.

You can build a SaaS faster now, but the need for SaaS is in decline.

I've moved to deploying on bare metal from OVH and Hetzner. Why? Because devops is completely reduced to a few minutes' worth of work using agents.


You don't need AI for that. Just deploy Uptime Kuma or similar to a VPS and job done. I can do that in about 15 min vs. your 60 min.

Of course, this is not a production-grade deployment. To get there, I'd need to build images on pipelines, scan them, test them, publish artefacts, write up the IaC to manage the cloud resources, add monitoring around the solution, ...

Deploying a simple piece of software on a custom server was never difficult or slow to do.


You need to invest time in learning its configuration and features you probably won't even need.

Ok, 30 minutes.

Isn't it too early to declare the vibe coded uptime monitoring as "good" from a business standpoint? NIH syndrome has always been a thing and I don't see how the downsides of creating bespoke systems have changed in the LLM-era. You're now stuck maintaining this "genius" solution, LLMs or not, and onboarding devs/users gets harder the more churn you have.

It's 100 lines of code, how hard is it to maintain? It's absolutely better than paying for an enterprise license of UptimeRobot.

Okay, so AI probably wasn't critical to crafting 100 lines of what I'm assuming is Python or some other scripting language that pings a URL. Second, I don't think 100 lines is going to be as robust as a pre-packaged monitoring solution (open source or enterprise tier) with a dashboard, an auth story, and proper documentation baked in.

If that script works for your use case then great, but I don't see how LLMs were a game changer here.
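For a sense of scale: a checker in that spirit does fit comfortably under 100 lines using only the Python standard library. This is a hypothetical sketch (the target URL, interval, and output format are invented for illustration), not the actual script being discussed:

```python
#!/usr/bin/env python3
"""Minimal single-file uptime checker: a sketch, not a product."""
import time
import urllib.request

SITES = ["https://example.com"]  # placeholder targets
INTERVAL_SECONDS = 60

def probe(url, timeout=10):
    """One HTTP GET; returns (is_up, detail)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400, f"HTTP {resp.status}"
    except OSError as exc:  # DNS failure, refused connection, timeout, 4xx/5xx
        return False, str(exc)

def run_once():
    """Probe every site once and log one line per site."""
    for url in SITES:
        up, detail = probe(url)
        state = "UP" if up else "DOWN"
        print(f"{time.strftime('%Y-%m-%dT%H:%M:%S')} {state} {url} ({detail})")

if __name__ == "__main__":
    run_once()  # a real deployment would loop: run_once(); time.sleep(INTERVAL_SECONDS)
```

Of course, this sketch has no alerting, dashboard, or auth, which is exactly the gap the parent comment is pointing at.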


Big companies would have rolled their own anyway, but now 2-3 person companies are also not paying for tools like UptimeRobot. That's the shift LLMs brought.

Those are some famous last words if I’ve ever heard them.

Some things are better when they are outsourced because other companies specialized for that problem set. Just like you won't roll your own cryptography, you won't and can't do everything in-house.

The reverse Dropbox curlftpfs comment.

> the need for SaaS is on decline

For 5 minutes. The need for cheap SaaS that one person can build and has no uptime requirements or security requirements or legal requirements or ongoing maintenance requirements is indeed declining.


Eh, that said, I think SaaS offerings had gotten overpriced. Couple that with stickers that say "We R havin S3curity" rather than actually secure systems, constant price increases, and methods of locking you into their services, and there was a need for some pushback in the market.

$5 VPS + your AI subscription?

> He told an executive, who called UptimeRobot, but then another guy rolled out his own uptime monitor using AI in 60 minutes, deployed it along with centralized logging, and it costs the company only a $5 VPS.

I'm a sysadmin, so I usually watch these things from the side, and I usually get called in one or two months later to clean up the mess (it happens almost every time).

What you have described is funny to me, because uptime monitoring for websites (and other stuff too) is pretty much the use case for Prometheus's blackbox exporter (https://github.com/prometheus/blackbox_exporter).

Assuming you work at a decent company and already have a decent monitoring system (Prometheus/Alertmanager), it takes 5 minutes to deploy and maybe ten to configure.

If you already have infrastructure, it's basically free. If you already have a Kubernetes cluster, it's practically managed already.
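For anyone unfamiliar, the setup is roughly this shape, per the exporter's README; the probe target and the exporter address below are illustrative, not anyone's real config:

```yaml
# blackbox.yml -- an HTTP probe module
modules:
  http_2xx:
    prober: http
    timeout: 5s

# prometheus.yml -- scrape config that routes targets through the exporter
scrape_configs:
  - job_name: "blackbox"
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - https://example.com       # sites to watch (illustrative)
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9115   # where blackbox_exporter listens
```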

> I've moved to deploying on bare metal from OVH and Hetzner. Why? Because devops is completely reduced to a few minutes' worth of work using agents.

At a small scale... maybe. In my experience, as soon as you reach a decent scale you'll need experienced engineers (software engineers, system engineers) actually at the steering wheel.


Apparently AWS's European Sovereign Cloud has Bedrock, so that could be an option.

The AWS Sovereign Cloud is still owned 100% by Amazon Inc. in the US. Not saying that rules it out for all use cases, but it's something that should be mentioned. "Sovereignty" is a somewhat vague term.

<American Company> European means nothing. They are all subject to the US CLOUD Act, and the moment you start using their services, one or two of those services inevitably end up contacting us-east-1 anyway. And that's without taking into account that they are all trying to fuck you over from behind anyway, as they sign data-exchange agreements between Europe and the US.

The large US players are not an option if you want your data safe from the US.


I haven't looked into the details but I remember from the announcement that the EU cloud is owned specifically by an EU entity headed by EU citizens. There would be no point spinning up a 'sovereign cloud' beholden to the US.

... And this entity is again owned by AWS, so the CLOUD Act still applies.

> There would be no point spinning up a 'sovereign cloud' beholden to the US.

Of course: it gives (both sides) a narrative that lets them pretend everything is alright.


How would the CLOUD Act apply if none of the employees of the AWS European Sovereign Cloud are US citizens?

> Courts can require parent companies to provide data held by their subsidiaries.

https://en.wikipedia.org/wiki/CLOUD_Act


But they would have no way to actually compel anyone who isn't a US citizen. The worst the US could do is fine Amazon until it complied.

Edit: Looks like the below is not true. However, such setup is technically possible and if they were serious about making it truly isolated from US influence, it can be done.

Original comment: No it's not owned by AWS. It's a separate legal entity with EU based board and they license the technology from the US company.


This source says it's 100% owned by AWS USA:

https://openregister.de/company/DE-HRB-G1312-40853


Hmm, I'm not sure how to interpret that page, but it looks like you are right; I'll edit my comment. I was told by GCP PMs that this is how the GCP/T-Systems setup is structured (see sibling comment) and that it mirrored the AWS setup, but maybe that was not correct.

How difficult would it be for the "independent" licensor to exfiltrate data from the "sovereign cloud" via logging or replication?

The control planes have to be completely independent for anything approaching real independence, not just some legal fiction that's slightly different[1] from the traditional big-tech practice of having an Irish subsidiary license the parent company's tech for tax-optimization purposes.

1. No different at all, according to sibling comment.


I don't know about AWS but I dealt with some (small / tangential) aspects of the GCP setup: https://www.t-systems.com/dk/en/sovereign-cloud/solutions/so...

It is completely separate. There isn't a shared control plane. You don't manage this in the GCP console; it's a separate white-label product.

Any updates GCP wants to push are sent as update bundles that must be reviewed and approved by the operator (T-Systems). During an outage, the GCP on-call or product team has no access; they talk to the operator, who can run commands or queries on their behalf, or share screenshots of monitoring graphs, etc.

(This information is ~3 years stale, but this was such a fundamental design principle that I strongly doubt it has changed.)


Just came back to this and saw it's shutting down. Unfortunate.

That does not address joshstrange's concerns.

There is very poor clarity about what is and isn't allowed with the Claude SDK/claude -p. Are we allowed to use it to automate stuff? What kinds of tasks is it permitted for? What if you call your script 'OrangeClaw' and release it on GitHub? What if your script gets super popular? Does it suddenly become against the ToS?


This is exactly my point. At what point does it become a ToS violation? Right now it's a huge grey area and the idea of getting my account banned because I crossed an invisible line with zero recourse other than to switch providers is... frustrating.

It's pretty easy to read between the lines, tbh. Personal, non-automated use is fine. Using it as a means to automatically deplete your 5-hour limit 24/7 ("leftover usage") is not fine. They don't want to put it in the ToS because that's almost impossible: writing what I just said would still have people going "well, what counts as automated, where's the exact line?" when it's all pretty clear what the intended use case is. The Anthropic peeps have said about as much.

I get that the traditional dev is allergic to the concept of reading between the lines and demands everything to be spelled out explicitly, but maybe you should just see it as something to learn because it's an incredibly useful life skill.


Ok, let's say I'm not using it to deplete leftover usage, the task just happens to run down the 5 hour window usage.

Are you willing to bet your account over whether you've read between the lines correctly? Anthropic aren't going to listen to appeals.


> the task just happens to run down the 5 hour window usage.

In a single prompt? From zero usage? That doesn't "just happen".


When you're using the SDK, yes it can. Example: I used the Python SDK to translate a bunch of source code recently. I spawned a subagent for each module that needed translating and left it to run for a few hours with a parallelism limit of 5. It blasted through the 5 hour usage and dug into extra usage credits.

I have zero assurances that the above can't result in a ban. The usage pattern is not distinct from OpenClaw.
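The bounded-parallelism part of that workflow is a generic asyncio pattern. Here `translate_module` is a made-up stand-in for whatever SDK call spawns a subagent (this is not the actual Claude SDK API); the sketch only shows how a "parallelism limit of 5" is typically enforced:

```python
import asyncio

MAX_PARALLEL = 5  # mirrors the "parallelism limit of 5" described above

async def translate_module(name: str) -> str:
    """Stand-in for the real SDK call that spawns a translation subagent."""
    await asyncio.sleep(0.01)  # pretend this is a long-running agent task
    return f"{name}: translated"

async def run_all(modules):
    sem = asyncio.Semaphore(MAX_PARALLEL)

    async def bounded(name):
        async with sem:  # at most MAX_PARALLEL subagents in flight
            return await translate_module(name)

    # gather() preserves input order regardless of completion order
    return await asyncio.gather(*(bounded(m) for m in modules))

results = asyncio.run(run_all([f"module_{i}" for i in range(12)]))
print(len(results), "modules translated")
```

With enough modules, a loop like this runs for hours and burns through the usage window exactly as described, with no cron job or scheduler anywhere in sight.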


As I said, it doesn't just happen, you explicitly had to set it up so it could happen.

I'm confused about this comment.

The GP described a task that feels well within the intended usage of CC but can easily eat up the usage limit.

What should we read between the lines about this scenario?

Is it a bannable offense?


Just in case it wasn't clear, what they described doesn't need extra tooling. You can write this in your CLI and it will easily cap a Max 20x plan in an hour: "we are converting this entire codebase from TS to C#. Following the guidelines I've written in MIGRATION.md, convert each file individually. Use up to 32 parallel subagents. Track your work for each file in a PROGRESS.md file, which you will update for each file starting and completing. Using an agent team, as a secondary step, add a verification layer where you verify each file individually for accurate migration following the instructions in VERIFICATION.md"

Yeah, there are other ways to do this; sure, you can set up a separate harness to make it more efficient, but the above alone will also work. It's just text you paste into your CC terminal, and it will absolutely cap the largest subscription plan available, no problem.


That "non-automated" part is where I feel like there is a lack of clarity. They even have some stuff in to allow for scheduling in Claude Code. Seems similar to a cron but "non-automated" would rule out using a cron (right?). I'd love to feel comfortable setting up daily/hourly tasks for Claude Code but that feels iffy. Like I said, I don't think the line is clear.

The lack of clarity doesn't matter, because they obviously can't tell whether you ran claude -p a few times today with normal prompts or whether your cron job did. It's impossible for them to tell reliably.

They can tell if your cron is running it every 10 minutes, 24/7, because basic biology rules out you doing that for more than a day or so.
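To make the grey area concrete: a crontab entry like the following is unambiguously "automated", even though each run is a single headless `claude -p` prompt. The schedule, prompt, and paths here are invented for illustration:

```cron
# Hypothetical crontab entry: one scheduled, headless prompt per day.
# Whether this counts as permitted use is exactly the ambiguity under discussion.
0 9 * * * cd ~/project && claude -p "Summarize yesterday's commits" >> ~/claude-daily.log 2>&1
```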


I had a weird experience at work last week where Claude was just thinking forever about tasks and not actually doing anything. It was unusable. The next day it was fine again.

That happens to me all the time. My current working theory is that when their servers are hammered, there is a queueing system that is invisible to end-users.

The way Claude/Codex behave is entirely consistent with how every vibe coded project (of mine) has ended up so far. I bet those guys have no idea what's going on and are taking guesses because no one understands the thing they've made.

I was having this issue yesterday. The same prompt would send it into a loop where it would appear to be doing nothing for 30+ minutes until I cancelled it. It would show 400 tokens used, and that's it.

I tested a previous version (2.1.68) and it still ran into this never-ending loop, BUT at least the token count kept steadily increasing.

So my guess is we are seeing 1. some sort of model degradation (which is why it can't break out of a thinking loop on some problems), as well as 2. a clear drop in thinking-token UI transparency.


Ya, I've had this experience more than a few times recently. I've heard people claiming they serve quantized models during high load, but it happens in Cursor as well, so I don't think it's specific to Anthropic's subscriptions. It could be that the context window has just gotten into a state that confuses the model... but that wouldn't explain why it appears to be temporary...

My best guess is this is the result of the companies running "experiments" to test changes. Or it's just all in my head :)


The Cursor one is back to Claude 4 or 3.5 at best. It struggles to do things it did effortlessly a few weeks ago.

It's not under load either; it's just fully downgraded. It feels more like they're dialing in what they can get away with, and they're pushing it very far.


These days Cursor feels more capable and reliable than Claude Code (at least for my workflow). For personal projects, I'm using Cursor for planning and verification but run Claude Code just for implementation, to save $.

Set MAX_THINKING_TOKENS to 0; Claude's thinking hardly does anything and just wastes tokens. It often performs worse with thinking than without.

Not the guy you're responding to, but when this happens the token counter is frozen at some low value (e.g. 1k-10k) as well, so it's not thinking in circles but rather not thinking (or doing anything, for that matter) at all.


When I left the looping session running overnight, it finally sent a message saying it had exceeded the 64,000 output-token limit.


This exact thing is happening to me since yesterday. It comes back to life when I throw the whole session away.

This happened to me as well! It was especially infuriating because I had just barely upgraded to the $200 per month plan because I exhausted my weekly quota. Then the entire next day was a complete bust because of this issue. I want my money back!

What day was it?

Thursday starting mid to late morning, and ended Friday night (US timezone).

Same day, then. It was happening for me roughly between 9am and 5pm BST.

Probably the only hope is jailbreaking.

Jailbreaking a locked, inaccessible iPhone?

Keep in mind that everyone else is usually unaware (by design) of what all the intelligence agencies can do, but I doubt they would help in this scenario even if they could.

On the other hand, if this happens to a far more important person...


Jailbreaking is dead.
