Hacker News | sobjornstad's comments

Yeah, except then they would lose 10 seconds from going 54 mph instead of 56 mph...

I worked at a place like this; we had a software registry, and if you had installed something that wasn't on the registry, somebody would start sending you nasty emails. Gaps would show up all the time: maybe the Linux machines weren't in the scans, or anything that came with the OS was whitelisted.

But if you wanted to install it separately on a computer that didn't have it already, then you'd need to get it “approved.”


  > maybe the Linux machines weren't in the scans
Honest question: how would you actually detect this? I understand monitoring package-manager installs (those are easy for them to track), but what about building from source and doing a local install (i.e. no `sudo make install`)? Everything is a file. How would you differentiate without massive numbers of false positives?
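Not the OP, but one way to see why this is hard: even the naive approach, diffing executables on disk against what the package manager claims to own, drowns in false positives. A rough sketch in Python; the real ownership lookup (something like `dpkg -S` or `rpm -qf`) is stubbed out here as a plain set of paths, purely for illustration:

```python
import os
import stat

def find_unowned_executables(search_dirs, package_owned):
    """Return executable files in search_dirs that no package claims.

    `package_owned` stands in for a real ownership lookup such as
    `dpkg -S <path>` or `rpm -qf <path>`; here it is just a set of paths.
    """
    suspects = []
    for d in search_dirs:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            path = os.path.join(d, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # broken symlink, permission error, etc.
            is_exec = stat.S_ISREG(st.st_mode) and (st.st_mode & stat.S_IXUSR)
            if is_exec and path not in package_owned:
                suspects.append(path)
    return suspects
```

Run this over `$PATH` plus home directories and every `~/.local/bin` script, every `pip install --user` entry point, and every hand-compiled binary shows up as a "suspect", which is exactly the false-positive flood you're asking about.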

Even if it is your own work computer?


If the computer is provided for work, by the company you work for, it is not "yours".

Limitations on what you can install on such machines can be quite draconian, including forbidding anything that IT Security and similar departments may not like.


I meant the work laptop you are given through working as a SWE. Are you referring to jobs in IT?

And are you allowed to use your own personal computer (laptop)?

If not, and you have to work on what you have been given, why are people OK with it[1], particularly in IT jobs?

I cannot imagine being productive without my OS, WM, IDE, configurations and whatnot.

I did work on a desktop in an office before, using their software and it was awful. I could have automated the whole damn thing at home. It was the tax office and obviously I understand why I cannot use their software at home, but for an IT job?

[1] Stupid question, people tolerate much more than this, incl. not getting paid for overtime, being worked to death without a break every day of the week, etc.


>I meant the work laptop you are given through working as a SWE.

Everywhere I've worked, I was not "given" a computer any more than I was given a desk, a chair, or a network connection. Perhaps "provided" would be better.

> And are you allowed to use your own personal computer (laptop)?

Never have been, and never have wanted to be.

>why are people OK with it

It's industry SOP, and people pay you to work that way.

> I cannot imagine being productive without my OS, WM, IDE, configurations and whatnot.

You need to improve your imaginative powers, and your technical knowledge.


I don't get where your surprise comes from. Of course companies have the last word on what tools you are allowed/obliged to use when you're on duty. Uniforms, vehicles, why not software?

> I cannot imagine being productive without my OS, WM, IDE, configurations and whatnot.

This is a dream. I hate Windows, but everywhere I've worked, Windows was the OS.

One has to adapt to feed a family.


I agree. Unfortunately so. That said, for SWE jobs, it sounds like a nightmare.

I second this. GitHub used to be a fantastic product. Now it barely even works. Even basic functionality like the timeline updating when I push commits is unreliable. The other day I opened a PR diff (not even a particularly large one) and it took fully 15 seconds after the page visually finished loading -- on a $2,000 dev machine -- before any UI elements became clickable. This happened repeatedly.

It is fairly stunning to me that we've come to accept this level of non-functional software as normal.


The trend of "non-functional software" is happening everywhere. See the recent articles about Copilot in Notepad failing to start because you aren't signed in to your Microsoft Account.

We are in a future that nobody wanted.


Not quite everywhere. There's a common denominator for all of those: Microsoft.

Their business is buying good products and turning them into shit, while wringing every cent they can out of the business. Always has been.

They have a grace period of about 2-4 years after acquisition where interference is minimal. Then it ramps up. How long a product can survive once the interference begins largely depends on how good senior leadership at that product company is at resisting the interference. It's a hopeless battle, the best you can do is to lose slowly.


I remember how crazy Skype originally was.

At my first company we used Skype to communicate with each other. Mostly chats and files.

One day our internet cable to the office got cut by someone. Well, we didn't realize that for some time, because Skype just continued to work without the Internet. It was like a miracle. It was unique software; there's nothing like it even today.

I think the first thing Microsoft did after they bought Skype was make an Internet connection mandatory, probably to spy on all chats.


Things don't always ramp up after 2-4 years. Sometimes MS just kills the project or company after that period of time.

See also their moves in the gaming industry.


Heh, I was working at 2 of those gaming companies when they were acquired by m$. I almost fear taking another job in the gaming industry, there seems to be some kind of bastardised version of Murphy's law that any gaming company that hires me will be acquired by ms 6 months later.

I mean, that's obviously not the case, but it's weird that it happened twice!


Very weird it happened twice! But that's kind of a cool factoid to tell people haha.

Even the devs and publishers that don't die or get killed off still lay off hundreds when a game is done. Then the studio limps along in pre-production mode on their next game for 4-5 years, it seems like...

Maybe the only job stability in the industry is with indies, and... Nintendo?


I'd add the hugely successful studios to that list. Even after ms acquisition, to the best of my knowledge neither of the 2 studios I worked at had any layoffs.

But they boast the best-selling video game in the history of video games (Tetris a close-ish second) and the most downloaded free mobile game, respectively. Each has a player base larger than the population of the country they're from!

Here's to hoping ms is hesitant to gut either!


> But they boast the best-selling video game in the history of video games (Tetris a close-ish second) and the most downloaded free mobile game, respectively.

Just out of curiosity, I guessed Minecraft which tracks, and Subway Surfers respectively, rather than Candy Crush Saga. Is CCS actually the most downloaded free mobile game ever?


Hmm... My source here is internal King communications; they were very proud of it. But I left there years ago, so I guess it might've changed?

Pretty sure it was the biggest at the time at least.


Call of Duty and Candy Crush I would say (if you count Farm Heroes as CCS because it's just a reskin for APAC then it's probably not even close).


Common misconception, that. I also thought the hardcore gamer games would be at the top. They're not even close.

Minecraft has sold 350 million copies. Call of Duty: Black Ops, a measly 43 million.

https://en.wikipedia.org/wiki/List_of_best-selling_video_gam...


I would have said that the OP meant the Call of Duty series in general, rather than a specific instance.

Given the mobile thing mentioned also, it's most likely to be Activision (which has been acquired by Microsoft).


I for one am shocked--SHOCKED, I say!--to learn that anything bad could happen as a result of a) putting everything in "the cloud" and b) handing control over the entire world's source code to the likes of Microsoft.

Who could have POSSIBLY foreseen any kind of dire consequences?


Nobody. Nobody at all could have seen it. Microsoft is cool now, haven't you seen VSCode? They do Open Source, they run Linux, they've joined the fold, the tiger shed its stripes.


Ironically they are enshittifying VS Code too. Even products they make themselves can't survive long.


More like a wolf in sheep's clothing


You're obviously being sarcastic, but for the longest time the dominant position of a huge chunk of HN (and the tech world in general) has been that the cloud is the answer to any problem, and that anything deviating from it is either impossible, too expensive or too stupid.

After a generation of indoctrinated people, Microsoft (or any FAANG really) can't even afford to do anything differently.


For this entire time, my position has been that the people trying to put everything in the cloud were idiots who would come to regret this. Thus explaining my sarcasm. It's time to fire back with both barrels at the dumbasses who rolled their eyes and said I was stupid, wrong, had no idea what I was talking about, blah blah blah. (Same shit they always say about every word that comes out of my mouth. So tiring.)

Laughs in Linux


> We are in a future that nobody wanted.

Some people wanted this future and put in untold amounts of money to make it happen. Hint: one of them is a rabid Tolkien fan.


the irony of Tolkien being associated with a techno-dystopia makes me nauseous


    Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale.

    Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.


Indeed, it's a theme throughout his works, to the point of his directly saying it was to beware of things of a deeper art than you yourself possess.


He was also pretty openly anti-industrialism in the Victorian tradition of guys like John Ruskin and William Morris.


Rent seekers paradise (ft copilot)


It’s just feudal with Capital.


Who is it?


Peter Thiel.

Evil incarnate and the next president of the United States you've never heard of. Vance is his sock puppet, he was chosen because he is guaranteed not to have a single independent thought so when Trump croaks Thiel will be the president in all but name.

It was also he who willed OpenAI into being in order to help destroy American democracy.


I can’t follow OpenAI as a ploy to destroy democracy when he already has Palantir.


Fascism can only thrive in an alternate reality and LLMs are excellent at producing such propaganda on an industrial scale. Accordingly, the political right uses it for that purpose much more and conservatives are much more receptive to it, too.

https://myscp.onlinelibrary.wiley.com/doi/10.1002/jcpy.1461

https://arxiv.org/html/2512.13915


IDK what you're smoking, but he was born in Germany so he can't be the next POTUS.


> IDK what you're smoking

Or maybe it's a problem with your English? (Note: I'm being offensive just because you are :-) ).

> when Trump croaks Thiel will be the president in all but name.

This means that he will have the power, but not the title.


Ok I get it, sorry bout that and thanks for the laugh!


This thread has complaints about multiple pieces of software, all from the same supplier, all degrading.

The person(s) who wanted this want Azure to get bigger and have prioritized Azure over Windows and Office, and their share price has been growing handsomely.

‘Microslop’, perhaps, but their other nickname has a $ in it for a reason.


MS PMs wanted it, got their OKRs OK'd, got their bonuses, and moved on.


> We are in a future that nobody wanted.

Nor deserved.


Then why is it the future we have?


Let’s just say there are a couple of guys, who are up to no good. And they started making trouble in our neighborhood.

Jokes aside, it's all because of hyper-financial engineering. Every dollar, every little cent, must be maximized. Every process must be exploited and monetized, and there is a small group of people who are essentially driving all this across the world, in every industry.


>small group of people who are essentially driving all this all across the world in every industry.

The federal reserve?


It was a complete accident. Nobody could have foreseen it. We are currently experiencing the sudden discovery that Microsoft is an evil corporation and maybe putting everything in the cloud wasn't the best move after all.


Some people, sure. I never thought that putting everything in the cloud was the best move, but I guess N=1.

In fact I was shocked to see that so many allegedly tech-literate people were so blindly pro-cloud (and they still are).


> Sudden discovery

Sudden only for 15-year-olds.


Laughs in my own Linux distro


Care to elaborate? How do you deal with the constant churn?


What constant churn? Nothing changes on my system unless I desire it to.


Hey from the GitHub team. Outages like this are incredibly painful and we'll share a post-mortem once our investigation is complete.

It stings to have this happen just as we're putting a lot of effort specifically into the core product, growing teams like Actions, and increasing performance-focused initiatives in key areas like pull requests, where we're already making solid progress[1]. I'd love it if you would reach out to me in a DM about the perf issues you mentioned with diffs.

There's a lot of architecture, scaling, and performance work that we're prioritizing as we work to meet the growing code demand.

We're still investigating today's outage and we'll share a write up on our status page, and in our February Availability Report, with details on root cause and steps we're taking to mitigate moving forward.

[1] https://x.com/matthewisabel/status/2019811220598280410


Literally everyone who has used GitHub to look at a pull request in, say, the last year has experienced the ridiculous performance issues. It's a constant laughing point on HN at this point. There is no way you don't know this. Inviting us to take this to a private channel, along with the rest of your comment really, is simply standard corporate PR.


Yes, agreed, it's been a huge problem, and we shipped changes last week to address some of the gnarly p99 interactions. It doesn't fix everything, and large PRs have a lot of room to be faster. It's still good to know where some of the worst performance issues are, to see if there's anything particularly problematic or if a future change will help.


FWIW, I find the new React-based diff viewer worse than the old server-rendered page. I disabled the preview for this reason. It does have some nice features but overall it feels more finicky. I would think that in theory this should be better at handling large diffs but I'm not sure that that's the case, and at least the UX feels more choppy.


That's financialization at play. When you render and syntax-highlight the diff on the server, GitHub pays the cost; if you do it on the client side, the cost is paid by the client. At GitHub's scale it's probably a large enough difference that they decided the reduced customer experience is worth it.


I look at pull requests daily, I haven't encountered the problems you speak of, not sure what they are.


I have been using GitHub since 2011 and it's undeniable that the performance of the website has been getting worse. The new features that are constantly being added are certainly a factor, but I think the switch to client-side rendering, which obviously shifted the load from their servers to our browsers and also tends to produce ridiculously large and inefficient DOMs[1], is the main cause.

If you want a practical example, here you go. I'm a Nixpkgs committer, and every time I make a pull request that backports some change to the stable branch, GitHub, unprompted, starts comparing my PR against master. If I'm not fast enough to switch the target branch within a couple of seconds, it literally freezes the browser tab and I may have to force quit it. Yes, the diff is large, but this is not acceptable, and more importantly, it didn't happen a few years ago.

[1]: https://github.com/orgs/community/discussions/111001


It's insulting to see the word "progress" being used when the PR experience is orders of magnitude slower than it was years ago, when everyone had way worse computers. I have a maxed M5 MacBook and sometimes I can barely review some PRs.


Hopefully the published postmortem will announce that all features will be frozen for the foreseeable future and every last employee will be focused on reliability and uptime?

I don’t think GitHub cares about reliability if it does anything less than that.

I know people have other problems with Google, but they do actually have incredibly high uptime. Feature freezes like that were frequently applied to entire orgs or divisions of the company if they had one outage too many.


For what it's worth, I doubt that people think it's the engineering teams that are the problem; it feels as though leadership just doesn't give a crap about it, because, after all, if you have a captive audience you can do whatever you want.

(See also: Windows, Internet Explorer, ActiveX, etc. for how that turned out)

It's great that you're working on improving the product, but the (maybe cynical) view that I've heard more than anything is that when faced with the choice of improving the core product that everyone wants and needs or adding functionality to the core product that no one wants or needs and which is actively making the product worse (e.g. PR slop), management is too focused on the latter.

What GitHub needs is a leader who is willing and able to say no to the forces enshittifying the product with crap like Copilot, but GitHub has become a subsidiary of Copilot instead and that doesn't bode well.


> people think it's the engineering teams that are the problem;

It could be; some people are just terrible at their jobs. Lots of teams have low quality standards for their work.

Maybe that still comes down to leaders but for different reasons. You can ship useless features without downtime.


Permitting terrible engineers to continue to work for you is a management problem.


Sort of, I think. There's a culture aspect to it too. Everything is blameless, so there's no reason not to mess up.


>I doubt that people think it's the engineering teams that are the problem

Did you forget the Microsoft engineering response to Casey Muratori's "Extremely slow performance when processing virtual terminal sequences"?

"I believe what you’re doing is describing something that might be considered an entire doctoral research project in performant terminal emulation as “extremely simple” somewhat combatively."

https://github.com/microsoft/terminal/issues/10362#issuecomm...

followed by Casey producing evidence for his "extremely simple" claim in a couple of days.


Can you guys stop adding new features for a while please and just make what’s there more reliable?


GitHub will prioritize migrating to Azure over feature development

4 months ago on HN: https://news.ycombinator.com/item?id=45517173


Every developer has had this fight with management and most lose.


Why does it trigger a usage limit whenever I search anything at all? Sometimes I can get through but when I go back one page, boom error.


Oh, I assumed that was just me, or because of a Linux OS.


Ya, it really was one of the most enjoyable web apps to use pre-MS. I'm sure there are lots of things that have contributed to this downfall. We certainly didn't need bullshit features like achievements.


Even just a year or two ago its web interface was way snappier. Now an issue with a non-trivial number of comments, or a PR with a diff of even just a few hundred or thousand lines of changes causes my browser to lock up.


But even clicking around tabs and whatnot is noticeably slower. It used to be incredibly snappy.


Which website lets you load PRs with 1000 lines and it’s fast? Honest question, it’s not gitlab.


It's ~80 KiB. Do you think 80 KiB should be slow?

GH just a few days ago told me that it couldn't fetch the files changed because there were too many files changed. There were 4.


You didn’t answer the question


I’ve really enjoyed Gitea wherever I’ve used it.


My company’s self hosted gitlab achieves this all the time.


So the React rewrite did not help after all? Imagine: one of the largest software tool companies on Earth cannot reliably REbuild something in React. I've lost count of the inconsistency issues React introduced.

https://news.ycombinator.com/item?id=33576722


React isn't causing these issues.


Then why is the site slower than it was in 2012 on a 2009 Macbook?


Good to know. So it only causes the UI inconsistency bugs.


The new design/architecture allows them to do great stuff in the name of efficiency; for example, when browsing through some parts of the UI, it's now much more capable of just updating the part of the page that's changed, rather than having to reload the entire thing. This is a significantly better approach for a lot of things.

I understand that the 'updating the part of the page that's changed' functionality is now dramatically slower, more unresponsive, and less reliable than the 'reload the entire thing' approach was, and it feels like browsing the site via Citrix over dial-up half the time, but look, sacrifices have to be made in the name of making things better even if the sacrifice is that things get worse instead.


> to do great stuff in the name of efficiency; for example, when browsing through some parts of the UI, it's now much more capable of just updating the part of the page that's changed

Are you implying that this only doable with React? I mean just for the fun of it you can look at this video:

https://www.youtube.com/watch?v=3GObi93tjZI


> for example, when browsing through some parts of the UI

React allows this? I didn't realize that I needed React to do this when we used Java and Js to do this 20 years ago. I also didn't realize I needed React to do this when we used Scala and generated Js to do this 10 years ago. JFC, the world didn't start when you turned 18.


> I understand that the 'updating the part of the page that's changed' functionality is now dramatically slower, more unresponsive, and less reliable than the 'reload the entire thing' approach was, and it feels like browsing the site via Citrix over dial-up half the time, but look, sacrifices have to be made in the name of making things better even if the sacrifice is that things get worse instead.

I don't think they were being serious.


Shhh, don't insult them. It's the lost art of doing stuff smart and efficient. Now the trend is doing stuff _vibe_ instead.


GitHub used jQuery + pjax to do exactly this a decade ago - rendered HTML for smaller components was fetched and replaced in-place with a single DOM update. It even had fancy sliding transitions.


This is just Microsoft doing the only thing they know, which is taking a good product and turning it into a monster by bashing out whatever feature is on some investor's mind, features that barely even work in an isolated, vacuum-sealed test chamber. All Microsoft products are like bad experiments.


I've been a GitHub user since the very early days. I had a beta invite to the service. I really wish they didn't swap out the FE for a React FE.

They need to start rolling back some of their most recent changes.

I mean, if they want people to start moving to self hosted GitLab, this is gonna get that ball rolling.


GitLab is slower for me than that React GH app. Why would I move to GitLab?


Was this a local/on-prem version of GL or the hosted web version?

My previous org had an on-prem version hosted on a local VM. It was extremely fast; we set up another VM for the runners, and one for storing all the Docker containers. The thing I've seen people do is use the VM they put their GitLab instance on for everything, which ends up bogging things down quite a bit.


>"XYZ used to be a fantastic product. Now it barely even works. Even basic functionality..."

This is the new normal in too many cases. Then people act put off when you complain, or act like you are expecting too much.

Lots of people are in software development, or management, who don't have the mindset and personality for it. These roles are not for everyone. But people like the $$$, and so the wrong people get involved.


We loved GitHub as a product back when it didn't need to return a profit beyond "getting more users".

I feel this is just the natural trajectory for any VC-funded "service" that isn't actually profitable at the time you adopt it. Of course it's going to change for the worse to become profitable.


GitHub isn't VC funded at the moment, though. It's owned by Microsoft. Not that this necessarily changes your point.


> Of course it's going to change for the worse

> It's owned by Microsoft.

I see no contradictions here.


I don't get it. Why would making the UI shittier possibly lead to more profit?


Moving to client-side rendering via React means less server load spent generating boilerplate HTML over and over again.

If you have a captive audience, you can get away with making the product shittier because it's so difficult for anyone to move away from it - both from an engineering standpoint and from network effects.


It seems most of the complaints are about reliability and infrastructure - which is very often a direct result of a lack of investment and development resources.

And then many of the UI changes people have been complaining about are related to things like Copilot being forcibly integrated - which is very much in the "Microsoft expects to gain a profit by encouraging its use" camp.

It's pretty rare that companies make a UI bad because they want a bad UI; it's normally a second-order effect of other priorities - such as promoting other services or encouraging more ad impressions or similar.


I mean.. it’s a Microsoft product now. That’s basically a guarantee it will suck, and continue to get worse and worse until it’s an unusable mess of garbage like everything else they make. I haven’t seen any good user-facing windows products in at least 10 years and somehow the bar drops lower by the year.


> GitHub used to be a fantastic product. Now it barely even works.

it's almost as if Microsoft bought it, isn't it?


I have a vague recollection that it might come up named as such in Half-Blood Prince, written in Snape's old potions textbook?

In support of that hypothesis, the Fandom site lists it as “mentioned” in Half-Blood Prince, but it says nothing else and I'm traveling and don't have a copy to check, so not sure.


Hmm, I don't get a hit for "slugulus" or "eructo" (case insensitive) in any of the 7. Interestingly, two mentions of "vomit" are in book 6, but neither in reference to slugs (plenty of Slughorn, of course!). Book 5 was the only other one where a related hit came up:

> Ron nodded but did not speak. Harry was reminded forcibly of the time that Ron had accidentally put a slug-vomiting charm on himself. He looked just as pale and sweaty as he had done then, not to mention as reluctant to open his mouth.

There could be something with regional variants but I'm doubtful as the Fandom site uses LEGO Harry Potter: Years 1-4 as the citation of the spell instead of a book.

Maybe the real LLM is the universe and we're figuring this out for someone on Slacker News a level up!
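For anyone who wants to repeat the exercise, the case-insensitive search above is only a few lines of Python; the file names in the test below are placeholders, not real paths to the books:

```python
from pathlib import Path

def books_mentioning(term, paths):
    """Return the files in `paths` whose text contains `term`, ignoring case."""
    needle = term.lower()
    return [
        p for p in paths
        if needle in Path(p).read_text(encoding="utf-8", errors="ignore").lower()
    ]
```

Point it at your own copies of the seven books and search for "slugulus", "eructo", "vomit", and so on to reproduce the hit counts described above.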


Literally every time I press the "stop" button while it's writing the first reply in a conversation, because I notice I forgot something and want to correct it, my prompt is lost.

Anytime I run into a bug like this, part of me wants to go calculate how much of humanity's collective time has been wasted by one company not fixing a trivial bug. It's got to be a lot.


Surprisingly, I find the Apple Watch significantly more comfortable and less perceptible than any other watch I've worn in the past. My theory is it's the contoured back for the heart rate monitor, etc. But maybe I'm just weird.


I've noticed the same thing with rote memory tasks like lines of poetry, so I think it might be a more general thing involving the memory consolidation properties of sleep, maybe particularly focused on fluency/speed rather than mere ability to recall.


I also hate the ones that are exactly at the spot where the speed limit changes and still flash you aggressively in the distance. Yeah, I'm going faster than 35 because the speed limit is 55 where I am and I'm still slowing down.


Do people normally test-drive cars in the dark?


Being underwater does make it significantly easier, though the effect is fairly moderate in most humans: https://en.wikipedia.org/wiki/Diving_reflex

