If you run across a great HN comment or subthread, please tell us at hn@ycombinator.com so we can add it here

I worked on geothermal control systems a decade or so back. There are some less obvious applications for geothermal that reduce electric use (as opposed to generating electricity).

The systems I worked on were for cooling larger structures like commercial greenhouses, government installations, and mansions. 64-degree water would be pumped up from 400' down, run through a series of chillers (for a/c), and then returned underground about 20° or 25° warmer.
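For a sense of scale, here's a rough back-of-the-envelope in Python (my illustrative numbers, not figures from the actual systems): cooling capacity is just flow rate times the temperature rise across the chillers.

    # Rough cooling-capacity estimate for a ground-water chiller loop.
    # Flow rate and temperature rise below are illustrative assumptions.
    FLOW_GPM = 100            # well-water flow, gallons per minute (assumed)
    DELTA_T_F = 22            # temperature rise across the chillers, deg F (assumed)
    LB_PER_GALLON = 8.33      # weight of a gallon of water; 1 BTU warms 1 lb by 1 F

    btu_per_hour = FLOW_GPM * 60 * LB_PER_GALLON * DELTA_T_F
    tons_of_cooling = btu_per_hour / 12_000   # 1 ton of refrigeration = 12,000 BTU/h
    print(f"{btu_per_hour:,.0f} BTU/h ~ {tons_of_cooling:.0f} tons of cooling")

With those assumed numbers you get on the order of 90 tons of cooling, which is commercial-greenhouse territory.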

I always thought this method could be used to provide a/c for neighborhoods, operated as a neighborhood utility. I've not seen it done, though. I've seen neighborhood-owned water supplies and sewer systems, which tells me the ownership part seems feasible.


Fun fact, while Trower was the manager who got Windows moving, it was Gabe Newell who served as the lead developer of Windows versions 1, 2, and 3. Win95 was the first version he wasn’t really involved with. By that time, he was working on porting Doom to Windows.

Up until a year ago I was regularly using a Massey Ferguson 135 [0] (Perkins Diesel version), made sometime in the 1970s. It was wonderful! So amazing to drive and use. Clunky and heavy, but you really really felt like you were using a machine. In low gears, if you put your foot down on the accelerator the engine would roar, and your speed would barely change!

And there was no fancy technology in it at all. If I was in the forest and had forgotten the key, I'd just reach behind the dashboard and hot-wire it. The air filter was basically a shisha-pipe that bubbled the incoming air through wire wool and engine oil.

Its fuel gauge didn't work either. You just had to take a look in the tank, or quickly react as soon as the revs started dropping. I ran it dry a few times and had to sit there with a spanner in one hand and YouTube in the other, while trying to bleed all the fuel lines. But they were all on the outside of the vehicle, which made it comparatively easy I imagine.

I've never actually driven a modern tractor, so don't know how it compares. I imagine the clutch is easier on the knees these days!

Anyway, this just felt like the place to share this.

[0] https://en.wikipedia.org/wiki/Massey_Ferguson_135


> “When we work on making our devices accessible by the blind, I don’t consider the bloody ROI.”

I just have to call out how much this impacted my mom’s life. She’s 100% blind and has access because of her iPhone and iPad. Yes, she learned JAWS and literally took classes to do it. Every single Windows update meant she’d have to retake that class. The iOS updates are rocky, but she isn’t hamstrung by them.

My dad, damn near 80, is still happily using his 2012 i7 Mac mini I set him up with before moving away.

Anyway, excited for the future of Apple under Ternus and a hardware guy at the helm. What kind of a11y does robotics have? https://machinelearning.apple.com/research/elegnt-expressive...


I've told only a few people about my near death experience, and most of them were polite, but obviously didn't believe a word I was saying. To be honest, I wouldn't believe it either if I had not experienced it myself.

I did not "see" anything other than a bright light, but I was overcome with an incredible feeling that I was in the presence of, and communicating with somebody who was conveying a message of absolute love for, and total understanding of everything that I was. The feeling of euphoria is impossible to fully describe, because of the absoluteness of it.

I wanted to stay where I was. It was the best feeling I'd ever experienced, and I was content. Somehow, I was "shown" some bits of what I had to live for -- people I had not yet met, and amazing places and things that I had not yet seen or done. I don't really remember making a choice to return, but I woke up in a hospital with a broken back and other injuries. I later learned that I had been hit by a car while riding my bicycle, and was given CPR by a passing stranger.

It makes me uncomfortable to talk about this because it's all just so unbelievable, but there it is.

As the years have gone by, I've met the friends and family that I had in my visions, and I've also been to the places and done the things that I saw myself doing in the vision.

My whole perspective on life was changed by this event, and I have no fear of death whatsoever.


When I was a teenager I was friends with an extremely poor kid who literally lived on the wrong side of the tracks. He couldn’t afford a microphone, so he rapped into an old pair of busted headphones instead. He had recorded and produced a whole album like this with Fruity Loops on an old computer he found discarded at the side of the road.

9 years ago, I shared this as an April Fools joke here on HN.

It seems that life is imitating art.

https://github.com/sdd/ieee754-rrp


I am this very term teaching 18-year-old students 6502 assembly programming using an emulated Apple II Plus. They've had intro to Python, data structures, and OO programming courses using a modern programming environment.

Now, they are programming a chip from the seventies using an editor/assembler that was written in 1983 and has a line editor, not a full-screen one.

We had a total of 10 hours of class + lab in which I taught them assembly language: the registers, instructions, and addressing modes of the chip, and the memory map and monitor routines of the Apple. After that we went and wrote a few programs together, mostly using the low-resolution graphics mode (40x40): a drawing program, a bouncing ball, culminating in hand-rolled sprites with simple collision detection.
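For readers who haven't seen this kind of exercise, the core of the bouncing-ball program is just a position, a velocity, and a bounce check each frame. Here's a quick sketch of the logic in Python (illustrative only; the students write it in 6502 assembly, plotting via the Apple's lo-res monitor routines):

    # Bouncing-ball update on a 40x40 lo-res grid (logic only; no plotting).
    WIDTH, HEIGHT = 40, 40
    x, y = 10, 10      # ball position
    dx, dy = 1, 1      # velocity, one cell per frame

    def step():
        global x, y, dx, dy
        x += dx
        y += dy
        # Bounce by reversing a velocity component at the edges.
        if x <= 0 or x >= WIDTH - 1:
            dx = -dx
        if y <= 0 or y >= HEIGHT - 1:
            dy = -dy

    for _ in range(100):
        step()    # in the real program: erase old position, plot new one, delay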

Their assignment is to write a simple program (I suggested a low-res game like Snake or Tetris but they can do whatever they want provided they tell me about it and I okay it), demo their program, and then explain to the class how it works.

At first they hated the line editor. But then a very interesting thing happened. They started thinking about their code before writing it. Planning. Discussing things in advance. Everything we had told them in previous classes to do before coding, but which they never did, because a powerful editor was right there, so why not use it?...

And then they started to get used to the line editor. They told me they didn't need to really see the code on the screen, it was in their head.

They will of course go back to modern tools after class is finished, but I think it's good for them to have this kind of experience.


JOVIAL had been in use within the US Air Force for more than a decade before the first initiative to design a single common military programming language, the effort that eventually resulted in Ada.

JOVIAL had been derived from IAL (December 1958), the predecessor of ALGOL 60. However, JOVIAL was defined before the final version of ALGOL 60 (May 1960), so it did not incorporate some of the changes made between IAL and ALGOL 60.

The timeline of Ada's development was marked by a series of increasingly specific documents, produced by anonymous employees of the Department of Defense, containing requirements that the competing programming language designs had to satisfy:

1975-04: the STRAWMAN requirements

1975-08: the WOODENMAN requirements

1976-01: the TINMAN requirements

1977-01: the IRONMAN requirements

1977-07: the IRONMAN requirements (revised)

1978-06: the STEELMAN requirements

1979-06: "Preliminary Ada Reference Manual" (after winning the competition)

Already the STRAWMAN requirements from 1975 contained some features taken from JOVIAL, which the US Air Force used and liked, so they wanted the replacement language to retain them.

However, starting with the IRONMAN requirements, some features originally taken verbatim from JOVIAL were replaced by greatly improved ones. For example, function parameters specified as in JOVIAL were replaced by the requirement to specify the behavior of the parameters regardless of how the compiler implements them: the programmer specifies behaviors like "in", "out" and "in/out", and the compiler chooses freely how to pass the parameters, e.g. by value or by reference, depending on which method is more efficient.

This is a huge improvement over how parameters are specified in languages like C or C++ and all their descendants. Some of the most important defects of C++, which caused low performance for several decades and are responsible for much of its current complexity, stem from its inability to distinguish "out" parameters from "in/out" parameters. This misfeature is the reason for a lot of unnecessary machinery in C++: constructors as something different from normal functions (which can signal errors only by throwing exceptions), copy constructors distinct from assignment, the "move" semantics introduced in C++11 to solve the performance problems that plagued C++ previously, and so on.
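To make the distinction concrete, here's a minimal sketch in Python (which has no parameter modes at all, so this is purely an illustration of the behavioral contract, not of Ada): an "in" parameter is only read, an "out" parameter is produced for the caller, and an "in out" parameter is read and updated. A compiler given only that contract is free to pass by value (copy-in/copy-out) or by reference, whichever is cheaper.

    # Illustrative only: Python stand-ins for the three behavioral contracts.

    def normalize(vec):
        # Acts like "in vec, out result": reads vec, produces a fresh value.
        total = sum(abs(v) for v in vec) or 1.0
        return [v / total for v in vec]   # the "out" value comes back as a return

    def scale(vec, factor):
        # Acts like "in out vec, in factor": reads and updates vec in place.
        for i, v in enumerate(vec):
            vec[i] = v * factor

    values = [3.0, 1.0, 4.0]
    result = normalize(values)   # caller receives the "out" result
    scale(values, 2.0)           # caller's own data is updated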


Since there are a lot of assumptions about personality here, I’ll toss in my perspective.

Worked at Atlassian for 5 years, had plenty of interactions with Mike. I wouldn’t categorize him as a jerk. I have plenty of disagreements about decisions he’s made, and I think he heavily over-hired (and is paying for it now), but a jerk he is not.

The reality is Atlassian has mechanisms, for better or for worse, that reward social discontent: Hello (their internal Confluence instance, which has Reddit-like upvoting on blogs) and their karma bot on Slack. Both tend to result in people gamifying them to boost their social status, which, as you’ve seen with Reddit, often leads a subset of people to realize that negative comments get more attention than positive ones. This got out of hand and they’ve been trying to dial it back, leading to cuts like these. It’s been a problem at Atlassian for a while.


Oh wow, I used to work on Excel Add-Ins about 10 years ago. Even got a patent for it. I'd be curious to see how they implemented the calls.

We came up with what I still consider a pretty cool batch-RPC mechanism under the hood, so that you wouldn't have to cross the process boundary on every OM call (which is especially costly on Excel Web). I remember fighting so hard to have it be called `context.sync()` instead of `context.executeAsync()`...
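For those who haven't seen the pattern, here's a rough sketch in Python of the general batch-RPC idea (illustrative only, not the actual Office add-in API): operations queue up locally, and a single sync() flushes them across the expensive process boundary in one round trip.

    # Minimal batch-RPC context: record operations locally, flush them in one go.
    class BatchContext:
        def __init__(self, transport):
            self._transport = transport   # whatever actually crosses the boundary
            self._queue = []              # pending operations, in order

        def queue(self, op, *args):
            # Record the operation instead of executing it immediately.
            self._queue.append((op, args))

        def sync(self):
            # One boundary crossing for all pending operations.
            batch, self._queue = self._queue, []
            return self._transport(batch) if batch else []

    # Hypothetical transport standing in for the host application.
    def fake_host(batch):
        return [f"executed {op}{args}" for op, args in batch]

    ctx = BatchContext(fake_host)
    ctx.queue("setValue", "A1", 42)
    ctx.queue("setValue", "A2", 43)
    print(ctx.sync())   # both writes travel in a single round trip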

That being said, done poorly it can be slow as the round-trip time on web can be on the order of seconds (at least back then).


One pro-tip, as I somehow have a commercial bottling license these days: get pre-hydrated gum Arabic. Much easier to work with. Almost everybody who messes this up makes the mistake at the gum-hydrating stage. Blend it with any dry ingredients like sugar before using.

If you can’t source it, I’m not going to tell you that you SHOULD pretend to be a bottling company and ask a gum provider to send you some free samples, but you could and the amount they send you will last the rest of your life. TIC gums is pretty awesome and if you’re into frozen desserts has some incredible gum mixtures for ice creams, sorbets, etc.

Also, consider just using water soluble flavor concentrates and skipping emulsification all together. That’s what most pros do and it’s why Sprite isn’t cloudy like it would be if you used oils. My favorite suppliers that sell in consumer and pro-sumer qtys are Apex Flavors and Nature’s Flavors.

This probably won’t work for cola, as I think some of those ingredients have all of their flavor molecules in the oils, but as a general rule, if you can buy it at the store and it is clear, it is made with water-soluble flavorings. If it is brown it probably isn’t, hence the caramel color additive.


You have to grind off the existing Al2O3 protective layer using sandpaper/sandblasting and/or power tools, then ultrasound + acetone wash the parts, then dump them into an acid bath while running electrical current through the pieces. Special dyes can be added for color. Then the pieces are boiled in regular water to further improve durability. The combination of the acid and electricity, then boiling, causes the Al to form beehive-shaped surface micropores, and dyes (actually inorganic, so pigments) get electrically jammed into the pores. The whole outer surface becomes a thick insulating layer of highly chemically resistant and mechanically rigid white/transparent Al2O3 once the process is complete. Voltage, current, waveform, temperature, solution acidity, etc. affect colors and oxide thickness, shapes, and sizes, and therefore aesthetics as well as durability. "Anodization" refers to this process of electro-acidic-heat formation of the oxide layer, not the coloring. The coloring pigment is an extra.

Technically it can be done in a garage, but spot and/or intact application might be difficult. Strict color matching against Apple made things would be impossible.


Mike Stewart here! I led the restoration of the AGC documented on CuriousMarc's channel and co-administer VirtualAGC. There is a lot to unpack here.

First: this is indeed a real bug in the AGC software. However, it did not go unnoticed for the whole program. It was discovered during level 3 testing of SATANCHE, a late development branch of the Command Module software COMANCHE. It was assigned anomaly number L-1D-02, and was fixed between Apollo 14 and 15. There are two known surviving copies of the L-1D-02 anomaly report:

* https://www.ibiblio.org/apollo/Documents/contents_of_luminar...

* https://www.ibiblio.org/apollo/Documents/contents_of_luminar...

The fix described in the article is partially complete, but as noted in the anomaly report there's a little bit more to it. Rather than just adding the two instructions to zero LGYRO, they restructured the code a bit and also made it wake up pending jobs. You can compare the relevant sections of the Apollo 14 and Apollo 15 LM software here:

* Apollo 14: https://github.com/virtualagc/virtualagc/blob/master/Luminar...

* Apollo 15: https://github.com/virtualagc/virtualagc/blob/master/Luminar...

The bug would not manifest silently in the way described in the article. For starters, LGYRO is also zeroed in STARTSB2, which is executed via GOPROG2 on any major program change: https://github.com/virtualagc/virtualagc/blob/master/Luminar...

This means that changing from any program to any other program would immediately resolve the issue. This is almost certainly a large part of why it took them so long to notice. Hitting BADEND while actively pulse-torquing is quite rare, and avoided by normal procedure. The scenario presented in the article can't happen since the act of starting P52 will zero LGYRO.

Moreover, in the very specific scenarios in which the bug can be triggered and remain, it results in multiple jobs stacking up attempting to torque the gyros. Eventually the computer runs out of space for new jobs -- similar to what happened on 11 -- and a 31202 (the Apollo 12+ equivalent of 1202) is triggered.

Since the issue was found before the flight of Apollo 14, a further description of how it might occur and what the recovery procedure should be was added to the Apollo 14 Program Notes: https://www.ibiblio.org/apollo/Documents/LUM159_text.pdf#pag...

Some other notes:

> Ken Shirriff has analysed it down to individual gates

I've done the bulk of the gate-level analysis. :)

> the Virtual AGC project runs the software in emulation, having confirmed the recovered source byte-for-byte against the original core rope dumps.

We've only been able to do that in very specific circumstances and only for subsections of assorted programs, but never for a full program. Most AGC software either comes from a program listing, from a core rope dump, or from reconstruction using changelogs and known memory bank checksums. We've disassembled all of the rope dumps into source files that assemble back into the same binary, but the comments and labels will be different from what was in the original listing. And to be extra clear: I've never had the opportunity to dump a module containing Apollo 11 software for either vehicle. Our sole source for both programs is a pair of printouts in the MIT Museum's collection.

> Margaret Hamilton (as “rope mother” for LUMINARY) approved the final flight programs before they were woven into core rope memory.

Jim Kernan was the rope mother for Luminary at least up through Apollo 11. Margaret was the rope mother for Comanche, the CM software, and was later promoted to lead the software division. Their positions at the time of 11 can be seen on this org chart: https://www.ibiblio.org/apollo/Documents/ApolloOrg-1969-02.p...

> Their priority scheduling saved the Apollo 11 landing when the 1202 alarms fired during descent, shedding low-priority tasks under load exactly as designed.

This is a huge topic on its own, but the AGC software was not designed to shed low-priority jobs. Ironically, the lowest priority job during the landing was the landing guidance itself, with high-priority jobs being reserved for things that needed quick response like antenna movements or display updates. If the computer were to shed the lowest-priority jobs, it would shed the landing guidance. This memo contains a list of all jobs active during the landing and their priorities: https://www.ibiblio.org/apollo/Documents/CherryApollo11Exege...

> For example, the ICD for the rendezvous radar specified that two 800 Hz power supplies would be frequency-locked but said nothing about phase synchronisation. The resulting phase drift made the antenna appear to dither, generating roughly 6,400 spurious interrupts per second per angle and consuming roughly 13% of the computer’s capacity during Apollo 11’s descent. This was the underlying cause of the 1202 alarms.

The frequency-lock prevents phase drift, so the phase is essentially fixed once the power supplies are up. Ironically, however, the bigger issue is that one reference was 28V while the other was 15V. Initial testing on actual Apollo hardware suggests that at least for Apollo 11, this voltage difference was the key contributor rather than the phase difference: https://www.youtube.com/watch?v=dT33c70EIYk


In my small island community, I participated in a municipal committee whose mandate was to bring proper broadband to the island. A duopoly of telecom providers already served the community, but one of them had undersea fiber and zero fiber to the home (DSL remains the only option), while the other used a 670 Mbps wireless microwave link for backhaul, with delivery via coaxial cable. And pricing? Insanely expensive for either terrible option.

Our little committee investigated all manner of options, including bringing municipal fiber across alongside a new undersea electricity cable that the power company was installing anyway. I spoke to the manager of that project and he said there was no real barrier to adding a few strands of fiber, since the undersea high voltage line already had space for it (for the power company’s own signaling).

Sadly, the municipality didn't have the capital to invest a penny into that fiber, so one day one of the municipal councilors just called up a friend who worked for a fiber-laying company and asked them for a favor: put out a press release saying that they were "investigating" laying an undersea fiber to power a municipal fiber network on the little island.

A few weeks later, the cable monopoly engaged a cable ship and began laying their own fiber. Competition works, folks. Even if you have to fake it.


I contacted him a number of years ago about his R-12 replacement for my old 1975 Ferrari, rather than converting it. It worked perfectly - better than Freon-12, even. Which is the only reason the EPA refused to allow it to be widely used. His web site (ghgcool, IIRC, I'm sure long gone by now) taught me that you can also mix butane and isopropane as a superior drop-in substitute for R-12, but he didn't pursue that approach because he knew that the EPA would kill it on safety grounds - even though it was only slightly more flammable than R-12 with the required compressor oil mixed into it.

George was a really interesting guy, a true hacker's hacker, and I truly enjoyed talking with him.


Sad to hear! I worked for George for all of my undergraduate time at Purdue. He was an amazing boss with such a passion for all things unix. For a while he had the UNIX license plate on his minivan.

Lighting a charcoal grill with liquid oxygen: https://www.youtube.com/watch?v=UjPxDOEdsX8

My Dad wrote an article about this 25 years ago or so: https://aoi.com.au/LB/LB705/ (How the Neanderthals became the Basques). He would really get a kick out of people reading it (he's 90 now). His website goes back to '96 and it shows.

I am always pleasantly amused that many HN folks share with me a love for weaving, knitting and knotting; not to mention ropes.

Dang had once posted a long list of HN discussions on these topics.

I think there is something about them that squirts a little bit of dopamine in our pattern seeking, puzzle solving brains.

For me, one of the draws was how the symmetry of the woven pattern gets weft into the cloth. Multi-shaft looms do it differently from, say, a Kashmiri rug.

When I joined HN decades ago I had no idea that there would be this shared interest. Frankly, there was no reason for this to be the case.

Then one day this happened

https://news.ycombinator.com/item?id=44462404


Pilot here.

While I definitely approve of this and consider the limit to be one too many, I wish e-cigarettes were made the target instead, as soon as possible. Those are dangerous, and lately the most likely culprit for lithium-related problems aboard.


One of the authors (of one of the two models, not this particular paper) here. Just a clarification: these models are *not* burned into silicon. They are trained with brutal QAT but are put onto FPGAs. For axol1tl, the weights are burned in the sense that they are hard-wired in the fabric (i.e., shift-add instead of a conventional read-multiply-add cycle), but not in the raw silicon, so the chip can be reprogrammed. Though for projects like smartpixel or the HGCAL readout, there are similar ones targeting silicon (google something like "smartpixel cern" or "HGCAL autoencoder" and you will find them), and I thought it was one of them when viewing the title.

Some slides with more info: https://indico.cern.ch/event/1496673/contributions/6637931/a... The approval process for a full paper is quite lengthy in the collaboration, but a more comprehensive one is coming in the following months, if everything goes smoothly.

Regarding the exact algorithm: there are a few versions of the models deployed. Before v4 (when this article was written), they are described in slides 9-10. The model was trained as a plain VAE that is essentially a small MLP. At inference time, the decoder was stripped and the mu^2 term from the KL divergence was used as the anomaly score (the contribution from terms containing sigma was found to have negligible impact on signal efficiency). In v5 we added a VICReg block before that and used the reconstruction loss instead. Everything runs in 2 clock cycles at the 40 MHz clock. Since v5, the hls4ml-da4ml flow (https://arxiv.org/abs/2512.01463, https://arxiv.org/abs/2507.04535) has been used for putting the model on FPGAs.
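For readers curious what "strip the decoder and use the mu^2 term as the score" looks like, here's a toy sketch in PyTorch (illustrative only; the layer sizes are made up and this is not the deployed model):

    import torch
    import torch.nn as nn

    # Tiny MLP VAE encoder; at inference the decoder is gone and the anomaly
    # score is just the mu^2 part of the KL divergence.
    class Encoder(nn.Module):
        def __init__(self, n_inputs=64, n_hidden=32, n_latent=8):  # illustrative sizes
            super().__init__()
            self.body = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.ReLU())
            self.mu = nn.Linear(n_hidden, n_latent)      # kept at inference
            self.logvar = nn.Linear(n_hidden, n_latent)  # only needed for training

        def forward(self, x):
            h = self.body(x)
            return self.mu(h), self.logvar(h)

    def anomaly_score(encoder, x):
        # KL(N(mu, sigma) || N(0, 1)) = 0.5 * sum(mu^2 + sigma^2 - log sigma^2 - 1);
        # only the mu^2 term is kept here, since the sigma terms added little.
        mu, _ = encoder(x)
        return (mu ** 2).sum(dim=-1)

    enc = Encoder()
    events = torch.randn(4, 64)        # dummy trigger-level inputs
    print(anomaly_score(enc, events))  # higher = more anomalous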

For CICADA, the model was trained as a VAE again, but this time distilled with a supervised loss on the anomaly score over a calibration dataset. Some slides: https://indico.global/event/8004/contributions/72149/attachm... (not up-to-date, but I don't know if there are newer open ones). Both student and teacher were conventional conv-dense models; they can be found in slides 14-15.

Just to plug some of my own work on running QAT (high-granularity quantization) and doing deployment (distributed arithmetic) of NNs in the context of such applications (i.e., FPGA deployment for <1 us latency), if you are interested: https://arxiv.org/abs/2405.00645 https://arxiv.org/abs/2507.04535

Happy to take any questions.


I remember fondly the AMD K6/2 architecture. It was the CPU of an ultra-budget-priced Compaq Presario laptop that got me through graduate school back in the day.

Some years later, back in my home country (Paraguay) I met a lady who had a side business being a VAR builder of desktop PCs. In my country, due to a lot of constraints, there was (and is) quite a money crunch and people tried to cheap out the most when purchasing computers. This gave rise to a lot of unscrupulous VAR resellers who built ultra-low quality, underpowered PCs with almost unusable specs at an attractive price while making a pretty profit. You could still get much better deals in both price and specs, but you had to have an idea about where to look.

Well, back to this lady. She said that during the early 2000s she was in the same line of business, selling beige-box desktop PCs at the lowest possible prices. But she said that she loved the AMD K6 and K6/2 architectures because they provided considerable bang for the buck. The cost was affordable, and yet performance was good. Add some reasonable amounts of RAM and storage and you could have a well-performing PC at a good price. The downside, as she said, was that the processors tended to generate lots of heat and thus the fans had to be good. This was especially important in a very hot country like Paraguay. But the bottom line was that the AMD K6 line enabled her to offer customers a good deal.

This made me appreciate what AMD did with K6. They really helped to bring good computers to the masses.


> In retrospect though, the company wasn't making a technology decision. They were making a decision between Jobs and Gassee. Jobs came with NeXT and Gassee came with Be. I don't think the technology mattered that much in the large scale of things.

Yes and no. The core of the purchase decision was really based on the technology. Ellen Hancock (Apple's CTO at the time) actually did a decent analysis of BeOS and NeXTStep. She was actually against some aspects of the purchase, and was not in favor of Be. She was also not in favor of the NeXT kernel. It is painful to say as a Be employee at the time, but Be internals were fragile, some technologies were very shallow, the kernel was brittle and under constant churn, and we had big problems with our decision to have a C++ API. Gil Amelio liked Steve and Steve did a good job selling both a vision and the NeXT technology. BeOS was a really cool demo that was getting pulled in the direction of a real OS but had a long, long way to go. There actually was a possibility that Apple could have also gotten the Be code, but the board didn't go for it. As it turned out, most of the primary BeOS developers ended up at Apple via Eazel. The ones that didn't ended up at Google via Danger Research/Android.


There was a period of like 2 years when I was a kid where Chuck Norris jokes were all the rage on the playground, and I made an iPhone app that listed them all.

Jokes like “Chuck Norris is able to slam a revolving door.”

Anyway, I “built” this stupid app when I was like 13, copy-pasted like 300 jokes in there and a random one would show every time you tapped the screen.

Chuck Norris’s estate blocked the app from going live. I wish I had printed that rejection out and framed it.


Waymo saved my life in LA.

When I visited LA, I rode in a Waymo going the speed limit in the right lane on a very busy street. The Waymo approached an intersection where it had the right of way, when suddenly a car ignored its stop sign and drove into the road.

In less than a second, the Waymo moved into the left lane and kept going. I didn't even realize what was happening until after it was over.

Most human drivers would've t-boned the car at 50+ km/h. Maybe they would've braked and reduced the impact, which would be the right move. A human swerving probably would've overshot into oncoming traffic. Only a robot could've safely swerved into another lane and avoided the crash entirely.

Unfortunately, the Waymo only supported Spotify and did not work with my YouTube Music subscription, so I was listening to an advertisement at the time of my near-death experience. 4.5 stars overall.


I used to work at a startup that was trying to replace ads as the funding source for news (we failed, obviously)

but the crazy thing we discovered is that the people who run news websites mostly don’t know where their ads are coming from, have forgotten how the ad system was installed in the first place, and cannot turn them off if they try

we actually shipped a server-side ad blocker, for a partner who had so completely lost control of their own platform that it was the only way to make the ads stop


I was a developer at Iris Associates--I worked on versions 2 through 4. For version 3 I stuck in an easter egg in the About box. A certain combination of keys would produce a Monty-Python-like cut-out of Ray Ozzie's head and the names of the developers would fly out of his mouth. [This was when the software world was young and innocent and developers were trusted far beyond what they probably should have been.]

Lotus Notes was, I firmly believe, a glimpse of the future to come. In 1996, Lotus Notes had encrypted messaging, shared calendars, rich-text editing, and a sophisticated app development environment. I had my entire work environment (email, calendar, bugs database, etc.) fully replicated on my computer. I could do everything offline and later, replicate with the server.

And this was two years before the launch of Google and eight years before GMail!

In the article, the author speculates that the simplicity of the Lotus Notes model--everything is a note--caused it to become too complicated and too brittle. I don't think that's true.

Lotus Notes died because the web took over, and the web took over because it was even simpler. Lotus Notes was a thick client and a sophisticated server. The web is just a protocol. Even before AI, I could write a web server in a weekend. A browser is harder, but browsers are free and ubiquitous.

The web won because it could evolve faster than Lotus Notes could. And because it was free. The web won because it was open.


I will say this is one of the few pieces of AI-generated prose I've read that didn't immediately jump out as such (a couple of inconsistencies eventually grabbed me enough to come to the comments and see your post details, which mention it - I'd clicked through from the HN homepage), so your polishing definitely worked! Quite a neat little story

As a Microsoftie of more than a decade... Yeah, I see this.

We have an internal system called Cosmos[0] that does a great job of processing huge quantities of data very fast. And we sat on it for years while the rest of the industry moved to Spark and its derivatives. We finally released it as Azure Data Lake Analytics (ADLA) but did a shit job of supporting/promoting it.

We built Synapse, and it's garbage. We've now got Fabric which I guess is the new Synapse. I wouldn't really know because I probably have five different systems that I use that basically do large-scale data processing, and yet Fabric isn't one of them; who knows, maybe it will become the sixth?

We've had numerous internal systems for orchestrating jobs, and it wasn't until Azure Data Factory that we finally released something externally that we sort-of-kind-of-but-not-really use internally. (To be fair, some teams do use it internally, but we're not all rowing in the same direction.)

I regularly deal with multiple environments with different levels of isolation for security. I don't even know how it's all supposed to work -- I have my regular laptop and a secure workstation and three accounts that work on the two. Yet I have to do some privileged account escalation to activate these roles; when I'm done, there's no apparent way to end the activation early, so I just let it time out.

These things are but a fraction of the Azure offerings, but literally everything I have used in Azure makes me absolutely HATE working in the cloud. There's not a single bright side to it AFAICT. As best as I can tell, the only reason why Azure makes so much damn money is because Microsoft is huge and can leverage its size into growth. We're very much failing up here.

[0] https://www.microsoft.com/en-us/research/publication/big-dat...

