Hacker News | y0y's comments

The new breed of garbage collectors may change this significantly for the better. They need to be explicitly enabled, but there is a GC (G1) that may be better suited for desktop applications (low latency pauses, relinquishes memory to the OS more frequently while idle, etc.) available in Java 11.
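For reference, selecting a collector explicitly is a single launch flag. A minimal sketch assuming an OpenJDK 11+ launcher; "-version" is used here only so the JVM accepts the flag and exits without running any application:

```shell
# G1 can be requested explicitly (it is also the default since JDK 9):
java -XX:+UseG1GC -version
# The other low-pause collectors need extra flags on JDK 11:
#   ZGC (experimental, Linux-only in 11): -XX:+UnlockExperimentalVMOptions -XX:+UseZGC
#   Shenandoah (in some OpenJDK builds):  -XX:+UseShenandoahGC
```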


It's been the default GC since Java 9 https://en.wikipedia.org/wiki/Garbage-first_collector


Ah, I'm confusing its availability with Shenandoah and ZGC.

Unfortunately, the last time I used IntelliJ it came with its own JDK 8. Is that still the case, I wonder?


I just ran an update, IDEA and PyCharm use OpenJDK 11, Android Studio still uses 1.8.


They are all concurrent GCs though. You pay for the lower pauses with significant throughput hits.


Scala was the only language I gave up and used an IDE for, several years ago, due to the nature of implicits.

The language server protocol has been a game-changer, though. Now I'm back to 100% terminal and couldn't be happier.


I know you're taking some flak for making it about age, but I do think there's some merit there. I'm 34. I feel like I'm in the middle in terms of software developer / sysadmin age. Lots of bright minds came before me and there are lots of bright minds out there right now in their early 20s.

A key difference is I can remember a time when network connectivity was flaky. When it was hardly a given. Even when it was available, the download times alone could often constitute a large part of the overall build time.

I think that we have all become complacent with regard to internet connectivity and service availability, but I think the younger you are the more complacent you are likely to be. If github.com goes down entirely, let's be honest - there are a lot of Jenkins builds that are going to be in the red.


I'm 36 and recently was assigned to mentor a new employee in his 20s. We had a moment of miscommunication when I asked him to use git to clone a local repository. He was confused when he couldn't find it on github.com (what I'd sent him was a path to our private network share). I had to explain that, yes you can use the github.com client if you want, but "git" is different from "github". I honestly couldn't tell if he understood the difference.
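For what it's worth, the distinction is easy to demonstrate: git happily treats a plain directory as a remote, no hosting service involved. A throwaway sketch (paths via mktemp are examples, not our actual share):

```shell
tmp=$(mktemp -d)
git init --bare "$tmp/upstream.git"      # a "remote" that is just a directory
git clone "$tmp/upstream.git" "$tmp/clone"
git -C "$tmp/clone" remote -v            # origin points at the local path
```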

Now I'm losing sleep worrying whether there's any way he could accidentally add a github.com remote to our private repo and push to it. I would be blamed, and I'd have to explain to managers who are in turn older than me what both git and github are.
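If that's a real worry, a repo-local pre-push hook can act as a guard rail. This is only a sketch, not a vetted policy; per githooks(5), git passes the remote's name and URL to the hook as $1 and $2:

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
cat > .git/hooks/pre-push <<'EOF'
#!/bin/sh
# $1 = remote name, $2 = remote URL (see githooks(5))
case "$2" in
  *github.com*) echo "refusing to push to github.com" >&2; exit 1 ;;
esac
exit 0
EOF
chmod +x .git/hooks/pre-push
```

Note that hooks are per-clone and not propagated by git clone, so this only protects machines where it's actually installed.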


I've worked with developers that have used git for 10 years who didn't fully realize what all of the 'git reset' options entailed, and I don't blame them. Git is complicated and you could certainly have a perfectly effective workflow with it for your whole career without using most of the features. If you'd only worked in environments in which code was entirely managed in GitHub, you probably wouldn't know that either. Judging someone because they don't possess the same slices of implementation-specific knowledge you do doesn't really make sense. They almost certainly know things that you don't simply because you've never encountered situations in which you had to learn about them, or for that matter, remember them even if you did.


> I've worked with developers that have used git for 10 years who didn't fully realize what all of the 'git reset' options entailed

While knowing some of them is expected if you use git reset on a frequent basis, knowing where to find information on the other options is essential. I frequently go back and read through the man pages for various git commands so that I understand what will happen if I use a particular set of options, and also to learn new things along the way.

So, the proper answer for a developer who doesn't realize what a git command is capable of is to refer to the man page for that particular command.
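As a quick refresher on what that man page describes, the three common modes differ only in how far the reset propagates. A throwaway-repo sketch (the inline identity is just so the commits succeed):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
git -c user.email=a@b.c -c user.name=t commit -q --allow-empty -m one
git -c user.email=a@b.c -c user.name=t commit -q --allow-empty -m two
git reset --soft  HEAD~1   # move HEAD/branch only; index and working tree untouched
git reset --mixed HEAD     # the default: also reset the index
git reset --hard  HEAD     # also overwrite the working tree -- destructive
```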



I believe it's more beneficial for developers to learn the underlying model of how git works (blobs, trees, commits, tags, etc) and how various commands deal with those underlying concepts. The Pro Git book git internals chapter is a very good place to start. That, in combination with the man pages for the commands they go over, will greatly enhance one's understanding of how git works.
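To make that concrete, git cat-file lets you inspect those objects directly. A throwaway-repo sketch (file name and identity are arbitrary):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q .
echo hello > file.txt && git add file.txt
git -c user.email=a@b.c -c user.name=t commit -q -m init
git cat-file -t HEAD            # prints: commit
git cat-file -p HEAD            # the commit object: tree hash, author, message
git cat-file -p 'HEAD^{tree}'   # the tree object: one blob entry for file.txt
```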

Relying on cheat sheets to learn how to use git is not much different compared to learning how to use a programming language via Stack Overflow. In other words, you'll never develop a more thorough understanding of how the tool works and how to use it effectively.


Right. Assuming that they're not competent because they can't recall a slice of domain-specific knowledge from memory is not good.


I am the same age as you, but I do not think this is about age, at least not mainly about age. My father, who is almost 70, has no issue understanding what git and github are, and I have worked with people under 25 who have not had any issues with this distinction either. And there are plenty of open source developers I have met who are pushing 70 who keep up with technology just fine.

Sure, I notice that younger developers do not know some things, like they never experienced the Java EE hype, so they can fall into some traps which are well known among older engineers.


Yea, this seems like more of a competence-related problem than an age-related one. I never even considered that I might have to ask candidates if they knew the difference between git and github, but now I wonder....


> I had to explain that, yes you can use the github.com client if you want, but "git" is different from "github".

To be fair, this is largely the result of a concerted effort by GitHub to muddy the difference. If you didn’t know any better, you might think Google is the internet too.


That's funny. That same situation (not understanding that git and github are different things) has happened to me a couple of times but with older engineers.


People adapt to the situation they experience. Github (and other repositories) tend to be stable, so they are used. Network speeds have increased, so we use the network and expend less efforts on local caching etc.

There's nothing "complacent" about this: previous generations also relied on infrastructure and didn't plan for prolonged power outages or keep backup ham radio network links for when AOL was down.

People using npm install instead of custom makefiles aren't ignorant or stupid; they have found better ways to meet their needs. And if 10 years on the job don't create the need to learn some skill, there is no reason to invest time into it. And I have complete confidence that people would be able to come up with some workable solutions rather quickly if the githubpocalypse ever happens.

There is some cultural component at play among the "luddites" here as well, maybe comparable to preppers? It feels like planning for really exciting emergencies when one's skills that have been derided for so long are suddenly needed and save the day. In this analogy, I guess Makefiles are the equivalent of very masculine hunting and zombie-defending skills.


I half agree with you, and am chuckling a little bit about the "very masculine hunting and zombie-defending skills" part, but... I'd like to offer some perspective.

I'm 36 and work on a pretty broad set of consulting projects: some schematic/PCB/mechanical design, firmware, some lower-level desktop/server code, and up and up to web/mobile apps. "Full full stack" if you will.

I live in a "major" Canadian city (although not in the top 15 by population), and I also own two wonderful properties about an hour out of town in a quite rural area. One is a cabin on a lake, and the other is a church from the 1910s. Sometimes I head out to one of these places to do the "Deep Work" thing, distraction-free, and sometimes it's to take time off, but end up getting an emergency call from a client. In either case, my Internet connectivity is limited to tethering, and depending on a few factors, that can either work fantastically well or poorly.

Going from the lowest-level to the highest-level projects, there's a very clearly declining probability of the project being able to build during a low-connectivity event. The embedded stuff pretty much always works just fine (it's a Makefile, or CMake). The C desktop/server stuff? Always works fine (any dependencies were pre-installed). Python/Ruby/Elixir web backend projects usually go OK, although I've occasionally run into issues where the package manager wants to check for updates. Node front-end builds sometimes start to fall apart, and Android (via Android Studio) often refuses to build at all! (Some kind of weird Maven/Gradle thing that needs to go out and check something, even though the dependencies have all been pre-installed...)

It's extraordinarily frustrating when you can't change a line of code and hit "Build" to test a change locally. Everything's already present on the machine! It worked just fine 5 minutes ago!

To your prepper comment, and the previous comments about infrastructure, there's a significant population of the world that doesn't have 100% reliable infrastructure, even in Canada and the US. Our tools used to work just fine in that environment, but they are getting progressively worse.


> Some kind of weird Maven/Gradle thing that needs to go out and check something, even though the dependencies have all been pre-installed...

It is often possible to tell Maven, at least, to work in offline mode (the "-o" / "--offline" flag) so it won't try to check for dependency updates; Gradle has an "--offline" flag as well.


> There's nothing "complacent" about this: previous generations also relied on infrastructure and didn't plan for prolonged power outages or had backup ham radio network links for when AOL was down.

A lot of the protocols for asynchronous communication allowed for operating in offline mode. So if you didn't have an internet connection, you could still compose and queue emails, and the client would connect and send them all at once when you actually were on the network (as well as downloading emails from the POP or IMAP server).

git actually has commands that leverage email for sending and receiving patches, so that code review and development can take place without requiring a connection at all, other than to send and receive when needed.
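A minimal sketch of the patch half of that flow (git send-email itself needs SMTP configuration, but format-patch and am work entirely offline; paths and identities here are throwaway examples):

```shell
tmp=$(mktemp -d)
git init -q "$tmp/author" && cd "$tmp/author"
git -c user.email=a@b.c -c user.name=a commit -q --allow-empty -m base
echo fix > fix.txt && git add fix.txt
git -c user.email=a@b.c -c user.name=a commit -q -m "add fix"
git format-patch -1 -o "$tmp/out"        # writes the commit as an mbox-format .patch file
# the "reviewer" starts from base and applies the mailed patch:
git clone -q "$tmp/author" "$tmp/reviewer"
git -C "$tmp/reviewer" reset -q --hard HEAD~1
git -C "$tmp/reviewer" -c user.email=r@b.c -c user.name=r am "$tmp/out"/0001-*.patch
```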


> A key difference is I can remember a time when network connectivity was flaky.

I’m young enough that this isn't something I have experienced, but I don't have to have experienced it to understand that depending on things you don't have control over can be a bad idea.


> which attempts to hide the implementation of distributed networking

To be fair, this is precisely what Akka doesn't do, often citing A Note on Distributed Computing (1994) which explains why that approach is problematic.


What makes you say use1-az6 is the culprit? I only ask because none of our workloads in az6 have experienced any issues. ....yet. We run critical workloads across 3 AZs thankfully, but still.


Sounds like that should apply to Dropbox, as well.


No, because Dropbox is a file syncing utility so symlinks, which are special files that point to other files, should be synced as symlinks. Replacing the symlink with the target file or directory is obviously broken behavior.
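A symlink really is just a stored path, which is why replacing it with a copy of the target changes semantics. A throwaway illustration:

```shell
tmp=$(mktemp -d) && cd "$tmp"
echo data > target.txt
ln -s target.txt link.txt   # link.txt stores only the path "target.txt"
readlink link.txt           # prints: target.txt
cat link.txt                # follows the link and prints: data
```

If a sync tool uploads link.txt as a regular file, the remote side ends up with a second, independent copy of the data, and edits through one name stop showing up through the other.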


> symlinks, which are special files that point to other files, should be synced as symlinks

How would that work on, e.g., the iOS client? Or a Windows client, even?


Syncing tools are tricky because they bridge and overlap (at least) two filesystems - local and remote. It's not obvious how to handle symlinks there in the general case. The default behavior dictated by the filesystem abstraction is wrong (it would duplicate files and break links). There are multiple other ways to handle it, but it's not clear what's best.


Explain what was wrong with Dropbox’s previous behavior (which is described in the link).


"would duplicate files and break links"

And did the previous behavior preserve symlinks that are located inside a folder in Dropbox and link to somewhere else inside the same folder? Because a symlink-blind program would screw that up royally.


I always tell people that ADHD is not an inability to pay attention, it's an inability to direct and focus one's attention at-will.

Whatever is most stimulating gets my attention.


Not sure what meaww.com is, but Daily Mail is the original source and is not generally considered to be reliable, for what it's worth.


Oh my god, the captcha at the end is so good. Is it even possible to get through?


The trick is to select everything.


Oh my god.

I literally just now got the joke: the boring company.

Because they bore tunnels.


I wear the hat every day - a good conversation starter. Some people guess it's an accounting company name.


Good morning :)


Apparently it took over a year for the coffee to kick in! I just always shrugged and thought it was a cutesy name for a company that did something boring like dig tunnels. I cannot believe that pun escaped me for so long. :)

