PaulRobinson's comments

I put him up for the night on my sofa once.

When you do this, you get his "rider". Google it, it's real, it's infamous for the "don't buy me a parrot" section.

Anyway, in that, he makes clear that if people at dinner are not interested in talking about free software, he's going to pull out his laptop and get on with his work relating to free software.

He doesn't care about fancy food, drinks, etc. - he wants to raise money for free software, and work on free software. He did this in a restaurant when three others of us were chatting about something else, and we all just accepted that's what he does, and that's him. It was fine.

If you're not familiar with him or this, then it's going to be a weird experience.

He also struggles with social interactions in my limited experience, particularly when it's a "fan boy" interaction.

I've seen him not being super nice to other people who were trying to have a conversation with him, not because he's not a nice person (I found him quite personable one on one), but it seems to me that he struggles to know how to behave around people who don't know how to just talk to him about things he wants to talk about.

I once saw him in the audience of a conference with quite a notable set of speakers [0], and I can't remember who it was who he started hectoring in the Q&A (I mean, look at the speaker list, whoever it was, it's somebody you've probably heard of), but he just diverted it into a little lesson about free software for the speaker and everyone else listening. It's the only thing he cares about talking about. It's either a super-power focus, or really annoying. I personally think at this point you just either need to meet him where he is, or avoid him if you don't want to. He's not going to change.

I'm glad I met him, I'm glad he does what he does, I know he's a little spikier than others around him and I'm OK with that. I also know plenty of people who never want to speak to him ever again and think free software needs a new figurehead.

[0] https://curation.cs.manchester.ac.uk/Turing100/www.turing100...


> think free software needs a new figurehead.

It's not like a politician's post, where random people can just decide to run for it. We don't really have many people who do what he does.


> I've seen him not being super nice to other people who were trying to have a conversation with him, not because he's not a nice person (I found him quite personable one on one), but it seems to me that he struggles to know how to behave around people who don't know how to just talk to him about things he wants to talk about.

I'd argue that while he may be nice, it's also generally considered impolite to be someone who will only talk about the things he wants to talk about. It's meant to be a two-way street, generally. Someone who only wants to talk about what they are interested in, not what their conversation partner is interested in, is not being nice or polite.



You don't owe random strangers your time, and it's so strange to me that people feel so entitled to other people's time. It can also be argued that it's rude to engage a person on a topic they're not interested in.

I do, to an extent, agree. But I also think it's impolite to have relationships where "we're going to talk about what I want to talk about, but when we go to talk about what you want to talk about I'm just going to pick up my laptop and ignore you or tell you I don't want to".

You could argue, in your description, the same about RMS - he might feel entitled to someone's time to talk about free software.


The half-life of Apple kit is so long that it's arguably a lot more sustainable than its repairable PC counterparts.

Apple laptops I have that still boot include a 2007 iBook (my folks used it until this summer, when bank websites stopped working with the Chrome browser they could get running on it), which I'll be putting a BSD or Linux distro on over Christmas, a 2012 Intel MBP that has Linux on it, and a couple of 2015-2017 era MBPs that I inherited via one means or another.

I'm typing this on an M4 MacBook Air I picked up cheap during Black Friday sales. I fully expect it to still be functional in 10 years.

I don't think I've ever had a PC laptop last close to that.


The half-life is no different. Apple doesn't use higher quality parts. That's just perception from their premium product marketing.

Every time I see this comparison it's always "My $3000 Apple laptop is still usable after 5 years while my $700 Chromebook is slow after 4 years".


Posting from a Thinkpad X61s laptop (Jan 2008) running Trisquel Linux 11.

(I take your point that I would not be able to participate in this discussion using the original operating system that came with this laptop).


Apple's support for macOS can be shorter than their laptops' longevity (the longevity of their laptops got quite bad when they tried to make them as thin as a USB-C port). So Linux support is also important there, IMO, and as the original post pointed out, because Apple makes it so hard for Linux to support their hardware, long-term software support may be something to think about before buying a MacBook.

macOS is abysmal with backwards compatibility. In the music space, everything just breaks every few years: with Snow Leopard, Lion, Catalina, Sequoia. While on Windows old versions of software work forever, on macOS you're stuck having to upgrade and buy new versions of software to run on each new OS release. That's if you're lucky. Sometimes you have no upgrade path and you need to look for new software.

Of course it was cultural. This article covers it all in more detail, but I was coding a lot in this period of Perl's decline, and in hindsight it was all so obvious.

I wrote a lot of Perl 3 and Perl 4, to date my experience.

Rails was designed to be a luxury hand-holding experience using a language that was intended - as a design goal - to make programming fun. There was even for a while in the late 2000s the culture of _why and MINSWAN.

PHP was fast, easy, forgiving. It also led to such awful code that several companies banned it. I think nearly all of us who got to deploy code with nothing more than FTP miss that. Of course today it runs more of the web than any other language.

Perl on the other hand wasn't interested in any of that, so many people just left because they were pulled by better ecosystems that took the web seriously, or pushed by a culture that believed the priesthood was real.

For me, Rails and Ruby were genuinely joyful discoveries after spending a couple of years wrestling J2EE and .NET apps in the ~2002-2005 era, and the Perl community felt crusty, unwelcoming and immature by comparison.

Today I'm no fan of some of the politics of the people behind some popular Ruby frameworks (cough), but I enjoy Ruby and I'm enjoying my dive back into systems programming via re-learning C and taking a long look at Zig. I'm not sure I'll ever write "production" Perl again.


> [..] as a design goal - to make programming fun. There was even for a while in the late 2000s the culture of _why and MINSWAN.

It was a great time - titles like Learn You A Haskell For Great Good and Land of Lisp really capture the zeitgeist.


But could a different culture have actually changed Perl to be friendly and fun like Ruby? Without completely torpedoing compatibility with existing code and essentially creating a whole new language anyway?

Or did the language itself just get outdated and replaced? (there's nothing wrong with that! most things don't last forever!)


I feel like I grokked Perl enough and I still write Perl code, but I also think that there are some technical reasons why it declined in popularity in the 2000s and 2010s. All those differences between $ % @, the idea of scalar or list context, overuse of globals, and references. These features can all make sense if you spend enough time in Perl, and can even be defended, but it creates a never-ending stream of code that looks right but is wrong, and it all seems to create a lot of complexity with very little benefit.

This requires those with power to relinquish authority and/or try new, unfamiliar practices and accept possible failure.

Any company/organization can theoretically change its culture, but it's quite difficult in practice.


> PHP was fast, easy, forgiving. It also led to such awful code that several companies banned it.

That's not the case nowadays with PHP 8.5, for example... and the Laravel framework.


Congratulations, you have now increased the cognitive load to be productive on your team and increased the SQL injection attack surface for your apps!

I jest, but ORMs exist for a reason. And if I were a new senior or principal on your team I'd be worried that there was now an expectation for a junior to be a wizard at anything put in front of them, even more so when that thing is a rich and complex RDBMS toolchain that has more potential guns pointing at feet than anything else in the stack.

I spent many years cutting Rails apps, and while ActiveRecord was rarely my favourite part of those apps, it gave us so much batteries-included functionality that we just realised it was best to embrace it. If AR was slow or we had to jump through hoops, that suggested the data model was wrong, not that we should dump AR - we'd go apply some DDD- and CQRS-style thinking and consider a view model and how to populate it asynchronously.
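
To illustrate what I mean by a view model (a minimal sketch, language-agnostic in spirit - Python and SQLite here purely for brevity, with made-up table names): the request path reads a pre-computed table, and a background job repopulates it from the normalised data, instead of the ORM running a heavy aggregate on every request.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total_cents INT);
        -- Denormalised view model: one row per customer, refreshed out of band.
        CREATE TABLE customer_spend_view (customer_id INT PRIMARY KEY, lifetime_cents INT);
    """)

    def refresh_view_model(conn):
        # Run by a background worker / scheduled job, not in the request path.
        conn.execute("DELETE FROM customer_spend_view")
        conn.execute("""
            INSERT INTO customer_spend_view (customer_id, lifetime_cents)
            SELECT customer_id, SUM(total_cents) FROM orders GROUP BY customer_id
        """)
        conn.commit()

    def lifetime_spend(conn, customer_id):
        # The web request does a trivial primary-key lookup; no heavy aggregate here.
        row = conn.execute(
            "SELECT lifetime_cents FROM customer_spend_view WHERE customer_id = ?",
            (customer_id,),
        ).fetchone()
        return row[0] if row else 0

    conn.executemany("INSERT INTO orders (customer_id, total_cents) VALUES (?, ?)",
                     [(1, 1200), (1, 800), (2, 500)])
    refresh_view_model(conn)
    print(lifetime_spend(conn, 1))  # 2000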


I think this needs some nuance - this is definitely true in some domains.

In most of the domains I worked in it was the other way around: using an ORM didn't mean we could skip learning SQL, it added an additional thing to learn and consider.

In recent years of writing SQLAlchemy or the Django ORM, the teams I was on would write queries in SQL and then spend the rest of the day trying to make the ORM reproduce them. At some point it became clear how silly that was and we stopped using the ORMs.

Maybe it's got to do with aggregate-heavy domains (I particularly remember windowing aggregates being a pain in SQLAlchemy), or large datasets (again from memory: on a 50-terabyte Postgres machine, the DB would go down if an ORM generated anything that scanned the heap of the big data tables), or highly concurrent workloads where careful use of SELECT FOR UPDATE was needed.
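
For what it's worth, window aggregates are expressible in SQLAlchemy Core (1.4+) via func(...).over(...) - in my experience the pain was more about mapping the results back onto ORM objects and keeping the generated SQL sane. A rough sketch of the two styles, with invented table/column names:

    import sqlalchemy as sa

    metadata = sa.MetaData()
    orders = sa.Table(
        "orders", metadata,
        sa.Column("id", sa.Integer, primary_key=True),
        sa.Column("customer_id", sa.Integer),
        sa.Column("total", sa.Numeric),
    )

    # The query we'd write by hand: running total per customer.
    raw = sa.text("""
        SELECT id, customer_id,
               SUM(total) OVER (PARTITION BY customer_id ORDER BY id) AS running_total
        FROM orders
    """)

    # The Core equivalent, built with func(...).over(...).
    running_total = sa.func.sum(orders.c.total).over(
        partition_by=orders.c.customer_id, order_by=orders.c.id
    ).label("running_total")
    stmt = sa.select(orders.c.id, orders.c.customer_id, running_total)
    print(stmt)  # renders roughly the same SQL as the hand-written version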


> In the last years writing SQLAlchemy or Django ORM the teams I was on would write queries in SQL and then spend the rest of the day trying to make the ORM reproduce it.

Ah yes, good times! Not Django for me but similar general idea. I'm not a big fan of ORMs: give me a type safe query and I'm happy!


> Congratulations, you have now increased the cognitive load to be productive on your team and increased the SQL injection attack surface for your apps!

Maybe I am speaking from too much experience, but writing SQL is second nature to me and I would wager my team feels similarly. Perhaps we are an anomaly. Secondly, most if not all SQL connector libraries have a query interface with all the usual injection vectors mitigated. Not saying it's impossible to break through, but these are the same connector libraries even the ORMs use.
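
For example, with Python's stdlib sqlite3 (psycopg, mysqlclient, etc. have the same placeholder pattern), the driver binds parameters separately, so user input is never spliced into the SQL string:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))

    user_input = "alice' OR '1'='1"  # hostile input is just treated as data
    rows = conn.execute(
        "SELECT id, name FROM users WHERE name = ?",  # placeholder, not concatenation
        (user_input,),
    ).fetchall()
    print(rows)  # [] - no injection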

> ORMs exist for a reason. And if I were a new senior or principal on your team I’d be worried that there was now an expectation for a junior to be a wizard at anything

ORMs exist to hide the complexity of the RDBMS. Why would any engineer want to make arguably the most critical aspect of every single IT business opaque? ORMs may imply safety and ease, but in my experience they foster a culture with a tacit fear of SQL. Sounds a bit dramatic, but this has been a surprisingly consistent experience.


SQL injection is only a thing for those careless enough to ever allow string concatenation to go through pull requests.

If it isn't using query parameters, straight rejection - no ifs and buts.

Naturally, if proper code review isn't a thing, then anything goes, and using an ORM won't help much either.


Brainfuck also exists for a reason. That doesn't imply that you should use it.

Feels like a project covering some of the same ground as Taskwarrior [0], which I've used on and off over the years. The main thing I've appreciated is integration with various tools - I had access from both vimwiki and the macOS task bar for a while, which was nice - but all these tools miss the key thing that stops me using them all the time: integration with tools on my phone. It's great having CLI access to tasks, and access in other places, but without ubiquity, given the way I work, it might well just be another place that ideas of tasks I need to do go to die.

[0] https://taskwarrior.org


I tried to use Taskwarrior; however, a) I need a way to create tasks quickly when I think about them (I guess I am a bit on the attention-deficit spectrum), and b) I need a way to integrate a bit with time boxing on my work calendar. The problem is that the Taskwarrior Android and mobile web apps are really a bit of a UX nightmare compared to the CLI interface. And while the plugin landscape of Taskwarrior is broad, support for typical work settings is not really great out of the box. In the end all 'solutions' only seem to be a road towards meta-procrastination, instead of getting things done...

The biggest difference is that tascli supports records natively - you can use it to just record stuff. This is one of the main reasons why I created it - I didn't just want tasks, but also somewhere to jot down anything notable.

Other than that, I try to keep tascli as simple as possible so it can stay small and concise. Putting `tascli list task today` in my zshrc is really nice - it gives me a reminder every time I open a new terminal tab.


Hasn't someone made a simple runtime environment for CLI apps for iOS and Android? With a file picker GUI and not much else?

Taskwarrior has a phone app on iOS and Android, and can sync with the cli one if you set up a sync server. They also revamped the sync server not long ago to be less janky than the old one.

Right now we're in a stage of the current regs where 5 manufacturers can be within tenths of a second of each other in qualifying, and the other 5 are not that far out. Five different teams have gone away with the technical regulations, gone into completely different factories, wind tunnels and simulator setups, some of them have bought in components like engines and suspension but basically have had to build and test everything else and work out all the aero across the wings and floor, and come out over a 5km track to be within meters of each other.

If you think about that a bit, it's kind of crazy and mad.

But it also means to shake things up you need to throw the dice again. It's like this generation has evolved to find the peak apex design and configuration for each and every circuit to the point where teams with more limited resources can now get competitive (yay for Williams last week!), and it's time for a new generation.

I agree next year could be chaos. I think teams that have been consistently applying discipline and consistency will continue to do well (Red Bull, McLaren, Mercedes), those that are catching on will continue to rise (Williams, Haas), and those who haven't realised that's the name of the game yet (Ferrari, Alpine) will continue their passion-fuelled, mismanaged decline. The new players (Audi taking on Sauber, Cadillac) are going to be interesting to watch.

But within 5 years, everyone will be back to within a few tenths of each other over a 5km circuit, and we'll probably need to go again...


I'm more optimistic as I think about SBC manufacturers, plenty of other manufacturers wanting to service this market, and companies like Framework (warts and all - I don't think they're perfect) didn't really exist a couple of decades ago.

I'm actually a big fan of Apple hardware (when you crunch the numbers for base spec machine and when you're able to get discounts, the price/performance for the half-life you get is incredible), but I'm also planning to get back into home-brew builds a bit more over the next year: I need to build a NAS, a home lab, I might look at a gaming rig... and I'm far from alone.

So yes, it's a niche market, but a profitable one for a lot of players, and one that Micron will be glad is still available to them when the data centre bubble bursts.


Disappointing.

When zigbook first appeared here, I took a cursory scan, and it looked pretty solid and a useful resource. Seems it duped me and got me good. I was even defending the use of AI a little - although the claim needed to go.

Seems they were just trying to do over a nascent community that I'm interested in seeing grow and wasn't a member of yet.

Good riddance, then.


Advent of Code is one of the highlights of December for me.

It's sad, but inevitable, that the global leaderboard had to be pulled. It's also understandable that this year is just 12 days, which takes some pressure off.

If you've never done it before, I recommend it. Don't try and "win", just enjoy the problem solving and the whimsy.


While it's "only" 12 days, there are still 24 challenges. As there's no leaderboard, and I do it for fun, I will do it over 24 days.

That sounds healthy! But I would note that there have been interesting community discussions on reddit in past years, and I've gotten caught up in the "finish faster so I can go join the reddit discussion without spoilers" mindset. It turns out you can have amazing in-jokes about software puzzles and ASCII art - but it also taught me in a very visceral way that even for "little" problems, building a visualizer (or making sure your data structures are easy to visualize) is startlingly helpful... also that it's nice to have people to commiserate with who got stuck in the same garden path/rathole that you did.
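
As a concrete (if trivial) example of the kind of visualizer I mean - just a helper that dumps a set of (x, y) points as an ASCII grid, which is often enough to spot an off-by-one or a botched fold/rotation (purely illustrative, not from any particular puzzle):

    def show(points, filled="#", empty="."):
        # points: an iterable of (x, y) tuples; prints them as an ASCII grid.
        pts = set(points)
        xs = [x for x, _ in pts]
        ys = [y for _, y in pts]
        for y in range(min(ys), max(ys) + 1):
            print("".join(filled if (x, y) in pts else empty
                          for x in range(min(xs), max(xs) + 1)))

    show({(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)})
    # ###
    # ..#
    # ..#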

Last year was the first time I ever did the thing in sync, and it was a source of real delight to see other people foot-gunning themselves in the same way as me (also in different ways, schadenfreude and all that....)

any recommendations on how to do this?

One way I've found is to break the problem down, and think about each step in reverse. So for example, what does the final stage want to do in order to achieve the result in a simple way? It might be that to get the final result it needs to sum numbers, but also needs to know their matching index in another array, plus some other identifier you got from an as-yet-unwritten previous step. This means your final stage needs a bunch of records that are (number, idx, sourceId), which means the step before needs to construct them - what information does it need to transform into that?

Write the simple code you want to write, and think about what makes the prior step possible in the easiest way and build your structures from there, filling in the gaps.
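
A toy sketch of that backwards-from-the-answer shape, using the hypothetical (number, idx, sourceId) records from above (the names are made up; the point is that each stage only exists to produce what the stage after it needs):

    from typing import NamedTuple

    class Record(NamedTuple):
        number: int
        idx: int
        source_id: str

    def final_stage(records):
        # The easy code I actually want to write: just sum what I'm handed.
        return sum(r.number for r in records)

    def build_records(numbers, source_id):
        # The step before exists only to produce what final_stage needs.
        return [Record(n, i, source_id) for i, n in enumerate(numbers)]

    def parse(raw):
        # ...and the parser only needs to produce what build_records needs.
        return [int(x) for x in raw.split()]

    print(final_stage(build_records(parse("3 1 4 1 5"), "input-A")))  # 14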


Same. I usually try to use it as the "real-world problem" I need for learning a new language. Is there anywhere that people have starter advice/templates for various languages? I'd love to know:

- install like this

- initialize a directory with this command

- here are the VSCode extensions (or whatever IDE) that are the bare minimum for the language

- here's the command for running tests


learnxinyminutes.com is a good resource that tries to cover the key syntax/paradigms for each language, I find it a helpful starting point to skim.

This is an area where LLMs can really help out: getting started with an unfamiliar language/IDE/ framework.

Yep, I don't really understand why the author didn't make it one per day for 24 days. Am I missing something obvious?

Since the start, each problem has 2 parts (2 "stars"). Part one sets up the problem, ensures you have parsed the input correctly, etc. After submitting the correct answer to that part, part 2 is revealed, which sometimes expands the problem space, adds new limits, etc. Something that solves part 1 might be inadequate for part 2.

Yes, but nothing (theoretically) stops him from saying: "congratulations, you have solved part 1, wait until tomorrow for part 2".

I think either the author thinks people appreciate the two-stage challenge more than having one problem each day; or, more likely, the whole "infrastructure" is already built around two stages per day, and changing that would mean more work, possibly touching literally 10-year-old code. The stated reason for the reduced days is exactly the lack of time, so I assume he preferred to have 12 days and modify the old code as little as possible. Having one stage per day might have been possible at the expense of having fewer challenges, which again defeats the purpose.


The "only" 12 days might be disappointing (but totally understandable), however I won't mourn the global leaderboard which always felt pointless to me (even without the llm, the fact that it depends on what time you did solved problems really made it impractical for most people to actually compete). Private leaderboards with people on your timezone are much nicer.

The global leaderboard was a great way to find really crazy good people and solutions, however - I picked through a couple of those people's solutions and learned a few things. One guy had even written his own special-purpose language mainly to make AoC problems fast - he was, of course, a compilers guy.

Agreed! It’d be nice to surface that somehow. The subreddit is good but not everyone is there. I found a lot of interesting people and code in the folks who managed to finish challenges in like 4 minutes or whatever..

how else to try it? only for shits and giggles, there is enough fake virtual internet pointz as is

> the global leaderboard had to be pulled.

Frankly I'm better off with it being this way instead of the sweaty cupstacking LLM% speedrun it became as it gained popularity.


I think I’ll set up a local leaderboard with friends this year. I was never going to make it to the global board anyway but it is sad to see it go away.

And this is how I know I am not a developer/programmer. I have no urge or interest in such an event.

Your logic is flawed. You can be a developer and not be interested in AoC. Not being interested in AoC only shows you're not interested in AoC.

I wasn't casting logic. I'm not a developer, and when it comes to AoC I have no interest in it. Nor in being one.

Why post, then? No one cares about your lack of interest.

It always seemed odd to me that a persistent minority of HN readers seem to have no interest in recreational programming/technical problem solving and perpetually ask "why should I care?"

It's totally fine not to care, but I can't quite get why you would then want to be an active member in a community of people who care about this stuff for no other reason than they fundamentally find it interesting.


It's all marketing, I can sell this to you and convert you.

Thing is, it may have some interesting challenges. I, too, wouldn't want to solve some insane string-parsing problem with no interesting idea behind it. For today's problem, I did the naive version and it worked. The modular version created some issues with some corner cases.

There should be more events like AoC. Self-contained problems are very educational.


I wonder how this is the most straightforward way to know that?

I've not read the whole thing as there's a paywall involved, but I have a broad take on what I have read.

I say this as somebody with a hobby obsession with trading on sports betting exchanges, which I've been doing on and off for 20+ years.

From high school onwards, most of us were taught a great deal about calculus, and not a great deal about probability. That's because for many decades working out ballistics was a more useful skill to teach young engineers than understanding how to interpret the statistics of a pandemic, for example.

The rising interest in probability in recent years has sat at a weird intersection: real-world events that surprise us as being "unlikely"; people questioning the validity of scientific trials using illogical arguments on social media; the legalisation of sports betting markets in the US; and the prevalence of probabilistic and stochastic methods in modern technologies from RL to LLMs.

But, here's the thing: most people are awful at it. And most people are going into prediction markets (and sports betting markets), thinking they know something others don't, with all the logical and calculated thought of an anti-vaxxer who does not understand terms like "sensitivity" and "specificity".
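
To make the sensitivity/specificity point concrete, here's the standard base-rate calculation with made-up numbers - even a test that's 95% sensitive and 95% specific is mostly wrong about positives when the condition is rare:

    def positive_predictive_value(prevalence, sensitivity, specificity):
        # P(condition | positive test), straight Bayes' theorem.
        true_pos = prevalence * sensitivity
        false_pos = (1 - prevalence) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    # 1% prevalence, 95% sensitivity, 95% specificity:
    print(positive_predictive_value(0.01, 0.95, 0.95))  # ~0.16 - a positive is still probably wrong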

Signal is not noise. Noise is not signal. Yes, the guy on CNN is wrong, just as wrong as the guy on Fox News, but it doesn't mean expertise is dead and gut instinct by amateurs is winning by showing the superiority of the Wisdom of the Crowds.

Look, for example, at the last US Presidential election. The markets didn't agree with the polls by a long way, and everyone assumed the players were either corrupt (moving the line helps move the conversation in the media) or idiots.

Turned out, it was a guy with a smart idea to figure out something the experts refused to figure out: shy Trump voters. He commissioned polls that, rather than asking people who they would vote for, asked them who they thought most of their neighbours would vote for. Turns out that's a way more accurate technique. He did some maths, pulled up some spreadsheets or notebooks, threw some Bayesian analysis at it, and realised the main polls and prediction markets were out, so he threw some money at it. And then his government (the French) said he couldn't have the money, but that's another story.

The point I think I want to make is that this is an interesting and fascinating area to dive into, but almost everything I've read about it online is shallow, nonsensical, illogical and often wrong. From the intro I'm not sure this is any different. YMMV. But yeah, dive in - it's fun playing with this probability stuff in real-world scenarios.


What guy is this? Sounds like an interesting read.
