As to your direct question: I used some Pis for TV dashboards at work and had some random bit flips on the SD card and corrupted files for the dashboards. It may be "rare," but seems inevitable on a long enough timeline. For toy projects where you can re-image the SD card it's alright, but even for my trivial personal projects it made me uneasy.
100% to backups. I know we all put off doing it, but you'll rest a lot easier, even with personal data you don't think you care about. It's not only about a hardware failure, but even a fluke sysadmin error where you accidentally nuke something. I'd recommend getting a Backblaze B2 account and setting up restic on each Pi to back up the data directories and anything else you care about at least daily. For your GitLab it's a bit less risky, since presumably you also have a clone of each repo on some other machine.
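If it helps, here's a minimal sketch of that kind of job, meant to run daily from cron; the bucket name, paths, and credential values are placeholders for whatever you actually use:

    #!/usr/bin/env python3
    # Daily restic -> Backblaze B2 backup sketch for a Pi (placeholder names).
    import os
    import subprocess

    env = dict(
        os.environ,
        B2_ACCOUNT_ID="your-b2-key-id",               # placeholder credentials
        B2_ACCOUNT_KEY="your-b2-application-key",
        RESTIC_PASSWORD_FILE="/root/.restic-password",
    )

    REPO = "b2:my-pi-backups:gitlab-pi"    # restic's B2 repo syntax: b2:bucket:path
    PATHS = ["/srv/gitlab/data", "/etc"]   # the directories you actually care about

    # One snapshot per run; restic deduplicates, so daily runs stay cheap.
    subprocess.run(["restic", "-r", REPO, "backup", *PATHS], env=env, check=True)

    # Expire old snapshots so the bucket doesn't grow without bound.
    subprocess.run(
        ["restic", "-r", REPO, "forget", "--prune",
         "--keep-daily", "14", "--keep-weekly", "8"],
        env=env, check=True,
    )

You'd run restic's init against the repo once before the first backup, and a plain shell script works just as well if you'd rather skip Python.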
I love that people are building small datacenters out of Pis. I haven't done the math as far as TCO, but instead of multiple Pis for self-hosting, I have a lonely secondhand Dell Precision with an old 8th-gen Intel CPU (6C/12T), 64GB of RAM, and several TB of NVMe plus some spinning rust for the long-term stuff. It's just a crazy amount of horsepower. Most trusted workloads run as containers, my other experiments run as VMs, and I have capacity in all the right places (I need disk and RAM more than CPU). Not as exciting as building a cluster, but I have the excess capacity to spin up multiple VMs on that one machine if I want to play with that. It can get very Inception-like: I can run VMs in KubeVirt on top of Kubernetes, which itself runs on a cluster of VMs that all live on that single machine, while delegating the extra /64 IPv6 prefixes Comcast gave me to the bottom-layer VMs so that each pod still gets a globally routable IPv6 address. Cool times for homelab stuff, and it helped me understand things like Kubernetes and IPv6 to a much greater depth.
It's a neat project though, one I hadn't heard of before. I have run Postfix to do domain-wide email forwarding (to Gmail, coincidentally), but going the other way around and having the end destination be self-hosted is on my to-do list.
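For anyone curious, domain-wide forwarding in Postfix is basically just a catch-all virtual alias. A rough sketch, with example.com and the destination address as placeholders:

    # /etc/postfix/main.cf (fragment)
    virtual_alias_domains = example.com
    virtual_alias_maps = hash:/etc/postfix/virtual

    # /etc/postfix/virtual -- forward anything@example.com onward
    @example.com    someone@gmail.com

Then run postmap on /etc/postfix/virtual and reload Postfix.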
I agree with you that the timing is very coincidental, and it irritates me that you agree to one set of ToS for a device you purchased, and then the company can say "accept the new terms or stop using the device you already paid for." It's crap.
However, I don't understand the outrage about this particular incident, and every headline I've seen about it is disingenuous and makes it sound like Roku was breached.
If some other service "XYZ" gets hacked and they steal your password, AND that's the same password you use with Roku, AND you didn't bother to turn on 2FA ahead of time, what exactly was Roku supposed to be doing to protect you?
If this is Roku's fault, then every service in existence should mandate 2FA with the assumption that their users are reusing a single password on every site. In which case, they might as well ditch the passwords completely and use only an SMS or e-mail verification for login ("magic link").
Agreed! But just as IPv4 with NAT gives you an implicit "deny all inbound" firewall rule, in the IPv6 world many routers default to an explicit "deny all inbound" rule for any "inside" IPv6 addresses. So game servers may still benefit from an outside coordination server (so bidirectional traffic can flow), but unsolicited inbound traffic to web servers would still require some configuration at the firewall.
Absolutely, in the P2P case it is fantastic. IPv6 effectively guarantees a direct connection for WebRTC audio/video, VPNs like Tailscale, etc. There is still a place for video services that provide a Selective Forwarding Unit, though: once you get 5+ people in a video chat, you want a server in the middle to mediate and rescale video to manage the experience for all participants. But for sure, it is better for everyone when 1-on-1 chats can establish the connection without an intermediary, and that underscores what the Internet is all about.
I've had some gripes with Comcast/Xfinity in the past (as many have) but I feel like they are in the lead as far as residential IPv6 deployment (on by default), and I was originally using their own gateway, which as you mention just works with IPv6.
When I switched to my own modem and router (Arris and Ubiquiti/Unifi), I really wanted to dig in and understand IPv6 thoroughly. The modem acts as a bridge and the router gets a single /128 address, and then uses IPv6 Prefix Delegation (PD) over that link to request additional address space for clients (from a different subnet).
The Xfinity gateway only has one local network to support so it requests a single /64 PD, and then clients can use SLAAC (and optionally the privacy extensions) to acquire one or more addresses out of the /64.
When I switched to the Unifi equipment, through some trial and error I found out I could request up to a /60 from Xfinity. Some ISPs will do more, some will do less. No way to really tell, just request larger prefixes and see what you end up getting. Anyway, my /60 gave me 16x /64s to play with. It is wild that my address space is 68,719,476,736 times larger than the entire IPv4 address space.
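Python's ipaddress module makes it easy to sanity-check the carving; the prefix below is from the documentation range, standing in for the real delegation:

    import ipaddress

    # Stand-in for the delegated /60 (documentation prefix, not my real one).
    pd = ipaddress.ip_network("2001:db8:abcd:ff00::/60")

    subnets = list(pd.subnets(new_prefix=64))
    print(len(subnets))             # 16 /64s to hand out
    print(subnets[0], subnets[-1])  # 2001:db8:abcd:ff00::/64 ... 2001:db8:abcd:ff0f::/64

    # The /60 spans 2**68 addresses; all of IPv4 is 2**32.
    print(2 ** (128 - 60) // 2 ** 32)  # 68719476736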
I have a few VLANs, each of which gets assigned a /64 out of the /60, but even if I'm not using all 16 of them, Xfinity's routing table will send the entire /60 to me. So beyond my VLANs and directly-connected devices, I have the rest of the /60 to use for VMs, Kubernetes pods, etc., and I can add routes to direct that traffic to its next hop. It was a learning curve, and a little unsettling that every VM or pod has a publicly routable address. But NAT != firewall: the router's stateful firewall still blocks unsolicited inbound connections, and not having to deal with NAT is very cool, even though many networking people have it ingrained that private devices should have private addresses.
This is free filing with "trusted partners," but the IRS also has their Free File Fillable Forms (not open yet) which have no income limit. The UI is a little rough, but matches the official IRS forms.
I got mad enough paying H&R Block a couple years ago and decided to download the fillable PDFs and do it manually. It was a good learning experience. Since I'm filling out the PDF forms anyway I wish I could just upload those to FFFF, instead of having to re-enter everything manually.
Still, it's better than my state (Indiana), where I can't file online without a third party. It does bring me some satisfaction that I fill out the PDF forms, print them, mail them to the state, and someone (probably) gets to type it all back into a computer. Efficient.
I've been using free fillable forms for years. Luckily my state also uses them. One year they didn't and I filed paper (by filling out PDFs, as well), and wrote a letter to the department of revenue promising to file paper returns every year they didn't have a free efile option.
And the next year they brought it back. Because of me, I'm sure. ;)
But until the IRS has full service online, I'm not paying a third party, on principle. The only time I shell out is if it gets too complex, and then I pay an accountant.
This is an incredible collection and I'm very impressed by the research folder and the discovery of all of these.
I remember being amazed by the Reaper Bot and how smart it seemed, probably a result of how novel it was compared to other bots of the time. I liked UT even more than Quake though, and I thought it was cool that Epic offered Steve Polge a job based off of it.
I've dumped (wasted? invested?) many hundreds of hours into researching old Quake bots, contacting the authors, trying out mods, parsing readmes, writing essays about them, etc. It's a type of hyperaddictive nostalgic madness that strikes me from time to time :)
Quake 1 and 2 had so many awesome mods, including bots. As a kid I think I hoarded 13GB of stuff, mostly over a dial-up connection, just to explore what was possible.
Side note: would anyone know of similar archives for, say, Warcraft II AI mods, or original StarCraft AI maps or mods? I remember finding efforts by an individual or a team that were insanely good.
At work we have hundreds of ALIX/APU boards in the field and they have been great, with a very low hardware failure rate (occasionally a NIC going bad). We will probably make a final order to carry us through, and then it's back to the drawing board.
I also have fond memories of my personal Soekris box running m0n0wall in the early 2000s (I think back then pfSense was more bloated/slow on limited hardware). My experience with that setup was definitely what made me consider the PC Engines equipment for work.
Similarly I have also switched to Ubiquiti for home use.
I live in rural Indiana and our REMC rolled it out a few years ago: 1Gbps up/down for $95.00, no additional taxes or fees. I am paying only slightly more than I was paying for 6Mbps DSL, which always had signal quality issues. Maybe there are reasons the phone company couldn't do better, but it seems like the co-op's focus is on delivering good service, not on how much money they can extract from people who otherwise don't have a choice. Last I heard, the telco was finally looking into FTTH (now that there is some competition?).
My installer explained that the USDA subsidized a portion of the co-op's build-out, since rural areas are often underserved by broadband providers. I don't know all the specifics but I genuinely hope that it's sustainable in the long term for them (even if the subsidies weren't there). The article indicates that the co-op (or their partner at least) has done it with FCC money, from the Rural Digital Opportunity Fund. I know there's some balance between "well it's your fault you live in the middle of nowhere" and "we must deliver broadband to every American, at any cost," but I'm glad these programs exist so there's a chance of some competition/option in currently-underserved areas.
I have this problem on my 2006 Civic. I wondered if it was a GPS week rollover issue. Week numbers are 10 bits and roll over every 1024 weeks[0]. Jan 2nd, 2022 was the start of week 143 on the 2019 epoch. They may have programmed it such that "for week numbers 143-1023, assume the 1999 epoch; for week numbers 0-142, assume the 2019 epoch." Without having the clock "remember" the latest year it has seen from GPS (e.g. by writing it into EEPROM), it would be limited to about 19.6 years of useful life. It's a matter of choosing which epoch they wanted to associate each group of week numbers with.
There's something more to it though, since if it was using the wrong epoch, the date would be off but the time would still be correct. Week 143 on the 1999 epoch would be May-ish 2002 but the observed behavior is that the clocks are all stuck around 4:00 on Jan 1, 2002 (and shown as a Sunday, when it was actually a Tuesday). Since the 1999 epoch started on Aug 22, 1999, I am wondering if that's another piece to the puzzle and why they suggest it will auto-correct in August.
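If you want to check the arithmetic, the two epoch start dates are the real GPS rollover dates and the rest is just date math. It also shows why the stuck date is puzzling: a wrong-epoch decode of week 143 lands in May 2002, not on the Jan 1, 2002 the clocks actually display.

    from datetime import date, timedelta

    # Start of week 0 for the second and third GPS week-number epochs.
    EPOCH_1999 = date(1999, 8, 22)
    EPOCH_2019 = date(2019, 4, 7)

    week = 143  # the 10-bit week number being broadcast in early January 2022

    print(EPOCH_2019 + timedelta(weeks=week))  # 2022-01-02, the correct date
    print(EPOCH_1999 + timedelta(weeks=week))  # 2002-05-19, a wrong-epoch decode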
It's an interesting problem and I'm curious about the root cause, but in reality this will just be the push I need to get a newer radio that can do CarPlay/Android Auto.