Topgamer7's comments | Hacker News

Yeah, I started with Memories for Nextcloud. But it was buggy/slow, unfortunately.

Being able to scroll to dates with Immich is golden. And the facial recognition is on-device and works great.


I don't have that experience with Nextcloud Memories.

Everything works well, it's comparable in speed to Google Photos for me, and scrolling to specific dates works fine.

How long ago did you try it? I've only been using it for a few months so maybe it's improved over time.


I always enjoyed Garry's blog.

It just seemed like a public diary, and a place to vent about dev, life, whatever. He seems to be unapologetically himself.

Although I was pretty sure there used to be more posts (maybe I'm conflating his posts there with his contributions to his old forums).

https://garry.net/posts/


Today's game of: was it AI generated?

> The compact size of the Mac mini, which packs a powerful System on a Chip (SoC) into a tiny footprint. The energy efficiency of Apple silicon (M-series) chips, which allows high density without overheating or excessive power draw.

This really adds nothing to the article, and looks like AI fluff to me.

Combine that with there being a bold section in like every single paragraph, and I'm going to assume yes.


The thing that got me was always referring to Scaleway in the third person. e.g. this read like the response I get when I ask AI to review code:

> Scaleway’s solution to that problem was ingenious: embedding a Raspberry Pi module with each Mac mini.

(I realize this may be an artifact of a corporate style guide, but I'd much prefer "Our solution to that problem was embedding . . ." Both because the "was ingenious" doesn't add a ton and reads like puffery and because this is Scaleway's own blog and referring to yourself in the third person is grating.)


To me, it just reads like their marketing person's first language is not English. Which tracks, because I believe the whole company is based in France.

It doesn't? If you didn't know those two things, they seem highly relevant to the subject being discussed. They define SoC, which might be an acronym you've known since high school (I did, but I'm a total nerd), and it explains why they use Mac minis instead of what usually gets used.

As to whether it was AI generated or not, who cares? It's useful information if you didn't know it already, and if those words came out of matrix math or someone non-technical with a BS in communications, does it really matter to you? Are you going hungry tonight because the money that went to creating those words went to Nvidia and not Sarah in Marketing? Sarah in Marketing might be out of a job soon, but her boyfriend has a good job that's not threatened by AI, so I hope she'll be fine, but I don't know. Is that the underlying worry here?

There is an em dash in the article though; you didn't think to call that out too?


OTOH, empty human-generated marketing slop has been around for a LOT longer than empty AI marketing slop and also reads exactly like this.

If it smells like AI…

I find myself struggling to connect. I feel like we live in a dystopian period where it's too easy to sit and doom scroll, and most people don't find enough value in just spending time getting to know each other, or economic pressures make social endeavours too expensive for many people I know.


I feel you.

One thing that has helped me has been to become less of a lurker in forums/social media spaces where they discuss topics I ACTUALLY enjoy.

Seems obvious, yet I've found it to be the answer to "doom scrolling". When you doom scroll, you're looking for something to pique your interest but find nothing...

The crazy thing is, you ALREADY know what you like!

Music? Try to engage with your favorite artists, musicians, fan clubs, album reviews, listening parties (bandcamp.com)...

Space photos? Look to forums, telescope videos, equipment YouTube channels, star parties (IRL).

Welding? Conspiracy theories? Whatever interests you, I guarantee there is more niche content out there than you could ever possibly scroll. More than you could ever connect with. But you must connect. (And if you do ever find the end, you can always be the fire starter for newer content!)

But you are correct, times are bleak and possibly getting bleaker. And that makes the algorithm happy.

That's why connection (as you said) is more important now than ever. So find a place you can connect.

(HN is just ONE of the many places I do - random username and everything ;) )


Which part of this was racist? Did you even bother to even skim the article?


I find his videos very interesting. And I'm always a proponent of open systems and processes.

I will say that his presentation style always tilts me a bit. It's that his laugh/excitement always seems forced/fake.


And crash zooms in on his face/mouth/nostrils.

I was keeping up with them for a while, putting up with it/skipping bits (why am I a goblin or ghoul? What does that mean?) but had lapsed and forgotten about it until this submission.


I completely agree. I can't handle watching his videos for long, for similar reasons.

However, besides the personal dislike, I think it's worthwhile to stop giving so much merit based on advertising "open source" or "effort" or "presentation" etc., because frankly many of these YouTube "makers" and the "maker community" are misguiding a lot of people into bad designs and wasted time and resources. People ought to value correctness and quality a bit more, lest our things become even more enshittified than they are now. One would think that hobbies would be a refuge from disposable low-quality shit... yet we get the RPi Pico et al (which are arguably getting less shitty but still laughable compared to actually good MCU products) and people who claim to "outdo the big corp" by using a Raspberry Pi with an SDR dongle and saying they achieved $50,000 of capability with $60 in parts.

Case in point: the Opulo PNP systems are significantly overpriced and have amateurish mechanical as well as circuit designs that are worse than cheaper systems like the Pandaplacer in terms of reliability and performance.


Explain how the RP Pico (RP2040) is "low quality shit" compared to "actually good MCU products" (which are those supposed to be?)

I agree that Opulo PNPs are overpriced though, but I'm sure people getting these are aware that it's just a bunch of 20x20 aluminium profiles, 3D printer MGN rails, a basic DC pump, etc., and parts-wise there's nothing to justify the price. They get them anyway because of enthusiastic community engagement and support aspects, most importantly in English. And it probably works just fine for the small-scale projects they are used in.


I'm referring to their collective effort in MCUs, including the RP2040 and RP2350. There are a bunch of issues that are now addressed but that permanently make me distrustful of them, especially when combined with how uninteresting the hardware is:

- Broken ADC design on RP2040 with nonlinearity at certain codes (and they're not fixing it)

- Shipping chips with an exact inductor part number and specific DC-DC layout requirements (like come on, the Chinese parts are advertising zero decoupling capacitors required and you can route the USB right under the chip in funky shapes and everything "just works"... meanwhile RPi is doing this)

- GPIO current leakage (fixed with a stepping but I would hate to be those who bought a reel of the earlier stepping)

"Actually good MCU products" in my opinion are those with at least a reason to exist. For example the ubiquity of STM32, the radios of ESP32, the high compute of i.MX RT1172, the cheapness of PY32 et al, the low power of Ambiq chips, the reliability of Atmel/Microchip, the USB3 on CH569, the potential true MCU-level-SoC-capability on AG32, etc. When compared with these, RP chips are frankly not innovative at all (PIO does not unlock much actual capabilities besides party tricks). Combined with the general culture of people hyping RPi Pico chips, it results in a culture of ignorance and hype.

Errata alone aren't a big deal, but the fact that they're happening with such basic things and for no innovation is not a good sign. We shouldn't give RPi a pass just because "it's the good old RPi that we know".


So the faulty ADC (nonlinearity issues) of the RP2040 is the only thing you can list, and that makes it "low quality shit"?

I implore you to open up the errata sheet of stm32g4, just the ADC section alone (or frankly any stm32 mcu) (https://blog.mjbots.com/2023/07/24/stm32g4-adc-performance-p...), and that's an MCU series with focus on analog peripherals.

STM32 chips are plagued with all sorts of issues and hardware bugs that are very easy to run into. In comparison, the RP2040 has surprisingly few major defects apart from its ADC implementation.

I see no mention of an exact inductor part number requirement in their hardware design guide, are you making shit up now? They are somewhat more particular about oscillator selection, and unfortunately don't include a factory-trimmed RC oscillator like most MCUs do these days.

> PIO does not unlock many actual capabilities besides party tricks

Ok, so you've no idea what you're talking about.

The RP2040 is widely used in many projects because it has insane bandwidth for an MCU in its price category. It can do 4 x 32-bit reads/writes per cycle (if those ops are spread across the 4 x 64 KB memory banks), at a 200 MHz base clock, which gives a theoretical maximum of 3.2 gigabytes per second of bandwidth. That is pretty crazy.
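
(For what it's worth, a rough back-of-envelope check of that 3.2 GB/s figure, purely as a sketch; the bank count, word size, and clock are taken from the comment above, not measured on hardware:)

    # Sketch only: verifies the arithmetic behind the quoted theoretical peak.
    banks = 4                 # striped SRAM banks accessed in parallel
    bytes_per_access = 4      # 32-bit bus width
    clock_hz = 200_000_000    # 200 MHz base clock assumed above
    peak_bw = banks * bytes_per_access * clock_hz
    print(peak_bw / 1e9)      # -> 3.2 (GB/s, theoretical peak)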

This enables you to interface with or easily emulate many high-speed interfaces, and do things like 24-channel 400 MHz logic analyzers and similar. And this is what they are commonly used for (emulating memory cards, etc.).

And that's a 60-cent MCU. In this price range, MCUs don't have 264 KB of SRAM and 133/200 MHz clocks, much less two cores, and can't push anywhere remotely near this insane amount of bandwidth.

The RP2040 additionally has human-friendly and readable documentation, with truckloads of examples, and an API that's pleasant to use (which can't exactly be said about STM32 reference manuals and APIs).

While it is not perfect (the RP2040's ADC, and the lack of encryption), some of those shortcomings have already been addressed in the RP2350, with double the SRAM (520 KB at this price point!), floating point, even more PIO, improved DMA channels, and so on.

Cheap PY32, GD32, APM32, etc. are cool, but they're just generic 32-bit Arm M0/M4 parts. A 10-cent 24 MHz M0 Puya with 3 KB of SRAM isn't particularly impressive when put next to a 60-cent RP2040 with 80x the SRAM, etc.

> Combined with the general culture of people hyping RPi Pico chips, it results in a culture of ignorance and hype.

You haven't opened an errata sheet of STM32 chips even once and you talk about ignorance.

rp2040/rp2350 are unironically one of the best MCUs on the market (esp. in their niche), both in documentation/API and price/perf and features/flexibility in doing highspeed interfaces/bandwidth.


I have read the docs, and like I said the point of STM32 is ubiquity. It's not a great design in other respects, but it was once ahead of the curve, and that made it ubiquitous, which made it king for longevity. There is no room for another "king" on that throne. Especially counting all the clones of STM32, it is basically a forever design.

Comparing a 60-cent chip to a 10-cent chip is itself crazy work. That's like a whole three strata apart in terms of capability. Damningly, you are forgetting about the cost of the external flash that it requires, when program flash is the main cost of MCUs. It shows you don't have much experience with this stuff.

> I see no mention of an exact inductor part number requirement in their hardware design guide, are you making shit up now?

LMFAO go read the literal datasheet page 455 https://datasheets.raspberrypi.com/rp2350/rp2350-datasheet.p...

They literally had to "work with Abracon to create a custom 2.0×1.6mm 3.3μH polarity marked inductor" like wtf

Besides the fact that it looks like you weren't one of the early adopters (since RPi shipped one Abracon inductor with every RP2350 for a bit), you also clearly haven't designed a board with the chip in question.

> a theoretical maximum of 3.2 gigabytes per second of bandwidth. That is pretty crazy.

This is what I'm talking about: like, honestly, what capability does that unlock for you besides party tricks? Can you name anything meaningful besides "logic analyzer" and "some memory card"? Even disregarding that, what can you do with such throughput if you are bottlenecked by USB 1 speeds and a core without an FPU? It doesn't come close to being able to do interesting things like LVDS ADCs or actual high-speed memory interfaces because of the bit width requirement, yet people will go into a frothing frenzy should you dare insinuate that the RPi Pico might be kinda useless.

> rp2040/rp2350 are unironically one of the best MCUs on the market (esp. in their niche), both in documentation/API and price/perf and features/flexibility in doing highspeed interfaces/bandwidth.

As you might surmise, I disagree. Go make some actual projects instead of "reading the docs" all day (though I must admit I do the same). Also, it sure looks like our definitions of high speed differ by a wide margin.


> Damningly, you are forgetting about the cost of the external flash that it requires, when program flash is the main cost of MCUs. It shows you don't have much experience with this stuff.

If you had any experience "with this stuff", you'd know 16 Mbit of QSPI flash (compatible with the RP2040) costs 7-8 cents in volume, and 64 Mbit 12 cents or less. And you'd calm your tits. It is okay.

> Besides the fact that it looks like you weren't one of the early adopters

If you had any experience "with this stuff", you'd know better than to buy reels of MCUs on rev1/rev2 that haven't been on the market for at least a year or two.

> bottlenecked by a core without FPU

Why would the lack of an FPU impact bandwidth? The lack of an FPU is a non-issue with fixed-point math in most cases.

> I have read the docs, and like I said the point of STM32 is ubiquity

And yet during the chip shortages, the RP2040 was one of the few MCUs without stock issues or crazy prices... in fact, I've never seen it out of stock. STM32 on the other hand... ouch. Fun times!

> This is what I'm talking about: like, honestly, what capability does that unlock for you besides party tricks?

So every capability and use-case that doesn't tickle your zoomer sensibilities is a party trick?

Okay.


Whenever someone brings up "AI", I tell them AI is not real AI. Machine learning is a more apt buzzword.

And real AI is probably like fusion. It's always 10 years away.


The best part of this is that I watched Sam Altman, in response to a question about energy consumption a couple of years ago, say he really thinks fusion is only a short period of time away. That was the moment I knew he's a quack.


Not to be anti-YC on their forum, but the VC business model is all about splashing cash on a wide variety of junk that will mostly be worthless, hyping it to the max, and hoping one or two turn out like Amazon or Facebook. He's not an engineer; he's like Steve Jobs without the good parts.


Altman recently said, in response to a question about the prospect of half of entry-level white-collar jobs being replaced by "AI" and college graduates being put out of work by it:

> “I mean in 2035, that, like, graduating college student, if they still go to college at all, could very well be, like, leaving on a mission to explore the solar system on a spaceship in some completely new, exciting, super well-paid, super interesting job, and feeling so bad for you and I that, like, we had to do this kind of, like, really boring old kind of work and everything is just better."

Which should be reassuring to anyone having trouble finding an entry-level job as an illustrator or copywriter or programmer or whatever.


So STNG in 10 years?

edit: Oh. Solar system. Nvm. Totally reasonable.


Fusion is 8 light-minutes away. The connection gets blocked often, so methods to buffer power for those periods are critical, but they're getting better so it's gotten a lot more practical to use remote fusion power at large scales. It seems likely that the power buffering problem is easier to solve than the local fusion problem, so more development goes to improving remote fusion power than local.


Sam is an investor in a fusion startup. In any case, how long it takes us to get to working fusion depends on how much funding it receives. I'm hopeful that increased energy needs will spur more investment into it.


He had to use distraction because he knows that he is playing a part in increasing emissions.


Fusion is known science while AGI is still very much an enigma.


> Whenever someone brings up "AI", I tell them AI is not real AI.

You and also everyone since the beginning of AI. https://quoteinvestigator.com/2024/06/20/not-ai/


People saying that usually mean it as "AI is here and going to change everything overnight now" yet, if you take it literally, it's "we're actually over 50 years into AI, things will likely continue to advance slowly over decades".

The common thread between those who take things as "AI is anything that doesn't work yet" and "what we have is still not yet AI" is "this current technology could probably have used a less distracting marketing name choice, where we talk about what it delivers rather than what it's supposed to be delivering".


Machine learning as a descriptive phrase has stopped being relevant. It implies the discovery of information in a training set. The pre-training of an LLM is most definitely machine learning. But what people are excited and interested in is the use of this learned data in generative AI. “Machine learning” doesn’t capture that aspect.


But the things we try to make LLMs do post-pre-training are primarily achieved via reinforcement learning. Isn't reinforcement learning machine learning? Correct me if I'm misconstruing what you're trying to say here


You are still talking about training. Generative applications have always been fundamentally different from classification problems, and have now (in the form of transformers and diffusion models) taken on entirely new architectures.

If “machine learning” is taken to be so broad as to include any artificial neural network, all of which are trained with back propagation these days, then it is useless as a term.

The term “machine learning” was coined in the era of specialized classification agents that would learn how to segment inputs in some way. Think email spam detection, or identifying cat pictures. These algorithms are still an essential part of both the pre-training and RLHF fine-tuning of LLM models. But the generative architectures are new and very essential to the current interest in and hype surrounding AI at this point in time.


It's a valid term that is worth introducing to the layperson IMO. Let them know how the magic works, and how it doesn't.


Machine learning is only part of how an LLM agent works though. An essential part, but only a part.


I see a fair amount of bullshit in the LLM space though, where even cursory consideration would connect the methods back to well-known principles in ML (and even statistics!) to measure model quality and progress. There's a lot of 'woo, it's new! we don't know how to measure it exactly but we think it's groundbreaking!' which is simply wrong.

From where I sit, the generative models provide more flexibility but tend to underperform on any particular task relative to a targeted machine learning effort, once you actually do the work on comparative evaluation.


I think we have a vocabulary problem here, because I am having a hard time understanding what you are trying to say.

You appear to be comparing apples to oranges. A generation task is not a categorization task. Machine learning solves categorization problems. Generative AI uses models trained by machine learning methods, but in a very different architecture, to solve generative problems. Completely different and incomparable application domains.


I think you're overstating the distinction between ML and generation - plenty of ML methods involve generative models. Even basic linear regression with a squared loss can also be framed as a generative model derived by assuming Gaussian noise. Probabilistic PCA, HMMs, GMMs etc... generation has been a core part of ML for over 20 years.
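
(A sketch of that standard equivalence, for anyone who hasn't seen it; this is the usual textbook derivation under a Gaussian-noise assumption, not anything specific to the comment above:)

    % Linear model with Gaussian noise:
    y_i = w^\top x_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2)
    % Negative log-likelihood of the whole dataset:
    -\log p(y \mid X, w)
      = \frac{1}{2\sigma^2} \sum_{i=1}^{n} \bigl(y_i - w^\top x_i\bigr)^2
      + \frac{n}{2} \log\bigl(2\pi\sigma^2\bigr)
    % So maximizing the Gaussian likelihood over w is exactly minimizing squared loss:
    % the "generative model" view and the "curve fitting" view coincide here.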


And yet, people very often find themselves using generative models for categorization and information retrieval tasks...


How does "it's called machine learning not AI" help anyone know how it works? It's just a fancier sounding name.


Because if they're curious, they can look up (or ask an "AI") about machine learning, rather than just AI, and learn more about the capabilities and difficulties and mechanics of how it works, learn some of the history, and have grounded expectations for what the next 10 years of development might look like.


They can google AI too... Do you think googling "how does AI work" won't work?


AI is an overloaded term.

I took an AI class in 2001. We learned all sorts of algorithms classified as AI, including various ML techniques, which in turn included perceptrons.


That was an impressive takeaway from the first machine learning course I took: that many things previously under the umbrella of Artificial Intelligence have since been demystified and demoted to implementations we now just take for granted. Some examples were real-world map route planning for transport, locating faces in images, and Bayesian spam filters.


Back in the day, alpha-beta search was AI, hehe.


When I was a young child in Indonesia, we had an exceptionally fancy washing machine with all sorts of broken English superlatives on it, including "fuzzy logic artificial intelligence", and I used to watch it doing the turbo spin or whatever, wondering what it was thinking. My poor mom thought I was retarded.


My rice cooker also has fuzzy logic. I guess they just use floats instead of bools.
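
(Roughly what "floats instead of bools" means, as a toy sketch; the function, temperatures, and thresholds below are made up for illustration, not anything a real cooker runs.)

    # Toy fuzzy-logic membership: a degree between 0.0 and 1.0 instead of a
    # hard on/off threshold, which is then used to blend the output.
    def too_hot(temp_c, low=95.0, high=105.0):
        """Degree of membership in 'too hot' (0.0 = not at all, 1.0 = fully)."""
        if temp_c <= low:
            return 0.0
        if temp_c >= high:
            return 1.0
        return (temp_c - low) / (high - low)

    heater_power = 1.0 - too_hot(101.0)   # -> 0.4, a partial setting rather than on/off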


Andrew Ng has a nice quote: “Instead of doing AI, we ended up spending our lives doing curve fitting.”

Ten years ago you'd be ashamed to call anything "AI," and you'd say machine learning if you wanted to be taken seriously, but neural networks have really brought back the term, and for good reason, given the results.


Arguing about the definitions of words is rarely useful.


How can we discuss <any given topic> if we are talking about different things?


Well, that's rather the point: arguing about exceptionally heavily used terminology isn't useful because there's already a largely shared understanding. Stepping away from that is a huge effort, unlikely to work, and at best all you've done is change what people mean when they use a word.


The point is to establish definitions rather than argue about them. You might save yourself from two pointless arguments.


Except AI already had a clear definition well before it started being used as a way to inflate valuations and push marketing narratives.

If nothing else it's been a sci-fi topic for more than a century. There's connotations, cultural baggage, and expectations from the general population about what AI is and what it's capable of, most of which isn't possible or applicable to the current crop of "AI" tools.

You can't just change the meaning of a word overnight and toss all that history away, which is why it comes across as an intentionally dishonest choice in the name of profits.



And you should do some reading into the edit history of that page. Wikipedia isn't immune from concerted efforts to astroturf and push marketing narratives.

More to the point, the history of AI up through about 2010 talks about attempts to get it working using different approaches to the problem space, followed by a shift in the definitions of what AI is in the 2005-2015 range (narrow AI vs. AGI). There was plenty of talk about the various methods and lines of research that were being attempted, but very little about publicly pushing to call commercially available deliverables AI.

Once we got to the point where large amounts of VC money was being pumped into these companies there was an incentive to redefine AI in favor of what was within the capabilities and scope of machine learning and LLMs, regardless of whether that fit into the historical definition of AI.


I do not care what anyone thinks the definition is, nor should you.


AI is whatever is SOTA in the field; it always has been.


AI is in the eye of the beholder.


I haven't run Windows in half a decade at this point.

I had Win11 running in a VM, and just the amount of ads it would show in the taskbar or notifications at idle left me flabbergasted.


While using Windows 11 on my gaming PC (my only Windows device), I used a debloat script [1] to keep it free of all the garbage it came with. I eventually moved to EndeavourOS and never looked back.

[1] https://github.com/Raphire/Win11Debloat


They keep putting new stuff in as well!

Most recently, ads on your lock screen, which it isn't obvious how to disable.


And, even when you do find out how to disable them, they randomly re-enable them after updates or reboots just so you stay miserable.


Ah yes, the Facebook dark pattern. "Oops we reset it to defaults! Again! So sorry!"

Except nobody apologizes anymore; we've gotten used to it.


What's more flabbergasting is that there are no real alternatives to Windows that are as accessible (price-wise and technically speaking). Macs are too expensive. Linux is out of the question for the average user since drivers tend not to work out of the box on several machines, and more importantly because Microsoft Office does not work on Linux. Most people (who aren't retired) need a computer where they can create and edit documents (pptx, docx, xlsx) that they can share with others. Linux prevents them from doing this. Using only Google Docs has not caught on for the average user, sadly.


> Using only Google Docs has not caught on for the average user, sadly.

I'm not sure I'd agree. Most people I know use Google Docs by default since nearly everyone can access it and, unlike M365, it's free.


But most enterprises that I've had experience with use Microsoft Office and not Google Sheets?


> because Microsoft Office does not work on Linux.

Hasn't Microsoft Office (or whatever concoction of 365, Copilot and other enterprise-ish words they call it today) been turned into a web app over the last 15 years? Isn't it perfectly usable for like 99% of users?


Indeed, I'm regularly dumbfounded that enterprise users put up with this. In a business environment, it would have been unthinkable 20 years ago.


They should be using Windows 11 Enterprise, where those features can be disabled easily with Group Policy.


> implying the outsourced IT dept. or anyone with authority over them gives a fuck anymore


Enterprise licenses have some of this disabled by default. At minimum, a competent IT group can configure these by group policy.


I remember when the domain started redirecting to blenderartists.org.

I used to enjoy doing the speed modelling challenges :)


Y'all are crazy. What is even the possible value in this?


Not who you're talking to, but there is none. Browsers have had "Bookmark all tabs" functionality for ages, which completely replaces tab hoarding. Especially now that the content of the page you visited 5 months ago isn't actually loaded in memory. It's basically a bookmark: switch to the tab and the content is reloaded.


Yes, I am aware of bookmarks, and as someone who used to use them quite a lot, I'm aware of their limitations. Some things are just ephemeral and should remain so. Browser search is great. As you mentioned, tabs lazy load so the main functionality is the same, so it's presumptuous to assume I get no value out of my organizational strategy.

