Hacker News | maliker's comments

Masterclass in turning a goodbye email into a "hire me after my next gig ends" pitch. I'm not being sarcastic; this is a great example of highlighting the value they added.

I wonder how hard it is to remove that SynthID watermark...

Looks like: "When tested on images marked with Google’s SynthID, the technique used in the example images above, Kassis says that UnMarker successfully removed 79 percent of watermarks." From https://spectrum.ieee.org/ai-watermark-remover



Berkeley National Lab did a great study on this recently [0]. Short answer on what's raised prices over the last 5 years (slide 22 in the linked doc): supply chain disruptions increasing hardware prices, wildfires, and renewable policies (ahem, net metering) that over-reimburse asset owners.

I'd love to be able to point at something that implicates data centers, but first I'd need to see the data. So far, no evidence. Hint: it would show up in bulk system prices, not consumer rates, which are dominated by wires costs.

[0] https://eta-publications.lbl.gov/sites/default/files/2025-10...


I can live with the different visual style, but iOS 26 has cost me about 30% of my battery life on an iPhone 14, even running all day on low power mode. It's horrendous. Hard to even get through one day on a charge now.


Yeah, I've never understood this for lithium-ion systems. Maybe some systems wire the cells in parallel or series differently to get different total max power outputs? But I don't expect that would affect cost either way.

With flow batteries there are definitely differences, since the power and energy components of the system can each be scaled independently of each other. I.e., if you need more total energy, just expand the amount of liquid electrolyte storage you have.
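A minimal sketch of the intuition above, using made-up example cell figures (all numbers are hypothetical, not from any real product): for identical Li-ion cells, the series/parallel arrangement trades voltage against capacity, but total energy and max power both scale with cell count, so rearranging the same cells changes neither.

```python
# Hypothetical per-cell figures for illustration only.
CELL_VOLTAGE_V = 3.6       # nominal voltage of one cell
CELL_CAPACITY_AH = 5.0     # capacity of one cell
CELL_MAX_CURRENT_A = 10.0  # max continuous discharge of one cell

def pack_stats(series: int, parallel: int) -> dict:
    """Pack-level figures for an S x P arrangement of identical cells."""
    voltage = series * CELL_VOLTAGE_V                     # series adds voltage
    capacity = parallel * CELL_CAPACITY_AH                # parallel adds capacity
    max_power = voltage * parallel * CELL_MAX_CURRENT_A   # W
    energy = voltage * capacity                           # Wh
    return {"voltage_V": voltage, "capacity_Ah": capacity,
            "max_power_W": max_power, "energy_Wh": energy}

# The same 100 cells arranged two ways:
a = pack_stats(series=20, parallel=5)
b = pack_stats(series=10, parallel=10)
assert a["energy_Wh"] == b["energy_Wh"]      # same total energy either way
assert a["max_power_W"] == b["max_power_W"]  # same max power either way
```

Flow batteries break this coupling: power is set by the stack size and energy by the electrolyte tank volume, so the two can be bought separately.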


Interesting. What LLM model? 4o, o3, 3.5? I had horrible performance with earlier models, but o3 has helped me with health stuff (hearing issues).


Whichever the default free model is right now. I stopped paying for it when Gemini 2.5 came out in Google's AI Studio.

4o, o4? I'm certain it wasn't 3.5

Edit: while logged in


> Whichever the default free model is right now

Sigh. This is a point in favor of not allowing free access to ChatGPT at all given that people are getting mad at GPT-4o-mini which is complete garbage for anything remotely complex... and garbage for most other things, too.

Just give 5 free queries of 4o/o3 or whatever and call it good.


If you're logged in, 4o; if you're not logged in, 4o-mini. Neither scores well on the benchmark!


This gets at the UX issue with AI right now. How's a normie supposed to know and understand this nuance?


Or a non-normie. Even while logged in, I had no idea what ChatGPT model it was using, since it doesn't label it. All the label says is "great for everyday tasks".

And as a non-normie, I obviously didn't take its analysis seriously, and compared it to Grok and Gemini 2.5. The latter was the best.


Added context: While logged in


Might be worth trying again with Gemini 2.5. The reasoning models like that one are much better at health questions.


Gemini 2.5 in AI Studio gave by far the best analysis


I can’t believe you’re getting downvoted for answering the question about the next-token-predictor model you can’t recall using.

What is happening?


This is awesome, and as a bonus I learned about a mature reactive notebook for Python. Great stuff.

The data sharing is awesome. I previously used Google Colab to share runnable code with non-dev coworkers, but their file support requires some kludges to get it working decently.

I know I should just RTFM, but are you all working on tools to embed/cross-compile/emulate non-python binaries in here? I know this is not a good approach, but as a researcher I would love to shut down my server infrastructure and just use 3-4 crusty old binaries I rely on directly in the browser.


Now that there are great GPU-accelerated remote desktop options, I mostly just remote into more powerful machines. Even a country away, the on-screen performance is almost like sitting at the machine, and as a bonus I don't hear every fan on my laptop going crazy. I've been a happy Parsec.app user for a while, but there are many other options (e.g. RustDesk has this).


I've been waiting for this to get good enough. Can any of these apps do passthrough of USB/webcam?


Looks like it's not supported in RustDesk or Parsec, but there are other tools that will do it [1].

[1] https://github.com/rustdesk/rustdesk/discussions/6014


People can't get into the OpenAI round so they buy Anthropic maybe?


It's been fascinating to watch the open source community build out vector map tile capabilities. I was doing some web GIS work back in roughly 2018, and Google/Apple's streaming vector maps performed like magic and something we would have loved to use if we could afford it. Shortly thereafter the core tech was available in open source, and then there were even free hosted solutions. Now our leaflet maps have great vector layers for free. Thanks open source!


I'm a little surprised it's taken this long for OSM to get there, the basic technical pieces were available over a decade ago. I don't mean to complain about the free map service, it's excellent, and I recognize they focus more on the editing and data ownership. Serving is hard and expensive.

Mostly I wonder how much MapBox's dominance for a few years disrupted other efforts.


The new stack has some unique features, like the vector tiles being updated minutely directly from OSM mapping changes.

There are still issues to fix, as it is only a technical preview.

