spyder's comments

Cool, but there are already sleep-analysis apps for that, like Sleep as Android and probably many other similar apps.

There are apps that capture noises over a certain threshold, write them to disk and make it easy to review them?
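The capture-over-threshold idea described above is easy to sketch. A minimal, hypothetical version (all names and the RMS-per-window approach are my own illustration, not any particular app's implementation), operating on a list of audio samples:

```python
import math
import random

def loud_segments(samples, rate=16000, threshold=0.2, window=0.5):
    """Return (start, end) sample indices of fixed-size windows whose RMS
    loudness exceeds the threshold; these are the clips worth saving."""
    win = int(rate * window)
    segments = []
    for start in range(0, len(samples) - win + 1, win):
        chunk = samples[start:start + win]
        rms = math.sqrt(sum(s * s for s in chunk) / win)
        if rms > threshold:
            segments.append((start, start + win))
    return segments

# Quiet background noise with one loud burst: only the burst window is kept.
random.seed(0)
audio = [random.gauss(0, 0.01) for _ in range(48000)]
for i in range(16000, 24000):
    audio[i] += random.gauss(0, 0.5)
print(loud_segments(audio))  # → [(16000, 24000)]
```

A real app would read microphone buffers continuously and write each flagged segment to disk (e.g. as WAV) for later review.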

nah, it's probably worse: it could be some system prompt for their models...

it's probably regex, just to not burn money on checking

But how was it not destroyed by Russian drones? Did it shoot them down? Or did it have some anti-drone support unit helping it?


I have an educated guess.

Flying drones are lethal, but fairly random. Attacking a point target, which will have anti-drone support, is not what they are for.

And it's too cheap a target for a missile, which is the weapon for a point target.


District 9 vibes :-)


Nice, but it's weird that neither "language" nor "English" is mentioned on the GitHub page; only from the "Release multilingual TTS" roadmap item could I guess it's probably English-only for now.


Using physical analogies for virtual things is not the best choice. For example: would you give a copy of your bike, or a copy of your food, to the poor neighbor kid if you could copy it as easily and as cheaply as digital products?


For you... But the results are different for different users.

For me, Google shows the .net site first and the GitHub one second.

Asking ChatGPT 5.2 (Auto mode) to search for the nanoclaw site, it says the same: it links the .net site first and shows the GitHub as an optional page. When I try to give it a hint by asking "are you sure?", it even hallucinates that the site is linked from the GitHub repo:

"Yes — nanoclaw.net is the official documentation/site for the NanoClaw project, in the sense that it’s the project’s published homepage and is directly linked from its canonical open-source repository. It describes the project, features, installation steps, and links to the source code on GitHub, which is the authoritative source for the project’s codebase."

ChatGPT 5.2 (Thinking mode) and Claude get it right on the first try: they answer with the official .dev page first, and Claude shows the .net site second as "another site covering the project".


I was surprised by what you said, so I used a browser that's not logged in to a Google account, to compare. Indeed the fake site ranks #1! Dang!

I guess Google has my account in an autism bucket, so biases GitHub links higher ;)


Yeah, even if they could match human-level stereo depth perception with AI, why would they say "no" to superhuman lidar capabilities? Cost could be a somewhat acceptable answer if there were no problems with the camera-only approach, but there are still examples of its silly failures. And if I remember correctly, they also removed their other superhuman sensor from newer models: the radar that, in certain conditions, could sense multiple cars ahead by bouncing its signal underneath other cars.


Because they don't have superhuman LIDAR. They never did. Nobody ever did. LIDAR input is not completely reliable, so what do you do then?



Not the great answer you think it is.


It all depends on how cheap they can get. And another interesting thought: what if you could stack them? For example, you have a base model module, then new ones come out that can work together with the old ones and expand their capabilities.


It sounds similar, but it doesn't sound the same to me. Also, how would you determine how much similarity is allowed? If we had such a measure, they could use it in voice-model training to disallow too much similarity to any single voice; but if we don't have an agreed-upon value, it becomes a subjective "sounds the same to me" rule, and that's hard to follow. OK, they can say "don't train on his voice", but it's very likely that a blend of voices from an "allowed" set could produce a very similar voice to his.
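A measure of the kind described would most likely compare speaker embeddings. A hypothetical sketch (the embedding vectors, function names, and the 0.8 cutoff are all invented for illustration; real systems derive embeddings from a speaker-verification model):

```python
import math

def voice_similarity(emb_a, emb_b):
    """Cosine similarity between two speaker-embedding vectors, in [-1, 1]."""
    dot = sum(a * b for a, b in zip(emb_a, emb_b))
    norm_a = math.sqrt(sum(a * a for a in emb_a))
    norm_b = math.sqrt(sum(b * b for b in emb_b))
    return dot / (norm_a * norm_b)

def too_similar(emb_a, emb_b, cutoff=0.8):
    """Flag a candidate voice as too close to a protected one."""
    return voice_similarity(emb_a, emb_b) >= cutoff

print(too_similar([1.0, 0.0], [0.99, 0.05]))  # → True (nearly the same direction)
print(too_similar([1.0, 0.0], [0.0, 1.0]))    # → False (orthogonal voices)
```

The open question the comment raises is exactly the choice of cutoff: any fixed number is still an arbitrary line through a continuous similarity scale.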


