
I've been coding a lot of small apps recently, and going from local JSON file storage to SQLite has been a very natural progression as the data grows by orders of magnitude. A fully performant database that still feels as simple as opening and reading a plain JSON file. The trick you describe in the article is an unexpected performance buffer that'll come in handy when I start hitting the next bottleneck :) Thank you
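
Roughly what that progression looks like for me, as a minimal sketch (file names and schema are made up for illustration):

    import json
    import sqlite3

    # Before: everything lives in a single JSON file
    with open("data.json") as f:
        records = json.load(f)  # assumed to be a list of dicts

    # After: the same data in SQLite - still one local file, but queryable and indexable
    conn = sqlite3.connect("data.db")
    conn.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, payload TEXT)")
    conn.executemany(
        "INSERT INTO records (payload) VALUES (?)",
        [(json.dumps(r),) for r in records],
    )
    conn.commit()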

If you just ask it to find problems, it will do its best to find them - like running a while loop with no exit condition. That's why I put a breaker in the prompt, which in this case would be "don't make any improvements if the positive impact is marginal". I've mostly seen it do nothing and just summarize why, followed by some suggestions in case I still want to force the issue.
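
Something along these lines, paraphrased rather than my exact wording:

    Review this module and point out real problems.
    Don't make any improvements if the positive impact is marginal -
    in that case just say so and list anything I might want to revisit later.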

I guess "marginal impact" for them is a pretty random metric, which will be different on each run. Will try it next time.

Another problem is that they try to add handling for cases that never occur in my data. I have to explicitly say there's no need to generalize the handling. For example, my code handles PNG files, and they add JPG handling that never happens.


Did it create 200 CODE_QUALITY_IMPROVEMENTS.md files by chance?

Unfortunately, same with my XPS: it looked promising but turned to shit faster than I'd have expected.

What do you get with Claude Code that isn't already in Cursor? I've only used it in Cursor.

This screams AI, 100%

Did anyone test it on a 5090? I saw some 30xx reports and it seemed very fast.

Incredibly fast. On my 5090 with CUDA 13 (and the latest diffusers, xformers, transformers, etc.), 9 sampling steps, and the "Tongyi-MAI/Z-Image-Turbo" model I get:

- 1.5s to generate an image at 512x512

- 3.5s to generate an image at 1024x1024

- 26s to generate an image at 2048x2048

It uses almost all of the 32 GB of VRAM and nearly full GPU utilization. I'm using the script from the HF post: https://huggingface.co/Tongyi-MAI/Z-Image-Turbo
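
For reference, the setup is roughly the following - a sketch from memory rather than the exact HF script, so check the model card for the precise pipeline class and arguments:

    import time
    import torch
    from diffusers import DiffusionPipeline

    # trust_remote_code pulls in the model's custom pipeline class
    pipe = DiffusionPipeline.from_pretrained(
        "Tongyi-MAI/Z-Image-Turbo",
        torch_dtype=torch.bfloat16,
        trust_remote_code=True,
    ).to("cuda")

    t0 = time.perf_counter()
    image = pipe(
        prompt="a lighthouse at dusk, photorealistic",
        num_inference_steps=9,
        height=1024,
        width=1024,
    ).images[0]
    print(f"{time.perf_counter() - t0:.1f}s")
    image.save("out.png")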


Weird, even at 2048 I don’t think it should be using all your 32GB VRAM.

It stays around 26 GB at 512x512. I still haven't profiled the execution or looked much into the details of the architecture, but I would assume it trades memory for speed by creating caches for each inference step.
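
If I do profile it, this is probably where I'd start (assuming a diffusers-style pipeline object named pipe as in the sketch above; note that nvidia-smi also counts memory the caching allocator has reserved but not allocated):

    import torch

    torch.cuda.reset_peak_memory_stats()
    _ = pipe(prompt="test", num_inference_steps=9, height=512, width=512)

    peak_gb = torch.cuda.max_memory_allocated() / 1024**3   # memory actually allocated by tensors
    reserved_gb = torch.cuda.memory_reserved() / 1024**3    # what the caching allocator is holding
    print(f"peak allocated: {peak_gb:.1f} GB, reserved: {reserved_gb:.1f} GB")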

IDK. Seems odd. It's an 11 GB model; I don't know what it could be caching in memory.

Even on my 4080 it's extremely fast; it takes ~15 seconds per image.

Did you use PyTorch Native or Diffusers inference? I couldn't get the former working yet, so I used Diffusers, but it's terribly slow on my 4080 (4 min/image). Trying again with PyTorch now; it seems Diffusers is expected to be slow.

Uh, not sure? I downloaded the portable build of ComfyUI and ran the CUDA-specific batch file it comes with.

(I'm not used to using Windows and I don't know how to do anything complicated on that OS. Unfortunately, the computer with the big GPU also runs Windows.)


Haha, I know how it goes. Thanks, I'll give that a try!

Update: works great and much faster via ComfyUI + the provided workflow file.


Sounds plausible, but I guess it's something they would've confirmed had it been true.

Or it was ABS-CF but they forgot to dry the filament /s


Unless you know and trust person X, you don't want to authorize or interact with such contracts. Scammers will leave loopholes in the code so they can, for example, grab all the funds deposited to the contract.

Normal contracts that handle money would have safeguards that prevent the owner from touching balances that aren't theirs. But there are a billion creative attack vectors to bypass that, whether by that person X or any third party.
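
To make the kind of loophole concrete, here's a toy model in Python rather than Solidity - purely illustrative of the pattern, not real contract code:

    # Toy model of a contract's balance handling - illustrative only
    balances = {}  # depositor address -> deposited amount
    OWNER = "0xOWNER"

    def deposit(sender, amount):
        balances[sender] = balances.get(sender, 0) + amount

    def withdraw(sender, amount):
        # Safeguard: a caller can only take out what they deposited
        if balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        balances[sender] -= amount
        return amount

    def sweep(sender):
        # The loophole: the owner can drain everyone's deposits
        if sender != OWNER:
            raise PermissionError("owner only")
        total = sum(balances.values())
        balances.clear()
        return total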


The end effect certainly gives off an "understanding" vibe, even if the method of achieving it is different. The commenter obviously didn't mean it the way the human brain understands things.
