Hacker News | intheleantime's comments

When building AI coding tools, there are two common options: the AI suggests code and you run it, or the AI runs code on your actual machine. We decided to go with a third option: the AI gets its own computer.

Sandboxed Linux workspaces that CoChat LLMs spin up on demand. The agent writes code, installs dependencies, runs tests, and starts servers.

We are now running an automation that is triggered every time an error is logged in Grafana. The agent spins up a workspace to verify and fix the issue before creating a PR and an issue on GitHub. If it has already addressed the issue previously, it just tracks occurrences.

We get notified on Slack.
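As a rough sketch, the dedupe-then-fix loop described above might look like the following. All names here are hypothetical illustrations, not CoChat's actual API; the workspace and Slack steps are stubbed out.

```python
# Hypothetical sketch of the alert-driven flow: fingerprint each error,
# only spin up a fix workspace for fingerprints we haven't seen before.
import hashlib

seen_errors = {}  # fingerprint -> occurrence count

def handle_grafana_alert(alert: dict) -> str:
    """Dedupe an error alert; only new fingerprints get a workspace."""
    fingerprint = hashlib.sha256(alert["message"].encode()).hexdigest()
    if fingerprint in seen_errors:
        seen_errors[fingerprint] += 1  # already addressed: just track it
        return "tracked"
    seen_errors[fingerprint] = 1
    # In the real system the agent would now spin up a sandboxed
    # workspace, reproduce the error, push a fix branch, open a GitHub
    # PR + issue, and notify Slack; here those steps are placeholders.
    spin_up_workspace_and_fix(alert)
    return "workspace"

def spin_up_workspace_and_fix(alert: dict) -> None:
    pass  # placeholder for the sandboxed verify-and-fix step
```

The fingerprinting here is deliberately naive (a hash of the raw message); a real deduper would normalize stack traces first.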


Thank you, and great question. Right now, feedback is qualitative only (surveys, feedback buttons, controlled user tests). We are trying to build AI evaluators, but they suffer from the same problem when trying to evaluate whether the “right” memory was pulled.

Still trying to find a good solution here.


I am not sure how prevalent this use case is on your system, but in my sessions with ChatGPT, Claude web, and Claude Code, I often find myself in a situation where I enjoy the fact that it is stateless. I can give it a fresh context of who I am and get a suitable reply.


There is something to be said for that, I agree. For that reason you can turn off memory inside a chat thread, and also create temporary threads that do not use memory.


lol the I stands for “I think I know everything”


Thanks for the feedback! I agree with the general sentiment. At the same time we want to target companies that are aware of their neurodiverse workforce.


Thanks! There is so much more to come!


Hi, thanks for the question.

It is a simple PHP application using MySQL as the database backend. We use a domain-driven design architecture across the application and decided to skip slow ORMs in favor of a repository layer with hand-written SQL.
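Leantime's actual repository layer is PHP, but as a rough illustration of the pattern (a small repository class wrapping hand-written SQL, no ORM in the middle), here is a Python/sqlite3 sketch. The table, columns, and method names are made up for the example, not Leantime's real schema.

```python
import sqlite3

class TaskRepository:
    """Repository pattern: domain code calls methods and never writes
    SQL inline. Schema here is illustrative, not Leantime's real one."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS tasks "
            "(id INTEGER PRIMARY KEY, headline TEXT, status TEXT)"
        )

    def add(self, headline: str, status: str = "open") -> int:
        cur = self.conn.execute(
            "INSERT INTO tasks (headline, status) VALUES (?, ?)",
            (headline, status),
        )
        return cur.lastrowid

    def find_open(self) -> list:
        # Hand-written SQL: easy to read, easy to EXPLAIN, no ORM overhead.
        return self.conn.execute(
            "SELECT id, headline FROM tasks WHERE status = 'open' ORDER BY id"
        ).fetchall()

repo = TaskRepository(sqlite3.connect(":memory:"))
repo.add("Write release notes")
print(repo.find_open())  # → [(1, 'Write release notes')]
```

The trade-off is that every query is explicit: you give up automatic object mapping, but you can tune each statement by hand.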

We are framework-agnostic so we are not locked into any one ecosystem, but you will see classes from Symfony and Laravel. The goal has always been to keep Leantime as lean as possible so it can be hosted on any shared host out there. That means we don't use any exotic extensions or OS features; you could run it safely on the smallest GoDaddy instance if you wanted to.

We recently introduced htmx into the stack to offload some of the rendering back to the server and we love it.
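For readers unfamiliar with htmx, the idea is that the server returns rendered HTML fragments and attributes on the markup decide where they land. A minimal, made-up example (the route and ids are illustrative, not Leantime's actual markup):

```html
<!-- Clicking the button asks the server for rendered HTML and swaps it
     into the modal; no client-side templating needed. The URL and ids
     are hypothetical, not Leantime's real routes. -->
<button hx-get="/tasks/42/detail" hx-target="#modal" hx-swap="innerHTML">
  Open task
</button>
<div id="modal"></div>
```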

PHP itself is really not a bottleneck anymore, especially since PHP 8.0.

We haven't had a chance to run many large-scale load tests yet, so take the following with a grain of salt: a direct task hit currently takes about 2.08 s to load on our production site (that includes JavaScript processing time, as it loads in a modal).

I know we have instances with thousands of tasks and users in the wild, and performance is generally not an issue we get reports about on our GitHub repo.


2 seconds is pretty slow, especially for a single-item lookup, which is about as good as it gets for database lookups.

Where is that time spent?


That is the entire cycle from browser to server and back, plus JS execution.

As mentioned in a comment below, PHP execution including the DB call is: P95 120.9 ms, P99 634.11 ms.

Which means the rest is DNS lookups and JS execution.


PS: I forgot to mention that we have Sentry profiling. Full application load (PHP side): P95 is 120.9 ms and P99 is 634.11 ms.


We have Google Analytics and Clarity on the site. Not sure that's out of the ordinary. The site should still work with Pi-hole, though, so I'll take a look at that.


Huh, not sure I would agree with that.

We have some overlapping features, but I would argue Leantime looks a bit better, though I may be biased :D

The key differentiators are our goal and strategy tracking, though.


To each their own. HN users prefer light sites like HN itself, so Leantime with Material styling is not ideal for lightness. Pivotal Tracker and Redmine are really nice with basic-looking UIs.

Though I've just set up Leantime in my homelab for todo list/calendar.


I agree, there is a real, challenging disconnect between messaging for PMs/leaders vs. individual contributors, and it boils down to organization size. Small companies and startups tend to be a lot more democratic about their choice of tools, whereas large organizations tend to be more PM/leadership driven with input from ICs.

Right now we are targeting the smaller companies that don't have a lot of PM experience or resources available. The dream is the Slack story of IC-driven product adoption :D


Thank you for the feedback! You don't hear a lot of people say anything positive about the AGPL these days... :)

