Hacker News | beyang's comments

If you like Claude Code but either (1) prefer an agent that doesn't ask for review on each file edit or (2) miss the IDE for things like reviewing diffs, I'd humbly submit you try out Amp: https://ampcode.com. It has both a CLI and a VS Code extension, and we built it from the ground up for agentic coding: no asking for permission on each edit, a first-class editor extension (personally I spend more and more time reviewing diffs, and VS Code's diff view is great), and subagents for codebase search and extended thinking (using a combo of Sonnet and o3) to maximize use of the context window.


Thank you for the suggestion. Do you guys also have subscription plans? Or do I need to pay separately for the models/APIs I use?



For those looking for a free coding assistant they can also use at work / in the enterprise, Cody has had a free tier for a while: https://sourcegraph.com/cody

- Works with local models

- Context-aware chat with very nice ergonomics (we see consistently more chats per day than other coding assistants)

- Used by both indie devs and devs at very large enterprises like Palo Alto Networks

- Hooks nicely into code search, which is important for building a strong mental model inside large, messy codebases

- Open source core


FYI, you can do that with Cody + Ollama. A good portion of our user community does exactly that: https://sourcegraph.com/blog/local-code-completion-with-olla...
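For anyone curious what the Cody + Ollama setup boils down to under the hood: Ollama exposes a local HTTP API, and a completion is a single POST to its generate endpoint. A minimal sketch, assuming Ollama is running on its default port with a code model already pulled (the model name and options here are illustrative, not what Cody actually sends):

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_completion_request(prefix: str, model: str = "codellama:7b-code") -> bytes:
    """Build the JSON body for a single (non-streaming) completion request."""
    payload = {
        "model": model,            # illustrative model name; use whatever you pulled
        "prompt": prefix,
        "stream": False,           # one JSON response instead of a token stream
        "options": {"temperature": 0.2, "num_predict": 64},
    }
    return json.dumps(payload).encode("utf-8")

def complete(prefix: str) -> str:
    """POST the request to the local server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_completion_request(prefix),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

The blog post wires this up through Cody's settings rather than raw HTTP, but the round trip is the same idea: everything stays on your machine.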


I don't want a local LLM; it's slower, less capable, and slows the computer down generally.

What I would like is a deep integration with VS Code using my preferred foundational model

I see Cursor has their own model and support for 2 foundational models, but not my preferred model and they charge a monthly fee.

Supposedly: https://cloud.google.com/blog/products/ai-machine-learning/g...

but do I still have to pay Microsoft $20+ per month? What I really want is pay-per-usage, not pay-for-access+usage.


Hi there, Cody contributor here—sorry to hear you had a bad experience! In our evals, our DeepSeek variant outperformed previous models and other alternatives. If it's working worse for you now, would you be open to sending us some examples/screenshots of poor completions? We'd like to incorporate these into our eval set so we can capture a more representative distribution of codebases and how Cody performs on them!


I can do that. What's the best way to get them to y'all?


Ping community@sourcegraph.com and I'll get a thread going. :)


This is in the cards and thank you for the feedback! (Sourcegraph CTO here)


What would you say...you do here?


Author of the post here—as another commenter mentioned, this is indeed a bit dated now, someone should probably write an updated post!

There's been a ton of evolution in dev tools in the past 3 years with some old workhorses retiring (RIP Phabricator) and new ones (like Graphite, which is awesome) emerging... and of course AI-AI-AI. LLMs have created some great new tools for the developer inner loop—that's probably the most glaring omission here. If I were to include that category today, it would mention tools like ChatGPT, GH Copilot, Cursor, and our own Sourcegraph Cody (https://cody.dev). I'm told that Google has internal AI dev tools now that generate more code than humans.

Excited to see what changes the next 3 years bring—the pace of innovation is only accelerating!


> I'm told that Google has internal AI dev tools now that generate more code than humans.

Given that code is not an asset but a heavy liability, the elephant in the room is the question: who is going to maintain all those huge piles of generated code?

Wake me up when there's an AI that can safely delete code… That would be a truly disruptive achievement!


> Wake me up when there's an AI that can safely delete code… That would be a truly disruptive achievement!

Big tech already has Dependabot-like bots doing this: dead code removal, refactoring, and other things a linter can warn you about all get turned into automated pull requests (or PR comments). These are things a linter would tell you, if you had the patience to wait several hours to lint a gigantic monorepo. There will probably be more tooling support built on LLMs trained on these huge code bases.
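As a toy illustration of the kind of check these bots automate, here is a sketch that flags top-level Python functions defined but never referenced in a module. Real systems do this repo-wide with call graphs rather than per file, so this is only the shape of the idea:

```python
import ast

def unused_functions(source: str) -> set[str]:
    """Flag top-level functions that are defined but never referenced
    anywhere else in the module (candidates for an automated cleanup PR)."""
    tree = ast.parse(source)
    # Functions defined at module level.
    defined = {node.name for node in tree.body if isinstance(node, ast.FunctionDef)}
    # Every bare name or attribute referenced anywhere in the module.
    referenced = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            referenced.add(node.id)
        elif isinstance(node, ast.Attribute):
            referenced.add(node.attr)
    return defined - referenced

src = "def live():\n    return 1\n\ndef dead():\n    return 2\n\nprint(live())\n"
print(unused_functions(src))  # {'dead'}
```

A bot wraps a check like this in a scheduler and a PR generator; the analysis itself is deterministic, which is exactly why it scales to a monorepo.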


I'm not talking about dead-code elimination (something compilers have done for decades, without AI), and not about something like Scalafix's automatic refactorings (which are actually deterministic and correct, because they don't use AI), but about some true AI that could simplify code—and in the process remove at least 80% of it, because that's usually just crap that piled up over time.

Like I said: wake me up when you can show me metrics where the use of AI shrunk code bases by significantly large amounts. For example, going from 500 kLOC to 100 kLOC, or something like that. (Of course without losing functionality, and at the same time making the code base easier to understand.) That would be success.

Everything that goes in the opposite direction is imho just leading inevitably to doom.


> Google has internal AI dev tools now that generate more code than humans

Wondering how good it is. Can any Googler using it compare it to Copilot?


It's absolutely fantastic. I feel disabled coding without it in my spare time. Imagine a pair programming session with someone that can read your mind and knows the entire codebase. I don't know how much I'm allowed to talk about so I'll leave it at that.

I don't have access to copilot so I can't compare but I'd wager it works a lot better for Google internally because the training data is customized to Google.


Interesting, I didn't find it that impressive when I tried it a couple months ago and fairly quickly went back to regular old-fashioned autocomplete. What language(s) were you working with out of curiosity?


No idea how good other languages are but Java in cider-v is very nice.


I'm at Google and am not entirely sure what you are referring to, but I'd love to try it. Could you provide an internal codename I could search for? Or is it integrated somewhere in Cider (the name is public knowledge so I'm not leaking anything by using this) that you could guide me to (i.e. X dropdown -> 3rd option) while preserving ambiguity?


I’m fascinated that of two people working at the same company, one raves about how an internal tool is a complete game changer and indispensable while another isn’t even aware it exists.


I'm fascinated that at a 175k employee company, two employees I know nothing about, in business areas or positions I know nothing about, could possibly use different tooling for their day to day duties.

*Surely* at your place of work, the cleaning staff is familiar with the CTO's dearest tools and vice versa?


That’s an unnecessarily snarky answer given the two people above are clearly both in similar roles. The parent comment was asking where to find the option in the tooling they both shared.

Snarky replies comparing CTO to cleaning staff don’t even begin to apply.


surely at my place of work, there is no cleaning staff and no CTO.


Shoot me an email at e-hackernews[at]wthack.de so we can connect less publicly.


Googlers.. please take it to internal comms.


I miss the days when google was super open with their new tech :/


Relax, they're just bragging.


It's not trivial to find the person talking. Why does a comment or two bother you so much?


It's a conversation that isn't relevant to anyone outside the company. One commenter, who may or may not work in Google, is asking for an internal codename.

You're the downvoter here so why does my comment bother _you_ so much?


On any other "social media" they would be great comments, but they're not really in the spirit of Hacker News.


A close friend worked at Google Brain, now DeepMind. He said he presses tab 70% of the time and it just works. When shown my paid version of Copilot, he said he felt his autocompletes had higher accuracy.


The code search section misses the two most obvious tools IMO: ripgrep / git grep and GitHub. (They were the obvious tools in 2020 also).


GitHub's code search leaves something to be desired.


The new one is pretty good


It's definitely better, I'll give it that!


Besides the tools mentioned, what are some other AI tools that can be used to accelerate coding for established codebases? I have some money to try out new tools, but am wondering if there are any that are less "black boxy" and would work with a company's private instance of Azure ChatGPT?


[flagged]


Eh... Boilerplate exists, it's not just limited to Bigco.


With powerful high-level languages you can reduce boilerplate to almost zero. (And for the rest you can use code-gen.)

Copy-paste on steroids (AI code assistants), OTOH, will lead to doomed code bases even faster than manual copy-paste did in the past. People are going to learn this really soon. Maybe two or three years left until the realization kicks in.
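As a small illustration of the first point, Python's dataclasses derive the usual constructor, equality, and repr boilerplate from field declarations alone, no copy-paste needed:

```python
from dataclasses import dataclass

# @dataclass generates __init__, __repr__, and __eq__ from the
# field declarations; hand-written, each would be several lines
# of exactly the kind of boilerplate an AI assistant pastes in.
@dataclass
class Point:
    x: float
    y: float

assert Point(1.0, 2.0) == Point(1.0, 2.0)
assert repr(Point(1.0, 2.0)) == "Point(x=1.0, y=2.0)"
```

The language feature removes the boilerplate once, for every class, instead of regenerating it per call site.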


If folks want to continue searching open source, https://sourcegraph.com/search does not require sign in and also includes major projects that are not on GitHub.

(Full disclosure: I'm the Sourcegraph CTO)


Thank you. I don't want to sign into my personal github account on my work computer, so I always use sourcegraph if I need to search a repo.

I remember longing for GitHub's search to rival git clone + grep, but I never expected a login wall to come with it. IIRC I assumed the login wall was just because the feature was in beta and would be removed once it became the primary search.


Why did you decide to allow search without a sign-in but not for Cody?


Zoekt was heavily inspired by Google's internal code search, as mentioned in the blog post. The original version of the internal code search is described in the rsc post. Zoekt keeps some of the foundational ideas (e.g., trigram index), but was a from-scratch implementation. We probably should link to the rsc post for completeness, will update.


At the time that I started Zoekt (2016), Google's internal codesearch used suffix arrays for the string matching, which the team wasn't happy with, presumably because of the algorithmic complexity and indexing slowness. The Codesearch team was exploring alternatives, one of them the technique described in https://link.springer.com/article/10.1007/s11390-016-1618-6. The positional trigrams were a simplification of this, that they didn't mind me open sourcing.

So, in terms of algorithms, Zoekt wasn't actually inspired by Google's internal code search.

The precise query syntax of Zoekt is mostly copied from Google's internal syntax, though.
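For the curious, the core trigram-index idea is small enough to sketch. This is a toy version (no positions, no compression, nothing like Zoekt's actual Go data structures): every document is indexed by its 3-character substrings, a query's trigrams narrow the candidate set, and a real substring check verifies the survivors.

```python
from collections import defaultdict

def trigrams(s: str) -> set[str]:
    """All 3-character substrings of s."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

class TrigramIndex:
    def __init__(self):
        self.postings = defaultdict(set)  # trigram -> set of doc ids
        self.docs = {}                    # doc id -> full text

    def add(self, doc_id: str, text: str):
        self.docs[doc_id] = text
        for t in trigrams(text):
            self.postings[t].add(doc_id)

    def search(self, query: str) -> list[str]:
        qt = trigrams(query)
        if not qt:  # query shorter than a trigram: fall back to a scan
            return sorted(d for d, text in self.docs.items() if query in text)
        # Candidates must contain every trigram of the query...
        candidates = set.intersection(*(self.postings[t] for t in qt))
        # ...but trigram containment is necessary, not sufficient,
        # so verify each candidate with a real substring match.
        return sorted(d for d in candidates if query in self.docs[d])
```

The positional variant described above additionally stores where each trigram occurs, which lets the verification step skip straight to plausible offsets instead of scanning whole files.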


Great call out! We've built this code navigation infra on top of Zoekt into Sourcegraph. Example: https://sourcegraph.com/github.com/golang/go/-/blob/src/net/...

Docs: https://docs.sourcegraph.com/code_navigation/explanations/pr...

