The fact that people applaud Kagi for taking the money they paid for search and investing it in bullshit AI products, while spitting on Google's AI search at the same time, tells you everything you need to know about HackerNews.
Search is AI now, so I don’t get what your argument is.
Since 2019, both Google and Bing have used BERT-style, encoder-only architectures in their search ranking.
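To make that concrete, here's a rough sketch of what encoder-style relevance scoring looks like. The library and model below are public examples I'm picking purely for illustration, not what Google, Bing, or Kagi actually run in production:

```python
# Illustrative only: a small BERT-style cross-encoder scoring query/document
# relevance. Public example model, not anyone's production ranking stack.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "why is the sky blue"
docs = [
    "Rayleigh scattering makes shorter (blue) wavelengths scatter more in air.",
    "The transfer window closed with several record football signings.",
]

# The encoder reads query and document together and outputs a relevance score;
# there is no free-form text generation involved.
scores = model.predict([(query, d) for d in docs])
for doc, score in sorted(zip(docs, scores), key=lambda p: -p[1]):
    print(f"{score:7.3f}  {doc}")
```

The point is that this kind of "AI in search" only scores relevance; it never generates text, so there's nothing for it to hallucinate.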
I've been using Kagi Ki (now the Research Assistant) for months, and it is a fantastic product that genuinely improves the search experience.
So overall I'm quite happy they made these investments. When you look at Google and Perplexity, this is largely the direction the industry is going.
They're building tools on top of other LLMs and basically running OpenRouter or something behind the scenes. They even show you your token use/cost against your allowance/budget on the billing page, so you know what you're paying for. They're not training their own from-scratch LLMs, which I would consider a waste of money at their size/scale.
We're not running on OpenRouter; that would break the privacy policy.
We get specific deals with providers and use different ones for production models.
We do train smaller-scale stuff like query classification models (not trained on user queries, since I don't even have access to them!), but that's expected and trivially cheap.
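For a sense of scale, a query classifier can be as small as the sketch below. This is purely illustrative; the labels, example queries, and model choice are made up and are not our actual model or data:

```python
# Purely illustrative query classifier: everything here is invented to show
# how small and cheap this kind of model can be.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled queries (a real one would use a curated, non-user dataset).
queries = [
    "python list comprehension syntax",
    "weather in boston tomorrow",
    "buy trail running shoes",
    "rust borrow checker error",
]
labels = ["programming", "weather", "shopping", "programming"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word + bigram features
    LogisticRegression(max_iter=1000),
)
clf.fit(queries, labels)

print(clf.predict(["golang slice out of range"]))  # likely -> ['programming']
```

Something in this ballpark trains in seconds on a laptop, which is why I call it trivially cheap.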
Do you have any evidence that the AI efforts are not being funded by the AI product, Kagi Assistant? I would expect the reverse: the high-margin AI products are likely cross-subsidizing the low-margin search products and their sliver of AI support.
We're explicitly conscious of the bullshit problem in AI, and we try to focus on only building tools we find useful. See the position statement on the matter from yesterday:
> LLMs are bullshitters. But that doesn't mean they're not useful
> Note: This is a personal essay by Matt Ranger, Kagi’s head of ML
I appreciate the disclaimer, but never underestimate someone's inability to understand something when their job depends on them not understanding it.
Bullshit isn't useful to me; I don't appreciate being lied to. You might find use in declaring the two different, but sufficiently advanced ignorance (or incompetence) is indistinguishable from actual malice, and thus they should be treated the same.
Your essay, while well written, doesn't do much to convince me any modern LLM has a net positive effect. If I have to duplicate all of its research to verify none of it is bullshit, which will only be harder after using it given the anchoring and confirmation bias it will introduce... why?
I have not properly tested the new Kagi research agent yet, but here is a custom prompt I have been using in Kagi for the last year:
You are a research agent. Nobody cares what you think. You exist to do research on the internet. You search like hell and present the most relevant results that you can find. Results means lists of primary sources and the relevance to the query. If the question is about a specific place, don't forget to search in local languages as well as English. You don't ever suggest alternative search terms in your reply. Instead, you take those alternative search terms and you do more searches yourself until you get a wide range of answers.
It generates a lot of good leads very quickly when I want to learn about something. The bit about local languages is especially handy; it gives it a bit of an edge over traditional search engines in many situations.
> which will only be harder after using it given the anchoring and confirmation bias it will introduce
This is a risk, but I have found that my own preconceptions are usually what need challenging, and a traditional search approach means I find what I wanted to find... so I use the research agent for an alternative perspective.
Just to give my point of view: I'm head of ML here, but I'm choosing to work here for the impact I believe I can have. I could work somewhere else.
As for the net positive effect, the point of my essay is that the trust relationship you raise (not having to duplicate the research, etc.) to me is a product design issue.
LLMs are fundamentally capable of bullshit. So products that leverage them have to keep that in mind and build workflows that don't end up breaking user trust in them.
The way we're currently thinking of doing that is to keep the user in the loop and incentivize the user to check sources by making it as easy as possible to quickly fact check LLM claims.
I'm on the same page as you that a model you can only trust 95% of the time is not useful, because it's untrustworthy. So the product has to build an interaction flow that assumes that lack of trust but still makes something that is useful, saves time, respects user preferences, etc.
You're welcome to still think they're not useful for you, but that's the way we currently think about it and our goal is to make useful tools, not lofty promises of replacing humans at tasks.
Equally, and despite my disagreement, I do genuinely appreciate the reply, especially given my dissent.
> Just to give my point of view: I'm head of ML here, but I'm choosing to work here for the impact I believe I can have. I could work somewhere else.
> As for the net positive effect, the point of my essay is that the trust relationship you raise [...] to me is a product design issue.
Product design, or material appropriateness? Why is the product you're trying to deliver built on top of a conversational model? I know why my rock climbing rope is a synthetic 10mm dynamic kernmantle rope, but why is a conversational AI the right product here?
> LLMs are fundamentally capable of bullshit.
Why though? I don't mean from a technical level; I do understand how next-token prediction works. But from a product reasonableness standpoint? Why are you attempting to build this product using a system that makes predictions based on completely incorrect or inappropriate inputs?
Admittedly, I am not up to date on the state of the art, so please do correct me if my understanding is incomplete or wrong. But if I'm not mistaken, attention-based transformers themselves generally don't hallucinate when producing low-heat language-to-language translations, right? Why are conversational models, the ones very much prone to hallucinating and emitting believable bullshit, the interface everything uses?
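To be concrete about what I mean by low-heat translation, something along these lines decodes deterministically, with no temperature sampling in the loop. The model here is just a common public example, not anything any vendor in this thread ships:

```python
# Illustrative "low heat" translation: an encoder-decoder transformer decoded
# deterministically. The model name is just a common public example.
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="Helsinki-NLP/opus-mt-en-de")

result = translator(
    "The meeting was moved to Thursday afternoon.",
    do_sample=False,  # greedy/beam decoding, no temperature sampling
)
print(result[0]["translation_text"])
```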
How much of that reason is that the ability to emit believable bullshit is actually the product you are trying to sell? (The rhetorical you; I'm specifically considering LLM-as-a-service providers eager to oversell the capabilities of their models. I still have a positive opinion of Kagi, so I could be convinced you're the ones who are different.) The artificial confidence is the product. Bullshitting something believable but wrong has better results, in bulk, for the metrics you're tracking. When soliciting feedback, the vast majority of the answers are based on vibes, right?
Say you had two models. One is rote and very reliable, very predictable, and rarely produces inaccurate output, but it isn't impressive at generating conversational-feeling text and, critically, can't phrase things in a trivial-to-understand way that exudes an abundance of confidence. Contrast that with another model that only very rarely produces total bullshit, but all the feedback shows everyone loves using it because it makes them feel good about the answer, even with that nagging hallucination issue still bubbling under the surface.
Which would you ship?
Again, I'm asking which the rhetorical you would ship... perhaps there is someone in charge of AI somewhere who would only ship the safe version, even if few users ranked it higher than normal organic search. Unfortunately, I'm way too much of a cynic to believe that's possible. The "AI is good" crowd doesn't have a strong reputation for always making the ethical selection.
> So products that leverage them have to keep that in mind and build workflows that don't end up breaking user trust in them.
> The way we're currently thinking of doing that is to keep the user in the loop and incentivize the user to check sources by making it as easy as possible to quickly fact check LLM claims.
Do you feel that's a reasonable expectation from users when you've already given them the perfect answer they're looking for with plenty of subjective confidence?
> I'm on the same page as you that a model you can only trust 95% of the time is not useful, because it's untrustworthy. So the product has to build an interaction flow that assumes that lack of trust but still makes something that is useful, saves time, respects user preferences, etc.
> You're welcome to still think they're not useful for you, but that's the way we currently think about it and our goal is to make useful tools, not lofty promises of replacing humans at tasks.
I don't think I'm the ideal person to be offering advice, because I would never phrase the problem statement as "we have to give users the tools to verify whether the confident-sounding thing lied this time." I know far too much about both human nature and alarm fatigue. So I can only reject your hypothetical and ask: what if you didn't have to do something I worry will make the world worse?
I attribute a large portion of the vitriol, anger, and divisiveness that has become pervasive, and that is actively harming people and communities, directly to modern algorithmic recommendation systems. These systems prioritize speed and being first above the truth, or they rank personalized results that selectively offer only the content that feels good and confirms preexisting ideas, to the detriment of reality.
They all tell you what you want to hear over what is true. It will take a mountain of evidence to assure me that conversational LLMs won't do exactly the same thing, just better or faster, especially when I could uncharitably summarize your solution to these defects as merely "encouraging people to do their own research."
And to be clear, you shouldn't build the tools that YOU find useful; you should build the tools that your users, who pay for a specific product, find useful.
You could have LLMs that are actually 100% accurate in their answers and it would not matter at all to what I am raising here. People are NOT paying Kagi for bullshit AI tools; they're paying for search. If you think otherwise, prove it: make the subscriptions for the two products entirely separate.
Kagi founder here. We are moving to a future where these subscriptions will be separate. Even today, more than 80% of our members use Kagi Assistant and our other AI-supported products, so saying "people are NOT paying Kagi for bullshit AI tools" is not accurate, mostly in the sense that we are not in the business of creating bullshit tools. Life is too short for that.

I also happen to like the Star Trek version of the future, where smart computers we can talk to exist. I also like that Star Trek is still 90% human drama and 10% technology quietly working in the background in service of humans - and this is the kind of future I would like to build towards and leave for my children. Having the most accurate search in the world that has users' best interest in mind is a big part of it, and that is not going anywhere.
edit: seeing the first two (negative) replies to my comment made me smile. HN is a tough crowd to please :) The thing is, similar to how I did paid search and went all in with my own money when everyone thought I was crazy - I did that out of my own need, and my family's need, to have search done right - I am doing the same now with AI, wanting to have it done right as a product. What you see here is the best effort of this group of humans that call themselves Kagi - not more, not less.
Just wanted to chip in with a positive comment among the hail of negativity here. Thank you for what you and your team are doing. I've been getting tons of great use daily out of the search and news features, as well as occasionally using the assistant. It can definitely be hard to find decent paid alternatives to the freeware crap model so prevalent on the web, so seeing your philosophy here is a huge breath of fresh air.
I found Kagi quite recently, and after blowing through my trial credits and now almost blowing through my low tier (300) credits, I'm starting to look at the next tier up. However, it's approaching my threshold of value versus price.
I have my own payment methods for AI (OpenWebUI hosted on a personal home server connected to OpenRouter API credits, which costs me about $1-10 per month depending on my usage), so seeing AI bundled with searches in the pricing for Kagi really just sucks the value out of the main reason I want to switch to Kagi.
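For context, the plumbing on my end is nothing exotic: OpenRouter exposes an OpenAI-compatible endpoint, so roughly the sketch below is all the glue involved. The model name and key are placeholders, not recommendations:

```python
# Rough sketch of the glue in my setup: OpenRouter speaks the OpenAI API,
# so a stock OpenAI client works. Model name and key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="sk-or-...",  # OpenRouter API key
)

response = client.chat.completions.create(
    model="mistralai/mistral-small",  # any model OpenRouter lists
    messages=[{"role": "user", "content": "Give me three primary sources on Rayleigh scattering."}],
)
print(response.choices[0].message.content)
```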
I would love to be able to just buy credits freely (say 300 credits for $2-3) and use them whenever. No AI stuff, no subscription, just paying for my searches. If I have a lull in my searches for a month, then a) no extra resources from Kagi have been spent, and b) my credits aren't used and roll over. Similarly, if I have a heavy search month, then I'll just buy more credits.
I just don't want to buy extra AI on top of what I already have.
Some people are contrarian simply to be contrarian. I'm loving the features that are coming out. I use Translate, Assistant, Universal Summarizer, and (of course) Search multiple times a day. Not everything is going to be for everyone - I'm personally not really interested in News - but it certainly feels like I'm getting value out of my subscription. The only thing I'm actively missing with Assistant is a proper app: the PWA is fine, but the keyboard glitches out sometimes, navigation is not as fluid/smooth as it could be in a native app (same with file/photo uploads), and notifications for when you leave the app and the reply is done would really tie it together.
> We are moving to a future where these subscriptions will be separate. Even today, more than 80% of our members use Kagi Assistant and our other AI-supported products, so saying "people are NOT paying Kagi for bullshit AI tools" is not accurate, mostly in the sense that we are not in the business of creating bullshit tools.
For what it's worth, as someone who tends to be pretty skeptical of introducing AI tools into my life, this statistic doesn't really convince me much of their utility. I'm not sure how to differentiate it from selection bias, where users who don't want to use AI tools just don't subscribe in the first place, rather than it being a signal that the AI tools are worthwhile for people outside of a niche group who are already interested enough to pay for them.
This isn't as strong a claim as the one the parent comment was making; I'm not saying that the users you have don't want to be paying for AI tools, but that doesn't mean there aren't people who are actively avoiding paying for them either. I don't pretend to have any insight into whether that's a large enough group to be worth prioritizing, but I don't think the statement of your perspective here is going to be particularly compelling to anyone who doesn't already agree with you.
> I also happen to like the Star Trek version of the future, where smart computers we can talk to exist [...] this is the kind of future I would like to build towards
Well if that doesn't seal the deal in making it clear that Kagi is not about search anymore, I don't know what does. Sad day for Kagi search users, wow!
> Having the most accurate search in the world that has users' best interest in mind is a big part of it
It's not, you're just trying to convince yourself it is.
I can't really do anything with the recommendation you're making.
The recommendation you made takes your personal preference as an axiom.
The fact is that the APIs behind search cost vastly more than the LLMs used in Quick Answer / the quick assistant.
If you use the expensive AI stuff (the research assistant or the big tier-1 models), that does cost more. But it's also in a separate subscription, the $25/month one.
We used to not give any access to the Assistant at the $5 and $10 tiers; now we do, so it's a free upgrade for users.