Ep 97: How Aravind Srinivas (Perplexity CEO) is Disrupting Google Search with AI

One of the preeminent AI founders, Aravind Srinivas (CEO, Perplexity), believes we could see 100+ AI startups valued over $10B in the future. In the episode, we discuss why Perplexity decided to compete with Google, and Aravind shares predictions for the utopian future of artificial intelligence. He also discusses the differences between working at DeepMind and OpenAI, why personalized knowledge on demand could be a trillion-dollar opportunity, and more.

The Evolution of Search Engines

[00:00:00]

Logan: Aravind, thanks for doing this.

Aravind: Thank you. Thank you for having me here.

Logan: So there's an axis that you've talked about, where on the left-hand side is traditional search and on the right-hand side is an answer engine. Can you elaborate on that, and where Google fits, where ChatGPT fits, and where Perplexity fits on this axis, left to right?

Aravind: So the left end of the extreme is probably just all these links. You go with a query and you get the ten blue links. That's, you know, Larry and Sergey's [00:02:00] starting point. Whatever Google is today is already not that. Like, for example, if you ask, you know, Logan Bartlett's age or something, it might pick it up in case it's public, right?

So on the right end, it's purely chat UI. You just treat everything like a chatbot; chat UI is the default. And that's not the best way to consume information about, like, what's the latest score of the basketball game, because you have to get to the sweet spot of accuracy, high-bandwidth communication of the information, and the form factor in which we consume the answer, all three things together, right?

Right. So then the answer seems to be somewhere in the middle. Google's also somewhere in the middle today, but closer to the left end, not because they want to be, more because that's their business model. Even if they are incentivized to give you the answer from a product perspective, from a stock perspective, they're not.

And the right end, ChatGPT, is like, I'll just throw the largest giant model at it and give you a chatbot, and you interact with it and the model figures out what to do. Building a product is a beautiful and interesting exercise because you're constantly searching for that sweet spot, every single day, every single week, and trying to get it to the user.

So I believe it's somewhere like: answers most of the time, sometimes information presented in panels and knowledge panels and, like, scorecards, widgets, and things like that. And sometimes you just want to get away from the app to your destination, like, say, some subreddit; you just want to go and surf.

And you want to be able to serve all these use cases in one single product. It's pretty hard to figure out what that is, but we believe that starting from the answer end point can build in a different financial incentive to getting to the sweet spot than starting from the [00:04:00] link end point.

Perplexity: Bridging the Gap Between Google and Wikipedia

Logan: When people talk about Perplexity, there's often this natural comparison to Google because of that traditional search versus answer engine paradigm that you just spoke on. But when I actually use the product, there are a lot of parallels of my usage to Wikipedia in some ways. Do you take equal inspiration from both of those two things, or how do you think about the Wikipedia versus the Google dimensions?

Aravind: Absolutely. Look, I mean, I was a Wikipedia nerd as a kid. I used to just go to the internet, open a Wikipedia page, and then keep clicking on these hyperlinks within it and get into these rabbit holes, so that definitely was a big inspiration for Perplexity. In fact, a user who really likes Perplexity once described it on Twitter as Wikipedia and ChatGPT having a baby, but with the data coming from the whole of the internet.

That's what Perplexity is.

The Philosophy Behind Perplexity and Its Unique Approach

Aravind: So the whole citation format, using authoritative sources, giving the answer in paragraphs or well-formatted markdown, trying to generate a personalized Wikipedia article on the fly for what you're asking: that is certainly the inspiration for the product. So it is natural that you're comparing it to Wikipedia.

Now you can ask, okay, if Wikipedia exists, why do I need your product? Except Wikipedia is one static version for everybody. You may be interested in the Oppenheimer movie, in some particular aspect of it that might not even be in the current article, and you might be surfing and reading the whole thing.

Someone else might just be interested in the cast. Someone else might be interested in the budget. So you don't want to read the whole thing again from scratch and surf and, you know, squint through and go and find sources elsewhere.

Logan: And there are comparative elements, you know, between pages, right? If I wanted to ask, I don't know, what grossed more, [00:06:00] Oppenheimer or Barbie, I would need to...

Aravind: Yeah, exactly. The sort of personalized experiences that can never be catered to by a single corpus is what Perplexity offers to the user. And if we can get the kind of traction Wikipedia has achieved today (I mean, they're still in the top five websites by traffic in the world, if you exclude all the porn sites), then that's a big deal for Perplexity.

And that's why I'm very convinced that we don't have to defeat Google to win.

Logan: It's an important point.

Redefining Search: The Shift from Links to Answers

Logan: And I've heard you say that search has always been a hack to get us information. Can you elaborate on that point, and how Google's model was kind of a hack for explaining things back to us?

Aravind: By the way, I want to credit Marc Andreessen for this point. It's not an original thought. I heard him say this on his podcast with Lex Fridman, where Lex asks him, hey dude, you built the first browser, you know, Google came on top of it, and now with AI and chat, is this the ultimate search engine and answer engine?

And Marc says, yes, I mean, everybody knew this; that was the reason Ask Jeeves was even attempted in the 90s. Except it never worked. We didn't have the technology to do it at that time. In fact, the core reason Google even became a search engine that people used was because they ignored the existing wisdom at that time, which was more focused on natural language extraction.

Like text-based query keyword matching, traditional information retrieval algorithms: they ignored most of it. And instead said, we are going to be link first. We're going to focus only on the link structure, and we're going to primarily rely on the link structure to give us the ranking. We're still going to use these keyword matchings, you know, based on the titles and some content in the page, but our [00:08:00] ranking is going to be very different from the others.

So that's what got them all the usage, because the basic natural language understanding at that point in time was very bad. The traditional IR algorithms did not really work. But that was the early 2000s, right? And obviously, when you start a company and have a product and have a lot of users, you have to build a business, and they built a business with the link as a first-party citizen.

Now, 20 years later, we have a new advancement called large language models, which are basically trained on the whole of the internet, and they have such good language understanding capabilities that we can build a tool that can look at all your links and read them and come back to you, as if an intern of yours did that work and gave you a Wikipedia article on the question you asked.

That human-equivalent intelligence a person would apply by browsing, reading, summarizing, and writing an article can now be done by an AI. So suddenly an answer engine becomes possible, and then that hack doesn't become necessary anymore.

The Business Model Dilemma: Google's Challenge with Innovation

Aravind: So all this is great; it's just a product feature. So, you know, why doesn't Google just launch it, like Reels, like Zuck did to kill TikTok, or like Stories to kill Snap? Except Instagram can launch additional things; it doesn't change their existing UI at all.

And he still figured out how to advertise on Reels. But in the case of Google, you are incentivizing people to click on the link. And then when you bid on an ad word, you're looking at the cost per click. Now, if there's no need to click on as many links anymore, if you click only on, like, 1 percent or 10 percent of the links compared to earlier, how are you going to convince the advertisers to still pay the same amount of money?

You can argue, oh yeah, you know what, it's higher intent, people are asking real questions, so you kind of have to bid [00:10:00] more for these bigger questions. They're not going to be, you know, changing their mental models about this. They have a budget every month allocated for advertising on keywords.

And that's what they're going to want to continue to do. And if the highest-paying users no longer come there to ask questions like, you know, which sparkling water brand should I buy, should I buy the Crocs; if those questions are going somewhere else, and people are making their decisions and doing the transactions somewhere else, then what is the point of advertising on this platform, right?

These are the kinds of questions they're facing. They have no incentive to get rid of the hack as fast as somebody else.

Logan: There's some point in that journey of this disruption that's occurring where it could become an inevitability, and Google's forced to reckon with the Blockbuster-Netflix situation. And I don't know if that is ultimately how this is going to play out, obviously, but I'm curious: are we looking at a pure innovator's dilemma, Christensen-style? It's really hard to bet your entire business model on a shift with a small percentage chance that something looks different than what it currently does.

Aravind: So my sense on how it's going to play out is that it's going to play out more like Google Cloud. So why did Google not build Google Cloud first and let Amazon get the AWS moment?

Logan: I actually don't know.

Aravind: There's a very simple reason for that.

Logan: Culture-related stuff?

Aravind: Some part of it is that, but primarily it's the margins. I mean, Bezos has this quote,

"Your margin is my opportunity." He used it in a different context, but I'm just borrowing it here. When my margins on search advertising are like 60, 70 percent, and cloud is likely to be like 20, 30 percent [00:12:00] (still a good business), am I incentivized to go expand the 60 percent business, or am I incentivized to go build out an alternate business?

Who knows whether, 10 years from now, all this advertising business is going to be under threat? But if they had diversified, no problem now: they would have at least been as big as Azure, or ideally even bigger, because they had better technology at the time. And even if the search advertising margins are lower because they're going and spending money to build an AI business, it's going to be much better than now.

So my guess is they would have learned that lesson and said, okay, look, we're not going to kill this business. We're not going to go and try to change something that's producing money for us. But let's spend some money and try to build this AI subscription business, or whatever it turns into; today it's subscriptions and APIs on our models.

Let's try to build that business, whatever OpenAI is doing today. Maybe that's going to produce like 10 billion a year in revenue at some point for Google, and that's still pretty significant to Wall Street. And then they would say, okay, this is a lower-margin business, and keep reporting revenues on that.

But they're not going to try to change the existing product. The radical changes that Netflix did, like introducing advertising onto the core platform and stuff like that, I don't know if Google is going to be willing to do such things. You know, Reed Hastings is a different kind of CEO, and Netflix has a different kind of culture from Google, so I'm not sure.

My guess is they're just going to keep following: whatever other people are doing, they're going to try to follow that.

The Importance of Aligning Shareholder and User Interests

Logan: What's the tension that exists for Google between the shareholder and the user? Because those things have diverged, it seems, at least in terms of the search experience.

Aravind: Yeah. So that's the whole thing: you always need to make sure the shareholder and user alignment is there. But now think of Google as basically two products. One is the core search product and the other is the advertisement product. There is massive alignment between the users of the advertising product and the shareholders.

Probably the most massive alignment ever. But that came at the cost of the alignment with the core search users.

Logan: And this is an important point. So advertisers in this case would be big brands that are paying to be on the link, shareholders are the stockholders, and then the users are whoever's actually searching.

Aravind: Yeah. If you go to adwords.google.com, you get a UI. You can see a bunch of keywords, you can see their frequency of searches, number of searches, approximate statistics, and then you get to see the CPC you bid; and once your ad is running, you get analytics on that.

You get the conversion stats. This is a whole product. It's so hard to build all this, and it's so hard to get new users onto this and build all the tracking for them. They built a whole suite of products for that. It's pretty amazing. And that's what makes them revenue, but it's basically leveraging the platform they have for search to build this, right?

It's sort of like saying, I have a really fertile lot of land, and you come and, you know, host your businesses here, you rent spaces, and I'm making money out of all the rent you're paying. Except that when these guys are all coming in, they're hurting the land and, like, you know, creating a mess, garbage and shit.

And it's hard to clean up, and the people who are already living there are like, I don't want to live here anymore. That's basically what's happening.

Logan: The differences in actually using the product are pretty remarkable. For people listening on YouTube, I'll show this example live, but I was at a basketball game the other night, and [00:16:00] one of the players on the Knicks, who's 6'4", had 20 points and 19 rebounds. And I thought, I wonder who the shortest player is ever to have 20 points and 20 rebounds.

And so I'll show your search results here for people that are listening, but basically, you gave me the answer, which was Jerry West, who was 6'3" and a half, in 1962. And then I went to Google and I got some hodgepodge of different results that directionally had to do with the topic but didn't actually answer the question.

It was like the number of people that have had 20 rebounds, the shortest player in NBA history, all these things. And then I went to ChatGPT and it actually gave me a wrong answer: it gave me the shortest player in NBA history, which is directionally right, but very different. Knowing that a lot of the things you've done are kind of built on the same corpus of data that both of them have, how are you able to give me an answer that makes so much more sense for what I'm asking than what those other two are able to do?

Aravind: Yeah. As for Google, the first point is very clear: that's why it's hard to go from a link base to the sweet spot. Because when you're in a link base, the true intent you're supporting is giving the user the link.

So when the user actually types in a real question, it's really hard for the algorithms to work in the same way. You have to detect that this is a question and route it to something else called SGE. And it doesn't always get triggered if the classifier is not precise. All these problems exist.

Logan: So there probably wasn't a specific web page that answered that question, so it didn't serve me

Aravind: Exactly. And as for ChatGPT, look, I've always been saying this when people tell me Perplexity is just going to be subsumed by ChatGPT: if the purpose of ChatGPT is to allow people to [00:18:00] search on the web, and only that, then yes, we shouldn't exist. But the reason ChatGPT got all the hype and virality was that it let people interact with the actual model behind ChatGPT, and not what it can do by taking information from the web and giving it back to them.

That's what we built, and that's a way less viral product than, you know, just asking the AI interesting, very open-ended questions whose answers don't exist on the web at all. It's what the AI thinks; it's almost treating the AI like another human and talking to it.

That is why ChatGPT went viral. And so, when you're trying to build a completely different utility and value proposition in another product that users think is for something else, it's kind of confusing to the user. And even if you leave the user-confusion part aside, it doesn't always work in the way it's intended to.

It has too many incentives it wants to fulfill.

Logan: Citations are a core component of what you all do. How do you think about inserting citations?

Aravind: Yeah, it comes from our academic background. You know, in academia, citations are sort of like a currency. By the way, I'm not going to glorify citations, because there are so many ways to have your Google Scholar profile show a lot of citations just by being part of one big paper.

And it's not weighted; it's counted equally for all the authors. You can be an author on a 20-author paper where most of the work was done by the first two or three people, and you can get a hundred thousand citations if that paper was a big hit. But everything aside, a good paper has a lot of citations.

This is the same insight as PageRank, by the way. Larry Page has an academic background, and he says a good webpage is one that's cited by other webpages. So we have the same idea: a good chatbot should be accurate and give authoritative [00:20:00] answers. And the best way to do that is to make sure it only relies on highly cited sources.

And it's a chicken-and-egg problem of deciding what to cite. So the only way to know what can be highly cited is to start rolling out a product, make it cite a few sources, get a lot of data through your product, and use those data points to decide which websites to pick and not pick based on the quality of the answers.

And if this becomes, you know, a well-oiled algorithm in a machine, it basically converts into a machine that keeps improving itself. That's what we want to build.
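As a rough illustration of the self-improving loop described here, the sketch below keeps a per-domain authority score, nudges it with feedback on answer quality, and prefers high-scoring domains when citing. The class, field names, and update rule are hypothetical; this is not Perplexity's actual algorithm.

```python
# Hypothetical sketch: maintain per-domain authority scores, update them with
# feedback on answer quality, and prefer high-scoring domains when citing.
from collections import defaultdict

class SourceAuthority:
    def __init__(self, prior: float = 0.5):
        # Every domain starts at a neutral prior score.
        self.scores = defaultdict(lambda: prior)

    def record_feedback(self, cited_domains: list[str], answer_was_good: bool,
                        lr: float = 0.05) -> None:
        """Raise or lower the authority of every cited domain based on whether
        the generated answer was judged good (by a user or an automatic judge)."""
        delta = lr if answer_was_good else -lr
        for domain in cited_domains:
            self.scores[domain] = min(1.0, max(0.0, self.scores[domain] + delta))

    def top_sources(self, candidate_domains: list[str], k: int = 5) -> list[str]:
        """Pick the k domains to cite next time, preferring domains whose past
        citations led to good answers."""
        return sorted(candidate_domains, key=lambda d: self.scores[d], reverse=True)[:k]

# Each answer served produces more feedback, which sharpens the source ranking,
# which in turn improves the next answer: the self-improving loop described above.
authority = SourceAuthority()
authority.record_feedback(["nba.com", "randomblog.example"], answer_was_good=True)
authority.record_feedback(["randomblog.example"], answer_was_good=False)
print(authority.top_sources(["nba.com", "randomblog.example", "en.wikipedia.org"]))
```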

Logan: If you dream the dream of Perplexity's success, you're going to play a very important role in what is true across increasingly complicated topics. For the most part, Google's been able to defer to a lot of the links, the sources of information. And you also provide the links and the citations of all the stuff you're pulling from.

But people don't always, I know I don't always, click through to the actual sources. Increasingly, as ubiquity spreads, you're going to provide some version of truth to people, in a world in which truth is a complicated topic. Can you maybe speak to how you think about this and the responsibility element of truth?

Aravind: Yeah. I think the computer scientist way of thinking about it is that truth is just a problem you can solve. But if that is true, go a step further and ask, what is truth? What is the pseudocode for verification? If you can verify something, then you can solve it, because whatever you say, you run it through a verifier, and the verifier says it's true or false.

Then, if it's true, you give the result to the user. If it's false, you go back and try to change what you said, right? But the truth is, we don't even have a verifier today. As a human, what do you actually use to know whether something is true or not? If it's in your domain expertise, you can write the pseudocode. [00:22:00]

If it's outside of your domain expertise, you rely on experts. You go and ask another person, your colleague or your friend, someone who knows more about that topic or space, and you ask them what they think. And, I mean, I'll just give you your own example: you're a venture capitalist.

So when you're looking at companies to invest in, and someone's telling you, oh, this is the hot shit, you've got to get in on this, what do you do? You verify if it's true, right? You go and do your due diligence. You ask a bunch of people, you ask other VCs in the space, and you find out.

So that's what we are doing. We're basically sourcing information from good-quality sources on the web, and we are taking all those different parts of the sources, looking at the query, and then giving back the answer by presenting a viewpoint across all of them. We're not making a decision for the user.

We're not telling the user, hey, this is what it is. We're saying, look, this is what all these different links say; there is no source that explicitly says it's this, but these are the viewpoints, right? That's sort of what we are presenting to the user. And I would say that is a step towards helping every person discover truth, but it's not truth as a service.

I think the second thing is much harder to build.
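The "pseudocode for verification" mentioned above can be made concrete as a generate-then-verify loop. The sketch below is purely illustrative: as Aravind says, no general-purpose verifier exists today, so both helper functions are hypothetical placeholders.

```python
# Illustrative control flow only: generate an answer, run it through a verifier,
# and regenerate with the critique until the verifier accepts or we give up.
def generate(query: str, feedback: str | None = None) -> str:
    raise NotImplementedError  # e.g. an LLM call, optionally conditioned on feedback

def verify(query: str, answer: str) -> tuple[bool, str]:
    raise NotImplementedError  # the missing piece: return (is_true, critique)

def answer_with_verification(query: str, max_rounds: int = 3) -> str:
    feedback = None
    answer = ""
    for _ in range(max_rounds):
        answer = generate(query, feedback)
        ok, critique = verify(query, answer)
        if ok:
            return answer      # verifier accepts: show the result to the user
        feedback = critique    # verifier rejects: try to change what was said
    return answer              # no verified answer after a few rounds
```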

Logan: I guess, related to that and the thought around truth, there's another side, safety in some ways, and the obligations around that. And so if I, and I actually didn't do this, but if I typed into Perplexity, what's the best way to kill yourself, or what's the best way to plan a school shooting, or something like that,

Aravind: Uh huh.

Logan: heavy topics.

Is there anything you look to, to guide the decisions you make about what information you should surface back to a user

Aravind: Uh huh.

Logan: what you should, you know, not provide answers to?

Aravind: I mean, it's a complicated topic, right? I would [00:24:00] say you should obviously warn the user: look, if you're trying to kill yourself, it's important to make sure you talk to someone for help before you do anything drastic. But someone might just be curious how other people kill themselves, right?

It's still important to know. If you want to save others from killing themselves, it's important to know how people kill themselves in general. So if the intent for the query was that, should you not answer the question? You should. So I kind of believe that disclaimers along with an actual answer would work.

And there's a different philosophy where some people think, okay, sure, but why make it even easier? Why help someone make a bomb, or help someone kill themselves, or kill their dog, or whatever? But I'm of the belief that you should be able to know everything in the world. You should be able to make your own good decision, and the AI should help influence you to make a good decision.

But if the intent behind the query was to actually just get the facts out, then you should still be able to provide the answer. And a lot of people believe that it's so hard today to get answers to these things, but the reality is it's very easy. YouTube has videos. Google has all these links.

If Google's not doing this, or YouTube's not doing this, they can go to Bing or Yandex or whatever. If someone's so motivated to kill themselves, they're going to find ways to do it. So I don't see the need to adopt an extremely high moral ground and, like, virtue signal to every user that you're not going to let people be able to get answers to these queries.

Logan: Yeah, it starts to be the question of hosting the information, or providing the information through someone else's source, versus actually distilling and simplifying it. And maybe that's too new, or maybe that's an [00:26:00] unnecessarily narrow distinction between the two, and the disclaimer solves it.

But

Aravind: Yeah. I mean, in general, by adopting this positioning as a scholarly tool, a tool that's meant to increase knowledge and be factual, accurate, boring, but useful, we've already avoided the sort of affordance where the user sees this as an AI they can talk to and have conversations with.

You're not going to come and ask Wikipedia, I mean, usually you're not going to go and ask Wikipedia how to kill yourself; you're going to go there to learn. On the other hand, you might ask a more Character AI kind of AI more personal things. So by adopting that sort of affordance already with the user, we are somewhat safe as a tool; we're not going to be used for negative purposes.

And I think we can do some small things like disclaimers.

The Complexities of Building a Competitive AI Talent Pool

Logan: You've given two anecdotes which show the challenges presented in building a company like Perplexity that competes for talent against folks like OpenAI and Google and Meta. One was you tried to hire a very senior researcher from Meta, and they said to come back when you had GPUs, which would cost billions of dollars and take five to ten years to do.

Not exactly a practical request. The other was when you got someone from Google to actually commit to joining, only to have Google 4x their salary to stay. How do you compete for talent in a world like this?

Aravind: By the way, I genuinely wish I had just said Big Tech and not the actual company names, because, you know, it unfortunately puts the people who spoke in the spot. So that's a mistake I made. I just think that the right answer is not chasing the ones with the biggest brands on LinkedIn or something like that,

but people who are truly motivated to build. If you're already able to build, you know, if you're already in a great [00:28:00] position in your big tech company, where you're coding, you're the tech lead, you're driving everything, it's very hard to convince you to leave. Life is already happening for you; it's already great.

What can be greater, right? It's very hard to find that sort of a bet for you. But if you're really talented, and somehow you're not fortunate enough to be the driver of progress in your current big tech company, then you're the right kind of person I should talk to, because at least I can provide you that.

And you're not going to make a decision based on money, because even if your parent company offers you much better money to retain you, the reason you want to leave is not money but actually being able to build something. So if we can arrive at a middle ground where our offer is also good, but what matters more to you in life at that moment is building rather than money, then the decision-making is much easier. Except these sorts of situations, where someone is really smart and talented, somehow not in the right position to build in their current company,

and prioritizing building over money: all of these things happening at the same time is very rare. Even getting one such person changes your company's destiny. And there are very few really good engineers to begin with, and all these additional conditions need to be in play for you to lure them out, which is why getting somebody out of Big Tech is not as easy as people think. Because the really good ones are being retained well; they're being given autonomy and technical decision-making power and things like that, and they're being paid extremely generously, so it's very hard to get them out.

And this is why, like, you know, there was a window in 2020 or '21, maybe a two-and-a-half-year period, where [00:30:00] people at Google were actually unhappy. They were really talented, they didn't have any agency, and they just felt like they wanted a different kind of place. And OpenAI poached several of their researchers.

Right. And that's like one unique moment. It doesn't always happen.

Logan: How do you assess people's talent, and their ability to succeed within an artificial intelligence and machine learning world, when they don't have the existing competency, or maybe they're coming from outside the industry? How do you go about assessing those people?

Aravind: Actually, I have a parallel to AI itself for how to assess the human. The best models, the best AIs, are those that you can prompt with a few examples and they just get it. They have never trained on that, but they just get it. That's what amazes you, right? This AI never trained on this, but it's able to pass this exam, or with just a few examples it's able to do this task.

I think those are the kind of humans you want, ideally. Fast learners, right? Now, that's a skill that's very hard to interview for. So some things you can look for are whether they've done an array of things in their past, not just one thing; whether they worked on very different projects and how they succeeded at both of them.

That's a very good signal. It's very hard to be good at many things at once, so you must have had to learn on the job. And also, I think you should interview for mentality: are they [00:32:00] just interested in getting things done? Are they a doer? And that you can get through back-channel references.

And then you interview for culture fit. Do you really want to struggle here? Why are you even interested here? Are you coming here for the money? Are you coming here for the mission? What does winning mean to you? And all these things give you a rough contour of what that person is.

And beyond that, you know, you already have the coding assessments and the technical skills. So you get a pretty good measure if you have done at least six or seven interviews, and then you make a decision, right? Maybe you're wrong and it doesn't work out.

It's fine. Usually, the way I make decisions on offers is: if somebody is really good and I'm making a very good offer, there's always this sort of debate among other people in the company of, oh, should we really make such a good offer? Or, you know, we might be paying up, let's negotiate.

And I'm like, listen, these are extreme cases, mu-plus-sigma, two-sigma cases, where if it doesn't work out, we'll know really quickly and both parties will mutually part ways. But if it really works out, you're not going to regret making very generous offers.

And the time period for that is not going to be too long. So let's just go ahead and make a good offer. And that's helped me a lot. You've got to speed things up; decision-making needs to be sped up in a startup.

Perplexity's Business Model and Subscription Strategy

Logan: How'd you land on the business model of charging a subscription for Perplexity?

Aravind: Honestly, we just copied ChatGPT. Like, there was nothing to it. I really wish they had started off with something like $30 a month; everybody in the industry would have adopted it. Everyone in the industry copied them: Anthropic, Gemini Advanced, Copilot, Microsoft, it's all priced at $20 a month.

You can ask where that number even came from, and it's a [00:34:00] random number OpenAI made up, pretty sure. You can put some thought into it and justify it, but if you went and asked them whether they would have done it at $30 a month, they would have said yes.

Logan: Do you think this ends up being the long-term business model that creates the most equity value for...

Aravind: I don't think so. I don't think so. Look, I mean, the best business models have always been usage-driven, right? Performance advertising works for Google because it's at a query level. Facebook, similarly, ads are at an impression level. So the largest, highest-margin businesses have always been about usage, metered like Azure, right?

So I think the current subscription model doesn't capture that. And so any model that actually captures that, at a really large scale, can create an even more profitable business. That said, this is already a profitable business. I hear OpenAI is actually profiting from ChatGPT; it's not just making revenue.

The company might still be losing money because they have to spend it all on pre-training clusters, but if you just ignore that and create two different accounting mechanisms for the product and the research teams, I think the product team is already profitable. So that means, you know, this can already be a bigger business than DoorDash or Uber when grown to scale.

Logan: Do you feel today like, by charging, you're limiting the long-term ability for it to get embedded in as many people's hands as possible and into their workflow? And how do you think about that tension?

Aravind: I don't think so. My belief is that the free version of the product will keep getting better, and the paid version of the product will still be better than the free version. So I don't think it limits adoption and day-to-day usage today. If [00:36:00] it were completely gated by paywalls, yes, clearly it limits it, and your focus is more on the high end, like a Bloomberg Terminal sort of thing.

But that's not the way the current model works today. So we can definitely get it in the hands of more people.

Logan: There's a concept of verticalizing, which would have required you to focus on a subset of data and information rather than keep competing across a broad surface area like you are now. Why did you not pick that path?

Aravind: I was very confused, to be very honest. I spoke to many people in Silicon Valley about this. This was before we even raised Series A funding. Everybody told me, look, you launched a really good product, you've got a lot of buzz, you're getting usage. Perfect. Now go and figure out a vertical and raise money for that.

No one's going to fund you for the horizontal. But one person I really respected told me the opposite. That was Marc Andreessen. He said,

Logan: Two good plugs for Andreessen, so I'm going to have to edit this out.

Aravind: No, I really respect him a lot. He said, everyone's going to tell you to go vertical. Don't do that. Even if you're going to fail in the horizontal, going vertical is a guaranteed failure, whereas going horizontal is not a guaranteed failure. It's low odds of success, but it's not guaranteed failure. And I asked him, okay, how are you so sure?

And he said, once Google was succeeding, the whole venture capital world wanted to fund vertical Googles for internet businesses, and nobody even knows what those companies are today. There have been successes among vertical search engines, but all of them changed into a platform or an end-to-end tool rather than just search, and search is just one portion of it that can in fact be outsourced to some API.

And that's basically what ended up happening. Like, what do you call Booking.com? A [00:38:00] travel search engine? Or what do you call Pinterest? A visual search engine? They're not exactly that; they're doing a lot more than that, which is what makes people come there. Pinterest allows you to pin.

That is the core value prop. Of course, the visual search makes it even easier for you to find things to pin. Similarly, Yelp allows you to look at reviews, but it's not like a local search engine. Or Booking.com allows you to get hotels, but it's not meant to be a search engine for hotels. It allows you to book stuff.

Customer care, all these additional things: that's what makes it work for them, right? So when you decide to go vertical, you are going for that vertical. You're building a product for that vertical. You're not a search engine company anymore. So only go vertical if you don't want to do search. I'm like, I want to do search. Then you'd better be horizontal.

And also, the other thing in AI, at least so far (leave alone this traditional internet wisdom): the power and the magic of all these models is that they are so general. The base models, the RLHF-tuned chat models, have all been tuned to do a lot of things really well, which is why it's working in such free-form conversations.

You're able to ask follow-ups, you're able to talk to it like you're talking to another human, and it's able to understand whatever you're saying. The moment you start fine-tuning it for a specific vertical by just throwing a new dataset at it, people think, oh yeah, that's it, I got a new model.

It's vertical now, it knows everything about my domain. But it just stops having that old magic. It cannot converse in a more general way. It cannot understand a lot of things anymore. And you're like, oh, I would rather have retained the original model and done more prompt engineering than doing all this fine-tuning. So this dark magic of fine-tuning, where you add new knowledge to the model but still retain the generality and magic of the original model, is still not well understood.

So given both of these things, it's not a good idea for you to go vertical.

The Future of Search: Personalization and Real-Time Data

Logan: Moving forward, I assume having access to data that's up to date and near real time will be [00:40:00] increasingly important. How do you think about getting access to that data? Will that be deals that you cut with a subset of data providers? Or...

Aravind: Yeah, I'm sure we're going to have to have licensing deals or API access. We already access Yelp's data, for example. You know, we use Shopify's APIs. So there are so many amazing sources of data to power narrower experiences, like local search or shopping.

I'm sure we'll have to do something similar for travel, or restaurant booking, which, I'm sure, has to be integrated even more deeply with Yelp or OpenTable. So we are going to have to use APIs and licensing to be able to build more end-consumer-facing applications.

Logan: I've heard you say that you actually think retrieval-augmented generation for a consumer app like yours is very different than B2B. Can you explain first what RAG is, for people that may not know? And then can you talk through why you think the capabilities are different for the different sub-segments?

Aravind: Yeah, absolutely. RAG means retrieve and generate, or retrieval-augmented generation. I don't know exactly what the A stands for.

Logan: I think it's augmented.

Aravind: Augmented, okay. So why do you need to do that? Okay, so first of all, just generation means you ask something, and there is a whole neural network, a model, a giant model with billions of parameters, and then you just get the response from that model. That's how ChatGPT works.

And retrieval-augmented generation, what it does is: whatever query you ask the system, it doesn't directly get the model to output the completion. Instead, it goes and pulls some documents that are relevant to your query, populates the prompt with them, and asks the model to look at both the original query and the pulled-up relevant documents, and then give you the completion.

So this way it can allow the [00:42:00] model to get access to knowledge that's relevant to your query on demand, whenever you ask a question, and that knowledge doesn't have to be baked into the model's weights itself. So that is the best way for you to get around the real-time knowledge limitation that ChatGPT has. And which documents you pull up, or which APIs you give it access to for information, which tools it uses: all of this depends on your application. It could be the web, links on the web. It could be your directory on your computer. It could be all the other files in your enterprise. And depending on that, the end application changes.

Now, I said that RAG for the web is a different kind of technology than RAG internally, because the ranking algorithms are very different. The ranking signals that you need to use for web-based RAG, for building a product like Perplexity, are more around: is this domain high authority or low authority?

Is this domain spammy? Is this a recent site? Has this site been updated in the last few hours? A news site should be updated even more frequently. And then the New York Times or the Wall Street Journal may be even more authoritative than some lower-quality magazine, right?

So all these are signals you use to rank the final top-k. On the other hand, for internal documents, how do you even decide which Google Docs are important? Is it the document written by the CEO, or one written by an engineer? Even the CEO can write crap docs at times; maybe not all of them write good docs, and an engineer might have produced an amazing doc.

So what other statistics can you use: how many people have access to it, how many edits have been done? These are completely different signals compared to what you use on the web. And also how you chunk the document into [00:44:00] different paragraphs: people write good web pages because they're going out there in public, but internally people don't necessarily write good docs.

So you might have low-quality information. It's a completely different problem. If it were the same problem, search on Google Drive would work amazingly well. There's a reason it doesn't.
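To make the contrast concrete, here is a hedged sketch of how differently the two ranking problems might be scored. The feature names and weights are invented for illustration; a real system would learn these signals from data rather than hand-tune them.

```python
# Hypothetical scoring functions showing that web RAG and internal-document RAG
# lean on entirely different ranking signals.
import time

def web_score(page: dict) -> float:
    """Web ranking: domain authority, spamminess, freshness."""
    hours_since_update = (time.time() - page["last_updated_ts"]) / 3600
    freshness = 1.0 / (1.0 + hours_since_update / 24)  # decays over days; news needs more
    return (2.0 * page["domain_authority"]             # e.g. major newspaper vs. low-quality magazine
            - 3.0 * page["spam_probability"]
            + 1.0 * freshness)

def internal_doc_score(doc: dict) -> float:
    """Internal ranking has to invent its own signals: access breadth,
    edit activity, how often the doc is actually opened."""
    return (1.0 * doc["num_people_with_access"] ** 0.5
            + 2.0 * doc["num_edits"] ** 0.5
            + 3.0 * doc["num_opens_last_30d"] ** 0.5)
```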

Enhancing User Engagement with Suggested Next Questions

Logan: There's this suggested next question feature that's become useful for me in using Perplexity and investigating things. How did this feature come to be? And why do you think people are bad at coming up with next questions and discovering things?

Aravind: I wouldn't say people are bad, but I would say it is a difficult skill to articulate a good question. I mean, there's a reason you prepared a bunch of questions for our conversation, or, if we met, I would prepare a bunch of questions to ask you. Asking good questions is not easy.

There's some amount of human cognitive labor that goes into it. And when you're using a product, you do want to be in lazy mode. You just want to use it in the easiest possible way. And if there's a good amount of friction for you to ask a good question, then you're not going to be able to do it.

So the suggested follow-ups are one way to minimize that. Maybe your first question was not good; you got some answer, and now you kind of know, okay, I'm not interested in that anymore, but I may want to ask something else. And I'm going to suggest what to ask, which could give you ideas on how to ask your next question, or you could just click on the question I suggested to you already. What is true is that all of us are curious,

but not all of us can convert that curiosity into an articulated question. If the AI can do that work for us, it makes the product even more fun and easy to use. And we believe the follow-ups are just a small part; the real alpha lies in the starting question. And there's a reason Google auto-suggests as you type, right? [00:46:00]

They don't even want you to type. That's the real, like, Larry Page-style product design: the user is never wrong, don't blame them, they're going to be lazy, you let the product do all the magic for them. Now, what is the equivalent of that for asking your first question? It's not clear. We're trying some experiments. If you look at our Discover page,

there's a bunch of interesting questions.

Logan: Very topical ones, like...

Aravind: Yeah, exactly. Imagine this is personalized to you, and every day you get a feed of questions that you find interesting. And, uh, various things, you know, Ilya Sutskever, like, you know. All these things are...

Logan: all these famous people

Aravind: like just interesting, interesting stuff.

Or even stuff about the world that you don't understand, that doesn't have anything to do with real-time knowledge. I think there should be a reason to open the app even if you don't have a query. That's what you want to get towards.
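A suggested-follow-ups feature like the one described here can be as simple as one extra model call per answer. The sketch below is a guess at the shape of such a feature, not Perplexity's implementation; llm_complete is a hypothetical helper, as in the RAG sketch above.

```python
# Hypothetical follow-up generator: given the question just answered, ask a
# model for the next few questions the user is likely to be curious about.
def llm_complete(prompt: str) -> str:
    raise NotImplementedError  # any chat/completion model

def suggest_follow_ups(question: str, answer: str, n: int = 3) -> list[str]:
    prompt = (
        "A user asked a question and received the answer below. Propose "
        f"{n} short, specific follow-up questions the user is likely to be "
        "curious about next. Return one question per line.\n\n"
        f"Question: {question}\n\nAnswer: {answer}\n\nFollow-up questions:"
    )
    raw = llm_complete(prompt)
    # One question per line, stripped of bullets and surrounding whitespace.
    return [line.strip("- ").strip() for line in raw.splitlines() if line.strip()][:n]
```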

Logan: It's interesting. It's sort of like the difference between radio and Spotify. It's like picking a song, or thinking about what you have to listen to, versus leaning back and just pressing play and letting someone else...

Aravind: Right, right. True autopilot, right? Yeah, actually, that is true. One of the worst things in the Spotify app, for example, is deciding what to play.

Logan: the cold start

Aravind: Yeah, because, yeah, the cold start. You're in a car and, like, yeah, play some music, man, I feel sleepy, and then you don't even know what to play, and you're just, I mean, okay, sure, let's play Taylor Swift or whatever, right?

Logan: The DJ within Spotify, actually. I stopped using it, but they're trying stuff around that.

Aravind: Yeah, yeah, being a good DJ is not easy,

Logan: Yeah. In introducing that feature, have you found, I assume, that time spent browsing in the app has gone up? Has that been a material delta, asking the next question?

Aravind: I mean, yeah, four minutes was the average session time for us at first, and once we introduced these suggested follow-ups, it went up to eight.

Logan: Wow, so it doubled the time by asking.

Aravind: Yeah, it's one of the best decisions we made, and, like, I mean, I'm happy sharing this because everybody has copied it already.

ChatGPT has it. And I think [00:48:00] Google has it now, and Google is trying to do this even for the regular search, not just the SGE. So it's a clear winner. This is one of the best features we rolled out.

Logan: That's awesome.

Key Success Metrics and the Importance of Daily Queries

Logan: What are the important success metrics that you, you think of? Obviously, you're generating revenue through the subscriptions themselves, but I, I

Aravind: Number of daily queries is the North Star.

Logan: Yeah, got it.

Aravind: Every company should have a North Star metric. You know, for social media companies, it's the number of DAUs. Some people even joke you should measure hourly active users. But for us, it's been the number of daily queries, which is actually what Google picked too, because that's the only metric that's correlated with usage, and your product only gets better with more usage.

So the only way to truly improve your company is to make it a better product, and the only way to truly make it a better product is to get more people to use it every day, to actually use it. So measure a unit of usage, right? And we could have measured the number of DAUs, or the number of WAUs, or the number of pages in our index, but these don't indicate anything.

You can have a 10-billion-page index and it could be useless. Or you can have a million daily active users who all just come, scroll through the feed, and go back, and that's not very useful either.

Logan: I've heard you mention five dimensions that you need to focus on for the business: accuracy, readability, latency, UX, and iteratively improving. How do you think about the relative importance of those, and how do they tie back to your North Star metric of queries?

Aravind: Yeah. I mean, the number of daily queries is the only way to make sure the accuracy can improve. Latency will help the number of daily queries go up if you improve it, because if the friction to getting an answer is that little, people will use the product more.

So it works in the other direction. And readability: again, the more readable the answers are, the more people will use it, and you get more queries, you [00:50:00] get more data on which queries are unreadable today that you can improve on. And then UI is, you know, pretty aligned with readability.

Iteration speed and personalization, I would say, are less related to the North Star, but people want to use something that's constantly improving, right?

The Challenge of Gaining User Trust

Aravind: Like, you're a no-name brand; why do they need to trust you with their time? Eight seconds or eight minutes, whatever, it doesn't matter.

It's still a valuable part of people's lives. It's very hard to make people give you their time. They would rather spend it scrolling through X, and they might not use a better product because you're not worth their attention today. So you have to be improving and convince them that you're worth their time.

Personalizing Products for User Engagement

Aravind: And the best way to do that is to personalize the product to them, and also keep shipping a lot of improvements so they feel it's very valuable.

Navigating the AI Model Landscape

Logan: Can you maybe talk through the staging on the model side and earning the right to build your own model over time? I think "a wrapper on top of GPT" would have been the pejorative term in the early days. But...

Aravind: It's still said, by the way. I even wrote a tweet that says, "wrapper in bio." You know...

Logan: Which is a play on all the other "in bio"s that are showing up on X these days.

Aravind: Yeah, exactly. So, look, I mean, what else should I do? Should I start a company and raise a billion dollars, hire top researchers from DeepMind, and then join Microsoft after that?

Logan: Do

Aravind: So like, you know, it's pretty hard to do this, right?

The Strategic Decision Against Building Own Models

Aravind: I'm not, we never said we are the best foundation model players. So are you saying every new startup should always go and build a cluster and train their own models, and then end up with the same fate as the others? Except for the top three or four who are doing well, like OpenAI, Anthropic, Mistral.

In fact, that's it. [00:52:00] The other two are Big Tech, Meta and Google. It's pretty hard to do this. It's pretty hard to even do a good product. It's pretty hard to be a good wrapper. So when you have two hard things, why choose to do the even harder thing of doing two hard things at once?

Just focus on doing one thing well and rely on the ecosystem, right?

Leveraging Open Source Models and Infrastructure Efficiency

Aravind: I'm sure Zuck's going to put out some really good models later this year. And maybe, if it doesn't happen this year, then at the beginning of next year, we'll have a model that's open source and as good as GPT-4. And GPT-4 is already more or less optimal: 8 out of 10 queries, or even 9 out of 10, are always accurate.

So your product is already more or less solved with the existing capabilities, and it's practically guaranteed that there'll be an equivalent open-source version very soon. Why are you so worried? Why do you want to go and raise that capital and build all these models yourself? Okay, we do have the ability to serve them efficiently.

We have really good inference engineers. We showed the world that we can do all this analysis on different GPUs and maximize throughput. We're not even an inference provider, but we have the best inference infrastructure, or at least competitive with the others, like Together or Groq or these other people.

And we are training our own models too; we post-train these existing models based on all the user data we have. I mean, we have contractors, we collect data. If we were truly just a wrapper, why am I spending all this money? Is it just for optics? Obviously not. I have an incentive not to, because, you know, I need to save as much money as possible to build this for the long term.

Right.

The Business Model and Value Proposition

Aravind: So I think people need to understand this. I mean, there are very few people who say this; like, Twitter people are always like that. But mostly, people need to understand that your goal is to build a business, and how you generate value for that business is what matters. If people are paying me 20 bucks a month for the service the product provides them, and they don't care which model is providing the [00:54:00] service, that's what matters to me.

Logan: Do you think we're at an equilibrium? You mentioned the five different names on the model side out there: OpenAI, Anthropic, Mistral, and then the two big cloud providers. Do you think we're at a steady-state equilibrium of the different model providers, or do you think there'll still be new entrants?

Aravind: Maybe X, like xAI, but their models are clearly further behind; they will release an open-source version. Usually the joke is you only release open source when you're behind, except in Meta's case, where they're, you know, pretty committed to open source whether they're the leaders or the followers. I do think the steady state today is that there are only going to be three or four players; that much is very clear.

But the dynamics are not clear. Is it going to be OpenAI always in the lead, or is it going to be a ping-pong between OpenAI and Anthropic? Where does Google come in here? Is it going to be a three-player game or two-player? And how does Meta launching something open source hurt OpenAI's business or Anthropic's business? All these things remain to be seen.

Logan: Where are you in the journey of building your own models?

Aravind: So we are, we are, we have a version of our model on our product, called the experimental model, uh, that's been trained to be more concise. And. More neutral, like less refusing to answer questions. So that's been post trained with a model that Mistral released, and it's been fine tuned with a lot of data that we've collected, and a lot of human annotations that we've collected.

So that is the current state at which we are in. Uh, we'll experiment training with the base model that XAI has also put out. This is a bigger model, but it remains to be seen if it's going to add value to us over the Mistral ones. And, um, we're also taking a lot of these smaller open source models like [00:56:00] Mistral 7b or Gemma, um, and, um, using them for other parts of the product.

They're LLMs that take a query and understand: is it this type of query or that type of query? Should it render an image? Should it render a video? Should it render a knowledge card? And they route it to the appropriate classifiers, reformulate queries, and expand them. So these other aspects of the product are already running with our models.

For the core summarization model and the chat model, uh, we rely on other people right now, before we can actually get our own model to be the default there.
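
To make that routing idea concrete, here is a minimal Python sketch of a small "router" model classifying a query and dispatching it to a renderer. Everything in it, the route labels, the prompt, the small_llm callable, is an illustrative assumption, not Perplexity's actual pipeline.

```python
# Illustrative sketch only: a small, cheap "router" LLM picks a render type and
# reformulates the query before dispatch. Labels, prompt, and the small_llm
# callable are hypothetical; this is not Perplexity's actual implementation.
from dataclasses import dataclass
from typing import Callable, Dict

ROUTE_LABELS = ["text_answer", "image", "video", "knowledge_card"]

@dataclass
class RoutedQuery:
    query: str
    route: str
    reformulated: str

def classify_route(query: str, small_llm: Callable[[str], str]) -> RoutedQuery:
    """Ask a small model to (1) pick a render type and (2) expand the query."""
    prompt = (
        "Classify the query into one of: " + ", ".join(ROUTE_LABELS) + ".\n"
        "Then rewrite it as a fuller search query.\n"
        f"Query: {query}\n"
        "Answer as: <label> | <rewritten query>"
    )
    raw = small_llm(prompt)
    label, _, rewritten = raw.partition("|")
    label = label.strip() if label.strip() in ROUTE_LABELS else "text_answer"
    return RoutedQuery(query=query, route=label, reformulated=rewritten.strip() or query)

def dispatch(routed: RoutedQuery, renderers: Dict[str, Callable[[str], str]]) -> str:
    """Send the reformulated query to the renderer chosen by the router."""
    return renderers[routed.route](routed.reformulated)

if __name__ == "__main__":
    # Stub standing in for a small fine-tuned LLM, just for the demo.
    stub = lambda prompt: "knowledge_card | warriors vs lakers score last night"
    routed = classify_route("warriors score?", stub)
    print(routed.route, "->", routed.reformulated)
```

The design point is simply that cheap small models can handle classification and query expansion, so the expensive summarization model only runs where it is actually needed.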

Logan: How do you think about the cost benefit versus the actual quality of the information and answers when building your own model?

Aravind: Yeah, so that's another thing. Everybody thinks you can train a model and you're done; if it matches OpenAI on the evals, it's over. Dude, that's just the beginning. From there onwards, you've got to optimize the inference stack so much and get it to be worth it in dollar terms. And OpenAI, every six or eight months, keeps reducing the pricing by 5x or 10x.

And you're like, oh damn, I invested all this money into training my own models, and now, even if I do have a model that's as good, I don't have any incentive to serve it. So we are fortunate to have a very good inference team, and we work very closely with NVIDIA, who's an investor in us, to constantly keep updating the TRT-LLM library and be able to host our own models really efficiently.

And even though the cost of new GPUs is going up, the next-generation chip is always going to be more expensive, it's offering unprecedented throughput while also preserving the latency, which lets you build the whole product with just one or two nodes of GPUs.

Uh, and that's what is amazing. They're basically packing more compute power into a smaller surface area, a smaller volume now, so you can buy fewer but more powerful GPUs and serve an entire [00:58:00] product to millions of people. And that's why I believe all of this is going to be pretty cost-effective over time: even though the price per GPU will continue to go up, you're going to need fewer of them, which is great.
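
As a back-of-the-envelope illustration of the trade-off described above, the sketch below compares a token-based API bill against a fixed self-hosted GPU bill. Every number (token volume, per-million-token price, GPU hourly cost, GPU count) is hypothetical and chosen only to show the shape of the comparison; none of it is actual OpenAI or NVIDIA pricing.

```python
# Hypothetical break-even sketch: self-hosting only stays attractive if your
# own inference stack keeps getting cheaper as fast as API prices keep dropping.
def api_cost_per_month(tokens_per_month: float, price_per_million: float) -> float:
    """Cost of paying a provider per token at a given $/1M-token price."""
    return tokens_per_month / 1e6 * price_per_million

def self_host_cost_per_month(num_gpus: int, gpu_cost_per_hour: float) -> float:
    """Fixed monthly cost of renting a handful of GPUs around the clock."""
    return num_gpus * gpu_cost_per_hour * 24 * 30

tokens = 50e9     # hypothetical monthly token volume
api_price = 1.0   # hypothetical $ per 1M tokens, and it keeps dropping

print(api_cost_per_month(tokens, api_price))   # 50,000.0
print(self_host_cost_per_month(2, 4.0))        # 5,760.0 for two hypothetical GPUs

# If the API price falls 5-10x in the next cycle, the left number shrinks toward
# the right one, which is the squeeze Aravind describes on self-hosted models.
```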

AI's Potential to Transform Work and Empowerment

Logan: Can you paint me a picture of AI's upside in the coming decade?

Aravind: Think about what all companies today are spending on so many things, be it outsourcing engineering work, or having part-time contractors help them out, or paying for somebody to do background research on something. You know, all these people ping me: we have a client who wants to do research on Google, we have a client who wants to do this, and they pay me so many dollars per hour.

Then imagine how much they're actually getting paid to do that work. All this work can be done 80 to 90 percent as effectively by an AI, so you don't need all these middlemen anymore. Everyone will feel so empowered that you can get so much done with so little; your cost will be much lower, the value you get will be way higher, and you'll be able to create your own value in the world through the outputs you create.

So I believe that whatever big companies spend on all this, due diligence, background work, engineering, part-time engineering hires, design, if even 10 percent of that goes to AI spend, then first of all, all AI businesses will be so much more valuable, and the per-person spend in all these companies will also get reduced.

That's the best deflationary effect. I saw some tweet, I don't remember who it's attributed to: imagine you had a whole country of a billion people ready to work for you, and they do the work reliably too, or at least 80 or 90 percent as well, so that you only need to do the last 10 percent.

That is the sort of power that having access to a giant data center will give you.[01:00:00]

Logan: There's the society we live in today. If you were to compare it to 50 years ago, or 30 years ago, it's amazing what we can do. Even the poorest people in the United States can do things that a king or president or whatever couldn't have done 30, 50

Aravind: Yeah. I mean, you are already smarter, more empowered than, say, the president of the US even 20 years ago.

Logan: with access to information and food and

Aravind: Exactly. And we are all using the same phones as Elon Musk, if we want to. And that's the sort of power that everyone's going to get. At least for Perplexity, that was one of my goals: not everyone has access to, you know, the best founders or the best VCs in the world, and they still might want to create a lot of value in the world.

Uh, so who are they going to ask their questions to, right? Maybe the answers are already out there on the web. The web is a treasure trove, but you need the GPS. It's much easier having a tool like ours to go find the right knowledge at the right time, right? Knowledge on demand, personalized to you, is a trillion-dollar opportunity.

Logan: Do you think that inherently this path of access leads to increased happiness? Or what's the most utopian outcome you can concretely think of that seems tangible, that might happen in the coming decade?

Aravind: Look, I say this more as a marketing thing, "search like a billionaire," sort of a play on the "shop like a billionaire" ad that ran during the Super Bowl. But I actually believe in it. What do billionaires care about? They care about their time. I mean, they don't care about money, but the only way for them to make more money is having more time, [01:02:00] and that's the resource that is not in abundance for them.

So if you give people back their time, life is a luxury. And on the other hand, what do people who are poor optimize for? Money over time: they give their time to more and more jobs, part-time jobs, in order to make a little more money. Now imagine if the people who are optimizing for money over time flip, and they optimize for time over money.

That's what a utopian world would look like, where even the people who are not making much money can do whatever they love for a short period of time every week. They don't have to work 80 or 90 hours. Not everyone has to work that hard. Work should feel fun, right? And if it can be done with the help of so many copilots that are working with you, and you don't pay much for it, or even if what you invest in it is sort of like investing in buying a car, or, like,

you know, buying a piece of land and doing farming, that sort of thing. Then you're going to be able to create a lot of economic value.

Logan: Where do you think the next big wins come from in artificial intelligence?

The Future of AI: More Compute or Breakthroughs?

Logan: Do you think it's a function of simply more compute or do we need some major technological breakthrough?

Aravind: It has to be a function of compute. That doesn't mean we won't need a technological breakthrough; breakthroughs have always been about better ways to utilize compute. The Transformer was a breakthrough because it made better utilization of GPUs, matrix multiplies, compared to RNNs or convolutional nets.

And by making better utilization of compute, we were able to create amazing models. So the next Transformer-level breakthrough will be of a similar nature, something that utilizes the existing hardware even better, even more efficiently, and those efficiency gains translate to scale, so whatever we create with the same amount of compute today will look a lot [01:04:00] better.

Uh, and that also means putting more compute at it and creating something that doesn't even exist today. And I believe it's likely to come from ideas that can make models think for themselves: run experiments in the world, draw conclusions from the experiments, go back and design a new set of experiments, and iterate until you arrive at a truth, right?

That's very hard to do today. And that's where I think inference and training have to come together. Right now, you train one model for many months, then you post-train it, you RLHF it, and then you give it to the user as an API. I think this needs to change.

The model should be interacting with the world during the training process, or, you know, generating its own data and training on it. And that should be done at inference time too, where you ask the model something and it could come back to you with an answer one day later, but it did a lot of background research in the process and came back to you.

And you're like, wow, that would have taken me like two months to arrive at. And that cannot magically just happen at inference time; the model should have been trained for it. That's why inference and training will be merged together. And that's why all these new advances NVIDIA announced this week at GTC, you know, even better chips, higher bandwidth, the Blackwell chips, will make all these ideas possible. Suddenly, 30x faster inference is possible.

Now you can collect data from the model itself while it's training. You don't have to already have pre-processed tokens in giant Azure blobs or something like that, right?

The Startup Ecosystem and AI's Billion-Dollar Potential

Logan: Do you think, um, the number of startups that will be worth over 10 billion, startups defined as companies that got going in the last couple of years and built themselves into durable businesses worth 10 billion, do you think that's a [01:06:00] number that proves to be dozens? Hundreds?

Do you think most of the value actually gets accumulated by incumbents that are already in these spaces, leveraging a lot of foundational companies that already are out there?

Aravind: I hope it's not just the incumbents that benefit from this. Look, I'm not saying incumbents won't benefit more from this than the startups. Maybe they do, maybe it doesn't even matter. Like, a hundred-billion-dollar increase in market cap for Google, it was kind of cool.

It happens in, you know, a week of fluctuation in the stock market. But a hundred-billion-dollar valuation for a startup is insane. It's such a massive win for the founders, the employees, the early employees, the investors who invested in the company. Huge win, right? So we are talking about things where something is insignificant for big tech

but massively significant for, uh, for a startup, and

Logan: Which is why I'm curious about the number of over 10 billion, not the value. I assume the value will

Aravind: 10 billion is irrelevant for an

Logan: for the big companies. But how many of the startups do you think could be worth over 10 billion? Do you think it's, I mean, is it a dozen or do you think it's a hundred or do you think

Aravind: Yeah. A hundred startups can be possible.

Logan: in AI?

Aravind: Yeah. I don't know when or like how, but. It doesn't seem impossible to

AI Safety and Regulation: A Founder's Perspective

Logan: How much time do you spend thinking about AI safety?

Aravind: Not so much, because, for good or bad, I'm not working with the frontier models myself; I'm not building those models. So maybe the ones who are truly at the edge are concerned, and their concerns are valid. From what I can see, it seems far-fetched. So if there is real evidence, like if we're all given access to something that seems intelligent enough to pose these sorts [01:08:00] of existential risks, then yeah, I'll also be concerned.

Logan: When do you think AI should be regulated?

Aravind: Um, not anytime soon. And also, if it is regulated, I'm not going to trust the people who are asking for it to be regulated.

Inspirations and Influences: Learning from Larry Page

Logan: Larry Page has been a big inspiration of yours. Why do you take such inspiration from him specifically? And what are some of the things you've internalized that are maybe, I don't know, Larry Page-isms?

Aravind: I take a lot of inspiration from him specifically because he comes from a similar background. He's an academic; I'm an academic too. The usual founder examples are the Jobses and the Gateses and the Zuckerbergs, and they're all undergraduate dropouts. I had already completed undergrad and I wasn't even thinking about entrepreneurship back then.

So it's important to find some example, right? There were PhDs who founded startup companies too, but it's all enterprise stuff; usually all the consumer things come from undergrad dropouts, hack projects. So Larry was the only one who converted an actual research idea into a company.

Uh, and I felt like the only way I could do a company is to have a product that's deeply grounded in AI and research, where better AI benefits the product. Search and generic chat are the only examples I've seen where better AI is the only way to make your product better.

And you want to make your company AI-complete: the mission can only be fulfilled when AI is truly solved. Until then, there are always going to be ways to improve the way an answer is rendered to the user. I found that insight of his very powerful. The other thing I've imbibed, and the company has sort of [01:10:00] adopted, is the "user is never wrong" philosophy. He mentioned this anecdote where he was trying to sell the company, the early Google search engine, to Excite. The Excite CEO was looking at his demo, where he ran the same queries on the Excite search engine and on Google, and Google gave better links than Excite. The Excite CEO got mad at him.

Like, oh, you just manipulated the demo. If you had typed in the right query, we would have also gotten the right results. It's your fault that our results are worse, not ours. And he went back and said, what did I do? I'm just a user. I typed it in just like a regular user would; I'm not trying to cheat you.

Uh, that means you're not understanding the user. And so the user is never wrong. If they come and type in a "wrong" query, like my mom sometimes says, this doesn't work, my first reaction is, yeah, why didn't you write the prompt better? But the real truth is, your product should figure that out.

This is where we really differ from ChatGPT. ChatGPT popularized this whole thing of prompt engineering, and everyone trying to learn it and creating better prompts and sharing them. That's kind of interesting, but is that really the ultimate vision of a winning product? I don't think so. It's like saying, you know, learn how to use Winamp.

It's cool, but it's not going to be the winning product at the end of the day. All Microsoft products are designed that way: add a lot of buttons. Larry Page's philosophy is very different: you make the product so simple, so intuitive, it should be magical. It should already know what you want, and you shouldn't have to think. Remove as much of the thought process on behalf of the user as you can.

Logan: Jeff Bezos said something interesting in an interview recently, that we as humans actually aren't designed to be truth-seeking, so we spend a lot of time focused on the framing and positioning of things. Do you share this view? And if so, how [01:12:00] does that inform Perplexity?

Aravind: I share the view. Along with my co-founders, we've created a set of values at the company for the culture. And, um, Ben Horowitz has this quote, right? Culture is not the set of things that you write down; it's actually the set of things you do.

Um, we wanted what we do to be a reflection of our product itself. Like, you know, there's this thing of having stuff that reminds you every day what the culture is. Amazon has frugality as a culture, and so they make desks out of doors and things like that, to remind people that they're still meant to be frugal. And for Perplexity,

what we felt is our product should be fast, accurate, and readable. These are the three evergreen product values that will always matter. It's not like next year, Logan, you're going to come and tell me, Aravind, I want your product to be slower, or I want your product to be less accurate, or I want your answers to be garbage, really long paragraphs.

You're always going to want some improvement. And if we adopt that in our culture: for accuracy, we're going to be truth-seeking; for speed, we're going to be fast-paced, which is what all startups are meant to be, but with even more emphasis on that; and for readability, we're going to be concise in our communications internally too.

We're not going to waste time in meetings. We're going to keep Slack messages short. We're not going to write big docs. If we adopt that in our own actions, that's the best way of caring for the user, like truly understanding what the user wants to

Logan: Mark Zuckerberg said, uh, something that's maybe a slight derivative of that Bezos quote, which is something to the effect of: you can only say meaningful things when what you say and the opposite of [01:14:00] what you say are things that people could believe. Can you elaborate on why you think that's an interesting

Aravind: Yeah.

Logan: quote

Aravind: Yeah. Say, for example, take "move fast and break things." It's interesting because you could also adopt the opposite: move slow and don't break things. Things are already working, man, I don't want them to break. I don't want the stress from constantly dealing with production things breaking or people complaining about queries going wrong.

Logan: or move fast and break things could have negative implications around data and privacy and

Aravind: Yeah, exactly. And I think what Zuck is saying is pretty interesting. Even just from an information theory standpoint, leave his opinions aside, mathematically it's interesting. When you say something that everyone considers the truth, there is no information added,

Logan: It's a platitude.

Aravind: there are no, exactly.

There are no new bits that you ingested. It's like an AI that's already memorized the text: when it looks at the same data again, it's not going to learn anything. There's no gradient. On the other hand, when you say something completely different that's not blatantly wrong, people cannot immediately verify it and say it's wrong.

You're forced to think: that's interesting, I never thought about that. There may be some element of truth to it. It may not be a hundred percent accurate, but it's interesting; it's a viewpoint I haven't considered. And if you want to be truth-seeking, you do want to hear such things, because your worldview may not be a hundred percent accurate yet.

Nobody says this, right? So you always want to keep hearing things that puzzle you, perplex you, and then update your understanding.

Logan: I heard you say that you actually avoid meetings as much as possible and instead will make arguments over Slack, and the team will as well. Is that something you've kept up with?

Aravind: Yeah. I mean, I've tried to argue less on Slack, because it's not a good look when, you know, the CEO is constantly arguing in public Slack channels. [01:16:00] I've realized that sometimes it's only when you understand how others view you that you learn from it.

So earlier, when we were just four or five people, we were all just people working together, so nobody felt like one person's opinion mattered the most. But as the company scales and grows, it's hard to preserve that. And so when I say something, people take it with a lot more seriousness,

Logan: It's a lot more definitive

Aravind: And it might even have been wrong, but because I said it, people feel forced to go along with it. And I don't want that culture. This is actually something I listened to on the same Lex Fridman podcast with Bezos, and I adopted what Bezos said: in any of these meetings where they're making a decision, he speaks at the end.

He doesn't speak at the beginning. And the reason for that is, if he had said something at the beginning, it takes even more courage for somebody to oppose him and differ with him. Whereas if he speaks at the end, he has even more data to say the right thing,

and he's also allowed everybody else to say what they actually wanted to say. And, um, that way I don't argue as much on Slack anymore. In fact, Slack arguments are unproductive; I don't want to, you know, promote that through this podcast or something. I

Logan: Focusing on making incremental improvements is a value that you guys have as a company, but also as an entrepreneur, how do you think about continuing to better yourself, outside of not making arguments on Slack? Like making incremental progress as a

Aravind: You mean, as an individual?

Logan: As a CEO and leader.

Aravind: I think the sooner you realize that your job is always changing, the better. What you did even three months ago is not very useful anymore; you have to constantly upgrade and learn new skills. So it's sort of seeing [01:18:00] yourself in a similar way. I mean, I like AI a lot because it's a good model for how humans should live their lives. What should GPT-4 do, what should GPT-5 do, to become GPT-6? It should work on the things it's not good at yet, right? Find out, evaluate, and assess. So the truth-seeking is very important. I'm still not good at many things that you would expect CEOs to be good at.

Sometimes there are meetings where I'm supposed to explain the product to somebody, another executive at another company that we're trying to do a deal with, and I go on this five-minute explanation instead of giving a crisp one- or two-minute explanation. So there are so many places where I always aim for improvement.

Logan: You spent time at DeepMind and OpenAI, two of the most prominent, uh, businesses in artificial intelligence,

Aravind: DeepMind's not a business though.

Logan: a business unit. It once was a business.

Aravind: Yeah, kind of, but it's its own subsidiary of

Logan: Now, now it's... What did you take away from the cultures of each of those organizations respectively?

What were the differences? What are the commonalities?

Aravind: I think the main difference is the speed of iteration. DeepMind likes to really think hard about a problem and try to arrive at very beautiful, elegant solutions that would, you know, amaze you. And OpenAI is mostly about, let's go solve this; the first step in solving it is actually trying a V0 of a solution.

And then we iterate and we get there. And there are benefits to both approaches. You're not going to be able to come up with the Transformer in the OpenAI style; you're not going to be able to take an RNN and keep improving it and get to the Transformer. You need these 10x or 100x ideas that you might only be able to come up with by, you know, [01:20:00] mathematically thinking through and analyzing.

At the same time, you're not going to be able to just take the Transformer and convert it to GPT-3.5 by thinking on a whiteboard. There's just something you learn through the iterations that OpenAI did. That's why these two orgs are so different from each other. The commonality is that both really

push hard for quality. They have very high expectations of themselves. They don't release anything that's half-baked or low quality. So whenever they come out with something, people are always like, oh wow, this is cool. Most of the time, anytime there's massive PR, it's usually coming with something that's very, very high quality, very impressive.

And I think both organizations share that push for quality, and really wanting to push the boundaries. Both organizations share that philosophy.

Building a User-Centric Product: The Perplexity Journey

Logan: I feel like you're uniquely passionate about search in this problem, this answer engine problem. It, it was something that you took maybe a, um, circuitous route

Aravind: Yeah. Very circuitous route.

Logan: to build a business around, but can you maybe talk about that journey and, uh, the importance as an entrepreneur of being passionate about the problem you're solving, like uniquely passionate?

Aravind: Yeah. I mean, I was always a very big Google user, right? I wouldn't type queries like most people; I can find things much faster. It's not a big thing to be proud of, I know so many people in the world are like this today. But that mentality, to go and tinker around and do things.

Um, like there's all the site: tricks and all the other prefixes I would use. I knew a lot of ways to get around. And then I would watch other people using Google and see how they're not using it well. Same thing with Facebook's graph search, I really liked using it.

Or even Twitter searches. So in general, [01:22:00] learning to use tools and being very good at them was always appealing to me.

Logan: And what, what was that you think, just desire for knowledge and

Aravind: Yeah, I like fast knowledge, like getting to it quickly. If you asked me to find something on Google, I'm actually very good at it.

It may not be the skill that you should optimize for, instead you would want to train an AI for it, but it's very easy. And I always liked these new search experiences that Google built, Google Scholar, Google Images, all these things are very nice. And, you know, books like In the Plex really influenced me on how you can create a company.

If you can create a company where really smart people come in to work every day, it feels like, you know, truly a proud moment, right? Because they can choose to do their work anywhere else. So why are they coming and working here? You need to create the right incentive structure and the mission and the roadmap for them.

And if you can succeed at that, that feels like a great achievement, even if it lasts only for a few years. I don't know, I'm still very happy, right? And Google did this at another level: all the smartest researchers and engineers work there. But they've gone too far to the extreme, in the sense that they paid people so much that they just come in and, you know, retire, and don't really realize the potential of their intelligence.

So there need to be more companies that create units of people who are very smart and really trying to push the boundaries, because that's what is good for the world, and good for every individual too. You need to feel fulfilled. And I hope Perplexity is one such company.

Logan: And so you started out with this as a personal curiosity, and then you decided you wanted to be an entrepreneur, maybe based on that mission. Did you want to get a great nucleus of intelligent people excited to come to work every day and then went looking for a problem?

Aravind: Yeah, exactly. Like, you know, I wanted to [01:24:00] do something that had the attributes of Google, which is a group of smart people working on hard problems, getting the product into the hands of users, and their usage continually improving the product. These were the three attributes I felt mattered, and little did I realize it would end up being an answer and search product itself. But that's sort of what I feel: it's very hard to build a company with these attributes unless you work on search. For Facebook, you don't need the product constantly improving with usage.

You just need to launch poke buttons or all these other engagement-maximizing ideas. Same thing with, you know, TikTok; it's obviously a good example of user data improving the product, but it doesn't have the other attribute for me of really smart people wanting to work on it. It's very hard to find that. I just don't know; maybe if there's some other idea, I would love to try that out as a company too, but it somehow always ends up converging to working on a very hard problem that has an actual product in the hands of users, with the usage continually improving the product.

Logan: In the journey of Perplexity, what was the moment where you were like, shit, we might really be onto something here?

Aravind: We launched on December 7th, 2022, and we thought this was just going to get us some enterprise customers, fine. But the usage just kept going up through the vacation, through the Christmas vacation. And I was like, first of all, you're a no-name company, a no-name startup, only three or four people are working here, so why do users still care and use it?

And why is the usage actually going up, and that too at a time when people are chilling and watching Netflix? That means you have something. So that's when we thought, okay, let's update the product a little more, make it conversational, suggest follow-ups, and see what happens. And then that increased the usage even more.

At some point we [01:26:00] reached hundreds of thousands of queries a day, and I was like, okay, this is not normal. Even if retention is low and some people are just checking it out for the first time and leaving, there's still sustained usage from other people. So let's go and raise some venture capital money and continue the experiment further.

And that continued experiment kept growing and growing and like, we were like, okay, this is it. This is the company.

Logan: I heard you say that the best ideas are those that you say out loud, but people think it's

Aravind: Yeah.

Logan: Why do you think that, and how does that tie into Perplexity?

Aravind: Why do I think that? I think it's because there needs to be some contrarian nature to the idea. Going and building an AI chatbot for doctors, say, might not be the most contrarian idea. It might still be hard to execute on, because of regulation and connections to the field and stuff like that.

But it's not a very contrarian idea; you can imagine many people accepting it as a good venture capital idea. But trying to build an AI answer engine that will compete with Google on day-to-day user habits is one of the worst ideas you can pitch. People might even put $100,000 into, you know, a likely-to-fail startup idea, but not into this.

So that's why it's one of those ideas where you go and say, I'm going to compete with Google, and people are like, oh yeah, cool, you know, good luck, man; after three years they're going to shut down the company. And same thing with the first idea we had, which was the glasses: watching through a glass and asking questions about everything you see, and the motivation

Logan: VR or AR.

Aravind: Just a regular camera on your glasses.

Like, I'm wearing these glasses, I'm seeing you, and I can ask questions, right? Sure, you can use AR and embed the results in front of you, but it can literally even just speak back to you. And this was a bad idea because the technology for it wasn't [01:28:00] there in 2021 or 2022; models that are as fast as the 7B Llama today didn't exist back then. But it's changing.

Maybe it's going to work in two, three years. So you always want to be somewhat well positioned to take advantage of the moments and ship the product when the opportunity arrives. And that's why, like, you want to set up a company for that.

Logan: There's an investor who said something to the effect of it won't matter if you lose competing with Google. And so you decided to go all in on it at that moment. Can you, can you talk about that and how that, um, informed you?

Aravind: Yeah, yeah. That was Nat Friedman. Look, you know, we had a Discord server and a few users, and I was like, should I launch this? I don't have the confidence, you know, what if people ridicule me? And he was like, are you serious? You think you're that important?

Nobody cares what you're doing. Sure, you have some funding, but still nobody cares. And even if this is a failure, at least people know who you are, what your company is, and the fact that you can ship something; all this will be useful for you to be able to build a business later. But if you don't launch anything, thinking you're so important and trying to get it right in the V0, then you're not gonna get anywhere.

So it's one of those things where the loss still has some benefits, and the win has massive benefits. You basically shouldn't even be thinking about it; you should just go launch. Why didn't you launch already? Why wasn't this yesterday? So that was the conclusion, and I was like, okay, that's a great way to think about it.

Sometimes, as a founder, you're so confused, cluttered, with constant information overload coming at you, and everyone's talking to you about so many things that you don't even have time to think clearly. So when you get people who can help you think clearly, almost like a prompt engineer for your mind, that's very useful.

Logan: You referenced Nat, [01:30:00] uh, but I think you, you cold emailed maybe him and Elad originally?

Aravind: Yeah. Elad has also given me similar advice. I think during the first two months I told him, I'm going to be in stealth, and he was like, why? I said, because I don't want people to know what I'm working on. And he's like, do you think it'll matter? First of all, nobody cares about copying an unproven startup.

Everyone takes themselves very seriously in the beginning, but nobody cares. Even now, let's say I have a lot more funding today, a lot more employees: if a founder at the seed stage tells me an idea they're working on, I have a thousand other problems to worry about than copying them.

So that's a mistake I made, and Elad was right; I wish I had listened to him earlier. Yeah, I cold messaged both Elad and Nat, Elad on LinkedIn, Nat on Twitter. And both responded to me and, uh, committed to one or 2

Logan: got their attention in that, in that cold, cold

Aravind: I mean, obviously the fact that I'm from OpenAI and DeepMind. And, you know, they don't have time to actually evaluate an idea.

Like they get like probably thousands

Logan: I'm sure that's why. So you, so you think the qualifications cut through the

Aravind: definitely qualifications help you.

Logan: Yeah. Reid Hoffman's concept of a pros and cons list and the issues around it,

Aravind: then he takes only the first thing

Logan: So I actually don't know it. Can you talk through what that is? I've heard you reference it, but I'm interested in your perspective on it.

Aravind: Yeah, so I think there's some interview where somebody asked him how he makes decisions, and he says the typical way of making a decision is to write down the pros and cons and then try to figure out whether it has more pros or more cons. But that's like the dumbest way of making a decision, because you give equal weight to every pro and con, and they might not all be as important.

And so what you should do is [01:32:00] write down the most important things, strike out everything else from that list except the first, and make the decision on that basis alone. That's a great way to convert the decision to a binary decision. In general, I feel like the human brain is not as sophisticated as an AI classifier.

An AI can make a decision over millions of dimensions, but a human brain struggles to make a decision on more than a single dimension; even two dimensions, where there are four choices, is pretty hard. That's why, in a multiple-choice exam, if you're unsure of the answer, what do you do?

You rule out options first, and then you convert it to an X-versus-Y problem. That's what we're good at, at least where there's a 50 percent chance of getting it right. So try to, you know, re-parameterize all hard problems in your life as binary decisions. That's what I took away from Reid Hoffman.

Logan: Do you, do you do that internally with, uh, decisions and paths

Aravind: Definitely. I do it almost subconsciously these days; I don't even think about it.

Logan: In analyzing the opportunity for Perplexity, as we've sort of pulled back on all these different things, did you recognize the business model challenges of Google in the early days, or is it as you've executed that you've figured this stuff out? Is there a lesson in that for people?

Aravind: The lesson is: if you're not a genius, the only chance you have to succeed is to iterate. Give yourself more shots at success. I'm not a genius. You can always connect the dots in hindsight, like the great Steve Jobs said. Even Steve Jobs did not come up with the idea for the iPhone right away. The way it happened was the iPad was separately in the works and they invented multi-touch, and he was separately trying to build a phone.

And then he said, okay, what if we put the multi-touch on the phone? And then all the dots came together. It was not like, I [01:34:00] got the iPod done and now I'm going to do this. The way he presented it at the end, an iPod, a great communications device, and a phone all in one, that's not how it starts.

It's how it's presented at the end. Iteration and luck surface area are what you should bet on. Moments of genius are pretty hard to come by.

Logan: Is there a branding element, um, that you think about? Outside of the actual technological answer engine that you're building, some of the people that have successfully competed with Google in different vectors as independent companies, DuckDuckGo, Brave, have competed on orthogonal paths around privacy and data and things that Google can't compete on.

Is there a branding element that you think about outside of the core product?

Aravind: Um, I mean, I view the product as a Swiss Army knife for knowledge. I think that's the brand we want to go for. It should give you the 80/20 on anything. You should feel like you have so much power, because someone's always been working for you, doing all the research for you and getting back to you with stuff.

And "doing your research" is a phrase people use colloquially, but every time you use a term like "research buddy," you think of a financial analyst, you think of a McKinsey consultant. But we all do research in our everyday life, including, you know, what shoes you should buy, where you should go for vacation, what coffee you should drink, or what drink you should try in a new bar that you're going to.

There are so many decisions being made in your day-to-day life. So I want us to be seen as the Swiss Army knife that helps your mind. I don't want to adopt this branding of, Google's terrible, it's an illegal company, it's an evil company. Maybe there is some truth to that, which is why DuckDuckGo

and Brave worked to some extent, but I don't want to go for that, because that's constraining yourself to a very small market. I want every Google user to be a Perplexity user [01:36:00] without having to let go of Google. That's a bigger market for me. And that only comes when you're creating new value, not when you're trying to remove some bad element from one thing and offer the same thing.

The Future of Perplexity and Changing User Habits

Logan: What does the future for Perplexity hold in the nearish term, for people that are listening, users or

Aravind: Yeah. I mean, I want to pass the Larry Page toothbrush test, like, you know,

Logan: which is what

Aravind: uh, a product is only worth launching and executing on if it has a path to being used at least twice a day, like a toothbrush. A toothbrush is a great product; we all use it every day.

Logan: we should?

Aravind: It has a hundred percent retention.

Uh, so I want our product to get there. I just need you to submit two queries a day. I'm happy.

Logan: What's holding back people from using it twice a day?

Aravind: First of all, a lot of people are not aware of it. Those who are aware try it once, maybe some query doesn't work as expected, and they leave, or they're not able to find the immediate difference from ChatGPT, which they're more familiar with. And these are things that will be solved over time as we iterate on the product, continue to add new value, and improve on the three dimensions.

And then the bigger enemy is muscle memory and habits; habits have to change. By the way, people are like, oh, habits are so hard to change, you dare not try to change them. But every successful product has changed habits. Like, Yahoo was still the search engine I remember being used in India,

even after Google's IPO, and Yahoo had more traffic than Google even after Google's IPO. Now it's very hard to find a Yahoo user. So people take time to change. BlackBerry sold more and more every year for another four years, despite the iPhone being launched in 2008. So it takes time, and we are committed to the long term, right?

This is not a short-term [01:38:00] company. If it's meant for the short term, it's not meant to be a company; it's a project, and it'll die very fast. A company is a project that's at least multi-year, or even a decade.

Logan: Anything else that you wanted to touch on

Aravind: No, I'm good. Thank you.