[00:00:00]
Ali: People are like, look, we're going to really crack the code and come up with that, like, super AGI. I don't see that happening, because the next model is more expensive and requires even more people. You have to be even more careful about it. I would believe that if you would say, hey, GPT-5 will cost one dollar and it's going to take one second to create, but we're not heading in that direction.
It's the opposite. Welcome to the Logan Bartlett Show. On this episode, what you're going to hear is a conversation I had with Ali Ghodsi. Ali is the co-founder and CEO of Databricks, a business valued at over $50 billion. Databricks sits at the center of the artificial intelligence revolution that we're seeing today.
Ali and I talk about the state of AI, as well as what he's seeing going on at the ground level, his view on the federation of models that he sees coming, as well as what he sees for the future of AI over the course of the next couple years. We also go into the founding of Databricks, his journey to becoming ultimately the CEO, as well as the different operating principles that he runs the company with today.
A really fun conversation that you'll hear with Ali now.
Logan: Oh, thanks for doing this.[00:01:00]
Ali: Of course.
Logan: So, so you've been doing artificial intelligence since 2013 with Databricks?
Ali: With Databricks, yeah. Before that we were at Berkeley; we were doing it there also.
Logan: When did you actually start?
Ali: I think 2009, we started participating early, 2009 10. Spark, when it was originally created, was to participate in the Netflix competition, which was, hey, they gave you all these movies, and you had to do recommendations, and they would, you know, I think give a million dollars, a half a million dollars to the winner.
So we got the highest score on that. together with another team?
Logan: So you're 15 years in now. It was considered somewhat niche and maybe academic initially, and obviously we're at a very different point today. I'm curious, as ChatGPT happened, and that moment where it seemingly went from enterprise and business use to sort of mainstream consumer adoption, did that change your perspective on artificial intelligence in any way?
Ali: Not really. I think from 2013, 14, 15, [00:02:00] I just didn't understand why people didn't really get into AI. Uh, you know, in 2014, I remember having this debate with many of the people at the company; I wanted to call it AI. They said, no, that's like not authentic. We should call it machine learning.
It's ML. And I said, look, once this stuff goes mainstream, it's not going to be called ML, right? They're like, no, but that's not accurate. AI is robotics. And, uh, I just never could understand why it's not much bigger of a deal. It was kind of a little bit esoteric, uh, machine learning, AI.
Uh, and in November 2022, it just made all the sense. But then the magnitude of how much it sort of just shook everybody up, that was kind of a shock.
Logan: Were there moments that you look back on in those 15 years that were actually groundbreaking to you, but not to the mainstream?
Ali: It was really ad placement, you know, click-throughs, [00:03:00] clickstreams from logs to place ads and so on. So I would say 2014, 15, when really large enterprises started using this stuff for more interesting things. Like Expedia: how they ranked the listing of the hotels gave them a huge revenue uplift.
It was kind of a game changer. Uh, Regeneron found the marker in the genetic DNA for a disease, and they were developing drugs around it. So you could start seeing, like, wow, that has really big impact on a lot of these companies. That was a big deal. Uh, but then I would say 2018, 19, we started seeing the natural language processing breakthroughs coming, you know, with GPT-2 and so on.
It was already pretty amazing. So it was pretty clear that the Turing test would be broken soon, in a couple of years or so. So, uh, you know, there have been many of them, but yeah, who would have anticipated that the whole world would wake up in this way, and this would be front-page news every day.
Uh, you know,
Logan: Yeah, hard to believe. I've heard you compare the ChatGPT moment to the iPhone moment. Can you [00:04:00] unpack that analogy and what the implications are for apps, and maybe how enterprises or users should think about artificial intelligence?
Ali: I mean, when the iPhone came out, everybody realized kind of what the smartphone revolution would be, and what happened immediately was that every company on the planet said, we have to build lots of apps. We have to build our own iPhone apps. Um, and so there was this flurry of activity, all the companies building apps. But, you know, did the apps really make sense?
A lot of the apps that they built, you know, I kind of joke about it, and I say it's kind of like building a flashlight application for your iPhone. Uh, but then a couple of years later, the iPhone just has the flashlight built in. And I think 2023 was one of those years where everybody wanted to say, hey, I own AI at our company.
I am the AI leader. I am the gen AI leader. So there were a lot of chatbots that were built. And just like there was "an app for that" in 2007 or 8, there's going to be an AI company for that, that just does that. Um, so, for instance, people building a chatbot for HR documentation, where you can ask,
what's your policy on paternity leave or maternity leave? Um, [00:05:00] and companies building that. But it doesn't make sense that a company builds that for themselves. It makes more sense that there is a SaaS app for that, and everybody just buys it. And a little bit, we're in this moment where there's a lot of, hey, we're going to build our own flashlight app, or whatever, chat app for everything.
And now we're seeing slowly the companies being built and the apps being built that are going to specialize, and we're all going to just leverage them. I think we're in the first innings of that.
Logan: So to take that analogy a step further, do you think that there ultimately will be disintermediation between the end customer and, say, the HRIS system, where a native AI business sort of takes on that natural language interface, and no longer would Workday or ADP or whoever it is on the back end serve as the interface to the user?
Ali: Could be. I think that, uh, this dynamic is playing out: in this AI era, data is extremely important. So why shouldn't everybody build their own chat app? Well, you know, you should really ask yourself, what's your competitive advantage as a company? What is it you can do uniquely that no one [00:06:00] else in your industry can do?
Everybody can call OpenAI's LLMs. So you just calling, you know, GPT-4 or 5 or 6, that's not going to help. Um, so what you do have typically comes back to the data, or some customer relationship, but typically data. You must have some data advantage that nobody else has. Um, so that's what it's going to come down to.
Um, the big companies, the incumbents, they have that data advantage, right? They have customers and they're collecting a lot of data. Uh, so the question is, can they leverage that data advantage and incorporate the AI and do the innovation that's needed? On the other hand, we've seen in these waves before that you'll have these smart founding teams that are just completely disruptive, thinking about things differently, and they'll rise and they'll do things in a way that the big companies, asleep at the wheel, are just not going to do.
I think we're going to see both. I think we're going to see some, we're going to see a lot of company startups that fail just like in 99 when the internet was about to come. Uh, but we're also going to see a few that become the next Google, Facebook and so on. But I think we're also going to see some incumbents that actually figure out how to leverage [00:07:00] the data that they have and do the investments needed to crack the code on this.
We're going to see both of those, how exactly that, you know, fleshes out. I, I would probably bet more on the startups, to be honest, than the big ones.
Logan: Interesting. Um,
Ali: The big ones move too slow. And you know, if you have like a killer app, uh, you can get the data flywheel going, collecting data on it, and grow the usage.
And then you'll have enough data advantage to be able to do that. And if the big guys are not paying attention, um, it'll move too fast. And before you know it, it's too late to react.
Logan: Where do you think we are in the hype cycle of AI right now?
Ali: I don't think, like, hey, we're in the trough of disillusionment.
I think, frankly, we're somewhere at the top, and we're going to see that, you know, soon more and more companies aren't doing so well. And I think there's going to be a narrative that, hey, this was overpromised. There's nothing there.
The quality of the models isn't there. They just [00:08:00] don't work. This stuff doesn't work. It wasn't worth it, all these companies, and so on. So I think we're heading towards that very, very soon. Uh, but then I think right after that, we actually are going to see those very companies start coming out with applications that are completely game-changing for humanity.
Logan: Yeah. The good news is crypto is back. So
Ali: Yeah, we can do that
Logan: bounce back and forth between one and the other.
Logan: I'm curious, as we record this in, I guess, early November of '24, there have been a bunch of reports about scaling laws, and that maybe we're reaching some of the outer limits and diminishing returns on that.
Do you have any opinion on where we are with scaling laws right now?
Ali: Yeah, I mean, we actually put out a research paper, uh, you know, with my co-founder, Matei Zaharia. It's about what we call compound AI systems. And it was actually put out probably six months ago, or maybe even more. And it kind of predicts this. What it basically says is that to really eke out that last bit of quality now and keep improving, we really need to build [00:09:00] these agent systems where we have many parts, many components that work together. And in particular, it's going to be really, really important that we can evaluate
the quality of that whole AI system on the particular task you want to do. Like, to take that silly example I gave, HR docs Q&A: how well do you answer questions from employees on HR documentation? Not how well do you pass the bar exam or a medical exam, general intelligence; not that, but this specific task.
Only once we have this evaluation can we start building these agentic systems where we can start doing hill climbing, so we keep improving the quality for that specific task. So we call these compound AI systems. And these will also leverage a lot of tools, and they will also leverage classic machine learning, because classic machine learning has much better numeracy.
It's much better at dealing with numbers, calculators, things like forecasting, and so on. Um, so that's what we call compound AI systems. And the beauty of those is that when you run them [00:10:00] and they get something wrong, you can debug which agent got it wrong, where the reasoning step was broken, and you can then improve it, and you can get better and better, and you can do this hill climbing, and keep increasing the quality of the model, uh, or the AI system.
With scaling laws, you pre-bake these gigantic models, you wait six months, and the model comes out, and maybe it's bad at predicting something. Maybe its numeracy is not good. And you can't even know why it's not good. There's no way for us to know, hey, why did it get this thing wrong? I don't know. You do a bunch of matrix multiplies, a bunch of attention layers, something plops out.
It's inaccurate. I don't know. Let's go train a bigger one with more data. Um, so it just seems much easier to do this now with more agentic workflows at inference time. So it makes a lot of sense. I think there's going to be a lot of innovation there, and everybody's now focused on this.
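The eval-driven "hill climbing" Ali describes can be sketched roughly as follows. This is a toy illustration, not a real Databricks or agent-framework API: the pipeline, the HR documents, and the single tuning knob are all invented stand-ins for a real compound system's components.

```python
# Toy "compound AI system": a retrieval step feeding an answer step,
# scored against a fixed, task-specific eval set. We then hill-climb
# one configuration knob by keeping whichever candidate scores best.

def make_pipeline(retriever_top_k):
    """Build a pipeline with one tunable knob (how many docs to retrieve)."""
    docs = {
        "parental leave": "Employees get 16 weeks of paid parental leave.",
        "vacation": "Employees accrue 20 vacation days per year.",
    }
    def answer(question):
        # Retrieval step: pick docs whose key appears in the question.
        hits = [text for key, text in docs.items() if key in question.lower()]
        # Answer step: return up to top_k retrieved snippets.
        return " ".join(hits[:retriever_top_k]) or "I don't know."
    return answer

# Task-specific eval set: the HR docs Q&A example from the conversation.
EVAL_SET = [
    ("What is the policy on parental leave?", "16 weeks"),
    ("How many vacation days do I get?", "20 vacation days"),
]

def evaluate(pipeline):
    """Fraction of eval questions whose answer contains the expected span."""
    return sum(expected in pipeline(q) for q, expected in EVAL_SET) / len(EVAL_SET)

# Hill climb: try candidate configs, keep the best-scoring one.
best_k, best_score = None, -1.0
for k in (0, 1, 2):
    score = evaluate(make_pipeline(k))
    if score > best_score:
        best_k, best_score = k, score

print(best_k, best_score)  # → 1 1.0  (k=0 retrieves nothing and fails)
```

The point is the shape of the loop, not the components: because the eval is specific to the task, every change to any part of the system can be scored, debugged, and kept or discarded.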
Logan: Do you think there's an implication for, like, do you think that results in a federation of model types and different capabilities that people are, uh, respectively good [00:11:00] for?
Ali: Yeah, I think you will build AI systems that are very, very good at the particular task. We see it all the time. Our customers, uh, they typically want to do the same thing again and again and again. Like they have massive data; like they have a database with SEC filings, right? Or they're an insurance company and they have all these insurance documents.
And they want to, repetitively, again and again, on massive amounts of data, do the same task. So then for them, it comes down to: what's the highest quality I can get on this specific task? I don't need to be good at AGI, artificial general intelligence. How can I do this again and again really, really well?
And can you give me the lowest cost for that? Because I'm going to do this at scale; it's going to cost me a lot of money. So then it makes sense to specialize a system that's really, really good at that. You know, having a generic, gigantic model is just way too costly. And actually, it won't do as well; even though it's really smart, it won't do as well at that particular repetitive task that you want to do.
So we see this all the time. Like, one of the surprising things at Databricks is, you know, we have models of all kinds of sizes. You can use Llama at 405 billion parameters, but [00:12:00] in aggregate, we get more revenue from tiny models. And that was surprising to me, because per model, you make much more on a bigger model; it uses many more GPUs and so on. But customers want to take the big models and distill them into the smallest possible model that gets good enough quality, and just use that, because that saves them money.
And actually, in aggregate, we make more money on that. So yes, I think we're going to see that. And you see that in life also. Like, all the tasks that humans are doing, all the jobs that are happening: do you need Terence Tao for every task? Are you getting the smartest person possible on the planet doing them?
Or many of the jobs are repetitive, and some things, you know, are good enough, and you want to pay this particular price for it. So it makes sense that you have a smaller model that's specialized.
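The distillation pattern Ali describes (query the big teacher, then fit a much cheaper student to reproduce its outputs on your task's inputs) can be illustrated with deliberately tiny stand-ins. Here the "teacher" is just an expensive-to-call function and the "student" is a linear least-squares fit; real LLM distillation uses teacher-generated labels to fine-tune a small model, but the workflow has the same three steps.

```python
# Distillation sketch: (1) collect teacher outputs on the task's input
# distribution, (2) fit a far cheaper student to those outputs,
# (3) check the student is "good enough" on that narrow range.

def teacher(x):
    """Stand-in for the expensive large model (slightly nonlinear)."""
    return 3.0 * x + 1.0 + 0.1 * (x ** 2)

# Step 1: sample the inputs the task actually cares about.
xs = [i / 10 for i in range(-10, 11)]
ys = [teacher(x) for x in xs]

# Step 2: fit the cheap student, y ≈ a*x + b, by ordinary least squares.
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

def student(x):
    """Distilled model: one multiply-add instead of the teacher."""
    return a * x + b

# Step 3: verify "good enough quality" on the task's input range.
max_err = max(abs(student(x) - teacher(x)) for x in xs)
print(round(a, 2), max_err < 0.1)
```

The student is only trustworthy on the input range it was distilled over, which mirrors the trade Ali describes: give up generality, keep quality on the narrow repetitive task, and pay far less per call.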
Logan: What do you think about the concept of, like, general reasoning or the path to, to, uh, maybe not AGI, but, but, um, a more holistic understanding and reasoning that sort of underpins these distinct models? Do you think we're, we're approaching a step function in that or do you think that that we've reached some certain plateau in it and now it's about the data and the different functional areas?
Ali: I mean, look, the [00:13:00] scaling laws are on a logarithmic scale. We are going to keep moving to the right, and yes, that means we're going to get improved intelligence. But the next thousand-x is not going to yield as much as the previous thousand-x.
Plus, you know, one of our researchers put this out: over the last 11 years, we've had a hundred-million-times improvement in the compute that we've thrown at the problem. We're not going to be able to get the same over the next few years. We're going to improve it, but not as much. There's just not going to be enough energy and compute to throw at it.
And we're not going to have enough data. Like, you know, Llama was trained on 15 trillion tokens. What's that, 10x the size of the web? Are we going to 100x, 1000x the amount of data? Getting that high-quality data just gets exceedingly expensive. Um, it's much easier to build these compound AI systems, these agentic systems, where you can actually debug and get really good at the task.
So, yeah, I think there's going to be a [00:14:00] slowdown. Um, I also think that a lot of those things where people are like, look, we're going to really crack the code and come up with that, like, super AGI, that kind of would require some kind of, um, you know, recursive self improvement loop. I don't see that happening because the next model is more expensive and requires even more people.
You have to be even more careful about it. I would believe that if you would say, hey, GPT-5 will cost, you know, $1 and it's going to take one second to create. If we were headed in that direction, then you could see, like, okay, well then it can spawn off, and we can just apply a genetic algorithm, do lots of them, and pick the best, and so on.
But we're not headed in that direction. It's the opposite. It's like, we're going to do a very specialized data center. We're going to build a model over a one-year period of time. We're going to be very careful. Lots of humans involved. Uh, so, um, I don't have a crystal ball, but my guess is that for many more years, we need humans in the loop, and we need this stuff to be verifiable.
Just like we saw with cars, or with [00:15:00] autopilots in airplanes, we want to have a human in the loop who can verify that it's not veering off in the wrong direction, and it's just accelerating the work that they do today. And I think for that, we're going to use more of these agentic systems.
Logan: So it sounds like a lot of lead bullets; there's no silver bullet that you see on the horizon at this point. When you hear people talking about, like, $50 billion models, or, you know, throwing all this money at it (very good for your business, I'm sure, as people think about this), do you think that all of this stuff we were just talking about means that that's a little pie in the sky, and overly ambitious given diminishing returns, as people talk about these models at that level of scale?
Ali: Well, again, I don't have a crystal ball, and maybe they're right.
Logan: You've been pretty good historically at predicting some of these things.
Ali: well, I think that, you know, the reasoning that they have makes sense, which is like, Hey, we're freaking creating AGI. It's worth any cost. It doesn't matter what it costs. And it doesn't matter what dilution and your economic model is done.
I don't care about them, because look at what we're creating. Makes sense to me, if you actually achieve it. [00:16:00] But, um, you know, I would probably lean more on the skeptical side. Spending that much money... I guess I'm too much of an operator, running a company that needs to think about, you know, growth rate, LTV, CAC, operating margin, and so on and so forth. If you've got to spend that much money to produce a model, then you have to do inference on that model for it to pay back the investment you needed to train it.
And it seems right now the case is that before you've even gotten the money back for how much it cost to train the model, it's already commoditized and there's a better version by someone somewhere. Um, so right now it's just not economically feasible to do that. Whereas for those particular tasks, the cost to create these compound agentic systems is worthwhile, and the ROI is there.
Why not do that? It can drive a lot of value, so continue doing that. And then maybe we'll be surprised and one of these labs [00:17:00] comes up with that super model. But I'm skeptical. I think we've got to overcome some big barrier to get to that completely autonomous thing that can just get the accuracy right.
And it's reliable.
Logan: If we froze model development right now, do you think that the existing capabilities we have in AI are enough for all the tooling and the applications to catch up at some point with the excitement and the promise that we're seeing? Or do you think we need to keep making those incremental gains we were talking about, in the more niche, specialized models, to catch up with the potential of what we could actually accomplish?
Ali: Absolutely. I think that, uh, you know, humans are the long pole in the tent here. If no human uses any of this stuff, it's absolutely useless. Even if you come up with that super AGI thing, if nobody's using it, it doesn't matter. Humans take a very long time to change all of their behaviors, [00:18:00] habits, and processes in these companies and so on.
It's just very, very hard. Uh, so I think with just what we have now, there's huge untapped potential, and it's not yet been applied in every workplace, in every function, in every process, in everything that we're doing. And I think we can even improve what we have today with those kinds of techniques that we discussed.
So even if we froze it and said, hey, that's it, we actually don't know how to create a single model that's any better, this is the best we'll see in the next 20 years, I think there would still be massive implications for the economy and, you know, what we can do and so on. So I would still be extremely bullish on this.
Um, and that's truly what I believe.
Logan: Is there something that you see from your seat and purview that you have that maybe wouldn't be intuitive to the average listener that's, you know, paying attention to all this stuff, but you're, you're able to see so clearly the same way that you maybe did 10 years ago, you're like, Hey, why isn't anyone, why does anyone care about this AI thing?
Is there another insight, maybe not at that magnitude, but something else that you sort of [00:19:00] see, that you feel like might be mainstream or well understood in a couple of years but is underappreciated today?
Ali: Yeah, I mean, I think it's the fact that you focus the task. I kind of said it: you focus the tasks on the repetitive stuff that today is costing you a lot of money. And if you can just get the reliability up on those specific tasks, it's a massive improvement for these organizations. And it's pretty game-changing.
We haven't actually reaped that yet. Those use cases are right now being worked on. Folks are building them. They're not in production. It hasn't had impact on all the companies that are using this stuff, but it's just around the corner. It's, like, guaranteed it's going to appear. It will probably take two, three years before we see it.
And then we'll see the impact of it. And actually, you know, I fear, or suspect, that before that we might hit this trough of disillusionment, and then actually these use cases land and we'll see the companies that are doing that. So you can see it baking already. It's just not in production yet.
Logan: Are there use cases that you've seen people use Databricks or other systems for that have gotten you particularly excited, back to the [00:20:00] Regeneron example you mentioned from 10 years ago? Maybe something today that you see in the field that is exciting, or makes you particularly look forward to the future of what could happen?
Ali: In every vertical, in every industry, we see lots and lots of use cases. You know, like financial services: there are so many financial advisors, FAs, out there. For one of our customers, a large bank, we now automatically generate, in advance, the advice they should give to you. It's customized; they understand everything about you, your wealth situation, and so on, so they can give you customized advice. It affects thousands and thousands of these folks, right? In insurance, sifting through massive, massive amounts of documents to assess risk. And by the way, one thing that people underestimate is that the models are very fast at reading data.
They're slow at producing text, right? Much slower. The other surprising insight, I would say, is that it's probably at least a 10-to-1 ratio between [00:21:00] how much people are using the models to read stuff versus to produce it. When they use Databricks, they use us 10 times or 100 times as much, sometimes even more, to read stuff, and then produce very little text or very little data out. People are focused on the output that comes out, but it's usually much, much smaller. Uh, underwriting is another one that we're seeing. There are so many going through all the SEC filings and figuring out what's happening, you know, extracting signal.
Figuring out alpha; we see it in all of the financial sector. And there are so many use cases in healthcare: electronic medical records. There are just so many of them, all handwritten, and being able to actually find structure in that is a treasure trove by itself. So there are use cases around that.
There are hundreds and hundreds of these you see all the time, in every industry. They just haven't been productionized yet. They're not fully being used by the workforce.
Logan: Do you think that read-to-write [00:22:00] ratio stays around there? Are there any trends you see, with read going up in comparison to write? Or do you think at some point the writing and the creation catches up with the reading and summarization?
Ali: I mean, it's hard to see that, because, you know, how much are we going to read as humans? There's only so much, and we read slowly, right? Whereas having the AI crunch a lot of information so that it has that context makes a lot of sense. So if anything, that probably increases.
And if we can do that faster, uh, so that, that's, that's where the trend is headed. We're seeing even more and more of that when we look at our data than when we started.
Logan: As you counsel customers, be it big enterprises or smaller ones, about figuring out where AI and automation can exist within the organization, and how to continue to dive deeper into that journey: are there any takeaways or key points that you keep hitting? That you feel like, hey, if you could tell all customers, this is [00:23:00] how to set up a team, or this is how to think about it, any frameworks or primitives that you share?
Ali: I mean, first of all, you've got to get your data strategy right. A lot of companies now want to skip that step and jump straight to the AI. And, you know, the data teams have been at it, trying to get the data of their company organized, secured, and in one place, so they can actually get access to it, not in different silos.
And that has been a journey, and it's not finished work. Um, that work has not been on the CEO's radar at the Fortune 500. It's been a sort of esoteric topic; maybe IT was worrying about it. Now the CEOs are waking up after ChatGPT and they're saying, I need AI.
It's game-changing. I see it now. I've used it myself. Give me AI. Uh, and they're sort of skipping that step. So it's extremely important that you go back to the basics. The data is extremely important, so you have to get that in a good place. And if you don't have that in a good place, you're not going to be able to succeed with AI.[00:24:00]
The second thing is: focus the use cases of AI that you want to do on where you have a data advantage. So for your company, what is your data advantage? And frankly speaking, oftentimes I find myself talking to customers, telling them, maybe don't do this project. And they say, why not? We're very excited about it.
We're going to do this and that. We're ready to go. We'll pay you a lot of money for it. And it's like, yeah, but what's your advantage in this? You know, there'll be an app for that, by someone else. So let's focus on places where you have a competitive advantage against your competition. And it typically has to come back to something you have around the data, and you do have it.
Uh, so let's focus on those use cases, because we want our customers to be successful. Only then, long term, will we succeed as well. So, I would say that's the second thing. The third is: there are a lot of people problems in large organizations. There are now many people fighting over owning AI in the organization.
Like I'm the person who owns AI at our company as well. Uh, so figuring out a way so that you can get the strategy aligned in large [00:25:00] organizations is, I know it's boring. But this is the hardest part of a large organization with so many different people running in different directions, having so many ideas of which vendor to use, what kind of models to use, what kind of people to hire, which project to do.
Uh, so if you can concentrate the decision-making to one person in the organization, and not have lots of departments having political fights over it, it streamlines things and moves them faster. Typically you actually have the head of that department be a change agent: someone who doesn't do things that everyone will love, but who'll come in and make the difficult decisions,
streamline the data processing and the security and privacy around it, because everybody's worried about that, and then focus on those use cases that actually matter for that organization. This is as important as the technology. I care a lot about the technology and geek out on that, but this is just as important.
Projects will not get funded, or they'll get stuck in security, or they'll go nowhere because different departments are fighting each other. [00:26:00] So we need to overcome these things as well. This is why I'm saying it's going to take a few years before we see the impact of these things.
Logan: As you think about the landscape, and the term that you used earlier, compound... what was it? Compound AI systems. Um, how do you think about what areas to potentially pursue? There are a bunch of venture-backed companies in spaces like inference or evals or whatever it is. As you think about different areas for Databricks as a business to potentially go into, how do you evaluate, no pun intended, those markets?
Ali: Yeah, it makes sense. Look, for us, it makes a lot of sense: you already have all your data with Databricks. You already trust us that it's secure. You know that we can handle the security and privacy. So the question is, what's really adjacent to that? And for me as a CEO, where is it we can actually bring extra value to the customer that they otherwise couldn't get elsewhere?
It typically comes back to the data that they have with us already. So it typically comes with governance around that data, [00:27:00] with guardrails around that data. You don't want to build something embarrassing. This actually happened with a customer, a car manufacturer: they had a chatbot, and if you asked it what's the best car I could buy, it recommended a competitor's car.
So how do you put guardrails around that? How do you deal with PII data? How do you deal with the AI regulation that's coming? A lot of this we can standardize in Databricks. So it makes sense. We can, for example, automatically produce documentation so you're compliant with the regulation that's coming.
We can put guardrails so that it doesn't do those kinds of things. And since we have the data, we can create this data loop so that you get a flywheel: when they use the model, we create the evals and automatically feed them back to make the model smarter and smarter. We can do that
because they already have the data with us. So it's just advantageous for us to do that. I think in this space, startups should focus on the applications. I think applications are where a lot of the value in the AI space is going to accrue in the future. We'd love to be their infrastructure.
So they use [00:28:00] us for their data processing, for their governance, and for their AI, for the applications. I mean, there's a lot of creativity, a lot of things happening. I can't predict which apps are actually going to win in the future. It's kind of like trying to predict Facebook or Twitter in 2000.
No one could have done that. No one ever would have said, oh, there's going to be a company that does 140-character tweets. It's going to be the same thing now. So we just want to be that infrastructure for them so that they can build their AI systems on top of us, and we help them with that data flywheel.
Logan: Let's back up a little bit to the original early days of Spark. So we talked about the Netflix competition, and I had heard that maybe you tried to actually sell the technology in the early days and there weren't really any takers. Can you just go back to that early story? Like, what did you see?
And what was your path to getting it off the ground into some commercial entity?
Ali: Yeah. I mean, what was really happening is that we were seeing that the Silicon Valley tech companies were leveraging AI. This is 2009 or 10. We were seeing what Facebook was doing. [00:29:00] We were seeing what Google was doing. And it was very clear that doing AI on the data is going to be the most important thing.
At the time, everybody was stuck on doing basic analytics, backwards looking. Call it BI, business intelligence, which is backwards looking, like how was my revenue last week? And they were excited that they could do that really, really fast. But we were asking, hey, what about, what's your revenue going to be next week?
That's AI. And there was just not much appetite for that at the time. And we went around and tried to push the technology out, because we just wanted to do research. But it was very hard to actually get anyone to see this. You might remember it: the Hadoop wars, the Hadoop days.
It was impossible to get anyone's attention for that stuff. So it was a little bit out of despair that we started the company and said, look, if we don't do it ourselves, no one is going to do it. I remember actually one of our first customers. We said, hey, this is open source, just take it,
because we don't want to make money on this. And they said, no, you've got to charge us for it, because we want an enterprise that's behind this. [00:30:00] And that's kind of when we realized, okay, there needs to be an enterprise business. No one else is going to do that. We probably have to do it ourselves.
Logan: Was it an issue of just pure focus for these vendors, or was there something from a data-structure standpoint that needed to happen for the market to evolve to the point that they were actually able to do the things you wanted them to do?
Ali: I think there are these hype cycles that come, and everybody's just focused on something. That era was the Hadoop hype cycle, and they were obsessed with it. And we couldn't, for the life of us, convince them that AI was more interesting. It's just that you open the newspaper and everything is about that.
And it's happening now too, right? I think the market works like that, and to create a successful company you have to bet on something that's non-consensus at the time, and you have to be right about it. For us, it was AI, it was the cloud, it was open source. Those were the things we were excited about, but there wasn't much excitement around them.
And actually, we were told this is not going to work: all the data is going to stay on premises, you cannot [00:31:00] monetize open source, and so on and so forth. So I think it's similar now.
Logan: I heard a story that a customer offered you, I don't know if it was seven figures or eight figures, to build an on-prem version of the product, and you rejected it. Now less so, but at that point in time you definitely could have used the seven or eight figures, whatever it was.
Did you just see this as an inevitable path you were building towards, and therefore anything but that was a distraction along the way? Or what was the philosophy there?
Ali: Yeah, 100%. Like, our view was, we have to have an asymmetric bet on something that no one else believes in, and we better be right about it. That's the only way. So if we're gonna go on prem and do what everyone else is doing, then there are way bigger sharks that can do it than us. And then this is going to fizzle out and it's not going to work.
So yeah, it was a month or so in to my CEO job, 2016, that one of the largest banks told me, you need to bring this stuff on premise. [00:32:00] And I said, no, we're not going to do that. And they said, well, how much? And I said, how much, what? They said, how much would we pay you for it? And I said, we're not going to do it.
And they said, five million. I said, we're not going to do it. And they said, ten. I said, we're not going to do it. They said, twenty. I'll give you twenty right now. And at the time, 2016, I guess we ended that year with 10 million in revenue. So, you know, 20 million was a lot. But the idea was, if you go on prem, you're up against much bigger companies.
So we have to have an asymmetric bet that we believe in and continue to double down on. And then once that bet comes true, we'll be big enough that it's too hard for others to catch up with us. A similar thing, as I said, is going to happen in this AI and LLM era right now. I think you have to pick something that you think is different from the rest of the market, bet on it, and then hopefully you're right.
Logan: As you reflect on the journey, one of the stories I had heard was about giving a lot of credit to Ben [00:33:00] Horowitz for actually turning this into a business. Can you tell that story? Typically I don't allow other VC plugs, but I thought it was an interesting one, that he was actually pushing you guys to think big and turn this into a commercial product.
Ali: Yeah, I mean, look, we came out of the research lab. We really wanted to raise only a two, three, four hundred thousand dollar seed and just code away for a year or so. And he's the one who pushed hard: no, let's do the Series A, and I want to be the exclusive investor. And, you know, we were actually not very excited about it, but then he gave us a price we couldn't refuse.
And he said, no, go big. And he gave us, you know, 14 million at the time, which was a lot of money back then; I know it's not much these days. And he said, go, go, go, go, go. So he really pushed us: go aggressive, disrupt the market, think big. He always had a couple-of-orders-of-magnitude
bigger vision for Databricks than we ever had. Whatever our biggest, grandiose vision of Databricks was, he was always a couple of orders of magnitude ahead, pushing aggressively, saying, no, no, no, you're thinking too small, like way too [00:34:00] small. And I think he really, truly believed it as well.
So that kind of set the tone from the early days for us.
Logan: With the early founders, I don't know if you remember these numbers, but you guys made bets on what a great outcome would be. Do you remember the actual numbers? What were the early bets?
Ali: We were around the table when we started the company and said, hey, what's a great outcome for the company? And when we went around the table, the numbers were something along the lines of a 150 million exit, or 200, and someone said 250, and so on.
So that's how we were thinking. And Ben was immediately like, no, no, no, billions for sure is what we're going to do. And then when we got to that, it was like, this is easily going to be a hundred billion dollar company. And now, where we are, he's like, definitely a trillion is what you need to think about.
And, you know, I would also give credit to Marc Andreessen. In one of the meetings in the early days, I think this was 2017, he said, what's your biggest bottleneck? I said, hey, we can't hire engineers. We can't get the engineers from Facebook and Google; they're just too expensive. And he said, huh, so FAANG is the problem?
And I said, [00:35:00] yeah, FAANG is the problem. And he said, okay, so you just need to add yourself to that list. It needs to be FAANG-DB. Huh. And that's it. And then he moved on. But after that meeting we were like, yeah, okay, he's kind of right. We should actually just up our offers, hire those people, and become one of those tech magnets.
And it changed our mindset. We went back, we changed our compensation, we changed how we hire, just based on that one comment.
Logan: And I'm curious at a tactical level. That's very aspirational, it sounds great. But in changing the compensation or thinking bigger, what did you then actually go do?
Ali: We literally went back and moved the compensation for engineering to 90th percentile of the market.
We did the math and saw how much it was going to cost us. Are we going to be able to do this? Is it something we can afford as a company? What's the dilution going to be? This was a long time ago. And actually, when we ran the calcs, we were surprised that we could do this and that it was sustainable.
And it was surprising to us. We're a data company; we crunch the data and all the numbers. What we found is that it's really your [00:36:00] market cap divided by your number of employees: that's how equity-rich you are when you're paying someone in equity. And those big companies have a lot of employees. So it turned out that
we are actually richer than we think if you take our market cap and divide it by the number of employees. Because that's not about how you're doing financially, not about your revenue growth; it's about what you could offer someone in equity. And it turned out that it was absolutely within our reach to do it.
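The back-of-the-envelope calc Ali describes can be sketched like this. The function name and every figure below are made up purely for illustration; they are not Databricks' actual numbers, only an assumption-laden sketch of the market-cap-per-employee logic.

```python
# Back-of-the-envelope "equity richness" check: how much equity per
# employee a company can afford to grant each year, given how much
# annual dilution it will tolerate. All figures are hypothetical.

def equity_per_employee(market_cap: float, employees: int,
                        annual_dilution_pct: float) -> float:
    """Annual equity budget per employee."""
    annual_pool = market_cap * annual_dilution_pct  # equity value granted per year
    return annual_pool / employees

# Hypothetical startup: $1B valuation, 300 employees, 2% annual dilution.
startup = equity_per_employee(1_000_000_000, 300, 0.02)

# Hypothetical big-tech comparison: $500B market cap, 50,000 employees,
# the same dilution tolerance. Headcount eats most of the larger pool.
big_tech = equity_per_employee(500_000_000_000, 50_000, 0.02)

print(f"startup grant/head:  ${startup:,.0f}")   # ~$66,667
print(f"big tech grant/head: ${big_tech:,.0f}")  # ~$200,000
```

Under these made-up numbers the big company can still offer more per head, but a fast-growing startup narrows the gap quickly as its valuation rises against a lean headcount, which is the point Ali is making.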
So we explored it a little bit, and it started working, and then we were able to hire the people we couldn't hire before. And they came in and completely revamped what we were doing with the product, things we just couldn't do before, and massively increased the feature velocity and the innovation cycle at Databricks.
And that actually helped us slowly start separating from the pack, because in 2017, if you had talked to us, we would have been one startup among hundreds of others out there. So there were these small things they did to push us to think really, really big, to go way bigger.
And we did that, but we always did it judiciously. We didn't want to go crazy and burn down the bank account; we wanted to be careful about it. [00:37:00] Yes. Sorry. Yeah, I was running product and engineering, which I loved. And actually, in that phase it made a lot of sense, because what do you do in the first two, three years of a company? When you start a startup, you want to get product-market fit, build something that just works. So that was the focus for us.
Just how do we get one customer that wants to use this, then how do we get a customer that wants to repeatedly use it, and then how do we get a customer that repeatedly uses it and pays us for it? So that was the focus of the first two, three years. The technology we had innovated with, Spark, exploded in 2015.
And at that point we felt like we had succeeded, but we just didn't have any monetization. So we went through a difficult time in '15, and we had to figure out, okay, if we can't monetize, maybe there is nothing here. And the company went through a little bit of [00:38:00] introspection.
And then we said, okay, let's do a bit of a reset, rethink some of the stuff, and start anew in 2016. So I became CEO in 2016, and we made a bunch of changes that actually started helping us with monetization.
Logan: Were you the interim CEO, or were you the CEO while they were still talking to other potential outside candidates?
Ali: Well, I thought I was. First of all, I heard through the grapevine that they were interviewing people. So I found out, okay, they're actually interviewing other external folks.
Logan: were you the CEO at the time or
Ali: but they were like, Hey, do you want to be the CEO? I was like, sure. And then I found out, okay, they're actually interviewing other people.
All right, they didn't tell me about that. But then eventually I was like, okay, I think I am CEO. They've announced it to the company. I am CEO, I'm running it. But then, you know, my compensation didn't change.
Ali: And I learned pretty soon that, hey, I'm actually interim CEO. I didn't know that.
Logan: You didn't have the title
Ali: Yeah. So then it turned out, no, no, no, it was actually just a test period. And then eventually, I would say after one year, I think they were really like, okay, no, we're going to go with him.[00:39:00]
Logan: So when you first came in in 2016, there were three things you focused on, and it sounds like monetization was one of them. Can you talk through what those three were? I think it was up-leveling the management team, changing the focus of the customer type, and monetization.
Ali: Enterprise sales, a total pivot into enterprise B2B. We were PLG. I know it's still hot; everybody wants to build PLG companies. But we learned the hard way, after two, three years, that PLG doesn't really work. It might work for very few companies. It's a great thing, and we still do PLG. But guess what: if you want to sell a data platform, you want to manage the most sensitive data of a large organization.
You need to get the buy-in of that organization. Nobody's going to just swipe a credit card, start giving their data to you, and you're going to succeed. So PLG just won't work in that model. In B2B in general it's hard: if you're going to have big contracts with large enterprises, PLG probably doesn't work.
It does not work. So, number [00:40:00] one: going all in on enterprise sales; we had tried earlier to just be this PLG, product-led-growth company. The second thing is up-leveling the management. We up-leveled the whole exec team: every person on the exec team was new. So I basically got a new exec team that had much more experience.
They knew what they were doing. And we went a little bit more with experience over just, hey, we're smart PhDs who can figure this out on our own. We went instead with someone who has done sales for 20, 30 years, or someone who has done marketing at scale at Oracle and knows how marketing is done, and so on.
So that really helped with the execution, because I would say we were really weak on go-to-market; we were great at product innovation and research. And then the third thing: a change to how we actually innovate in open source. Just open sourcing everything and giving it away is not going to work.
So we need proprietary secret sauce that we're not just giving away to the hyperscalers. That change was critical, and it's difficult to do. If you're a pure open source company coming out of [00:41:00] academia, it's difficult to get all your employees to be all in on that. But it was the only way to succeed.
We wouldn't have been around if we hadn't done that pivot.
Logan: Did you have those views as you were running product, and you came in prepared to do that? Or was there some point of introspection that ultimately led to those conclusions?
Ali: I think in '15 we were not doing super well. And in '15 it was sort of like, okay, we were trying PLG, and it's clearly not working. Right. We don't want to do enterprise B2B because we're not excellent at it; we don't have experience in it. But we ended 2015 with 1.6 million in revenue. So if we only have 1.5, 1.6 million in revenue, and from what we could glean from blogs and whatnot an enterprise salesperson can sell a million, a million and a half of software, that's just one AE. Why don't we hire three of them and try it out? What do we have to lose? So that kind of emerged in 2015. The other thing was, it was very clear that no matter what we did on the innovation-cycle side, [00:42:00] the hyperscalers immediately had the same thing.
They would just pick it up, and in fact they would add a little bit of stuff on top of it. And they had the whole platform, the whole cloud. So they were always the good-enough vendor. The customers were like, well, you're great, you have great innovation, but I can also get it from the hyperscaler. I'm going to try that first, because I already have a contract with these folks.
I've got security approvals, I've already paid them a bunch of money. I'll try that first, and if it's not good enough I'll come back to you. So again, it was pretty clear that we had to be able to articulate: yeah, you can go over there, but you won't get X, Y, Z if you go over there. Like, we have X, Y, Z that they don't have.
And if you couldn't articulate that, there was nothing you could give your sellers to sell. And then there was the experience we were lacking. So I think the hardships of 2015 made it pretty clear that we had to try something new, and these three directions seemed pretty viable.
Logan: I mean, the one that stands out the most as potentially a one-way door is the licensing or monetization shift, and it's very different [00:43:00] from the culture, I would say, of people who are open source purists in some way. Was that a decision that was debated in a very meaningful way, or was it just declarative:
hey guys, we have to do this to be able to build a business?
Ali: I think when I took over, I was just a little bit more forceful about what needed to be done, but it was a big debate internally. And I made it pretty clear. I think I used an analogy and said, hey, we have a gun and it has one bullet left in it. We're not going to do a thousand things. We have one bullet left.
We'd better aim it really carefully and we'd better hit the target. If we don't, we're done here. So I made it pretty clear that this was existential, the things we were going to do. In the startup days you're excited, you're doing stuff, but if nobody makes it clear to you that, hey, we're actually going to go out of business in a couple of years, that it's not all great just because we have a big valuation, you don't wake up. And when I got people to wake up, I think they got in line and started thinking about, okay, what are the ways we can actually do this?
Logan: Did the changes quickly start showing up in the business, or did it [00:44:00] take a few quarters?
Ali: No, I think pretty soon the engineers started thinking. I made it very clear to them: it's great that you built this, but it's literally available now on the hyperscalers' clouds too. So what's the differentiation, literally? I made it very viscerally clear to them, and I shared it, because we had an extreme-transparency culture, to get them to buy in and understand for themselves that we can't win, that we're guaranteed to lose, if we continue this way.
And they pretty soon got it; they wanted us to be successful here. So they started coming up with innovations that maybe we don't need to open source.
Logan: Now, as you think about expanding into new products, I've heard you'll actually set up different teams for each product as it gets going. What have you learned about product management and organization as you continue to scale and iterate on these different products?
Ali: Yeah. When you build a product zero to one, you kind of have to do the same thing you did in the original first two, three years of the company. There are a lot of processes in any company once you're over 150 employees, let alone thousands. [00:45:00] So you've got to cut and say, look, for new products you don't need to satisfy all the security stuff.
You don't need to have all the processes. Have a small core team that iterates really, really quickly. First of all, figure out who's the team-leader, innovator type that you have in those early startups, and put them on the project. These are people who are not necessarily good at it when the company scales and they're leading mature products.
You have to have one such anchor person that you put everything around. You put them in charge, and they start building that product. And then you let them iterate really quickly. So get them off the iteration cycle that the big company is on, and get them on this much quicker, smaller one. And encourage them to just iterate, inner dev loop and outer dev loop, super, super fast.
And just focus on getting users and getting product-market fit. Once that happens, you can move it back into the mothership. Ideally you even do it outside of the whole product, if you can. So we have a product right now called Genie, which lets you ask questions in English and immediately produces the answer. It understands all the semantics [00:46:00] of your data.
We call it data intelligence. It understands your data, then it writes the program, asks the questions, and gives you the answer straight up. But so you know it's not hallucinating, you can actually see how it coded it up behind the scenes. That product, we actually built completely outside of the Databricks code base.
And it didn't satisfy any of this stuff; we didn't build on anything. We just iterated quickly, outside of the norm, like a startup almost. And only when we saw that the results were really good and people liked it did we go through the kind of painful process of, okay, let's rewrite it, let's move it into the mothership.
So it's almost like an acquisition happens at that point, and you have to integrate it into the whole system and make it secure and so on.
Logan: So you'll give people autonomy to make decisions, not necessarily on the shared primitives, or even, if you're using some type of, I don't know, single sign-on or whatever cloud structure underneath, you'll give people full autonomy to do it? And then you'll come back and reconcile it once it actually finds product-market fit?
Ali: We literally do it outside, in a separate repository, a separate GitHub repository [00:47:00] outside the big system. And iterating very quickly is the most important thing, because that's what you do as a startup. A startup operates three to four times faster. I've seen it; we've done acquisitions.
And if you measure it, the startups move three to four times faster than, you know, a company like Databricks. And I think we move very fast and we're very innovative, but still it takes us three, four times longer. Why is that? It's the security processes. It's the legal sign-off. It's making sure that it works on all the clouds, in all the environments.
And the biggest thing that slows you down is that you have thousands and thousands of customers that you can't regress. So you've got to unshackle the new products; otherwise you're just going to move too slowly. And that's what we're seeing: the really, really big companies operate that way, and it takes them too long.
That's why you have startups in the ecosystem. So you have to go back and reproduce the kind of environment you had when you were a small startup.
Logan: I'm curious. So now you're coming up on, I guess, nine-ish years in the CEO job. Is that right?
Logan: As you reflect on your leadership [00:48:00] style today, you seem like the canonical leader of Databricks, and it's like the job you've always done. I assume your leadership style has evolved in some ways over this period of time.
Is there anything you wish you could go back and tell your earlier self taking the CEO job for the first time, or maybe your earliest founder self stepping into the seat?
Ali: Yeah. I mean, I think you would want to grow leaders earlier on: build trust with a few leaders that you groom, and keep doing that. Because what you find nine, ten years later (and I have a lot of awesome co-founders who are still active; they run the company with me, almost a partnership where we run it together)
is that it's not that many people. Even though we're almost 8,000 employees, how many people are there who can really help you build the business together? It's hard to find those leaders you build that trust with and grow the company with. So I would just invest in that much more actively, and always be on the lookout for those kinds of people in the company.
You know, we were just busy executing, and maybe we didn't pay enough attention to that. [00:49:00] So that's one thing I would do differently. The second thing I would do differently: when these trade-off decisions come up and it's really hard to decide, do we go on prem or not, or whatever it is, make the bet that is much more for the future.
Don't worry so much about the implications it has on your finances or your revenue; don't let short-termism creep in. Easy to say now. But in the early days, it's far from clear that you're going to be successful as a company, and the VCs are asking you, how's it going?
You have your board meetings; they're looking at your revenue targets. They are super obsessed: how's it going? Is it hockey-sticking? Is it going up? What's it looking like? How do you compare with others in the industry? Keep making the bets on the future, and don't get carried away by those numbers.
That's another piece of advice I would give, because we took many shortcuts that backfired; they were not worth doing. We kind of knew each one was a shortcut, but it felt [00:50:00] worth it, because, you know, it's going to bring so much revenue if we just take this shortcut. I would have avoided many of those shortcuts now. Err on the side of the future over the present. Now, of course, maybe it's path dependent and I'm saying this just because of where we are; if we had done the things I'm saying we should do, maybe we wouldn't have succeeded. But I still, on balance, feel we could have been more futuristic in our bets than we were.
Logan: I get the feeling you focus on empowering individual leaders within your organization, deputizing them to make the best decisions for the organization, rather than you serving as a central hub for each department. Is that fair?
Ali: I think it's both. I think I also micromanaged them.
Logan: Okay. Maybe they would disagree.
Ali: Yeah, they would probably not say that.
Logan: How do you think about that tension between a central figure who can make definitive decisions that sit across an organization, versus more decentralization and empowerment at an individual level?
Ali: Yeah. I mean, you get more clarity if [00:51:00] it's one person, and it's more consistent, and that's great. The disadvantage is that it slows things down, it bottlenecks things, and it's maybe less empowering. So you have to strike a balance. You build trust over time and figure out which are the parts you can completely delegate, where they're going to do an excellent job and they're probably better than you at it.
So don't pretend that you're better than them. And what are the areas where you really need to deep-dive? And then there are some things that only you can do as a CEO. Like, literally, tie-breaking. It might sound silly, but over time, overlapping product areas appear.
And you get leaders that are kind of building similar ish things, and the Venn diagram intersects. And if you leave it to them, no matter how many times you tell them, you know, Figure out how to unify this, do one. We're not going to have two similar offerings. They're not going to be able to do it on their own.
So you've got to step in and push. You've got to use the CEO card judiciously. I think that matters a lot. And you can build a [00:52:00] much simpler product. So I think if you want a really simple unified product that makes sense, one person makes much, much more sense. You just have to watch out for bottlenecking things and slowing things down.
Logan: Why is a willingness to say the same thing over and over again an important part of the CEO's job in your mind?
Ali: Because I think people are not paying attention to what you're saying. We all have busy lives; we're paying attention to our own lives. You know: I didn't even notice that thing you said. I wasn't even there. Maybe you said it, but I didn't really register it; I was on my phone, and I'd had to fire someone that day, and I had other issues I was dealing with.
So I didn't really internalize it; I didn't really get it. Many, many times you tell someone something, and only on the third time do you notice it lands, and then they repeat it back to you as if they came up with it. That's great. You've just got to repeat it.
But what I try to do is figure out a way to repeat it in different ways. Don't repeat it the same way all the time; it gets boring and old. Challenge yourself and come up with a new angle. Find a customer that says the same thing you want to say, or a use case that's a little bit different. Use different slides; make new slides.
Um, and you'll be [00:53:00] surprised. Because, you know, people didn't really get it the first time they'll say, Oh, this was completely awesome. What you just said now, even though you're like, well, that's literally what I said like a month ago, but I just refurbished it a little bit. So I think it's super important.
Logan: Being truth-seeking, I think, is one of your values. Is that right? How do you implement that in practice, or how does that manifest itself as a cultural practice for you?
Ali: Make sure in the hiring process that you test folks on truth-seekingness. You know, it's not that hard, actually, to ask someone: tell me what's your biggest screw-up. Tell me when you really made a mistake. Who did you really not hire well? Why did you make that mistake? How was it? And so on. Push them, push them, push them until they get uncomfortable.
Do they start to come up with lots of excuses? Well, I really actually didn't want to hire this person, but my boss made me hire them, and that was a bad hire. I knew it, but yeah, that was a mistake I made, but technically my boss's mistake. So, you know, keep pushing them, and you can, in the interview process, actually suss out if someone is truth-seeking and honest, or whether they're trying to, you know, make it look a little bit nicer.
Um, you know, so that's number [00:54:00] one. Number two, don't promote the people in the company that are not truth-seeking. At least if you made the mistake of hiring them in, then make sure you don't promote them. So in the promotions at the company, make sure that you're really evaluating the cultural values.
Like, really do it. It's easy to have the culture values, but are you evaluating them in promotions? I have a rockstar. The rockstar is doing great. They moved the needle on the business. Should they get the promotion or not? You don't want to be the person saying, well, wait a minute, can we look at whether they were truth-seeking?
Oh, they're not. Let's not promote them. It's like, are you out of your mind? This person had huge impact on the company. Yeah, but they're not truth-seeking. If you mean it, if you want that to be the culture, you have to actually do that in the promotion. And then of course, if it's an extreme case, we might have to part ways.
You have to do that as well. You can't have people around that don't fit the culture. So that's what I would say.
Logan: You spend 30 to 40 percent of your time on hiring. Is that right? One of the things you said is that when you're hiring executives, you'll just find reasons to keep in touch and find a bunch of different touch points. Can you maybe talk through that as a philosophy, or how you [00:55:00] go about doing it?
Ali: Yeah, I think the bigger philosophy is that I don't believe in interviews, because I think some people interview well and they might not be good at all, and some people interview really poorly but might be excellent. So I actually just think it's a very noisy channel; there's not that much value in it.
So then, what can you do? One of the things, this is the one that you mentioned, but there are other ones that we use as well, is I try to actually have them do the job. And we think, oh, in programming we can do that: ask them to program this thing, and then see if they do it well. But we can't do it for any other job.
And I disagree with that. If you're hiring someone for head of marketing, work with them to fix something that's broken in marketing right now, whatever it is, whether it's a personnel problem, or the webpage, or the product marketing positioning. Work with them, call them in the evening, call them in the morning, ask them, go through it. Basically make them do the job as if they were already an employee in the company.
And you will have a very good sense of how they are culturally. You know, are you on the same [00:56:00] wavelength? Can they actually do a good job at this? Especially pick the things that are not working right now in your company, because those are the things you would give them to fix when they come in.
And if they could fix those things, they're clearly a good hire. I'd rather do that than come into an interview and just ask them the gotcha questions that they might have practiced, you know, what's your biggest weakness, and so on. So that's one of the big things that we do.
The other is backdoors, and I believe you should do the backdoors upfront. Don't go through a lot of firms, especially recruiting firms. They will run you through a cycle to have you meet all these people, and then at the end they just want to check the box, and they'll even do the references for you.
Which is a big mistake. I think you should do references upfront and already know who you're hiring, so that the first time I meet a person, I already know a lot about them. I've already asked. And I've also found this thing: if you do five to ten backdoors on people, their bosses, previous bosses, you actually kind of know the person, in some regards, not in their [00:57:00] full entirety, better than they know themselves.
Because you know all the things they went through, and you know the perspective of three, four people that saw what happened to them in the previous job. They might not fully comprehend what everybody thought about that situation; of course, they have a unique perspective. So I think backdoors are extremely important. And then the other is to just look at their trajectory of work.
You know, where were they? Have they seen greatness? Have they done something stellar? Are they hopping around a lot? Those are the three things that we emphasize in hiring.
Logan: What are your techniques on the backdoor references? Are there specific questions you like to ask? No one likes to talk poorly about someone they know, so how do you drill in to make sure you're getting the actual high-quality feedback?
Ali: Yeah, it really depends on who you're talking to. They're all different, and you can tell immediately. Some people are just not going to tell you the truth; they're going to just say good things, or they don't want to say anything. So, you know, that's that. You just move on and end the call.
And I guess I'm revealing my recipe here, so it will never work again. But I think that [00:58:00] if you can find someone from their past who opens up, one thing that I like to grill them on is a real ranking of how good the person is. Don't ask for a score from 1 to 10; they'll say 10. Really ask them: do they rank number one? Number two? How many people have you worked with? And then go really deep: who's number one, who's number two on your list? By the way, you go after those as well, of course. And then really go through scenarios with them. It's just the follow-up questions: what did they do well? Tell me something that went wrong.
Give me a scenario. What was that scenario? Really push the boundary, almost to the point that they become uncomfortable. Many times I've pushed it so far that, you know, they call back and say, what's going on here? But you'll find when you're doing the backdoors that, okay, that guy and that gal,
they're super forthcoming. That's a straight shooter. They're going to tell you the whole thing, and others are not. Front-door references, the references that the candidate gives you, are similar but kind of inverted. You know, the vast majority are actually people that the person is going to hire next.
If it's a manager, they're literally giving you their future team. And that's [00:59:00] useful too, because you can interview those people, and you know who they're going to hire next, so you can gauge the quality of the people they're going to hire. But occasionally you find the front-door reference that also tells you the real truth, what's really going on.
Um, and you learn a lot there too. And when I do these references, I'm not looking for, hey, what's the one fault that's going to cancel the hire, that I'm not going to hire you over. It's to get a complete picture. You're going to have faults, you're going to have strengths; we want to understand that full picture before we go ahead and hire you.
So that's what we're going for. But yeah, for some execs, I've done 15 of these backdoors before I hired them.
Logan: As a leader, you've had to ramp up in different functional areas, one of which was enterprise sales, which I don't think you had experience in. How did you go about learning in that area? I think you maybe copied best practices from a bunch of different organizations and stitched them together to build your own model.
Maybe you can talk through learning a new area or discipline and how you went about it.
Ali: Yeah. I mean, first of all, I would just say, you know, we hired Ron Gabrisco, who took us from essentially zero to now over 2 billion [01:00:00] ARR, basically 3 billion ARR. And so he's been phenomenal.
Logan: And what did you see in him that made him the number one person?
Ali: I thought that he was extremely smart. So I thought he's actually different from many of the salespeople.
He is the classical enterprise B2B sales guy, you know? He's not the engineer type; he is the classic archetype of a sales leader, with all the rah-rah and motivational speeches, getting everybody riled up. He has all that. So that's great; that's the stuff we didn't understand or have.
He had the big enterprise experience, because there are a lot of sellers that sell to mid-market and SMB, but he had actually sold to really big, gigantic enterprises. So that's also something we didn't understand anything about. And he was thinking very big. When he came in, it was like, oh, we're going to do a 10 million contract, or 100 million. We're like, no one's ever going to pay 10 million for Databricks. That was our mindset. So that was also useful. But he also was cerebral, he was [01:01:00] extremely smart, and he went by first principles. So there was a compatibility. He had an engineering degree, so he was sort of the unicorn:
someone who spoke our language and could understand us and was extremely smart, but also had this whole other thing that we couldn't understand. So that hire was important. It took us a year to hire. We went through a lot of candidates, and I think we kind of nailed that one. It was luck.
Um, so that's important. The other thing I would say is, in the interview process, when you're searching (like it took us a year to hire a salesperson), take the exec hiring process super seriously. I've seen people make the mistake of not doing that. It's a learning process. You're basically going to school to learn what a head of sales looks like.
So in that process, talk to all the top heads of sales and grill them, and pay attention. Since I don't believe in interviews, as I said, use the interviews a little bit as a tutorial for yourself, to learn what excellent sales looks like in your organization, for your company specifically.
And [01:02:00] after talking to 20 people, you get a pretty good picture: okay, this is kind of it. This is what they're doing. All the stellar ones seem to have this in common. This is what it's about, and you can go deeper. And then in the backdoor process, you figure out who's who.
You figure out who are the top people in this industry that you can't hire. Try to get some time with them as well, to just learn from them and build a relationship with them, or get them to be advisors. So really put yourself in the mindset of, hey, you're going to university to learn. That's what that process is.
You know, you're not an expert, so trust the people that are the best in the industry at that. Try to find the number one person and go with them.
Logan: At Databricks, are there certain things that you feel you uniquely prioritize, or types of people that succeed within the organization, that are maybe unique to Databricks?
Ali: Well, our culture principles have been reverse-engineered to exactly answer your question.
Logan: So you didn't start with the principles and go the other way. You started with the people that succeeded and
Ali: Succeeded and failed. We also looked [01:03:00] at people that had perfect CVs that we hired, and they bombed out. Why was that? And we went back and said, okay, we couldn't agree on the hiring bar. They were like, I've worked with these 10 people, they're stellar. And we're like, no, that doesn't look like our bar. That was a big issue.
So: raise the bar. Don't settle. We'd rather have more false negatives than false positives; we'd rather reject someone that's really great by mistake than hire someone by mistake that's not great. And that's painful. We learned that the hard way. Truth-seeking: a lot of the execs from the big companies had this attitude of, you know, everything's going great.
I got this. My team's got that. We're working on this. Here's a nice slide deck. Manage up. Nothing to see here. But that didn't work for us. That's where the truth-seeking principle came in. We really need to know what's broken. Who in your team is not working out well?
What are we not doing well? Where are we not satisfying the customer? So that was the second principle. [01:04:00] The third principle: execs would say, I've done this before, I know how it's done, this is how much bigger, better companies are doing it, we should do it this way. That didn't work for us either. We needed to know exactly
how and why, and go from first principles on every decision. So being first-principles became another culture principle. Our culture principles are the recipe for succeeding at Databricks. Execs ask me all the time at the end, right before we shake hands and I hire them: hey, so do you think I'm going to be successful here?
What do you think is going to be needed? It's super simple, I swear to God: read the culture principles. If you really do those, then, of course the output also has to be great, but you will work out here. If any of this stuff doesn't look good to you, let's argue about it now,
because that's going to be an issue. Those are really the principles of the company. So our culture principles aren't the aspirational cool stuff that you'd like to have on the wall. It's literally what worked and didn't work in the past, empirically.
Logan: When looking for people to come into Databricks, I think you've said you look for people that have a chip on their shoulder a little bit. Is that something you [01:05:00] try to tease out in the conversations? What are the types of things that you'll ask to try to figure that out?
Ali: I mean, the people that really went against all odds, that really push, they have grit. One of the things that Ron said in the sales interview, which I still remember from November 2015, I think, was: hey, if there's a deal lost,
I want to know, like, wow, because there's no way anyone else on the planet could have won that deal. I just know it's humanly impossible, because I will have done everything humanly possible to win that deal. It was a side comment he made about something, and it was clear that this guy is going to go
over and beyond. And then, of course, the negotiation with him to hire him was a real pain. So that was also reaffirming that, okay, he's probably going to be great at this job. You don't want a head of sales that's terrible at negotiating their own comp. So you want to suss [01:06:00] out: do these people have something to prove? If they're super content and happy with things, you know, then it's probably not going to work.
It's probably for people that are a little bit broken, like all of us are. We all have something to prove, and we're going to work our ass off. We're going to do everything in our power. And you can suss that out in interviews. It's not hard.
Logan: You mentioned "work your ass off" there. That's something that I think went out of vogue for a while, working hard, and it's sort of come back in vogue. I'm curious, you've said something along the lines of: you've never met someone successful who didn't work hard. Can you elaborate on that point?
Ali: In research, in academia, genius professors who had won amazing awards, you know, you'd think you can be really, really smart in research and maybe you don't need to work hard. But all of them that I met were working harder than us, and we were shocked: they're there 24/7.
And then in industry, the same thing. So I've just never met someone who's phoning it in and is super successful. You find talented people, [01:07:00] but they're not successful; it has to go with extremely hard work. So we had that from day one at Databricks, and we never deviated, whether it was in vogue or not.
We were a super hard-working culture from day one, and we continue to be. And we said, look, we have lofty goals; to achieve them, we're going to work extremely hard. If you don't want to do that, maybe because you have family or other things you want to do in life, that's totally understandable, but then Databricks is probably not for you. So it's always been true here at Databricks. In fact, our work-life balance scores weren't great in surveys, and I try to be upfront about it. We don't actually want to have terrible work-life balance; we focus on teams that have terrible scores and try to change things.
And we actually got improvements in those teams. But I was also upfront: I don't want to be best in class and have the highest work-life balance score in the industry. We're not going to win that way. It's as simple as that. Now, I try to match that myself; I don't want anyone to work harder than me. [01:08:00] But it has to be sustainable.
What I mean by that is I've been doing this for a decade or more, and I want it to be the same for the employees. If they're really burning themselves out and can't do it for many more years, I don't want them to do that. I want them to be able to do it sustainably, and then hopefully we can achieve great things together. And if they don't, that's okay too. No hard feelings. Or maybe in one period of their life they want to do it, and in another they don't. That's okay too.
Logan: One of the hardest things companies deal with in scaling is when the people that got you here start to question whether they're the ones that can get you to the next step of the journey. It sounds like when you became CEO, you made the decision to uplevel a lot of leadership.
And along the way you've kept Ron, and he's continued to scale in his role, but I'm sure there are other functional areas where you've decided to upgrade. How have you gone about making the decision that, hey, it might be time to part with someone who's done a great job up to this point, but you might need someone new to take you to the next level?
Ali: I think you have to just be truth-seeking with yourself. Number one is, when do you know it yourself? I think [01:09:00] we kind of know; the signals are there, but we just don't want to see the truth. We want to ignore it, because it's so hard; this person helped build the company with you.
So you don't want to admit it to yourself, but deep inside, you know. Deep, deep, deep inside. So immediately when that feeling is there, recognize it and face the truth. Be truth-seeking with yourself. Don't lie to yourself, at least, as an exec. Okay, once you know it yourself, now you have to take action.
I've found it easiest if you can explain it to the person and get them on board that it's not really working. Just make it about: is it working or not? Are you winning or not here? Just focus on that. And if they realize, hey, okay, I am not winning, well, let's discuss why.
Then everything else becomes much simpler. Okay, I'm not winning. It's not quite working. There are many reasons. Okay, let's figure out a way to make this work. Then you can steer the conversation towards what's the best thing for you and your career.
I want to help you. I want you to be successful. I have found that there are new roles for people where they can continue to crush it. I have had people for whom, [01:10:00] in that role, it wasn't working anymore, and we found a new person to lead it, or you do the layering or whatever, but you find a new role for this person, and they crush it, and they're awesome at it, and they're actually happier.
You know, after a while, they'll say: first, Ali's no longer bugging me about it not working. Second, I'm actually happier when I'm succeeding. When I'm winning, I'm happier than when it's constantly not working and I'm having a bad day every day. So I've actually found that many of those folks, you can keep them in the company.
But you should also part ways if they're just phoning it in, and they're not interested anymore, and they're just there for some memory of the early days when everything was great. You're better off parting ways too. Let's not waste their time and our time.
Logan: Maybe zooming back out as we wrap. If someone's listening to this in a couple of years' time, and we're looking forward with your crystal ball, which you claim not to have, but I think you're pretty good with it: are there elements of [01:11:00] the world around Databricks, or what you guys are operating in, that you think are on the cusp of being possible in the next five years? Things that get you really excited to get out of bed every day and do this job, the potential maybe at an industry level, not even at a company level, that you think we're on the cusp of seeing?
Ali: Yeah, I think what's going to happen is that many of the things that we see today are going to be disrupted, and there's going to be a sort of AI app for those things in the future. We don't even know what they're going to look like. But what do they all need at their core?
What did apps need in the last 10 years? They needed an Oracle database under the hood. You know, the joke is that you stuff all the complexity into a database, and then you build workflows and UIs on top, the two-tier or three-tier model. I think in the future you need the AI database
under the hood. And if you have an AI database, you have all your transactional data, you have all your analytics and AI in there with the model, and then you can do [01:12:00] wonders on top. So I think we're going to see an ecosystem of these applications in the AI space being built, and they all need an AI database under the hood.
And that's basically what Databricks is: an AI database. We don't call it that, we don't market it that way, but essentially it's an AI database under the hood. And you will need that for every application; every company on the planet will need that. And, you know, I hope that we can be the company that powers all of these applications in the future.
There will be others that do that as well, but that's the goal. And I think we're already seeing the seeds of this happening.
Logan: The seeds of it, I guess I'm curious, just to elaborate on that, like the seeds of this happening. Are you seeing just particularly exciting use cases or just customers?
Ali: Look, there's a lot. In the SIEM space, they need to do massive log processing securely and detect zero-day attacks. Okay, that's kind of Databricks' bread and butter: real-time processing of massive logs with AI built in. In the observability space, same thing. You need to observe your logs, [01:13:00] understand the metrics, detect anomalies, and so on.
In the customer 360 space, CDPs, understanding what your customers are doing, same thing again. It's massive data that you have to collect from many, many different places and then get telemetry on. It's huge data processing. You need to do it in real time, and you need AI to get the insights.
So each one of these spaces, I think, is going to be built on basically an AI database. I hope we can be the AI database that powers all of them. And this disruption, I think, is going to happen in the next two, three, four years.
Logan: It's exciting. Well, thank you for doing this.
Ali: Likewise.
Logan: Thank you for joining this episode of the Logan Bartlett Show with co-founder and CEO of Databricks, Ali Ghodsi. If you enjoyed this conversation, we'd appreciate it if you shared it with anyone else that you think might find it interesting, as well as subscribed to us on whatever podcast platform you're listening to us on.
We look forward to seeing you back here next week with another great guest on the Logan Bartlett show. Have a good weekend, [01:14:00] everyone.