[00:00:00]
Logan: Welcome to the Logan Bartlett Show. On this episode, what you're going to hear is a conversation I have with Alexandr Wang. Alexandr is the co-founder and CEO of Scale, a company most recently valued at $7 billion that helps companies use their data as an input into the development of artificial intelligence models.
Alexandr started this company at 19 after dropping out of school, and it's scaled into one of the most important companies in the world of artificial intelligence today. Really interesting conversation with Alexandr about the future of artificial intelligence, including what the risk of catastrophic doom is, as well as his concerns about the potential for artificial intelligence to create further inequality in society.
We also talk about his operational lessons, including hiring people that actually give a shit about the problems you're solving. Really fun conversation with one of the world's youngest billionaires in Alexandr that you'll hear now.
Logan: Alex, thanks for doing this.
Alexandr: Of course. Thanks for having me.
Logan: So there was a phrase that was pretty [00:01:00] ubiquitous about a decade ago, that data was the new oil. Can you talk about why you reject that view?
Alexandr: So I think there's a lot the phrase gets right. If you went back like two decades, the largest companies in the world were all oil companies. And at that point, and it's less the case now, oil and petroleum were the bringers of power and leverage, mostly economic leverage.
So the way in which data is the new oil is that it is by and large going to be the main lever for economic power and economic influence over the course of the next few decades. The thing that it gets wrong is that data is not a commodity in the same way.
Not all data is created equal in the way oil is. Oil, by definition, is this scarce commodity. But data is far richer than that. Data has multitudes: you could have data specific to [00:02:00] code, or data specific to language, or data specific to law.
And each of these pieces of data is quite different. And therefore, when you think about it strategically, it's a different framework you have to apply. You're not just going around hunting for data wells and trying to mine them up and resell them. You need a thoughtful strategy by which you're stitching together useful, qualitatively different data sources.
Logan: What does "data is the new code" mean, and how did that serve as a primitive to the founding of Scale?
Alexandr: The basic concept is: what is the building block that enables the next generation of applications? That building block, undeniably for the past, let's say, 50 years, has been code. Code has enabled many revolutions in technology, most notably the internet and mobile and everything that's happened.
Code was that fundamental building block. And I think as you peer forward towards the era of AI, [00:03:00] in a world where models and algorithms more and more start to be what we interact with, govern the applications we use, be the core primitive of our technological lives, then data actually becomes the building block.
The formative experience for me here was that I was in college at MIT right when Google released TensorFlow. It was the very early moment where deep learning and large neural networks were starting to become democratized. And I remember I used the exact same algorithm to detect facial emotions as to detect whether or not my food had gone missing inside my fridge.
And nothing had changed except the data. The code was all the same. The algorithms were all the same. You run the exact same commands on the terminal, and it was just the data that was changing the performance of the algorithm. So the formative insight was basically: if you think about the [00:04:00] next, call it, 50 years of technology that's going to be built, what is going to differentiate one application from another? What are those building blocks that are going to compose on top of one another into an incredibly differentiated thing, or something that delights consumers?
And that thing was data, which gets at the heart of, I think, the importance of it going forward.
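To make the "same code, different data" point concrete, here is a minimal sketch in TensorFlow/Keras. It is an illustration, not the actual code from that experiment; the two dataset names at the bottom are hypothetical placeholders. The model definition and training commands are identical for both tasks, and only the data passed in changes what the network learns.

```python
# Minimal sketch: the same model and the same training commands, applied to
# two different labeled datasets. Only the data changes the resulting behavior.
import tensorflow as tf

def build_classifier(num_classes: int) -> tf.keras.Model:
    """The same small convolutional network, regardless of the task."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

def train(dataset: tf.data.Dataset, num_classes: int) -> tf.keras.Model:
    """Identical compile/fit calls for every task; only `dataset` differs."""
    model = build_classifier(num_classes)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(dataset, epochs=5)
    return model

# Hypothetical datasets of (image, label) batches; the code path is the same for both.
# emotion_model = train(facial_emotion_dataset, num_classes=7)
# fridge_model  = train(fridge_contents_dataset, num_classes=2)
```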
Logan: So that's the insight. Can we walk through to a specific example, what a use case was in the early days that kind of got you going around this?
Alexandr: Yeah. So the earliest use case was autonomous vehicles. Go back to 2016, 2017 in Silicon Valley: probably the mega trend was autonomous vehicles and self-driving, and there were many companies being started. A lot of the automakers were starting their own programs.
There's the GM Cruise acquisition, which was maybe the starting gun for the entire industry. And for all of these autonomous vehicles, one [00:05:00] requirement to be self-driving is that you can fully see everything that's on the road, so that these cars can drive down the road and can see: oh, there's a person there, there's a car there, there's a bicyclist there, there's a construction cone over there.
This is what the traffic light says; fully understand the environment around them. And to be able to do that, they had to build algorithms that ingested huge amounts of data, basically tons and tons of examples that the algorithm could learn from: in this scenario, this is where all the cars were; in this scenario, this is where all the pedestrians were. And then train off of millions and millions of examples like that to build these robust vehicles.
It's come full circle now, because in San Francisco you have self-driving cars driving around everywhere without drivers in the vehicle. It's now finally become a reality.
Logan: What role did Scale play in that value chain of getting autonomous cars going? Like, where did you fit in versus where Cruise stopped, or Waymo, or whatever the right example is?
Alexandr: [00:06:00] Yeah. It was specifically in this data refinement stage, where the cars would collect huge amounts of data. They would drive around, and you would get tons of footage: video footage, LiDAR data, radar data, all the sensor data together. In none of that data were there actual examples marked of: this is where a person is, this is where a pedestrian is, this is where a bicyclist is, this is where a car is.
So the algorithm had nothing to learn off of. What we did is we went from raw data to what's called labeled data, or high quality data for machine learning applications, where all these examples were marked so that the model could actually learn: in what situations, what does a person look like, what does a pedestrian look like, what does a car look like, et cetera.
And one of the things that we like to say is that, while I disagree with the framing that data is the new oil, if data is the new oil, then Scale is the refinery. We ran this process by which you would convert large amounts of raw data into very high quality data that can power your algorithms.
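As a rough illustration of what "labeled data" means in this setting, here is a simplified annotation schema. The field names and classes are illustrative assumptions, not Scale's actual format; the idea is just that the raw sensor frame gets paired with marked examples a model can learn from.

```python
# Simplified sketch of a labeled frame: the raw camera/LiDAR frame stays the same,
# and the annotation adds the marked examples (bounding boxes with class labels).
from dataclasses import dataclass, field
from typing import List

@dataclass
class BoundingBox:
    label: str      # e.g. "pedestrian", "car", "bicyclist", "traffic_cone"
    x: float        # top-left corner of the box, in pixels
    y: float
    width: float
    height: float

@dataclass
class LabeledFrame:
    frame_id: str                                   # which sensor frame this annotates
    annotations: List[BoundingBox] = field(default_factory=list)

# One raw frame, refined into a training example the detection model can learn from.
frame = LabeledFrame(
    frame_id="drive_0042_frame_001337",
    annotations=[
        BoundingBox("pedestrian", x=412.0, y=220.0, width=38.0, height=96.0),
        BoundingBox("car", x=120.0, y=260.0, width=210.0, height=140.0),
    ],
)
```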
Logan: And why was that a problem that they wanted to outsource to [00:07:00] a third party rather than bringing that in house and building that competency out themselves?
Alexandr: I think in general, if you look at the overall AI industry, the large-scale building blocks, the large-scale ingredients for it, end up being such big problems that companies deserve to be built to occupy those infrastructure slots. Another way to think about this is that when I was starting Scale, I was very inspired by Stripe and AWS, these large-scale infrastructure companies that felt very visionary, because they realized that there were the same problems that every company in a sector, every company in the startup industry, were going to deal with. And they took those and built almost consumer-level experiences for the developers, and built them to a point where it was so easy to use, and the economies of scale were so clear, that they just became the defaults within the industry.
So if you look at that for AI or for [00:08:00] machine learning, there were three main ingredients. There's compute, so GPUs and other chips to power these incredibly data-intensive and compute-intensive algorithms. And as we've seen, almost the entire industry outsources to Nvidia at this point.
There's talent, which there's no way to outsource really, but talent is this place where these companies obviously are spending huge amounts of money. Engineers at these firms are making millions and millions of dollars. They have teams of hundreds and hundreds of people.
So they're spending on the order of billions of dollars on the talent, full stop. And then there's data. Each of these three ingredients were such big pieces of the overall AI componentry that if there were companies that could solve them in a very high-quality way, they were going to be used.
The industry demanded an infrastructure layer for each of these components. So that's really the way I look at it. [00:09:00] Each individual company has this option: do I build it in-house, or do I use the industry infrastructure?
And most companies take an approach where there are a few things it makes sense to build on your own to differentiate yourself, and you accept that you're going to do those things, generally speaking, less efficiently than the industry standard, because of the economies of scale and network effects that the infrastructure providers have.
Logan: So you got going around that, and obviously your use cases have expanded today. We've seen what generative AI looks like with companies like OpenAI and Anthropic and many others. So where do you all play in the reinforcement learning with human feedback paradigm? Can you apply the similar primitives that you got going there to this world of generative AI?
Alexandr: Yeah. So I think one of the craziest things about modern-day AI is that most of the capabilities of these models [00:10:00] are taught by data. You still don't have AI systems that are just learning on their own and spontaneously demonstrating these very human skills.
They're taught to them by large-scale data sets and human data. What we do is build what we call a data engine, which is, with similar framing, the refinery for raw data in the ecosystem. That data engine powers every leading LLM in the industry today; effectively every large language model is powered using Scale's data engine. And the specific technique, the specific approach, is what you just mentioned: reinforcement learning with human feedback.
Logan: you explain that for people that maybe don't know that term?
Alexandr: Yeah. So this was a technique that we actually worked with OpenAI on back in 2019, on the very first experiments of it. But the basic approach is that you teach a model what good looks like. You teach a model how to assess whether one answer or one response is better than another.
And it learns that through a bunch of examples where human [00:11:00] experts are teaching it. So a human expert will say, this one's better than that one, and here's why. The model can learn off of that and then know what good looks like. So by the time it gets to actually producing results, it has an internal sense of what good looks like and what bad looks like, and what's better than another thing. It's called a reward model. And then it does what's called reinforcement learning: it basically uses that internal sense of what good looks like to optimize its own responses. What that means is that this allows the models to actually exceed human performance in a lot of cases.
It's how every human in the world can be a movie critic, but almost none of us can make a movie. Each of us can say ways in which a movie could be better or could be improved, but obviously I can't make a movie. In the same way, if humans can teach the model what better looks like and how to improve, then the model can keep improving even far beyond what human capability is.
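Here is a minimal sketch of the reward-model step described above, using a standard pairwise (Bradley-Terry style) preference loss. This is an illustrative toy, not any lab's actual implementation: it uses random embeddings in place of a real language model, and the reinforcement learning stage that optimizes against the learned reward is omitted.

```python
# Toy reward model trained from human preference pairs: "chosen" was judged better
# than "rejected", so the model is pushed to score chosen higher than rejected.
import torch
import torch.nn as nn

class TinyRewardModel(nn.Module):
    """Scores a response embedding; a higher score means 'better' per human preference."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

def preference_loss(model, chosen_emb, rejected_emb):
    """Pairwise loss: -log sigmoid(score_chosen - score_rejected), minimized when
    the chosen response outranks the rejected one."""
    chosen_score = model(chosen_emb)
    rejected_score = model(rejected_emb)
    return -torch.nn.functional.logsigmoid(chosen_score - rejected_score).mean()

# One training step on fake embeddings (a real system would embed prompt+response
# with a language model and then use the trained reward model as the RL objective).
model = TinyRewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
chosen, rejected = torch.randn(8, 128), torch.randn(8, 128)
loss = preference_loss(model, chosen, rejected)
loss.backward()
optimizer.step()
```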
Logan: You're in such a unique position to see how customers are leveraging AI. [00:12:00] Are there any interesting anecdotes or observations you've had in the last couple of months or year, whatever it's been, about enterprises, big companies, leveraging both you and one of the model providers to do something that you can speak to?
Alexandr: The really interesting opportunity for enterprises now is that if you look at the best-in-class models that are built today, they're trained off of predominantly public data, so predominantly data from the open internet. But if you think about the total data that's available, or the total addressable data, let's say 99.9 percent of that is actually private, proprietary data of some form. One way to benchmark this: of the words that you type, that each of us types, what percent of those end up on the open internet? A vanishingly small percentage. Most of it is in messages, or emails, or memos.
These are things that will never end up on the public internet, unless you're subpoenaed or something. So what that means is that most enterprises, whether they know it [00:13:00] or not, are sitting on troves of data that far exceed the amount of data that's accessible in these other formats on the public internet.
So much of the opportunity for enterprises is figuring out ways to take great base models that are trained off the public internet, but then intermingle them, fine-tune them, and specialize them on top of their own data, on top of their own business, their own customers, all of that context, to produce things that are quite uniquely theirs and proprietary, and generally differentiated because of all this data that they've amassed in the past.
Broadly speaking, that's the direction we think the world is going to go: enterprises are going to be able to build models on proprietary data that have unique capabilities. And the exciting thing that's been happening over the past few months is our work with OpenAI and other model providers; we partnered with Meta on Llama 2 as well. We're taking these general-purpose models and fine-tuning them on [00:14:00] top of enterprise corpuses. So we've built a platform, EGP, which basically enables enterprises to take their own enterprise data and fine-tune it on top of GPT-3.5 or Llama 2 or other base models over time.
They can build things that are uniquely capable for their own use cases, whether it's for customer care and support, or for legal applications, or for their own development capabilities. And I think this is incredibly exciting because it's a way for enterprises to get the best of both worlds: all of a sudden I'm leveraging all of the incredible development that's happening among this small handful of foundation model providers, while also adding something to it that makes it uniquely mine.
So I think this is the paradigm of the future for the enterprise. There's obviously a long way to get there, but I think this is very clearly what the future is going to be.
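A rough sketch of the fine-tuning pattern Alexandr describes, written against open tooling (Hugging Face Transformers) rather than any specific vendor platform such as EGP. The base model name, corpus file, and hyperparameters are placeholders; the point is simply continued training of a general-purpose base model on a proprietary text corpus.

```python
# Continued (causal language model) training of an open base model on an
# enterprise-owned text corpus. Model name and file path are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

base = "meta-llama/Llama-2-7b-hf"          # any open base model you have access to
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# Proprietary corpus: e.g. support tickets, contracts, internal docs exported to text.
corpus = load_dataset("text", data_files={"train": "enterprise_corpus.txt"})
tokenized = corpus["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-5),
    train_dataset=tokenized,
    # mlm=False means next-token (causal) objective; the collator pads and builds labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
```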
Logan: Let's say you're an executive or a founder at a [00:15:00] startup or an enterprise, in some decision-making role, and the core business isn't related to artificial intelligence. What should you be doing right now? What recommendations would you have for someone that isn't at a Fortune 100, where they have people specialized in thinking about it? For the average executive or founder, how do you go about discovering what you could potentially use artificial intelligence for, Scale for, OpenAI for?
Alexandr: You should basically first go through and catalog: okay, what are my unique data assets? And then one mental model to use is, let's say there were a person who was superhuman and could read through all of that information more quickly than anybody else.
What are the things that person would be able to do better than anyone else in the world? That's a pretty rough approximation of what this looks like for the model. The models are better at storing information than human brains, and they're not as time-limited as human [00:16:00] brains.
So they can read through everything. Then, what are the unique capabilities that you get from a system that has been able to do that? So go through that mental exercise, and think about, okay, what are the unique things that I can do from there? Both cost reduction, where customer care is a pretty clear example of cost reduction or optimization, and the offensive things I can do.
And then I would just seek to build those out with knowledgeable AI partners: ourselves, OpenAI, Anthropic, these companies that are seeing the entire ecosystem play out. And I would basically race to do that, because I certainly believe that, not all businesses immediately, but in a pretty short timeframe, it's going to be very clear which businesses have embraced AI and which ones are still not running on the models.
And it's going to become very evident from the consumer experience as well as the financial results.
Logan: How do you compare the advent, or the last five years, of artificial intelligence to past trends like the personal computer, the internet, the smartphone, the iPhone, whatever it is? In your mind, [00:17:00] the societal impact, GDP lift, productivity gains, whatever the right framework is, how do you think about it?
Alexandr: My honest take is it's going to be bigger than all of them. But you can look at it from a few different lenses. I think at minimum, AI is clearly a new consumer paradigm, a new way in which people will expect to interact with technology. And so in that way, you can say it's at least another mobile,
in the sense that mobile and personal computing were massive changes in paradigm and in accessibility of a lot of the base technologies. The same thing is happening here with AI. Chatbots are very clearly an extremely popular delivery method for technology.
So as a baseline, you can frame it as a new consumer paradigm for technology. But the upshot is that AI has been a very hyped technology for a long time, for good reason: it is the holy grail of unlocking human productivity. Take the framing of productivity, which is, let's say, roughly speaking, GDP per capita.
What's the economic [00:18:00] output divided by the number of human heads you have? All of a sudden, if you have technology, algorithms or AI systems, that can start doing pretty meaningful chunks of what would otherwise require humans, you have a potentially ridiculous unlock on productivity.
Another way to look at this: if you take all of US GDP, it's roughly $27 trillion. Software and IT services is about $2 trillion of that. So everything that you or I spend all of our time thinking about is that two trillion dollar bucket of the overall spend, which is not nothing.
But it's not even 10 percent, right? $16 trillion of US GDP, so more than half, is in services. The biggest bucket of that is healthcare, and the next biggest bucket is financial services. The potential disruption of this $16 trillion of services GDP, I think, is the TAM. That's the potential of artificial intelligence.
That's where [00:19:00] you can potentially transform things to be 10x more productive, 10x better for the consumer, 10x more economically efficient in every way. You can't imagine an economic opportunity bigger than that. In many ways, I think it is the biggest economic wave until, obviously, at some point in the future
there's some new technology that has the ability to be as impactful. And the key question you would ask is: okay, so to unlock that, you just need to believe that the models will keep getting better pretty quickly. Because no matter what, if the models keep improving at the rate they're improving now, we're going to end up in that world where the opportunity to disrupt the economy is just totally unprecedented. And I think we as an AI community don't see that slowdown happening anytime soon. So we're in the midst of potentially one of the greatest economic engines of the world being invented, and that, I think, will be one of the most special technological changes we see.
Logan: You said the next two to three years of [00:20:00] AI are going to define the coming two to three decades of the world. What did you mean by that? Was that related to a lot of this productivity gain stuff? Was that geopolitical in that comment?
Alexandr: I generally take the stand that there's two ways to look at the world in terms of, let's say, the balance of power between countries. You can look at it from an economic standpoint, and you can look at it from a hard power standpoint. Probably most of the history of the world before World War II was dictated by hard power.
And then most of the history of the world for the past 80 or so years has been dictated by economic power. You could certainly ask the question of which is going to define the next 80 years, but at minimum, it's one of the two. So if you take that framing, I think one of the things that's quite shocking is the next two to three years of AI development.
Everything we've seen over the past three or four years of AI development is shocking. In 2019, GPT-2 couldn't count to 10; it would spit out [00:21:00] gibberish English, it was totally unintelligible. And now, four years later, GPT-4 is probably more convincing and eloquent than most people in the world.
That happened over the course of four years and roughly, on the order of, a 1000x scale-up of the models. GPT-2 is roughly 2 billion parameters; GPT-4, people estimate, is somewhere between one trillion and two trillion parameters. So across a roughly 1000x scale-up, we've seen this transformation from worm-level intelligence to something quite convincingly human.
In the next two to three years, many companies are on the record for undergoing another hundred-x scale-up. They will go from spending hundreds of millions of dollars on these models to tens of billions of dollars on these models. And my expectation is that's going to deliver very powerful algorithms that have the ability to impact both of these spheres, both economic power and hard power.
Okay, let's say we're in this takeoff scenario. I think the case for economic power is pretty clear. If you believe what I just said around it being [00:22:00] the most important thing for global productivity, for economic productivity, then whoever gets there first, whoever integrates it into their economy fastest, whoever is able to actually leverage it first, whichever country or society does that first, is going to have this meaningful leg up from an economic standpoint.
And then from a hard power perspective, if you believe the technology is of a similar vein as the atomic bomb, which we can certainly dive into, if you believe it's that kind of technology with the ability to deter conflict and project hard power to that degree, then it's also going to fundamentally change the balance of military power.
So it feels to me like no matter how you slice it, this technology, while today we think about it as a chatbot, is at the core of the balance of power globally for the next 50 years.
Logan: What part of the atomic bomb analogy do you agree with? What part do [00:23:00] you reject? It's obviously near and dear: you grew up in Los Alamos, so you have some familiarity with elements of it. What do you believe about that comparison versus not?
Alexandr: there's a bunch of interesting nuances here. So the atomic bomb was obviously primarily a weapon of war. So it was, it is a weapon and it's something that pretty clearly, you know, we as a, we as an entire world could pretty quickly agree we didn't want to use that anymore. And so it very quickly became, went from, after a few uses to being this like very clear deterrent for conflict.
And this huge stabilizer for the globe. The difference with artificial intelligence is that No matter what, we're going to have to use the technology for economic purposes. So there's no scenario in which, the countries of the world are going to get together and say, Hey, we're not going to use AI anymore.
And artificial intelligence is a pretty difficult technology to detect the use of. So part of the issue with [00:24:00] AI is Russia could be using it for cyber attacks today. And it'd be very hard for us to actually like, know that's what they were doing. It's almost impossible to hide the fact that you used a nuke, right?
So because of that, it makes it pretty hard to set the right international standards around the use of the technology, the fair use of the technology, how, set standards around how terrorists can and should use the technology. And the only thing that makes it that presents challenges as a world, which is that this is a hard technology to keep in any sort of box unlike nukes.
The way in which it's similar, I think, is that it's a technology that has a very steep technological curve and has very clear benefits to scale. So to the degree that the United States, or a small set of democratic countries, can be the leaders in this technology,
then I do think it has the potential to be a huge deterrent towards other countries that are behind on that curve. And that's [00:25:00] certainly been the case with atomic and nuclear weapons.
Logan: Do you worry about a catastrophic risk scenario, that fast takeoff, anything that's more nefarious with AI itself, rather than it being used by foreign entities for things, I don't know, bioweapons or whatever you want to compare it to? Do you worry about it in and of itself?
Alexandr: My taxonomy on AI risks is that there's three buckets. The first bucket is the AI-qua-AI risk: the AI itself becomes the threat to humanity. That's, personally speaking, not the bucket that I am most worried or concerned about, and I'll speak more about that.
There's the AI misuse category: authoritarian countries or terrorist groups misusing the technology. I think that's a very real risk; I think that's the most real risk that we have. And then there's the last risk, which is a second-order effect: with massive labor [00:26:00] displacement, you'll see all sorts of political instability, domestic instability, populism, these kinds of trends in many developed countries.
So the misuse one, I think, is very real. We're seeing overall an increase in terrorism in the globe, and I think that the potential for misuse of the technology is very high: for cyber attacks, for bio attacks and bioweaponry, for information warfare. The version of this that I think is almost the most direct or clear is that there are these companies like Character.AI and Replika, where you can have an AI model that becomes a genuine companion to huge percentages of the citizens of various countries. And if you had a foreign-run, foreign-operated AI companion company, I think that's the most effective intelligence agency that you could possibly have. There's a lot to be worried about in the realm of [00:27:00] AI misuse.
It's something that's certainly very concerning, something that we as a country, we as a society, need to think about. How do we mitigate those risks? There was the executive order from the Biden administration; I think we're certainly thinking about those.
Hey guys, Rashad here. I'm the producer of the Logan Bartlett show and wanted to take a quick second to make an ask.
We are close to 10,000 subscribers and are trying to get there by the end of the year. If you're enjoying this conversation and these episodes, please consider subscribing to the YouTube channel. Now back to the show.
Logan: What's something that you believe inevitable about artificial intelligence in the next five years that maybe isn't mainstream or the average person wouldn't fully appreciate?
Alexandr: I think there's a bunch of things I'll mention. One that most people in AI see and believe, but that certainly is not yet fully mainstream, is that these models are going to very quickly become some of the largest investments in most countries.
If you believe these go from hundreds of millions of dollars to billions of dollars, to tens [00:28:00] of billions of dollars, to hundreds of billions of dollars, there's not that many countries that can afford a hundred billion dollar investment, either funded through private industry or funded through the public sector, through the government itself.
And so this very quickly becomes one of the largest economic or scientific projects that the world has seen, which I think maybe surprises people because it isn't that yet. These models have cost hundreds of millions of dollars, but a lot of people can afford a few hundred million dollars. Very quickly, it's going to be almost like particle accelerators, these massive scientific projects, in terms of scale of investment.
The other piece that many people don't think about, or that I think is just going to slowly blend in, is the percent of time that humans are interacting with other people versus with a model directly. That split is just going to keep shifting in the direction of the model.
There's truly no reason, outside of regulation, you would believe that the percent of my total time [00:29:00] I spend interacting with models is going to decrease at any point for the next few decades. So that's going to increase monotonically.
It's already pretty high for me; I interact with ChatGPT quite a bit already. And I think that's a very weird sociological scenario for us to contend with: no matter what, these models are going to start eating into all the time you spend talking and interacting with other people.
If you believe the models are only going to get better, if you believe they're only going to have more interesting data, if you believe the products are going to get better, the monotonicity of the improvement is going to be very weird to think about. Maybe these things don't happen in the next five years; maybe they happen over 10 years, 15 years, who knows.
But at some point, people are going to spend more than half their time talking to models versus humans.
Logan: There was once a concept or belief that it would be low-level, sort of manual jobs that would get automated through artificial intelligence. I think increasingly we're finding that what these models are [00:30:00] good at is entirely orthogonal to our understanding of what is difficult versus not. How do you think about that orthogonality, what AI is good at versus what it isn't?
Alexandr: Yeah, I think this all boils down to data availability. Going back to it, right, data is the lifeblood of all these algorithms. Everything they learn, everything they are capable of, they've learned from data. And it turns out that, by using the internet over the past few decades, by commenting on Reddit and uploading stuff to the internet, we've happened to have been creating the largest data set of human behavior ever.
So anything that we did on a computer, most of which was fundamentally knowledge work or intellectual work, because by definition it's abstracted away from the real world, that's what the models have a lot of data on. They have remarkably little data on what it's like to pick something up, or what it's like to throw a ball, or what it's like to [00:31:00] manufacture something, all the things that are embodied in the real world. On those, the models have very little actual bearing and very little data. And that's going to be true for a long time. The digital presence of these models, digital intelligence, is probably for perpetuity going to be far more advanced than physical, embodied capability.
If you think about it from a data availability standpoint, I think it makes perfect sense. And obviously where it gets really weird is the economic impacts of this, and what that means for the future of labor.
Logan: You've touched on the three components of model development being talent, compute, and data. What do you think the most limiting factor is today? And what do you think it will be in, I don't know, five years' time or 10 years' time?
Alexandr: I think data and compute are definitely the limiting factors today. Compute has a very clear limit because of manufacturing capability, so the supply chains for both of these are, I think, worth diving into.[00:32:00] A hundred percent of the high-end GPUs that fuel these models are manufactured in Taiwan today.
There are these fabs that TSMC has put tens, if not hundreds, of billions of dollars of CapEx into, to build and continue to refine and improve. And that's just a very strong upper bound on the compute capability and capacity for these models. So by definition, if you believe in continued exponential scaling, it gets pretty hard unless you have an exponentially scaling supply chain as well, which is something that, economically speaking, is not really feasible today.
So compute is the pinch point today. Obviously we see how much Nvidia chips sell for and how much startups want them, but it's also just a clear limiting factor in the exponential growth scenario. Data is as well. A lot of people have asked: is there more pre-training data out [00:33:00] there?
Have we run out of high quality tokens? There are certainly some very lucid arguments by some folks that some of the scaling laws will be tough to keep up, because we just don't have that much more high quality data on the internet. And there's this argument: is video data high quality data, or is it not?
These are the sort of questions. Text is an unusually compressed form of knowledge and information; video is much less compressed. So if you don't have enough pre-training data, where a lot of this has to be made up for, where there has to be a big scaling, is in RLHF and post-training data.
And I think we're going to start seeing, again, similar kinds of bottlenecks, where the human experts are really what's needed to fuel these RLHF stages. Human experts become GPUs in their own right: the number and quality of human experts who are fueling model improvement is going to, in and of itself, become another supply chain bottleneck for the industry.
Logan: As we've looked at GPT-2 [00:34:00] to 3 to 4, from the outside it's seemed to be almost linear development, but clearly there are more stair-step functions along the way. Do you think, with the constraints we have and what we just talked about, we're going to hit some plateau at some point that's going to require a much bigger unlock of one of these things to really reach that next major step function?
Alexandr: When you talk to people at the leading labs, they spend all their time thinking about the supply chains for these models. So I think that, implicitly, if nothing happens, these will be really big bottlenecks. That being said, I think this is potentially the greatest human engineering project that we've ever seen.
So I think we're going to figure things out. What that means is you're going to start seeing some pretty crazy actions to try to secure and ensure that the supply chains can continue scaling. But again, I think that's the technological imperative that we operate in.
Logan: Do you think we're under-[00:35:00]appreciating as a society the reliance on Taiwan, the political position that the Taiwanese find themselves in, and what that means for artificial intelligence for us?
Alexandr: One very clear indication of the degree to which we don't appreciate it is just in the multiple gap between NVIDIA and TSMC. TSMC trades at a dramatically lower multiple than NVIDIA. NVIDIA is a higher margin company, of course, so some of it is very well deserved.
But TSMC, from my talking with public market investors, gets dinged because of this geopolitical risk. Taiwan is just at this pressure point for the world.
Logan: What's your perspective on open source versus closed source models? It seems to be a big debate these days. Do you have any opinions on that?
Alexandr: As a company, and in my personal point of view, we're quite agnostic to how the technology develops. I think that AI is an incredibly powerful and good technology, and all [00:36:00] development on these models is great, as long as you have safe open source development as well as safe closed source development. Both can be done poorly and unsafely, and both can be done safely and well.
And if you have safe development on both, it's great. I think that open source models are probably a requirement to ensure that AI achieves the full economic impact that it can have. There are a lot of scenarios where you just don't have very much compute; you need a small model running somewhere, and that probably needs to be an open source model of some form.
It doesn't make sense for there to be some small closed source model just to fit that need. So I think it's good for economic growth and economic prosperity that we have open source models.
Logan: I've heard you talk about the competing curves of AI. Can you talk about inequality and the competing curves of scale and democratization a little bit more?
Alexandr: Because of the scaling laws, as the models become ridiculously expensive to train, tens of billions, hundreds of billions of [00:37:00] dollars, potentially even trillions in the future, that very clearly limits the accessibility of the underlying technology. In the same way that none of us have access to particle accelerators; the poster child for a non-democratized technology is a particle accelerator. There's this avenue where that becomes the version of the world. So that's very clearly going to happen, and that's one major tent pole for how the technology develops.
And then the other one, which there's so much will and might within the community to accomplish, is: how do you push all of these models down the cost curve so quickly that, a few years after you have these incredibly powerful closed source models, you have very good open source models, where the cost curve gets climbed down really dramatically quickly?
I think we're seeing that in open source models. We're seeing GPT-3.5-level models happen very quickly that are actually very [00:38:00] small. There are some recent results that show these 10 billion parameter or even smaller models can perform at the level of GPT-3.5. So there's this pretty rapid democratization. Basically, one curve is the scaling, and the other curve is the speed from a frontier model result to the democratization of that technology. And these are the push and pull of the entire industry.
Logan: What is the Turing Trap and why is that significant in your mind?
Alexandr: Yeah, so the Turing Trap comes from this great paper that the economist Erik Brynjolfsson, a professor at Stanford, and others wrote. The basic premise is that the starting condition of AI, in many ways the invention of AI, came from this concept of the Turing test: at what point do you have an AI that can fully imitate a person?
And because of that framing, we've thought about AI predominantly as a replacement for humans. So we think that when we [00:39:00] have AI, it's going to replace humans in the workforce, and that will be its impact on the economy, which, as Professor Brynjolfsson argues, is a trap. What's actually going to happen is you're going to have AI systems that slowly walk up the capability curve.
And as they come in, most of the value is going to be generated from basically hybrid human-AI systems. It's going to be through some very interesting, complex, and nuanced interaction between human capability and AI capability that you're going to get these very economically valuable things to occur.
And because of that, AI, in most outcomes, in most versions of the world, actually ends up being a pretty strong net creator of more jobs, of more demand for human labor. And that's, I think, one of the very important messages. There's this perception that AI will just take all of our jobs.
No. The answer is AI is going to create a [00:40:00] fundamentally different economy, with a fundamentally different mix and kind of jobs, but it will probably net create greater demand for human labor.
Logan: The world's obviously complex, and there's a lot of complexities and nuances associated with artificial intelligence, that being one of them. Is there another general misconception that people have that you would like to clarify, or express your dissenting opinion on?
Alexandr: I think one of the major things that people get wrong when they think intuitively about AI, and I see this in a lot of places, is that it's a very easy technology to dismiss. You use GPT-4, you realize, oh, it just hallucinates all the time.
And then you throw your hands up and you're like, oh, this technology is fundamentally limited, and it's never going to go anywhere because it hallucinates. And I think the tricky thing about AI is that it's a very hard technology to bet against. In every prior instance where you used an earlier version of the models, if you used GPT-2 and you said, [00:41:00] ah, this thing can't count to 10, you'd throw your hands up: there's no future here.
Or with GPT-3, you would use it, and it can't solve a simple math problem. You throw your hands up: this isn't going to go anywhere. I think a lot of people, even in the AI industry, fundamentally don't actually believe in model improvement. And it's a shame, honestly, because I think the reality is the models are going to get a lot better.
It's hard to imagine how the models will get a lot better, but they will. And we need to be thinking about a world where we're just on this continued track of model improvement.
Logan: You're a student of geopolitics and how artificial intelligence plays in that, so much so that you recently did a TED talk on the subject. Can you speak to the battle that you see playing out in artificial intelligence within the geopolitical world that we're in, in particular China and the U.S.?
Alexandr: So one of the ways in which AI has been surprising is the degree to which it's become a clear objective and imperative [00:42:00] for many countries and many geographies around the world. Obviously much of it was invented in the United States, at Google and OpenAI and DeepMind, et cetera. But very quickly now, you look and China is obviously trying to move very quickly.
The Chinese tech giants have bought an aggregate of over $5 billion worth of Nvidia chips. That's a lot of chips. The UAE particularly, but the UAE and Saudi Arabia, are moving very aggressively into the technology, building large data centers. The UAE has released two successive open source models,
one of which is 180 billion parameters. These are very big and serious models that they're building. In Europe, you're seeing some of the best open source models coming from European companies, European startups. And from my conversations with people from many other countries, there are certainly
many others who have clear aspirations in AI. So at minimum, it's becoming a technology that a lot of countries are looking at as, hey, this is really important for our future. What's really more concerning is the degree to which [00:43:00] certain countries, particularly China, are very clear-eyed about the monumental impact this technology can have. There's a number of PLA documents, the PLA being the army, the DoD equivalent of China, that talk explicitly about how AI and other breakthrough technologies could allow the PLA to leapfrog its adversaries, most notably the United States, which is the most powerful military in the world. The thinking is that we're going to over-invest into just upgrading our legacy platforms versus the new breakthrough technologies.
They'll over-invest in the new breakthrough technologies, and they can leapfrog us, just like China leapfrogged the United States in fintech and payments technology, where WeChat Pay and all their digital payment infrastructure, most people believe, surpasses the state of payments infrastructure in the United States.
This is the question: US GDP plus Chinese GDP is 40 percent of [00:44:00] global GDP, so these are the two behemoths in the economy. And the key question is, is AI the catalyst for China to overtake the United States, or at minimum dramatically gain ground versus the United States?
Or is it the technology that allows the United States to ensure that we can maintain global stability by persisting and continuing Pax Americana? If you talk to a lot of political scientists, there's a pretty clear consensus that if Chinese military capabilities catch up to those of the United States, that's a very unstable world.
Whichever side you're on, that definitely results in greater levels of global instability, because a lot of the global stability, one major portion of the last 80 years of relative peace, has been because America has been the clear hard power superpower in the world.[00:45:00]
If you have two superpowers, you get greater entropy in the system. There's more proxy wars, there's more overall instability, there's more war, there's more death. So I think that in this broader battle between democracy and authoritarianism, these different government systems and different ways that the world can organize, AI is one of the major chess pieces in that game. And that's why I think it's critical that we as Americans, or America in general, are able to maintain that pole position.
Logan: Maybe speak to the proportionality of what China's spending versus the US today.
Alexandr: For the past few years at least, China, the PLA, the Chinese military, has been spending between roughly 1 and 2 percent of their budget on AI technologies. And in that same time period, the U.S. DoD has been spending 0.1 to 0.2 percent [00:46:00] of our budget on AI technologies.
What the PLA had been forecasting is actually playing out in reality right now, which is that we're over-investing into our legacy platforms, our legacy technologies, and under-investing in the breakthroughs. They might reach a breakthrough before us, and we might be left in a situation we don't like.
Logan: By the time people hear this, you will have already been to the UK AI Summit. I think you did a great job there, by the way; I think it was really well done. You're heading out tomorrow. Why is attending this important to you, and what are you hoping to accomplish?
Alexandr: There's a few threads here that I think are interesting. One is ensuring that there is a track for global cooperation on AI. Regardless of what you believe, if this technology is as important as I think it is, as many think it is, it's something that requires many of the countries in the world to have a clear and open dialogue around. You don't want anybody going off track and doing things in a way that is opaque to the rest of the world.
[00:47:00] That's certainly a driver of instability. So I think, at minimum, there's a huge amount of intrinsic value to the world in having an open dialogue between all the countries to discuss the technology. And many of the countries are going to be there, which is great, and kudos to the UK government for creating such a forum.
I think the other piece that's critical is ensuring that we're thinking about the right risks. A lot of the focus has been on some of the existential risks, and I want to make sure that we're also thinking a lot about the risks of misuse: what are we doing about those, and how do we think about those? So it's important to me to ensure that we have a broader view of, particularly, the geopolitical risks at play, and ensure that shapes the global dialogue around the technology.
Logan: What role do you think the government plays in regulating AI?
Alexandr: Yeah, it's obviously the question of the day, literally, with the executive order coming out. So far, the approach has been to take quite[00:48:00] a light touch on regulation of the technology, particularly because we're in such an early stage. One of the worst things that you can do for a technology as high-potential as AI is to squander the opportunity early on by over-regulating it.
So I think that's been smart. I think the key is the government needs to ensure that the misuses of the technology, the ways in which the technology can be used to create meaningful consumer harm or meaningful harm to the citizen base, don't happen, or at least are very highly punished, very limited and difficult to do in some way.
To that end, and this was a key part of the executive order, one of the most important things is ensuring that there's a proper testing and evaluation regime for AI systems. How do we as a society agree that certain AI systems and use cases and applications are fit for purpose and ready for prime time, versus [00:49:00] totally inadequate?
There are versions of this that exist in all sorts of ecosystems. The FDA approves drugs; you can't just buy random molecules off the internet and ingest them and expect that to go well. There's similar kinds of regulation on planes, obviously, and cars, these technologies that are potentially very dangerous.
And even Apple does a version of this for apps in the App Store: you have to be approved by the App Store. So I think this is the key question. In my conversations with folks in the White House, this is the industry that needs to exist that doesn't exist.
We at Scale are trying to play a big part in this topic. We worked with the White House and DEF CON on some of the first public evaluations of these models a few months ago. And our view is that you need to have a pretty clear regime of testers in the private sector, with pretty clear regulation and guidelines given by the public sector, and a very clear opt-in from the model [00:50:00] providers and those implementing AI technology.
Logan: I want to back up to the founding of Scale and transition from some of those broader topics. What was the original insight behind the business? What did you recognize at that time that led to its founding?
Alexandr: Yeah, I think the key insight was, simply put, that if AI were going to grow, the needs for data were going to grow exponentially. I had no idea at what timeframe that was going to happen, or at what scale, size, and magnitude, but I had a pretty strong conviction that neural networks and AI were going to be more and more ubiquitous. And if you believe that, you believe there had to be infrastructure for data to meet that challenge and meet that growth.
That certainly played out, I think, even in a way that's been surprising to us, which is that the amount of data [00:51:00] required for these AI systems and the sort of like hunger for new data has far exceeded, I think, what I originally even would have conceived possible by this time frame.
Logan: And so you spent how long at Quora before going to school?
Alexandr: I grew up in Los Alamos, New Mexico; my parents were physicists at the lab, like a lot of physicists at that lab. And then I went to work at Quora for about a year. That was my foray into, and taste of, what technology was like.
Logan: How old were you when you were working at Quora?
Alexandr: Working? 17. When I worked there, I was 17. And it was pretty eye-opening, in the sense that you really get, it's like the Steve Jobs quote that I think every new employee at Apple hears, which is that you realize that everything around you is built by people no smarter or more capable than yourself.
My colleagues at Quora were brilliant, but it was crazy to think that this was a site I was spending a lot of time on as a teenager, and it was built by a team of a hundred or so people. It was this very empowering experience. Then I went to [00:52:00] MIT, started training neural networks of my own, and the rest is history.
Logan: And so you went to MIT and you got bored with the learning aspect of academia and wanted to go be a practitioner in the field. Is that fair?
Alexandr: Yeah, I think one thing that stuck with me, and it was already playing out at that point in 2016 when I started Scale, is that it was pretty clear that the amount of resources you would need to fully accomplish AI, to see AI through the fullness of time, would vastly exceed what was available in academia.
And obviously that's true to an almost ridiculous degree now, with hundreds of millions and billions of dollars being used to train the models. But that was probably the key driver.
Logan: What inspiration have you taken from Amazon, with operations and technology being combined, for Scale?
Alexandr: Yeah, a huge amount. I think Amazon in many ways is one of the most countercultural tech [00:53:00] companies in the world. They have many key insights, but I think one of the key insights of Amazon was that operational excellence is actually a huge driver of tech surplus and tech value. Jeff Wilke, who ran consumer operations there for many years and was the CEO of Worldwide Consumer, so everything outside of AWS, is a very close mentor of mine. And you learn pretty quickly that there was a way of thinking there that you just don't see at any other tech company: a deep embrace of operational complexity
and operations as a discipline, a deep embrace of the marriage of technology and operations to produce combinations that are uniquely powerful and uniquely capable, and an extremely pragmatic approach to business decision making. Those in combination created, I think, one of the greater economic engines of our time.
So I learned a lot. And a lot of what we do at [00:54:00] Scale is taking that same approach and playbook and philosophy: how do you marry operational complexity with fundamental technology breakthroughs to drive an entire industry forward?
Logan: And Amazon's also been canonical in parallel execution. Is that something that you guys think about when executing across the suite of different products you offer?
Alexandr: Yeah. The beauty of that insight is that you figure out how to architect problems such that you have as few dependencies as possible, so you have as many things as possible that you can bet on in parallel at once. Which is something that investors, I think, understand quite well: if you have enough independent bets, then you can double down on the ones that work out, and it ends up working quite well.
Logan: Can you talk a little bit about, I've heard you reference the dichotomy of how businesses are rewarded for predictability but actually benefit from elements of random discovery, maybe using Amazon as an example.[00:55:00]
Alexandr: Yeah. So if you think about Amazon as a company: it was an online bookstore, then it was the online everything store, and they created Prime, this membership program. And then it became the largest data center provider in the world. That last piece sounds like such a non sequitur if you tell it that way; it almost seems like what a bad author would write into a book, that you had the everything store and they were so big and bad and then they ran all the computers globally.
And now if you look at Amazon's market cap, depending on who you talk to, most analysts attribute the vast majority of the value of the company to AWS. This very unpredictable event, that Amazon was going to invent AWS and then build that business, is actually the core driver of its market cap and value today, which is a pretty crazy thought. Because if you talk to most growth investors, [00:56:00] they're trying to very directly understand what will happen to the company's revenues over the next few years. How predictable is their growth?
What's that exactly going to look like? But the thing that affected their earnings the most was this totally unpredictable event of AWS being invented. And so there's this pretty confusing property of companies: on the one hand, investors think they're betting on the next few years of execution, but for the best companies, what they're really betting on is continuous reinvention.
I think NVIDIA is actually the best modern example of this. NVIDIA was a GPU company selling gaming and graphics chips for decades, literally decades. And like 15 years ago, they noticed that people were starting to use NVIDIA GPUs to train AI algorithms because of the parallel computing [00:57:00] capability.
And they just started investing a huge amount of time, effort, R&D, and attention towards supporting that use case. It required a huge amount of conviction in AI at that point to start that investment that early and keep leaning into it so much, even long before it was a needle mover on the financials of the business.
But today NVIDIA is a trillion-dollar company almost purely because of AI. And if you were an investor in NVIDIA stock 10 years ago, again, it's a very similar thing: you're evaluating the ability of the company to execute on graphics chips and gaming chips.
The thing that actually matters for whether or not you're going to make a ton of money on the investment is whether or not they reinvent themselves into an AI company. And so I think this is the core of markets, the core of companies, that a lot of people don't understand: the thing that you're almost always actually betting on is the capability to reinvent.[00:58:00]
Logan: How do you manifest that culturally within Scale? It was Scale API once upon a time, and then Scale AI focused mostly on autonomous vehicles. Now it's much broader than that, doing stuff around RLHF. It sounds like this is something you study and think about, but how do you make sure that exists culturally within the business?
Alexandr: Great question, and one I spend a lot of time thinking about. There are a few things that we do, and there's certainly a lot more that you can do at all times. One is that we create a culture of, as much as possible, pure meritocracy, one that leans heavily into people who are usually more junior at the company, who have good ideas, being almost thrown into the responsibility of having to run with those ideas and turn them into something big.
This kind of culture, where first of all anyone can have a great idea, and where if you have a great idea you have almost full accountability for realizing it and making it happen, this [00:59:00] really is not how most companies operate.
At most companies, anyone can have a good idea, and then some director or VP steals your idea and makes it into their career move. This kind of culture is pretty unique, and we really lean hard into it. I always talk to new people joining the company, and to people who have been with the company for a bit, to make sure this is always true: that your impact and future at Scale are limitless, depending on how much you apply yourself, how good your ideas are, how innovative they are, et cetera.
That's one. Another is that we try to always focus on big problems, if that makes sense. Amazon's version of this is focusing on the customer. But if you have the right fixed point in the system, which for Amazon is the customer and for us is thinking about the big problems in the industry, then you'll always end up stumbling upon [01:00:00] opportunities that continue to get bigger and bigger.
By that I mean, we were focused on autonomous vehicles for a very long time, which is a huge, complicated, interesting problem. At a certain point, it became pretty clear that a lot of what we talked about with geopolitics and the importance of AI to the future balance of power between countries was going to play out; we had pretty high conviction in that.
So we leaned very hard into working with the U.S. government and the U.S. DoD, and a lot of the technology we had built up in servicing the autonomous vehicle industry was pretty applicable. But we then took on this much larger problem: how do you ensure American leadership, and how do you ensure that the U.S. stays ahead? That's such a big problem that, in the course of serving it, we stumbled upon much larger opportunities than the original opportunities in autonomous vehicles. And the same has been true now, where the big problem is helping to ensure [01:01:00] maximal progress in the AI industry.
How do we ensure that these models are the most impactful version of themselves, that we push for the maximum amount of progress in the AI industry? That's the biggest problem of our time. So pushing ourselves to be continuously ambitious about the North Star of the business has, I think, been critical.
Logan: I want to ask about interviewing. You said your favorite interview question is, "What's the hardest you've ever worked on something?" Why do you like that question?
Alexandr: Yeah. I generally think there really are two kinds of people in the world. This is a psychological term, but there's having an internal versus an external locus of control. If you have an internal locus of control, it means you believe the things that happen in your life are more a product of what you do and the actions that you take.
You believe a lot more that you're holding the reins of your own life. If you have an external locus of control, it's the opposite: you believe the things that happen to you are mostly the outcome of things outside of your control, like the world's very [01:02:00] deterministic and you're a pinball in a big pinball machine.
If you know how to look for it, this really is a very clear dichotomy in how people think about their lives. And I find that I only want to work with people who have an internal locus of control. One way to index off of that is seeing how hard people work at things that matter to them, right?
Because everybody has things that matter to them. But if they have an internal locus of control, they're going to work their ass off to make sure the things that matter to them happen in the best possible way. If they have an external locus of control, things matter to them, but they throw their hands up and let the world take the wheel. So by seeing how hard people work on the things that matter most to them, by really quantifying and getting a sense of how obsessive they were, how much they really cared, how small of details they sweat, you get a pretty clear indication [01:03:00] of how much control they believe they have over their life outcomes.
Logan: What's the single trait or characteristic that you're most looking for in hiring? Is it that locus of control, or is there something else that stands out?
Alexandr: Yeah, there are a few. We had this early document that we wrote up around what we look for in the people we hire, and there were four traits. One is an internal locus of control. Two is problem solvers, fundamentally people who are very good at creative problem solving; you give them a problem and they figure it out.
Sometimes you couldn't solve it just by tackling it head on, and they'd figure out a way around the roadblock. That's a really important trait. Third, we look for people who are impressive, people who, when you talked with them and worked with them, you were genuinely impressed by them.
It's a shorthand for people who are constantly upping the bar of the organization. If you're impressed by somebody, you're going to be very motivated to come to work every day, work with them, and learn from them. So we held a pretty high bar there.
And the last one was that people were collaborative. I [01:04:00] think you can have people with a high internal locus of control who are good problem solvers and very impressive, but who just suck to work with. So those were the North Stars for the organization. It's carried us pretty far.
Logan: You've spoken about how the prestige around big brands in tech actually perverts and distorts the perspective around hiring in Silicon Valley. How do you think that's the case? These big brand names that people stay at for a long time, why is that kind of a contra signal, something you'd not look for?
Alexandr: One of my favorite lines around this is: if your recruiting organization looks like a college admissions office, then you should be pretty scared, something along those lines. And I think it's true. The reality is it's very hard for somebody at a big tech company to have any sort of real impact.
This is not too much of an indictment of the big tech companies, but they just hire so many people, and they have a limited scope of problems that really matter. So a lot of the people they hire just end up working on [01:05:00] a teeny piece of a teeny piece of a teeny piece of a problem.
So if you think about the selection bias, the people who get selected into these very large brand-name tech companies are those who are over-optimizing for brand and status relative to impact. By contrast, small startups are literally the exact opposite. You're joining a small startup because you're like, wow, I see the five people working on this thing,
and I know I can come in and have a big impact. Not to say they're doing a bad job, but I know I can have an impact. But it's not going to be a cool thing; I'm not going to be able to tell my friends about working at a startup and have them think, oh, wow, that's really awesome.
So a lot of hiring is skills-based, but a lot of it is also just testing for the people who don't care about status and care a lot about impact. And I think big tech companies negatively select for that.
Logan: We were talking about Zero to One before we got going, and how it's been normalized in startup culture. I think once upon a time it was [01:06:00] very revolutionary, but now a lot of the things Peter wrote about in the book have become kind of status quo in a lot of startups.
Is there something that you've read or internalized recently about startups that's non-consensus, but that you think will be consensus at some point in the next couple of years, around how to operate or work with companies?
Alexandr: One thing that is certainly non-consensus in the context of the ecosystem, but that has certainly been borne out in my experience, is the value of very hardworking people in your company who are not necessarily super experienced. It's pretty surprising. There are some kinds of companies where a small group of very experienced people builds something incredible; that certainly exists.
But for the most part, I think most startups are this sort of chaotic buzz or hive of people who [01:07:00] are not necessarily super experienced, but are very hardworking, very high aptitude, very capable, and who almost gradient descent their way to building these incredible things.
I think that's not super well understood or adopted by the entire tech ecosystem. A lot of the tech ecosystem, I think, is really focused on hiring the experienced people who've been there and done that. The other thing, and we were talking a little bit about this, is the importance of having a strong point of view.
It's quite interesting. For the last generation of tech giants, you think about the Googles and the Metas, even the Apples of the world, the startup advice or the classic business advice was to have as neutral a point of view and as neutral a brand as possible, so that you can distribute your product broadly.
You're not offending anyone, and you're having as wide-scale an impact, as broad-based an appeal, as possible. And I think we're very quickly entering a [01:08:00] very different era, where the right thing to do is to have a pretty strong point of view and to be very loud about that point of view, because that allows you to attract the talent of people who agree with you.
So it's incredible for building a positive culture and a very high-talent group. It's also very important for your customers, because more and more customers, whether enterprise customers or consumer customers, care a lot about working with people who philosophically agree with them and share their points of view.
And it forces you to keep your company authentic. That's kind of a subtle thing, but I look at a lot of peers in enterprise software, and these enterprise software companies very quickly come to stand for nothing. Early on, every company was the product of founders who cared a lot, who really sweat every detail.
And then invariably, every enterprise company becomes just another widget in the bag of tools, or whatever. I think it's important [01:09:00] for companies to maintain a sense of identity and remain authentic to have any chance at the reinvention component that I talked about before.
Logan: I've heard you say that Scale has never been a particularly cool business. Can you elaborate on that, and whether that's been a net negative or a net positive for the company over the years?
Alexandr: Totally. It's funny, we've always operated in very cool spaces, self-driving cars, the current AI revolution, but we've never been the cool people in those spaces, because fundamentally we're an infrastructure provider, and infrastructure is not that sexy.
For our company, we actually don't want the people who just want to be cool and flashy and work on exciting new technologies. We really want the people who are willing to roll up their sleeves, get their hands dirty, and work on the unsexy problems in AI that are really damn important.
So I think it's been very important for building the company in a way that's true to the work that we need to do. And I [01:10:00] think the impact has been that the people who join Scale know what they're getting into. They know what role we play in the ecosystem, and they care a lot about that.
Logan: You have a wonderful office that we're sitting in right now. I've heard you say that you believe you actually should spend money on a nice office space, and that it's an important thing to do for your employees. Can you talk for a little bit about why that's the case?
Alexandr: Yeah, at the risk of sounding somewhat woo, I do think that the spaces you're in impact a lot about your thinking. Personally, being in spaces with a lot of natural light is one of the best things I can do for the quality of my thinking. And I think there are a lot of fractal effects here, where pretty subtle differences in the quality of your space, or the amount of natural light, or the configuration you're in with your coworkers, can have pretty big impacts on the ultimate outcome and the quality of thought.
So it's one of these things that I think is almost insidious in how much it matters. Unintuitive. Yeah.
Logan: What about structuring your [01:11:00] day? How do you structure your day for maximum productivity?
Alexandr: Yeah. What I often find is that the best thing to do is to set some pretty clear goals at the very start of the day: what are the most important things for me to get done today? They can start out pretty small, and then over time you'll find what your limits are and upsize them.
And then, I'm in a ton of meetings every day, which is part of the job, but I'll continually check in on how I'm progressing against the clear goals I set. I think that's probably the best thing I do.
Logan: I've heard you say that in both math and physics growing up, there were clear right answers, but it was violin that was super influential to you, because it wasn't just about getting the notes right. Can you elaborate on that point, or expand on why and how the violin influenced you?
Alexandr: I think one of the things that is somewhat maddening for people who are very quantitative is that in business you're constantly [01:12:00] operating in a bit of a gray area, in the sense that you'll never really know whether your decisions were fully correct or incorrect.
Most things that matter are quite hard to measure, and you just have to operate via instinct. So I think that kind of fuzzy, more intuition-driven thinking is something not super well trained in math and science and much better trained in the arts.
So that's the primary way it's been formative. Another thing that's been quite important and quite valuable as part of that is developing a sense of taste. I think so much of the product of a company is an outcome of taste and the degree to which you take that taste seriously.
Taste in people, taste in aesthetics, taste in product, taste in how to organize. Apple is [01:13:00] probably the best example of this, one of the most tasteful companies in the world. And I think it's been important for me to have been in a field where you have to develop taste to be effective, and to apply that to the company.
Logan: Your dad's a physicist and your mom's an astrophysicist, right? How did your childhood most influence the CEO and founder that Alexandr Wang is today?
Alexandr: I have a great example of this, because I just spent the weekend with my parents. To them, it was really important that the people they worked with and their leaders had this very deep, almost inexplicable passion for the place, the work they did, and the history of the field.
Somewhat similarly, my parents both watched Oppenheimer many times. They told me they had to keep rewatching it because they had to figure out who all the physicists were. There were physicists in the movie who had a single line or didn't have any lines, and they were like, oh, we had to [01:14:00] really figure out who played each of the physicists.
So there's this level of inexplicable passion for the field of physics that both my parents have, this level of fundamental care and love for the field, that I think really rubbed off on me. My mom had been teaching me about physics ever since I was born, basically.
And I think that level of deep enthusiasm has been quite effective.
Logan: You wrote a blog post, Hire People That Give a Shit, that I think ties into that and some of the hiring things we were speaking about earlier. Is there anything else you would say about how you try to suss out whether someone uniquely has the passion for your company versus any other business in the recruiting and hiring process?
Alexandr: One thing we do is we often ask people why they're interviewing at Scale. And I think you can tell a good answer by how obscure it is. If people just say, oh, AI is the next big thing and I want to work at an AI company, it's like, okay. But if they say, yeah, I was working with one of my friends to train a model,
and [01:15:00] we spent five hours just looking at the data, and there was one little bug in the data that caused the whole model to not work, and I realized this problem was really deeply interesting, and then I applied to Scale because of that, that's the right kind of answer.
One of the things you look for, and Paul Graham, I think, has written very elegantly about this topic, is an irrational reason for people to care about things, whether it's because of some curiosity or some sort of quirk, something fundamentally irrational, some reason they care about what we do.
And I think that's probably the thing we look for the most: something that is fundamentally irrational and fundamentally hard to explain about their passions.
Logan: Similar to music, right? Practicing music, maybe people will never know if you cut that last corner, but you know it, and if you really practice it, it's something that's innate to you.
Alexandr: Totally.
Logan: I've heard you [01:16:00] say, maybe you tweeted it or something, that you've been weird your whole life and that everybody you've ever respected has also been weird.
Why do you think being weird is an important trait for being an interesting person, and for the types of people you resonate with?
Alexandr: Yeah, purely statistically, if you're normal, that means you're in the middle of the bell curve, and it's hard to be in the middle of the bell curve and accomplish great things or have a huge amount of differentiated impact on the world. So there's a pure statistical argument. But the thing I find most interesting here is that being normal is some approximation for having, generally speaking, pretty mainstream beliefs. There's nothing wrong with that, but it means, and this is maybe an indictment, that if you're normal, it's pretty easy to simulate a conversation with you.
It means that there's, in some ways, low information content in having that conversation. Whereas if you're weird, you say a lot of very unexpected things and have a lot of unexpected thoughts, [01:17:00] and that's a very generative experience. So surrounding yourself with weird people ends up being quite valuable, because you just get to bathe in a more entropic, more fundamentally interesting and diverse pool of ideas and thoughts.
I think that's the greatest gift you could have.
Logan: What has you most excited about the future of AI as we look out five, 10 years from now?
Alexandr: It's hard not to be excited about what we talked about, which is potentially the greatest economic invention and the greatest economic engine that humanity will have ever created. That's fundamentally so incredibly exciting.
It's as if we're inventing the steam engine times a million, right? This thing that will generate so much economic surplus, that lifts so many people into better living conditions, that elevates humanity to such an insane degree. It's such an exciting proposition. [01:18:00] And double-clicking into that, the deeply exciting component there is, again, the elevation of the human condition, right?
Take healthcare, which I alluded to before. Right now, globally speaking, there's roughly a 10x shortage of doctors, because it takes so much training, it's so expensive to train people, and it takes so much time and resources. From a global perspective, there are just way too few doctors.
And even with those doctors, the way healthcare mostly works right now is extremely reactive. You have a problem, you go to the doctor, and most of the time it's very expensive to resolve.
And then sometimes it doesn't work out. Fundamentally, we need a more proactive healthcare system, where you're constantly measuring a lot of things and can deal with these problems very early. Healthcare is an entire field where, without technology breakthroughs, humanity is a little bit stuck [01:19:00] in terms of how good you can make it without real fundamental technological advances.
So if AI all of a sudden can give everybody a doctor in their pocket, so that as soon as they feel something weird, or they think something weird is going on, or there's a weird bump or whatever, they can be proactive about it, that's pretty incredible.
That's just one way in which AI could have one of the greatest effects on longevity and global lifespan of anything we do. Those are the things that get me really excited. The full knock-on impacts are going to be pretty great.
Logan: Alex, thanks for doing this.
Alexandr: Yeah. Thanks for having me.