[00:00:00]
Sam: We've had, like, the idea of voice-controlled computers for a long time. They've never, to me, felt natural to use. And this one, the fluidity, the pliability, whatever you want to call it. I just can't believe how much I love using it.
Logan: Welcome to the Logan Bartlett Show. On this episode, what you're going to hear is a conversation I have with co-founder and CEO of OpenAI, Sam Altman.
Now, if this is your first time listening to the Logan Bartlett Show, this is a podcast where I discuss with leaders in technology, as well as investors, some of the lessons that they've learned in operating or investing in businesses, mostly in the technology field. This discussion with Sam is a little bit different, in that I pushed on a number of things related to artificial intelligence, as well as where OpenAI is headed, given how topical it is in the news, and Sam's perspective on the leading frontier that is artificial intelligence.
You'll hear that discussion with Sam here now.
Logan: Thanks for doing this.
Sam: Yeah, of course.
Logan: All right, I want to start off easy.
Logan: Uh, what's the weirdest thing that's changed in your life in the last four or five years running OpenAI? Like, what's the most unusual shift that's happened?
Sam: Um, I mean, quite a lot of things, but the sort of inability to just be mostly anonymous in public is very, very strange. I think if I had thought about that at the time, I would have said, okay, this would be a weirder thing than it sounds. Um, but I didn't really think about it, and it's a much weirder thing.
It's like a strangely isolating way to live.
Logan: You believed in AI and the power of the business. So did you just not think through the derivative implications of running something like this?
Sam: Yeah. I think I didn't think through all of these other things. I was like, oh yeah, it's going to be really important; it was going to be a really important company. I didn't think I would not be able to go out to dinner in my own city.
Logan: That's weird.
Sam: That's weird.
Logan: Uh, you made an announcement earlier today.
Sam: We did.
Logan: Multimodal 4... oh, but it's the omega sign, right?
Sam: Oh, just the, like... omni?
Logan: Yeah, omni. Okay. Sorry. Uh, it works across text, voice, vision. Um, can you speak to why this [00:02:00] is important?
Sam: Um, because I think it's an incredible way to use a computer. We've had the idea of voice-controlled computers for a long time. You know, we had Siri, and we had things before that. They've never, to me, felt natural to use. And this one, for many different reasons: what it can do, the speed, adding in other modalities, the inflection, the naturalness, the fact that you can do things like say, hey, talk faster, or talk in this other voice. The fluidity, the pliability, whatever you want to call it.
Uh, I just can't believe how much I love using it.
Logan: Yeah. Spike Jonze would be proud.
Sam: He would be proud.
Logan: Are there use cases that you've gravitated to?
Sam: Well, I've only had it for, like, a week or something, um, but one surprising one is putting my phone on the table while I'm really in the zone of working, and then, without having to change windows or change what I'm doing, using it as another channel. So I'm working on something where I would normally stop what I'm doing, switch to another tab, Google something, click around or whatever.
But instead, while I'm still working, I can just ask and get an instant response, without changing from what I was looking at on my computer. That's been a surprisingly cool thing.
Logan: What actually made this possible? Was it an architectural shift or more compute?
Sam: I mean, it was all of the things that we've learned over the last several years. We've been working on audio models, we've been working on visual models, we've been working on tying them together, we've been working on more efficient ways to train our models. Um, it's not like we unlocked this one crazy new thing all at once; it was putting a lot of pieces together.
Logan: Do you think you need to develop an on-device model to decrease latency to the point of usability?
Sam: For video, [00:04:00] maybe. It would be hard to deal with network latency at some point. Like, the thing that I've always thought would be super amazing is to someday put on a pair of AR goggles or whatever, and just, like, speak and watch the world change in real time, and there that latency would matter. But for this, uh, you know, two or three hundred milliseconds of latency feels super fast.
Like, it feels faster than a human responding to me in many cases.
Logan: Is, is video in this case images?
Sam: Oh, sorry, I meant video if you wanted, like, generated video, not input video.
Logan: Got it, got it. So currently it's working with actual video as input?
Sam: Well, like, frame by frame.
Logan: So it's, okay, got it. Um, you alluded recently to ChatGPT's next big launch maybe not being GPT-5. It feels like there's been sort of an iterative approach to model development that you guys have taken.
Is it fair to say that's how we should think about it going forward, that it's not going to be some big launch, here's GPT-5, but instead something more incremental?
Sam: We honestly don't know yet. Uh, I think that, definitely, one thing I've learned is that AI and surprise do not go well together. And although, you know, that's the traditional way a tech company launches products, we should probably do something different. Now, we could still call it GPT-5 and launch it in a different way, or we could call it something different.
Um, but I don't think we've figured out how to do the naming or branding for these things yet. Like, it made sense to me from GPT-1 to GPT-4 at launch. Now, obviously, GPT-4 has continued to get, you know, much better. We also have this idea that maybe there's, like, one underlying kind of virtual brain, and it can think harder in some cases than others; or maybe it's different models, but maybe the user doesn't care if they're different or not.
So I don't think we know the answer to how we're going to product-market [00:06:00] all of this yet.
Logan: Does that mean, maybe, that the compute needed to make incremental progress on models might be less than what it's been historically?
Sam: I sort of think we'll always use as much compute as we can get. Now, we are finding incredible efficiency gains, and that's really important. You know, the cool thing that we launched today is obviously the voice mode, but maybe the most important thing is that we were able to make this so efficient that we're able to serve it to free users: the best model in the world, by a good amount, served to, uh, like, anybody, GPT-4o for free.
It was a remarkable efficiency gain over GPT-4 Turbo, and we have a lot more to gain there.
Logan: I've heard you say that ChatGPT didn't actually change the world in and of itself, but maybe just changed people's expectations for the world.
Sam: Like, I don't think you can find much evidence in the economic measurement of your choice that ChatGPT really inflected productivity or whatever.
Logan: Maybe customer service?
Sam: Maybe some, maybe some areas. But, like, if you look at global GDP, you know, can you detect when ChatGPT launched? Probably not.
Logan: Is there a point where you think we'll be able to detect a GDP impact?
Sam: Yeah, I don't know if you'll ever be able to say, like, this was the one model that did it. But I think if we look at the graph a couple of decades in the future, something changed.
Logan: Are there applications or areas that you think are most promising in the next 12 months?
Sam: I'm sure I'm biased, just because of what we do here, but coding, I think, is a really big one.
Logan: Kind of related to the Bitter Lesson: you spent some time recently talking about the difference between deeply specialized models, uh, trained on specific data for specific purposes, versus generalized models that are capable of true reasoning.
Sam: I would bet that it's the generalized model that's gonna matter.
Logan: And what is the most important thing there, as you think about someone that's focused singularly [00:08:00] on a data set and all the integrations associated with something very narrow?
Sam: If the model can do generalized reasoning, if it can figure out new things, then if it needs to figure out how to work with a new kind of data, you can feed it in and it can do it. Um, but it doesn't go the other way around: a bunch of specialized models put together can't figure out the generalized reasoning.
Logan: So there are implications there for coding-specific models, probably.
Sam: I think a better way of saying this is, I think the most important thing to figure out is the true reasoning capability. And then we can use it for all sorts of things.
Logan: What do you think the principal means of communication between humans and AI is in two years?
Sam: Hmm. Language seems pretty good. I'm interested in this general idea that we should design a future that humans and AIs can sort of use together, use in the same way. So I'm more excited about humanoid robots than I am about other forms of robots, because I think the world is very much now designed for humans.
And I don't want that to get reconfigured for some more efficient kind of thing. Uh, I like the idea that we talk to AIs in language that is very human-optimized, and that they even talk to each other that way, maybe, I don't know. Um, but I think this is generally an interesting direction to push.
Logan: You said recently something to the effect of, uh, the models might ultimately get commoditized over time, but the most important thing would likely be the personalization of the models to each individual. First, do I have that right?
Sam: I'm not certain on this, but I think it's a thing that would seem reasonable to me, yeah.
Logan: Then beyond personalization, do you think it's just normal business UI and ease of use that ultimately wins for end users?
Sam: Those will for sure be important. They always are. Um, you know, I can imagine other things, where there's a sort of marketplace or a network effect of some sort that matters, [00:10:00] where, you know, we want our agents to communicate, or there's different companies in an app store. But I sort of think that the rules of business generally apply.
Whenever you have a new technology, you're tempted to say they don't, but that's, not always, but usually, fake news. All of the traditional ways that you create enduring value will still matter here.
Logan: When you see open source models catch up to benchmarks and all of that, um, what's your reaction to it?
Sam: I think it's great. I mean, I think that, like many other kinds of technology, there will be a place for open source and a place for hosted models, and that's fine. It's good.
Logan: I'm not going to ask about, uh, any specifics related to this, but there have been...
Sam: I might answer.
Logan: There have been press reports, the Wall Street Journal, I think, was a credible one, related to, uh, looking to raise major amounts of money to galvanize investment in fabs. Um, the semiconductor industry, TSMC, and NVIDIA have been ramping pretty aggressively to meet expectations of the need for AI infrastructure.
Uh, you recently said that you think the world needs more AI infrastructure, and then you said a lot more AI infrastructure.
Sam: I do.
Logan: Um, is there something you're seeing on the demand side that would require way more AI infrastructure than what we're currently getting out of TSMC and NVIDIA?
Sam: So first of all, I'm confident that we will figure out how to bring the cost to deliver current systems way, way down. I'm also confident that as we do that, demand will increase by a huge amount. And third, I'm confident that by building bigger and better systems, there will be even more demand. We should all hope for a world where intelligence is too cheap to meter.
It's just wildly abundant, people use it for all sorts of things, and you don't even have to think about, like, do I want this [00:12:00] reading all my emails and responding to them for me, or do I want this curing cancer? Of course you'd pick curing cancer, but the answer is you'd love for it to do both things. And I just want to make sure we have enough for everybody to have that.
Logan: I don't need you to comment on your own personal efforts here, although again, if you want to, uh, please let me know. But, uh, Humane and Limitless and some of these different physical device assistants: what do you think those have gotten wrong, or where do you think the adoption maybe hasn't met user desires just yet?
Sam: I think it's just early. Um, I have been an early adopter of many types of computing. I had, and very much loved, the Compaq TC1000 when I was a freshman in college. I thought it was just so cool. And that was a long way from the iPad, a long, long way from the iPad, but, you know, it was directionally right.
Um, then I got a Treo. I was the very-not-cool college kid; I had an old Palm Treo when that was not a thing that kids had. And that was a long way from the iPhone, but we got there eventually. You know, these things feel like a very promising direction that's going to take some iteration.
Logan: You mentioned recently that a number of businesses that are building on top of GPT-4 will be steamrolled, I think was your term, by future GPTs. Um, I guess, can you elaborate on that point? And second, what are the characteristics of AI-first businesses that you think will survive GPT's advancement?
Sam: The only framework that I have found that works for this is: you can either build a business that bets against the next model being really good, or a business that bets on that happening and benefits from it happening. So, uh, if you're doing a lot of work to make one use case really work that was just beyond the capability of GPT-4, [00:14:00] and you get it to work, but then GPT-5 comes out and it does that and everything else really well, uh, you're kind of sad about the effort you put into that one thing to get it to barely work. But if you had something that just kind of worked okay across the board, and people were finding things to use it for, and you didn't put in tons of work to make this one thing barely possible, then when GPT-5, or whatever we call it, comes along and it's just way better at everything, you get the rising-tide-lifts-all-boats effect.
You know, what I would suggest is: you're not building an AI business in most cases; you're building a business, and AI is a technology that you use. In the early days of the App Store, I think there were a lot of things that filled in some very obvious crack, and then eventually Apple fixed that. You know, you didn't keep needing a flashlight app from the App Store; it's just part of the OS, and that was going to happen.
Um, and then there were, I think, things like Uber that were enabled by having smartphones but really built a very defensible long-term business. And I think you just want to go for that latter category.
Logan: I can come up with a lot of incumbent businesses that leverage you all and fit that framework, uh, in some ways. Are there any novel types of concepts that you think fit that latter example, the Uber one? It doesn't need to be a real company, if you think of one; it could even be a toy, or just something interesting that you think is enabled in that way.
Sam: Um, I would actually bet on the new companies for many of these cases. A very common example people use is trying to build the AI doctor, the AI diagnostician, and people talk about, oh, well, I don't want to do a startup here because, you know, the Mayo Clinic, or take your pick, is going to do it.
And I'd actually bet it's a new company that does something like that.
Logan: Do you have any advice for CEOs, beyond that, who want to be proactive about preparing [00:16:00] for these types of disruptions?
Sam: I would say: bet that intelligence as a service gets better and cheaper every year, and that it is necessary but not sufficient for you to win. So the big companies that take, you know, years to implement this, you can beat them, but every other startup that's paying attention is going to do this too.
And so you still have to figure out, what's the long-term defensibility of my business, now that the playing field is way more open than it's been in a long time. There are incredible new things to do, but you don't get a pass on the hard work of building enduring value, even though you can now do it in more ways.
Logan: Is there a job title or a type of job responsibility that you could envision existing or being mainstream in five years because of AI, that is maybe niche or nonexistent today?
Sam: That's a great question, and I don't think I've ever gotten it before. People always ask what job is going to go away; the new one is a more interesting question. Let me think for a second. Um, I mean, there are a lot of things that I could talk about that I think are sort of less interesting or less huge.
Uh, what I'm trying to do is come up with the areas of, like, what will a hundred million people do, or fifty million people do? Um, the broad category of new kinds of art, entertainment, sort of more human-to-human connection. I don't know what that job title is going to be, and I don't know if we get there in five years, but I think there's going to be a premium on human, in-person, fantastic experiences. I don't know what we'll call that, but I can see that being a very huge category of something new that we do.
Logan: The most recent public tender of OpenAI was 90 billion or somewhere in that range. Um, are there one or two things that you sort of look at as milestones [00:18:00] that will get OpenAI to be a trillion-dollar company, short of AGI?
Sam: I think if we can just keep improving our technology at the rate we've been doing it, and figuring out how to continue to make good products with it, and revenue keeps growing like it's growing... uh, I don't know about specific numbers, but I think we'll be fine.
Logan: Is the business monetization model today the one that you think creates the $1 trillion of equity value?
Sam: I mean, the ChatGPT subscription model really works well for us. Surprisingly; I wouldn't have bet on that. I wouldn't have been confident it would do as well as it has, but it's been good.
Logan: Do you think post-AGI, whatever that term actually means, we'll be able to, I don't know, ask the AGI what the monetization model is, and it might be different?
Sam: Yeah, should be able to.
Logan: I think we maybe saw in November, not to rehash, that the existing OpenAI structure left some things to be desired, which I don't think we need to rehash in total.
You've talked about it enough, I think. But, um, you've spoken to making changes along the way. What do you think the appropriate structure is going forward?
Sam: Um, I think we're close to being ready to talk about that. Uh, we've been hard at work on all sorts of conversations and brainstorming there. Uh, I think, hopefully, this year; I think we'll be ready in this calendar year.
Logan: You'll tell me first?
Sam: We'll see.
Logan: When Larry and Bret Taylor got battlefield-promoted to board directors, I was waiting, you know, but my call never came through. But one of the interesting things, I think, about preconceptions around AI, to your point on the monetization model and all that: I think we all, and I've heard you speak about it, expected manual work to go first, followed by, you know, white collar, followed by creative.
Obviously it's proven to be kind of the opposite in some ways. Are there other things that [00:20:00] are counterintuitive, that you've looked at being like, well, I would have presupposed it to be this way, but it's actually proven to be the exact opposite?
Sam: That's definitely the mega surprise to me, the one that you mentioned. There are others. Like, I don't think I would have expected it to be so good at legal work so early, just because I think of that as a very precise, complex thing. But no, definitely the big one is the observation about physical labor, cognitive labor, creative labor.
Logan: For those that haven't heard you make the point about AGI and why you dislike the term, can you elaborate on that point?
Sam: Because I no longer think of it as a moment in time. Um, I obviously had so many naive conceptions when we started the company, uh, and particularly in a field that's moving around as much as this one is. But my naive conception when we started was that we would get to a moment where we didn't have AGI, and then we did, and it would be a real discontinuity.
And I still think there's some chance of a real discontinuity, but on the whole, I think it's going to look much more like a continuous exponential curve, where what matters is the pace of progress year over year over year. And you and I will probably not agree on the month, or even the year, where we say, okay, now that's AGI.
We can come up with other tests that we would agree on, but even that is harder than it sounds. And yeah, GPT-4 is definitely not over a threshold that almost anyone would call an AGI, and I don't expect our next big model to be either. But I can imagine that we're only maybe one or two or some small number of ideas away, and a little bit more scale, from something where we say, okay, this is now kind of different.
And I think it's important to stay vigilant about that.
Logan: Is there a more modern, like, Turing test, we can call it the Bartlett test, where [00:22:00] you think, hey, when it crosses this threshold?
Sam: I think when it's capable of doing better research than all of OpenAI put together, or even one OpenAI researcher, that is somehow a very important thing that feels like it could, or maybe even should, be a discontinuity.
Logan: Does that feel close?
Sam: Probably not, but I wouldn't rule it out.
Logan: What are the biggest obstacles that you see to reaching AGI? It sounds like you think maybe the scaling laws have runway currently and hold for the next couple of years.
Sam: Yeah. I think the big obstacles are new research. Uh, and you know, one of the things I've had to learn shifting from internet software to AI is that research does not work on the same schedule as engineering, which usually means it takes much longer, or it doesn't work, but sometimes means it works tremendously faster than anyone could have predicted.
Logan: What is that? Can you elaborate on that point, that it's not as linear in progress?
Sam: I think the best way to elaborate on that is with historical examples. I'm going to get the numbers wrong here, but I'm sure
Logan: will try to correct you
Sam: Someone will. Um, I think the neutron was first theorized, you know, in the early 1900s; it was maybe first detected in the tens or twenties, uh, and the work on what became the atomic bomb started in the thirties and happened in the forties. Like, from not really having any idea that there was even a thing like a neutron, to being able to make an atomic bomb and just break all of our intuitions about physics, um, that's wildly quick.
Um, there are other examples that are sort of less pure science. Like, there's the famous quote about the Wright brothers. Again, I'm going to get the numbers wrong here, but let's say it was, like, [00:24:00] 1906 when they said they thought flight was 50 years away, and in 1908 they did it. Whatever, something like that.
Sam: And then there are many, many other examples throughout the history of science and engineering. There are also plenty of things that we theorized that never happened, or took, you know, decades or centuries longer than we thought. But sometimes it does go really fast.
Logan: Interpretability. Where are we on this path, and how important is that long-term for AI?
Sam: There are different kinds of interpretability. There's the, like, do I understand what's happening at every mechanistic layer of the network. And then there's, um, can I look at the output and say there's a logical flaw here, or whatever. I am excited about the work going on at OpenAI and elsewhere in this direction.
Uh, and I think that interpretability as a broader field seems promising and exciting.
Logan: I won't pin you down; I assume you'll have a nice announcement when you're ready to say something. But do you think that is going to be a requisite to mainstream AI adoption, maybe within, you know, enterprises or something?
Sam: GPT-4 is, like, quite widely adopted at this point.
Logan: Yeah, that's fair.
Logan: There's maybe a few things that I think, uh, you could ask questions about, or maybe accuse is too strong a term, but, uh, that people are suspicious about. One of which is, um, I think there's this needle-threading that exists between being excited about AGI
but also feeling like you have a, um, personal kind of apprehension about you, Sam, or OpenAI generally, being the ones to harness it and unilaterally make decisions, which has led you to call for some body, some governmental structure, where there's elected leaders instead of you making these decisions.
Sam: I think it'd be a mistake to go heavily regulate current- [00:26:00] capability models. But when the models, which I believe they will, pose significant catastrophic risk to the world, um, I think having some sort of oversight is probably a good thing. Now, there is some needle-threading about where you set those thresholds and how you test for them.
And it would be a real shame to sort of stop the tremendous upsides of this technology, or to stop people that want to go train models in their basement from being able to do that. That'd be really, really bad. But, you know, we have international rules for nuclear weapons, and it's a good thing.
Logan: The regulatory capture crowd, and I'm sure we can think of which VCs fall into that accusatory bucket around this regulation. Um, what do you think they don't see about the potential risks inherent in AI?
Sam: Well, I don't think they have, on the whole, seriously wrestled with AGI. Some of the loudest voices about AI regulatory capture were also, you know, totally decrying it as a possibility not that long ago. Not all of them. I do have empathy for where they're coming from, which is that regulation has been really bad for technology.
Like, look what happened to the European technology industry. Like, I get it. I really do. And yet I think that there is a threshold that we are heading towards, above which we may all feel a little bit different.
Logan: Do you think open source models themselves present inherent danger in some ways?
Sam: No current one does, but I could imagine one that could.
Logan: I've heard you say that, uh, safety is kind of a false framing in some ways, because it's more of a discussion about, um, what we explicitly accept, like with airlines.
Sam: It's more like, safety is not a binary thing. Like, you are willing to get on airplanes because you think they're pretty safe, even though you know they crash [00:28:00] once in a while. And what it takes to call an airline safe is a matter of some discussion that people have different opinions on.
Logan: A topical point right now.
Sam: A topical point right now. They have gotten just unbelievably safe overall, like, triumphantly safe, but safe does not mean no one will ever die.
Logan: The side effects, where some people have adverse, uh, consequences around it. And then there's the implicit side of safety as well, like social media, right, or things that have negative associations. Um, is there something that you could imagine seeing in the safety paradigm that would cause you to act differently than pushing forward?
Sam: Yeah. We have this thing called our preparedness framework that says sort of exactly that: in these categories, at these levels, we'd act differently.
Logan: I've had Eliezer on the podcast before.
Sam: How was that?
Logan: It was wonderful. We sat for the longest podcast I've ever done. I think it was four hours of us, uh, going...
Sam: He has more free time than me. So I apologize.
Logan: Listen, we can do multiple sessions. We don't need to do them all now. I think, uh, that his points stay with you.
Sam: I'm grateful he exists.
Logan: He's a very interesting guy to sit down with for four hours and talk to. Uh, we went a bunch of different directions, but I'd be remiss, as a friend of the pod, if I did not ask a fast-takeoff question. Um, I'm curious. There are so many different fast-takeoff scenarios, and one of the constraints that I think we point to today is just a lack of infrastructure, right?
Um, and, uh, I guess if some researcher developed a modification to the current transformer architecture where suddenly the amount of data and hardware scale needed drastically reduced, more like the human brain or something like that, is it possible we could see a fast-takeoff scenario?
Sam: Possible, of [00:30:00] course, uh, and it may not even need a modification. Um, it is still not what I believe is the most probable path, but I don't discount it, and I think it's important that we consider it in the space of what could happen. I think things will turn out to be more continuous, even if they're accelerating.
I don't think we're likely to go to sleep one day with pretty good AI and wake up the next day with genuine superintelligence. Um, but even if the takeoff happens over a year or a few years, that's still fast in some sense. There's another question about, even if you got to this really powerful AGI, how much does that change society the next day, versus the next year, versus the next decade?
And my guess is, in most ways, it's not a next-day or next-year thing, but over the course of a decade, the world will look quite different. I think the inertia of society is a good, helpful thing here.
Logan: One of the things I think people also have, uh, suspicion around...
I'd imagine the questions you don't love getting are, uh, Elon, uh, equity, and the November board structure. Those are probably the three.
Sam: Which I've answered a lot of times.
Logan: Which one of those do you like the least?
Sam: I don't know. I mean, I don't know. I don't hate any of them. I just don't have anything new to say on any of them.
Logan: Um, well, I guess I'm not going to ask the equity one specifically, because I think you've answered that in more than enough ways, although people still don't seem to like the answer that enough money is, uh, enough.
Sam: Yeah, if I made a trillion dollars and then gave it away, it would fit with, I think, the expectation, or the sort of way it's usually done.
Logan: There was another Sam that...
Sam: Oh, that's true.
Logan: ...was trying that in some way. Yeah. Comparatively.
Sam: No, I just mean like most people who make a ton of money.
Logan: Yeah. Yeah. Um, what do you feel like your motivations are in this pursuit of AGI, like, outside of the equity? I think most people take solace in the [00:32:00] fact that, oh, well, even if I have some higher mission, uh, I still get paid for it in some ways.
Like, what are your motivations now, coming into work every day? What's the most fulfillment derived from?
Sam: Look, I tell people this all the time: I'm willing to make a lot of other life trade-offs and sacrifices right now, because I think this is the most exciting, most important, best thing I will ever touch. It's an insane time, and I'm happy. It won't be forever; you know, someday I get to go retire on the farm, and I'll remember this fondly, but be like, oh man, those were long, stressful days. But it's also just incredibly cool.
Like, I can't believe this is happening to me. It's just amazing.
Logan: Was there a single moment? I guess we could go back to the fame example of not being able to go out in your city or whatever, but has there been a single moment that was most surreal? Like, oh geez, I don't know. Uh, I mean, you've done a podcast with Bill Gates; I'm sure if I took your phone right now, it would have a lot of very interesting people on speed dial.
Was there a single moment over the course of the last couple of years where you were like, this is a uniquely surreal moment?
Sam: Kind of every day there's something that's like, wow, if I had a little bit more mental space to step back, this would be crazy. Um...
Logan: Sort of a fish in water.
Sam: But yeah, it is kind of like that effect. Uh, after all of that November stuff happened, you know, that day or the next day or whatever, I got, uh, I don't know, 10, 20 texts, something like that, from major world leaders: presidents, prime ministers of countries, whatever.
And that was not the weird part. Uh, the weird part was, that happened, and I was, you know, kind of responding, saying thanks or whatever, and it felt very normal. And then we had these insane, super-jammed four and a half days in just this crazy state. And it was just weird: not sleeping much, not really [00:34:00] eating, um, energy levels very high, very clear, very focused, but your body was in some weird, adrenaline-charged state for a long time.
And all this happened a week before Thanksgiving. It was kind of crazy, crazy, crazy. Got resolved on Tuesday night. Um...
Logan: He cancels our podcast.
Sam: Our podcast, sorry. I don't usually cancel things. Um, but anyway, then on that Wednesday, uh, now it's the Wednesday before Thanksgiving, Ali and I drove up to Napa and, uh, stopped at this diner, Gott's.
Gott's is very good. And on the drive up there, I realized I hadn't eaten in, like, days. And then all of a sudden it was kind of normal again; it was just like, okay, you know, this is what we'd normally be doing on a weekend, heading out, whatever. And we went to, uh, Gott's, and I ordered, like, four entrees, heavy, you know, fried, heavy entrees, and two milkshakes, just for me.
And I sat there and ate, and it was very satisfying. Um, and as I was doing that, um, the president of this one country texted again and just said, like, oh, I'm sorry, glad it's resolved, great, whatever. And then it hit me: oh yeah, all of these people had texted me, and it wasn't weird.
And the weird part was realizing that it had happened in the middle of all of it, and that it should have been this very weird experience, and it wasn't. So that was one that sticks out.
Logan: Yeah, that is interesting.
Sam: My takeaway is that human adaptability to almost anything is much more remarkably strong than we realize.
And you can get used to anything as the new normal, good or bad, pretty fast. And over the last couple of years I have learned that lesson many times. Um, but I think it says something remarkable about humanity, and good for us, as we stare down this big transition.
Logan: I remember post-9/11, I'm sure you remember exactly where you were, but I was in New Jersey, and in our town, you know, dozens of people passed away. [00:36:00] How close the town came together after a terrorist attack happened, and it seemed so normal, just the normalcy of it. Or, I have friends in Israel right now, and you talk to them about it and they're like, no, it's normal.
I'm like, well, there's a war going on, it's got to be surreal. And they're like, well, I mean, what are you gonna do? You go about your day, you go get your food, all that. And it's amazing, these psychologically impactful things: at the end of the day, we need to go get food and we need to, you know, talk to our friends and all this stuff.
So it is, it is amazing how much that can happen.
Sam: It really is, genuinely. That's been my big surprising takeaway, to, like, feel it.
Logan: As you think about models becoming smarter and smarter, and you kind of touched on this a little bit earlier with the creative element: what do you think remains uniquely human, as models start doing more and more of the capabilities we used to consider ours?
Sam: I think many, many years from now, humans are still going to care about other humans. I, you know, I was reading the internet a little bit, and everyone's like, oh, everyone's going to fall in love with ChatGPT now; everybody's, you know, like, I'm just going to have the ChatGPT girlfriend, whatever, whatever.
I bet not. I think we're so wired to care long-term about other humans, in all sorts of big and small ways, that our obsession with other people is going to remain. Like, you hear a lot of conspiracy theories about me; you probably don't hear a lot of conspiracy theories about AI, and you might not care if you did hear one.
I think we're not going to watch robots play each other in soccer as our main hobby, probably.
Logan: As you run OpenAI, the company itself: you, uh, built a lot of rules, or frameworks, at YC on how to run businesses, and then you've broken a lot of them. Are there different types of people you hire for this [00:38:00] business, within the executive ranks, than you would have had you started a consumer internet company or a B2B software company or something?
Sam: Um, researchers are very different than my product engineers, for the most part. And it's also...
Logan: Or Mira, or some of the executives. Researchers are unique, but does OpenAI bring in a different type of executive, or do you hire for a different trait?
Sam: I mostly have not. Like, sometimes you hire externally for executives, but I'm a big believer that you generally promote from within. Now, it's probably a mistake to only promote people to be executives, because that could reinforce a monoculture, and, you know, I think you want to bring in some new, very senior people.
Um, but we mostly have homegrown talent here, and I think that's a positive, given how different what we do is from what you would do somewhere else.
Logan: Is there a decision, um, that you've made over the course of OpenAI that felt the most important at the time of making it? And how did you go about making it?
Sam: It'd be hard to pick just a single one, but the decision that we're going to do what we call iterative deployment, that we're not going to go build AGI in secret and then put it out into the world all at once, which was the prevailing wisdom and the Eliezer plan, among others. I think that was a quite important decision we made.
And it felt like a really important one at the time.
Logan: If, if another company that...
Sam: Betting on language models was an important decision and felt like an important one at the time.
Logan: I actually don't know the story of betting on language models. How did that come to be originally?
Sam: Um, well, we had these other projects. We were doing the robot thing and video games, and there were some very small efforts, one of which was looking at language modeling, and Ilya really believed in it. Uh, [00:40:00] really believed in the general direction. That became language models, let's say, and GPT-1. We did GPT-2, we started to study scaling laws, scaled up to GPT-3, and then we made a big bet that this was what we were going to do.
All of these things look so obvious in retrospect; it really didn't feel that way at the time.
Logan: One other thing you brought up recently was that there are two approaches to AI: the replication of yourself, and then the smartest employee.
Sam: Oh, it's not AI itself, but how you want to use it. Like, when you imagine using your personal AI...
Logan: So there's a subtle distinction when you said it, but can you expound on it? Because it seemed like a fairly profound distinction in how, at least, Sam thinks about the future of AI use cases. So can you explain that point again, because clearly I misunderstood it.
Sam: If you're going to text me, you know, five years in the future, I think you want to be clear on whether you're texting me or my AI assistant. And then, if it's my AI assistant, that's going to, you know, bundle messages together, and you'll get a reply later; or, you know, if it can easily do something you might ask my human assistant to do, then fine, you'll know that.
Um, I think there will be value in keeping those things separate, and not having it be like, all right, the AI is truly just an extension of Sam: I don't know if I'm talking to Sam or Sam's AI ghost, but that's okay, because it's the same thing, this merged entity. I think there will be Sam and Sam's AI assistant.
And also, I want that for myself. Like, I don't want to feel like this thing is just some weird extension of me, but that it's a separate entity that I can communicate with across a barrier.
Logan: You see it in, uh, music or creative work, where it becomes pretty easy to replicate a Drake or a Taylor Swift audio. We probably need some form of [00:42:00] validation, or some centralization that validates, hey, this is actually the creative work of XYZ person. You're probably going to want some version of that at a personal level too.
Sam: Yeah. But it's like, you know, the way I think about OpenAI is, there are different people, and I'm asking them to do things and they go off, or they ask me to do things and I go off. Um, but it's not a single, like, Borg. And I think that's a way we're all comfortable.
Logan: And so, what is that? Can you tie that back? Like, the decentralization of letting individuals do their own thing?
Sam: Well, also that, but I meant more, kind of, what is the abstraction of what my personal AI is going to be? Like, do I think of it as: this is just me, and it's going to take over my computer and do what's best, and because it's me, that's going to be totally fine, and it's answering messages on my behalf, and I'm slowly going to take my hands off the controls, and it's slowly going to be me?
Or do I think of this as: this is a really great person I work with, that I can say, hey, can you do this thing and get back to me, and you're done? But I think of it as not me.
Logan: As you think about the, uh, educational system, and as we think about, like, the college class of 2030 or 2035, or whatever, some group in the future: are there changes specifically that you think should be made within the college educational system to prepare people for the future we have?
Sam: The biggest one is, I think people should not only be allowed, but required, to use the tools. There will be some cases where we want people to do something the old-fashioned way, um, because it helps with understanding. You know, like, I remember sometimes in math class or whatever, there'd be something you couldn't use...
Logan: No calculators on the test.
Sam: Yeah. But on the whole, like, in real life, you get to use the calculator, and so you need to [00:44:00] understand the math, but then you've got to be proficient using the calculator too. And if you did math class and never got to use the calculator, you would be less good at the work you need to do later.
You know, if all of the OpenAI researchers never got to use a calculator, or computers at least, OpenAI probably wouldn't have happened. Um, we don't try to teach people not to use calculators, not to use computers, and I think we shouldn't train people not to use AI either. It's just going to be an important part of doing valuable work in the future.
Logan: Last one.
Logan: Um, in "Planning for AGI and Beyond," you wrote that the first AGI will be just a point along the continuum of intelligence, which we just spoke about earlier, and that you think it's likely that progress will continue from there, possibly sustaining the rate of progress we've seen over the past decade for a long period of time.
Do you ever, uh, personally Yeah. Stop and process or visualize like what future will look like in that or is it just too abstract to contemplate
Sam: Um, all the time. I mean, I don't visualize it like, you know, we have these flying cars in a Star Wars future city, and I like that. But definitely what it means when one person can do the work of hundreds or thousands of well-coordinated people, and what it means when, I don't want to say we can discover all of science, but kind of what it would feel to us like as if we could discover all of science.
Logan: That would be pretty cool.
Sam: Yeah.
Logan: Sam, thanks for doing this.
Sam: Thank you.
Logan: Thank you for listening to this episode of The Logan Bartlett Show with CEO and co-founder of OpenAI, Sam Altman. If you enjoyed this conversation, I'd really appreciate it if you like, subscribe, and share with anyone else that you think might find it interesting, and come back next week, when we'll have another exciting episode with a different founder and CEO of an important company in technology.
Thanks everyone for listening and have a good [00:46:00] week.