Patrick Collison interviews Sam Altman
We pulled out the best insights from Sam Altman's recent interview with Patrick Collison at Sohn 2023. Transcription and light editing by Claude, curation by Yiren Lu :-)
Highlights
Patrick Collison: Have any plugins become part of your workflow yet?
Sam Altman: Browsing and the code interpreter once in a while, but honestly, they have not for me personally. They have not yet tipped into a daily habit.
Patrick Collison: A lot of the people on this call are active philanthropists and most of them don't post very much on Twitter. They hear this exchange like, oh, maybe I should help fund something in the interpretability space. If they're having that thought, what's the next step?
Sam Altman: One strategy that I think has not happened enough is grants to single people or small groups of people that are very technical, that want to push for the technical solution and are maybe in grad school or just out or undergrad. I think that is well worth trying. They need access to fairly powerful models and OpenAI is trying to figure out programs to support independent alignment researchers. But I think giving those people financial support is a very good step.
Patrick Collison: To what degree is the field skill bottlenecked, where there are people who maybe have the intrinsic characteristics required but don't have the four years of learning or something like that that are also a prerequisite for their being effective?
Sam Altman: I think if you have a smart person who has learned to do good research and has the right sort of mindset, it only takes about six months to take a smart physics researcher and make them into a productive AI researcher. So we don't have enough talent in the field yet, but it's coming soon. We have a program at OpenAI that does exactly this. And I'm astonished how well it works.
Patrick Collison: For OpenAI, obviously you guys want to be and are a preeminent research organization. But with respect to commercialization, is it more important to be a consumer company or an infrastructure company?
Sam Altman: I believe in platform plus killer app as a business strategy. I think the fact that we're doing a consumer product is helping us make our platform much better, and I hope over time that we figure out how to have the platform make the consumer app much better too. So I think it's a good cohesive strategy to do them together. But as you pointed out, really what we're about is being the best research in the world, and that is more important to us than any productization. Building the capability to make these repeated breakthroughs, they don't all work. We've gone down some bad paths, but we have figured out more than our fair share of the paradigm shifts, and I think we have the next big ones too. And that's really what is important to us to build.
Patrick Collison: How much important ML research comes out of China?
Sam Altman: Non-zero, but not a giant amount.
Patrick Collison: Do you have any sense as to why? Because the number of published papers is very large and there are a lot of Chinese researchers in the US who do fantastic work. And so why is the kind of per paper impact from the Chinese stuff relatively low?
Sam Altman: I mean, what a lot of people suspect is they're just not publishing the stuff that is most important.
Patrick Collison: Do you think that's likely to be true?
Sam Altman: I don't trust my intuitions here.
Sam Altman: Okay, so one thing I'm excited about is that all the people who became tech billionaires in the last cycle are pretty interested in putting serious capital into long-term projects. And the availability of capital for high-risk, high-reward, long-term, long-duration projects that rely on fundamental science and innovation has dramatically changed. So I think there's going to be a lot more capital available for this. You still need the Elon people to do it. One project I've always been tempted to do is say, okay, we're going to identify the 100 most talented people we can find that want to work on these sorts of projects. We're going to give them $250k a year, like enough money for ten years or something. Let them go off, without the kind of pressure that most people feel, with the certainty to explore for a long period of time and not feel the very understandable pressure to make a ton of money first, and put them together with great mentors in a sort of great peer group. And then the financial model would be, if you start a company, the vehicle gets to invest on predefined terms. I think that would pay off, and someone should do it.
Patrick Collison: Which company that is not thought of as an AI company will benefit the most from AI over the next five years?
Sam Altman: I think some sort of investing vehicle is going to figure out how to use AI to be like an unbelievable investor and just have a crazy outperformance.
Sam Altman: I'd like to avoid another pandemic. We need more global coordination. This is harder than AI, where we have lots of data and computing power. I feel like I should ask you what we should do.
Patrick Collison: We need more observability: wastewater sequencing and data on pathogens and health outcomes. We could improve treatments, vaccines, and trials. Novel pathogens are the big concern, so invest in surplus manufacturing capacity and platforms like mRNA. Still, that may not be enough.
Full transcript
Patrick Collison: Thank you, Griffin. And thank you, Sam, for being with us. Last year I actually interviewed Sam Bankman-Fried, which was clearly the wrong Sam to be interviewing, so it's good to correct it this year with the right Sam. We'll start out with the topic on everyone's mind: when will we all get our Worldcoin?
Sam Altman: I think if you're not in the US, you can get one in a few weeks. If you're in the US, maybe never, I don't know. It depends how truly dedicated the US government is to banning crypto.
Patrick Collison: So Worldcoin launched around a year ago or so.
Sam Altman: Actually, it's been in beta for maybe a year, but it will go live relatively soon outside of the US. And in the US, you just won't be able to do it, maybe ever, I don't know, which is a crazy thing to think about. Think whatever you want about crypto and the ups and downs, but the fact that the US is the worst country in the world to have a crypto company in, or you just can't offer it at all, is sort of a big statement. Like, historically big statement.
Patrick Collison: So I presume almost everyone in the audience is a ChatGPT user. What is your most common ChatGPT use case? Not when you're testing something, but when ChatGPT is purely an instrumental tool for something you actually want to get done.
Sam Altman: Summarization, by far. I don't know how I would still keep up with email and Slack without it. Pasting a bunch of email or Slack messages into it. Hopefully we'll build some better plugins for this over time, but even doing it the manual way works pretty well.
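Editor's note: the "manual way" Sam describes is easy to script. Here's a minimal sketch using the official OpenAI Python SDK; the model name, prompt, and pasted messages are illustrative placeholders, not anything from the interview:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Paste whatever backlog you want condensed; these messages are made up.
slack_backlog = """
alice: shipping the new eval harness tomorrow
bob: heads up, the GPU quota request was approved
carol: can someone review my PR before standup?
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model works here
    messages=[
        {"role": "system", "content": "Summarize these messages as terse bullet points."},
        {"role": "user", "content": slack_backlog},
    ],
)
print(response.choices[0].message.content)
```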
Patrick Collison: Have any plugins become part of your workflow yet?
Sam Altman: Browsing and the code interpreter once in a while, but honestly, they have not for me personally. They have not yet tipped into a daily habit.
Patrick Collison: It seems very plausible that we're on a trend of superlinear realized returns in terms of the capabilities of these models. But who knows, maybe we'll asymptote soon. Not likely, but it's at least a possibility. If we end up in the world where we asymptote soon, what do you think, ex post, we will look back on the reason as having been? Too little data? Not enough compute? What's the most likely problem?
Sam Altman: Look, I really don't think it's going to happen, but if it does, I think it'd be that there's something fundamental about our current architectures that limits us in a way that is not obvious today. So maybe we can never get the systems to be very robust and thus we can never get them to reliably stay on track and reason and understand when they're making mistakes and thus they can't really figure out new knowledge very well at scale. But I don't have any reason to believe that's the case.
Patrick Collison: And some people have made the case that we're now training on kind of the order of all of the internet's tokens, and you can't grow that another two orders of magnitude. I guess you could counter with synthetic data generation. Do you think data bottlenecks matter at all?
Sam Altman: I think you just touched on it. As long as you can get over this synthetic data event horizon, where the model is smart enough to make good synthetic data, I think it should be all right. We will need new techniques for sure. I don't want to pretend otherwise in any way. The naive plan of just scaling up a transformer with pretraining tokens from the internet will run out, but that's not the plan.
Patrick Collison: So one of the big breakthroughs in, I guess, GPT-3.5 and GPT-4 is RLHF. If you, Sam, personally sat down and did all of the RLHF, would the model be significantly smarter? Like, does it matter who's giving the feedback?
Sam Altman: I think we are getting to the phase where you really do want smart experts giving the feedback in certain areas to get the model to be as generally smart as possible.
Patrick Collison: So will this create like a crazy battle for the smartest grad students?
Sam Altman: I think so. I don't know how crazy of a battle it'll be because there's like a lot of smart grad students in the world. But smart grad students I think will be very important.
Patrick Collison: And how should one think about the question of how many smart grad students one needs? Like, is one enough or do you need like 10,000?
Sam Altman: We're studying this right now. We really don't know how much leverage you can get out of one really smart person where kind of the model can help and the model can do some of its own RL. We're deeply interested in this, but it's a very open question.
Patrick Collison: Should nuclear secrets be classified?
Sam Altman: Probably yes. I don't know how effective we've been there. I think the reason that we have avoided nuclear disaster is not solely attributable to the fact that we classified the secrets, but that we did something. We did a number of smart things and we got lucky. The amount of energy needed, at least for a long time, was huge and sort of required the power of nations and we made the IAEA, which I think was a good decision on the whole and a whole bunch of other things too. So yeah, I think probably anything you can do there to increase probability of a good outcome is worth doing. Classification of nuclear secrets probably helps. Doesn't seem to make a lot of sense to not classify it. On the other hand, I don't think it'd be a complete solution.
Patrick Collison: What's the biggest lesson we should take from our experience with nuclear nonproliferation, in the broader sense, as we think about all the AI safety considerations that are now central?
Sam Altman: So first of all, I think it is always a mistake to draw too much inspiration from a previous technology. Everybody wants the analogy. Everybody wants to say, oh, it's like this or it's like that, or we did it like this, so we're going to do it like that again. And the shape of every technology is just different. However, I think nuclear materials and AI supercomputers do have some similarities, and this is a place where we can draw more than usual parallels and inspiration. But I would caution people not to overlearn the lessons of the last thing. I think something like an IAEA for AI, and I realize how naive this sounds and how difficult it is to do, but getting a global regulatory agency that everybody signs up for, for extremely powerful AI training systems, seems to me like a very important thing to do. So I think that's one lesson.
Patrick Collison: If it's established tomorrow, what's the first thing it should do?
Sam Altman: The easiest way to implement this would be a compute threshold. The best way to implement this would be a capabilities threshold, but that's harder to measure. Any system over that threshold I think should submit to audits, full visibility to that organization, be required to pass certain safety evals before releasing systems. That would be the first thing.
Patrick Collison: And some people on the, I don't know how one would characterize the side, but let's say the more pugilistic side, would say that all sounds great, but China is not going to do that, and therefore we'll just be handicapping ourselves, and consequently it's a less good idea than it seems on the surface.
Sam Altman: There are a lot of people who make incredibly strong statements about what China will or won't do that have never been to China, never spoken to someone who has worked on diplomacy with China. I think it is obviously super hard. But also, I think no one wants to destroy the whole world, and there is reason to at least try here. Also, I think there are a bunch of unusual things here, and this is why it's dangerous to learn from any technological analogy of the past. There's, of course, the energy signature and the amount of energy needed, but there aren't that many people making the most capable GPUs, and you could require them all to put in some sort of monitoring thing that says if you're talking to more than 10,000 other GPUs, like, you got it. Whatever, there are options.
Patrick Collison: Yeah. So one of the big surprises for me this year has been the progress in the open source models, and it's been this kind of frenzied pace the last 60 days or something. How good do you think the open source models will be in a year?
Sam Altman: I think there's going to be two thrusts to the development here. There will be the hyperscaler's best closed source models, and there will be the progress that the open source community makes, and it'll be a few years behind or whatever. A couple of years behind maybe. But I think we're going to be in a world where there's very capable open source models and people use them for all sorts of things, and the creative power of the whole community is going to impress all of us. And then there will be the frontier of what people with the giant clusters can do, and that will be fairly far ahead. And I think that's good because we get more time to figure out how to deal with some of the scarier things.
Patrick Collison: David Liu made the case to me that the set of economically useful activities is only a subset of all possible activities, and that pretty good models might be sufficient to address most of that first set. And so maybe the super large models will be very scientifically interesting, and maybe you'll need them to do things like generate further AI progress or something. But for most of the practical day-to-day cases, maybe an open source model will be sufficient. How likely do you think that future is?
Sam Altman: I think for many super economically valuable things, yes, the smaller open-source model will be sufficient. But you actually just touched on the one thing I would say, which is, like, help us invent superintelligence. That's a pretty economically valuable activity. So is, like, cure all cancer or discover new physics or whatever else. And that will happen with the biggest models first.
Patrick Collison: Should Facebook open source Llama at this point?
Sam Altman: Probably should.
Patrick Collison: Should they adopt a strategy of open sourcing their foundation LLMs, or just Llama in particular?
Sam Altman: I think Facebook's AI strategy has been confused at best for some time, but I think they're now getting really serious and they have extremely competent people, and I expect a more cohesive strategy from them soon. I think they'll be a surprising new real player here.
Patrick Collison: Is there any new discovery that could be made that would meaningfully change your P(doom), either by elevating it or by decreasing it?
Sam Altman: Yeah, I mean, a lot. I think most of the new work between here and superintelligence will move that probability up or down.
Patrick Collison: Okay. Is there anything you're particularly paying attention to? Any kind of contingent fact you'd love to know?
Sam Altman: First of all, I don't think RLHF is the right long-term solution. I don't think we can rely on that. I think it's helpful, and it certainly makes these models easier to use. But what you really want is to understand what's happening in the internals of the models and be able to align that; to say, exactly, here is the circuit or the set of artificial neurons where something is happening, and tweak that in a way that then gives a robust change to the performance of the model.
Patrick Collison: The mechanistic interpretability stuff?
Sam Altman: Yeah, well, that and then beyond. There's a whole bunch of things beyond that, but that direction. If we can get that to reliably work, I think everybody's P(doom) would go down a lot.
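Editor's note: a toy illustration of the kind of intervention Sam is gesturing at, locating an internal unit and tweaking it to see how behavior changes. This is a generic activation-ablation sketch in PyTorch, not OpenAI's tooling, and the "circuit" here is just an arbitrary hidden unit:

```python
import torch
import torch.nn as nn

# A tiny stand-in for a model whose internals we want to inspect.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

ABLATED_UNIT = 3  # hypothetical "circuit" we want to tweak

def ablate(module, inputs, output):
    # Zero out one hidden activation: a crude stand-in for editing a circuit.
    patched = output.clone()
    patched[:, ABLATED_UNIT] = 0.0
    return patched

x = torch.randn(1, 4)
hook = model[1].register_forward_hook(ablate)  # intervene after the ReLU
print("ablated: ", model(x))   # behavior with the unit zeroed out
hook.remove()
print("original:", model(x))   # unmodified behavior, for comparison
```

Real interpretability work does this on transformer attention heads and MLP neurons at scale, but the basic mechanic, intervene on an internal activation and compare outputs, is the same.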
Patrick Collison: And do you think sufficient interpretability work is happening?
Sam Altman: No.
Patrick Collison: Why not? A lot of people say they're very worried about AI safety, so it seems superficially surprising.
Sam Altman: Most of the people who say they're really worried about AI safety just seem to spend their days on Twitter saying they're really worried about AI safety or any number of other things. There are people who are very worried about AI safety and doing great technical work there, but we need a lot more of them. We're certainly shifting a lot more effort, a lot more technical people, inside OpenAI to work on that. But what the world needs is not more AI safety people who post on Twitter and write long philosophical diatribes. It needs more people who are going to do the technical work to make these systems safe and reliably aligned. And I think that's happening. It'll be a combination of good ML researchers shifting their focus and new people coming into the field.
Patrick Collison: A lot of the people on this call are active philanthropists and most of them don't post very much on Twitter. They hear this exchange like, oh, maybe I should help fund something in the interpretability space. If they're having that thought, what's the next step?
Sam Altman: One strategy that I think has not happened enough is grants, like grants to single people or small groups of people that are very technical, that want to push for the technical solution and are maybe in grad school or just out, or undergrad, or whatever. I think that is well worth trying. They need access to fairly powerful models, and OpenAI is trying to figure out programs to support independent alignment researchers. But I think giving those people financial support is a very good step.
Patrick Collison: To what degree is the field skill bottlenecked, where there are people who maybe have the intrinsic characteristics required but don't have the four years of learning or something like that that are also a prerequisite for their being effective?
Sam Altman: I think if you have a smart person who has learned to do good research and has the right sort of mindset, it only takes about six months to take a smart physics researcher and make them into a productive AI researcher. So we don't have enough talent in the field yet, but it's coming soon. We have a program at OpenAI that does exactly this. And I'm astonished how well it works.
Patrick Collison: It seems that pretty soon we'll have agents that you can converse with in very natural form. Low latency, full duplex, you can interrupt them, like, the whole thing. And obviously, we're already seeing with things like Character and Replika that even nascent products in this direction are getting pretty remarkable traction. It seems to me that these are likely to be a huge deal, and maybe we're substantially underestimating it, again, especially once you can converse through voice. Do you think that's right?
Sam Altman: Yes, I do think it's right, for sure. A thing someone said to me recently that has stuck with me is that they're pretty sure their kids are going to have more AI friends than human friends.
Patrick Collison: And if that's right, what do you think the likely consequences are?
Sam Altman: I don't know what the consequences are going to be. One thing that I think is important is that we establish a societal norm soon that you know if you're talking to an AI or a human, or some sort of weird AI-assisted-human situation. But people seem to have a hard time differentiating in their heads, even with these very early, weak systems like Replika that you mentioned. Whatever the circuits in our brain are that crave social interaction, they seem satisfiable, for some people, in some cases, with an AI friend. And so figuring out how to handle that, I think, is tricky.
Patrick Collison: Someone recently told me that a frequent topic of discussion on the Replika subreddit is how to handle the emotional challenges and trauma of upgrades to the Replika models, because suddenly your friend becomes somewhat lobotomized, or at least a somewhat different person. And presumably these interlocutors all know that Replika is in fact an AI. But somehow, to your point, our emotional response doesn't necessarily seem all that different.
Sam Altman: I think what most people assume we're heading to is a society with one sort of supreme-being superintelligence floating in the sky or whatever. And I think what we're heading to, which is sort of less scary but in some senses still as weird, is a society that just has a lot of AIs integrated along with humans. And there have been movies about this for a long time. There's C-3PO or whatever you want in Star Wars. People know it's an AI. It's still useful, they still interact with it. It's kind of cute and person-like, although you know it's not a person. And in that world, where we just have a lot of AIs that are contributing to the societal infrastructure we all build up together, that feels manageable and less scary to me than the sort of single big superintelligence.
Patrick Collison: Well, this is a financial event. There's kind of a debate in economics as to whether changes in the working age population push real interest rates up or down because you have a whole bunch of countervailing effects and yeah, they're more productive, but you also need capital investment to kind of make them productive and so forth. How will AI change real interest rates?
Sam Altman: I try not to make macro predictions. I'll say I think they're going to change a lot.
Patrick Collison: Okay, well how will it change measured economic growth?
Sam Altman: I think it should lead to a massive increase in real economic growth, and I presume we'll be able to measure that reasonably well. At least in the early stages, it will be an incredibly capital intensive period because we now know which cancer curing factories or pharma companies we should build and what exactly the right reactor designs are and so forth.
Patrick Collison: That sounds expensive. Do you mean like the present day capital allocation done by humans?
Sam Altman: Yeah, I meant the way that we allocate capital today, done by humans.
Patrick Collison: How much do you think we spend on cancer research today?
Sam Altman: I don't know. Well, it depends if you count the pharma companies, but it's probably about eight or nine billion from the NIH, and then I don't know how much the drug companies spend, probably some small multiple of that. So maybe it's under 50 billion; I was going to guess the total is between 50 and 100 billion per year.
Patrick Collison: For OpenAI, obviously you guys want to be and are a preeminent research organization. But with respect to commercialization, is it more important to be a consumer company or an infrastructure company?
Sam Altman: I believe in platform plus killer app as a business strategy. I think the fact that we're doing a consumer product is helping us make our platform much better, and I hope over time that we figure out how to have the platform make the consumer app much better too. So I think it's a good cohesive strategy to do them together. But as you pointed out, really what we're about is being the best research in the world, and that is more important to us than any productization. Building the capability to make these repeated breakthroughs, they don't all work. We've gone down some bad paths, but we have figured out more than our fair share of the paradigm shifts, and I think we have the next big ones too. And that's really what is important to us to build.
Patrick Collison: Which breakthrough are you most proud of OpenAI having made?
Sam Altman: The whole GPT paradigm. I think that was transformative and an important contribution back to the world. It comes from combining the multiple kinds of work that OpenAI is good at.
Patrick Collison: Google I/O starts tomorrow, I think. If you were CEO of Google, how would you approach it?
Sam Altman: I think Google is doing a good job. I think they have had quite a lot of focus and intensity recently and are really trying to figure out how they can move to really remake a lot of the company for this new technology. So I've been impressed.
Patrick Collison: Are these models and their attendant capabilities actually a threat to search or is that just a sort of superficial response that is a bit too hasty?
Sam Altman: I suspect that it means search is going to change in some big ways, but it's not a threat to the existence of search. So I think it would be a threat to Google if Google did nothing. But Google is clearly not going to do nothing.
Patrick Collison: How much important ML research comes out of China?
Sam Altman: Non-zero, but not a giant amount.
Patrick Collison: Do you have any sense as to why? Because the number of published papers is very large and there are a lot of Chinese researchers in the US who do fantastic work. And so why is the kind of per paper impact from the Chinese stuff relatively low?
Sam Altman: I mean, what a lot of people suspect is they're just not publishing the stuff that is most important.
Patrick Collison: Do you think that's likely to be true?
Sam Altman: I don't trust my intuitions here.
Patrick Collison: Would you prefer OpenAI to figure out a 10x improvement to training efficiency or to inference efficiency?
Sam Altman: It's a good question. It sort of depends on how important synthetic data turns out to be. I mean, I guess if forced to choose, I would choose inference efficiency, but I think the right metric is to think about all the compute that will ever be spent on a model, training plus all inference, and try to optimize that.
Patrick Collison: And you say inference efficiency because that is likely the dominant term in that equation.
Sam Altman: Probably. I mean, if we're doing our jobs right.
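Editor's note: to make the "training plus all inference" metric concrete, here's a back-of-the-envelope sketch. Every number below is a made-up, order-of-magnitude assumption, not an OpenAI figure:

```python
# Lifetime compute for a model = one-time training cost + cumulative inference cost.
train_flops = 1e25          # assumed cost to train the model once
flops_per_query = 1e15      # assumed cost to serve a single request
queries_per_day = 1e9       # assumed traffic for a widely used model
serving_days = 365 * 2      # assume the model is served for two years

inference_flops = flops_per_query * queries_per_day * serving_days
total_flops = train_flops + inference_flops

print(f"training:        {train_flops:.1e} FLOPs")
print(f"inference:       {inference_flops:.1e} FLOPs")
print(f"inference share: {inference_flops / total_flops:.1%}")
```

Under these assumptions, inference accounts for roughly 99% of lifetime compute, which is why, "if we're doing our jobs right," the inference term dominates.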
Patrick Collison: When GPT-2 came out, only a very small number of people noticed that it had happened and really understood what it signified. To your point about the importance of the breakthrough, is there a GPT-2 moment happening now?
Sam Altman: There are a lot of things we're working on that I think will be GPT-2-like moments if they come together. But there's nothing released that I could point to yet and say with high confidence, this is the GPT-2 of 2023. I hope by the end of this year, or by next year or something, that will change.
Patrick Collison: What's the best non OpenAI AI product that you use?
Sam Altman: Honestly, I don't use a lot of things. I kind of have a very narrow view of the world. ChatGPT is the only AI product I use daily.
Patrick Collison: Is there an AI product that you wish existed and that you think that our current capabilities make possible or will very soon make possible that you're looking forward to?
Sam Altman: I would like a Copilot-like product that controls my entire computer, so it can look at my Slack and my email and Zoom and iMessages and my massive to-do list documents and just kind of do most of my work.
Patrick Collison: You mentioned curing cancer. Is there an obvious application of these techniques and technologies to science that, again, you think we have, or are close to having, the capabilities for, that you don't see people obviously pursuing today?
Sam Altman: There's a boring one and an exciting one. The boring answer is that if you can just make really good tools like the one I just mentioned, and accelerate individual scientists, each by a factor of three or five or ten or whatever, that probably increases the rate of scientific discovery by a lot, even though it's not directly doing science itself. The more exciting one is that I do think that same system could go off and start to read all of the literature, think of new ideas, do some limited tests in simulation, email a scientist and say, hey, can you run this for me in the wet lab? And probably make real progress.
Patrick Collison: I don't know how exactly kind of the ontology works here, but you can imagine building these better general purpose models that are kind of like a human. They will go read a lot of literature, et cetera, maybe smarter than a human, better memory, who knows what. And then you can imagine models trained on certain data sets that are doing something nothing like what a human does. You're mapping CRISPRs to edit accuracies or something like that, and it's really a special purpose model in some particular domain. Do you think that the scientifically useful applications will come more from the first category, where we're kind of creating better humans, or from the second category, where we're creating these predictive architectures for problem domains that are not currently easy to work with?
Sam Altman: I really don't know. I don't feel like I have a deep enough understanding of the process of science and how great scientists actually work to say that. I guess I would say, if we can figure out someday how to build models that are really great at reasoning, then I think they should be able to make some scientific leaps by themselves. But that requires more work.
Patrick Collison: OpenAI has done a super impressive job of fundraising and has a very unusual capital structure for the nonprofit and the Microsoft deal. Are weird capital structures underrated? Like, should organizations and companies and founders be thinking more expansively about the default? Should people be breaking more corporate structure rules?
Sam Altman: I suspect not. I suspect it's like a horrible thing to innovate on. You should innovate on products and science and not corporate structures. The shape of our problem is just so weird that despite our best efforts, we had to do something strange. But it's been an unpleasant experience on the whole. And the other efforts I'm involved in have always had normal capital structures, and I think that's better.
Patrick Collison: A lot of companies you're involved with are very capital intensive, and OpenAI is perhaps the most capital intensive. Do we underestimate the extent to which capital is a bottleneck, or the bottleneck, on real-life innovation? Is that some kind of common theme running through the various efforts you're involved with?
Sam Altman: Yes, there are like four companies that I'm involved with other than just having written a check as an investor, and all of them are super capital intensive.
Patrick Collison: Do you want to enumerate those for the audience?
Sam Altman: OpenAI and Helion are the things I spend the most time on, and then also Retro and Worldcoin. But all of them raised a minimum of nine digits before any product at all, and in OpenAI's case, much more than that. And I think there's a lot of value in being willing to do stuff like this, and it fell out of favor in Silicon Valley at some point, and I understand why. It's also great for companies that only ever have to raise a few hundred thousand or a million dollars and get to profitability. But I think we overpivoted in that direction, and we have collectively forgotten how to do the high-risk, high-reward, hugely capital- and time-intensive bets, and those are also valuable. We should be able to support both.
Patrick Collison: And this touches on the question of why there aren't more Elons. I guess the two most successful hardware companies, in the broadest sense, started in the last 20 years were both started by the same person. That seems like a pretty surprising fact, and obviously Elon is singular in many respects. But what's your answer to that question? Is it actually a capital story along the lines of what you're saying? If it was your job to cause there to be more SpaceXs and Teslas in the world, and maybe you're trying to do some of that yourself, but if you had to push in that direction systematically, what would you be trying to change?
Sam Altman: I have never met another Elon. I have never met another person that I think could be developed easily into another Elon. He is sort of this strange n-of-1 character. I'm happy he exists in the world, of course, but he's also a complex person. I don't know how you get more people like that. I'm curious what you think about how to make more.
Patrick Collison: I don't know. I suspect there's something in the culture on both the founder and the capital side: the kinds of companies the founders want to create, and then the disposition and, to some extent, though maybe to a lesser extent, the fund structure of the sources of capital. A surprise for me as I've learned more about the space over the last 15 years is the extent to which there's a finite, or essentially finite, set of funding models in the world, and each has a particular set of incentives and, for the most part, a particular sociology. And that's evolved with time. Like, venture capital was itself an invention. PE in its modern form was essentially an invention. I doubt we are done with that process of funding-model invention. And I suspect there are models that are at least somewhat different to those that prevail today that are somewhat more amenable to this kind of innovation.
Sam Altman: Okay, so one thing I'm excited about is that all the people who became tech billionaires in the last cycle are pretty interested in putting serious capital into long-term projects. And the availability of capital for high-risk, high-reward, long-term, long-duration projects that rely on fundamental science and innovation has dramatically changed. So I think there's going to be a lot more capital available for this. You still need the Elon people to do it. One project I've always been tempted to do is say, okay, we're going to identify the 100 most talented people we can find that want to work on these sorts of projects. We're going to give them $250k a year, like enough money for ten years or something. Let them go off, without the kind of pressure that most people feel, with the certainty to explore for a long period of time and not feel the very understandable pressure to make a ton of money first, and put them together with great mentors in a sort of great peer group. And then the financial model would be, if you start a company, the vehicle gets to invest on predefined terms. I think that would pay off, and someone should do it.
Patrick Collison: That's kind of the university model, I guess. This already exists; you're just reinventing the bus or something. I mean, that may suggest evidence that it can work. And universities are usually not that good at supporting their spin-outs, but it happens to at least some extent. And yes, that's one of the theses for Arc.
Patrick Collison: Which company that is not thought of as an AI company will benefit the most from AI over the next five years?
Sam Altman: I think some sort of investing vehicle is going to figure out how to use AI to be like an unbelievable investor and just have a crazy outperformance.
Patrick Collison: Like RenTec with these new technologies? Is there an operating company you look at?
Sam Altman: Well, do you think of Microsoft as an AI company?
Patrick Collison: Let's say no for the purpose of this question.
Sam Altman: Okay. I think Microsoft will transform themselves across almost every axis with AI.
Patrick Collison: And is that because they're just taking AI more seriously or because there's something about Microsoft that makes them particularly suited to this?
Sam Altman: Both. They saw it sooner than others and have been taking it more seriously than others.
Patrick Collison: What's the likelihood we'll realize GPT-4 is overfit on the problems it was trained on? How would we know? Do you even think about overfitting as a concern?
Sam Altman: I don't think the base model is overfit, but we don't understand "relaxation" as well. We may be damaging the model more than we realize.
Patrick Collison: Do you think a generalized measure of intelligence exists in humans, or is it a statistical artifact? If it exists in humans, does an analogous factor exist in models?
Sam Altman: It's imprecise, but it captures something real in humans and models. Very smart people can learn many things quickly, though some excel in one area. Model intelligence will also be somewhat fungible.
Patrick Collison: Based on your AI safety experience, how should we regulate synthetic biology?
Sam Altman: I'd like to avoid another pandemic. We need more global coordination. This is harder than AI, where we have lots of data and computing power. I feel like I should ask you what we should do.
Patrick Collison: We need more observability: wastewater sequencing and data on pathogens and health outcomes. We could improve treatments, vaccines, and trials. Novel pathogens are the big concern, so invest in surplus manufacturing capacity and platforms like mRNA. Still, that may not be enough.
Sam Altman: Improving rapid response is obvious but we need more progress. Faster trials were key to COVID, and improving them seems low-hanging fruit. Your investment in TrialSpark makes sense.
Patrick Collison: Ezra Klein and Derek Thompson argue we need an "abundance agenda" -- more stuff in many domains for an equal, prosperous, sustainable society. Red tape limits progress. How relevant is this to your work?
Sam Altman: It's huge, but not the only factor. We need more abundance, especially of energy and intelligence. Getting fusion out will be painful, pushing us to act where permitting is easier. It's a real problem but not the only one. We lack the will to fix it.
Patrick Collison: If Ezra and Derek interviewed you for this book and asked you for your number one diagnosis as to that which is limiting the abundance agenda, what would you nominate?
Sam Altman: Societal collective belief that we can actually make the future better, and the level of effort we put toward it. Every additional gate you put on something, when these things are fragile anyway, I think makes it tremendously less likely to happen. It's really hard to start a new company. It's really hard to convince people it's a good thing to do. Right now in particular, there's a lot of skepticism. Then you have this regulatory thing, and you know it's going to take a long time, so maybe you don't even try. And then it's going to be way more expensive. There's too much friction and doubt at every stage of the process, from idea to mass deployment. I think it makes people try less than they used to, or believe less.
Patrick Collison: When we first met, whatever, 15 or so years ago, Mark Zuckerberg was preeminent in the technology industry in his twenties. And not that long before then, Marc Andreessen was preeminent in the industry in his twenties. And not that long before then, Bill Gates and Steve Jobs. Generally speaking, for most of the history of the software sector, one of the top three people has been in their twenties.
Sam Altman: Yeah.
Patrick Collison: It doesn't seem that's true today.
Sam Altman: Yeah, it's not good. Something has really gone wrong. There's a lot of discussion about what this is, but where are the great founders in their twenties? I hope we'll see a bunch. I hope this was just an accident in history, but maybe something's gone wrong in our educational system or society or how we think about companies and what people aspire to. It's worth significant concern and study.