Player FM - Internet Radio Done Right
Content provided by Benjamin Reinhardt. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Benjamin Reinhardt or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
Idea Machines
Idea Machines is a deep dive into the systems and people that bring innovations from glimmers in someone's eye all the way to tools, processes, and ideas that can shift paradigms. We see the outputs of innovation systems everywhere but rarely dig into how they work. Idea Machines digs below the surface into crucial but often unspoken questions to explore themes of how we enable innovations today and how we could do it better tomorrow. Idea Machines is hosted by Benjamin Reinhardt.
50 episodes
All episodes
Speculative Technologies with Ben Reinhardt [Macroscience cross-post] (30:04)
Tim Hwang turns the tables and interviews me (Ben) about Speculative Technologies and research management.
Industrial Research with Peter van Hardenberg [Idea Machines #50] (46:40)
Peter van Hardenberg talks about industrialists vs. academics, Ink & Switch's evolution over time, the Hollywood Model, internal lab infrastructure, and more! Peter is the lab director and CEO of Ink & Switch, a private, creator-oriented computing research lab.

References: Ink & Switch (and their many publications) · The Hollywood Model in R&D · Idea Machines episode with Adam Wiggins · Paul Erdős

Transcript

[00:01:21] Ben: Today I have the pleasure of speaking with Peter van Hardenberg. Peter is the lab director and CEO of Ink & Switch, a private, creator-oriented computing research lab. I talked to Adam Wiggins, one of Ink & Switch's founders, way back in episode number four. It's amazing to see the progress they've made as an organization. They've built up an incredible community of fellow travelers and consistently released research reports that gesture at possibilities for computing that are orthogonal to the current hype cycles. Peter frequently destroys my complacency with his ability to step outside the way that research is normally done and ask: how should we be operating, given our constraints and goals? I hope you enjoy my conversation with Peter. Would you break down your distinction between academics and industrialists?

[00:02:08] Peter: Okay. Academics are people whose incentive structure is connected to the institutional rewards of the publishing industry, right? You publish papers and you get tenure. It's not quite so cynical or reductive, but fundamentally the time cycles are long. You have to finish work according to when the submission deadlines for a conference are. You're working on something now; you might come back to it next quarter, or next year, or in five years. Whereas when you're in industry, you're connected to users, to the people at the end of the day who need to touch and hold and use the thing. And you have to get money from them to keep going, so you have a very different perspective on time and money and space and what's possible. And the real challenge in terms of connecting the two... you know, I didn't invent the idea of pace layers, but they operate at different pace layers. Academia is often intergenerational, whereas in industry you have to make enough money every quarter to keep the bank account from going below zero, or everybody goes home.

[00:03:17] Ben: Right. Was it Stewart Brand who invented pace layers?

[00:03:22] Peter: I believe it was Stewart Brand. Pace layers, yeah.

[00:03:25] Ben: I'd never put those two together, but I think about impedance mismatches between organizations a lot, and that really clicks with pace layers.

[00:03:39] Peter: Yeah, absolutely. And I think in a big way what we're doing at Ink & Switch, on some level, is trying to provide synchromesh between academia and industry, because academics are moving on a time scale, and with an ambition, that's hard for industry to match. But also academics, I think in computer science at least, often have a shortage of good understanding of what the real problems people are facing in the world today are. They're not disinterested.

[00:04:07] Ben: Just computer science?

[00:04:08] Peter: Those communication channels don't exist, because they don't speak the same language, they don't use the same terminology, they don't go to the same conferences, they don't read the same publications.

[00:04:18] Ben: Yeah.

[00:04:18] Peter: And vice versa: we find things in industry that are problems, and then you go read the papers and talk to some scientists and it's like, oh dang, we know how to solve this. It's just that nobody's built it.

[00:04:31] Ben: Yeah.

[00:04:32] Peter: Or, more accurately: there's a pretty good hunch here about something that might work, and maybe we can connect the two ends of this together.

[00:04:42] Ben: Yeah. Often I think of it as: someone has a quote-unquote solved problem, but there are a lot of quote-unquote implementation details, and those implementation details require a year of work.

[00:04:56] Peter: Yeah, a year, or many years, or an entire startup, or a whole career or two.

Ben: And speaking of Ink & Switch, it has been around for more than half a decade now, right?

[00:05:14] Peter: Yeah, seven or eight years now, I think. I could probably get the exact number, but about that.

[00:05:19] Ben: I don't have a good picture of what has changed over that time about Ink & Switch's conception of itself and how you do things. What are some of the biggest things that have changed?

[00:05:35] Peter: I think a lot of it could be summarized as professionalization, but I'll give a brief history. Ink & Switch began because the original members of the lab, Adam, James, and Orion, wanted to do a startup. But they weren't happy with computing and where computers were, and they knew they wanted to make something that would be a tool to help people who are solving the world's problems work better. That's kind of a vague one, but they were like: well, we're not physicists, we're not social scientists. We can't solve climate change or radicalization directly, or the journalism crisis or whatever, but maybe we can build tools, right? We know how to make software tools. Let's build tools for the people who are solving the problems.
Because right now, a lot of the systems they rely on are getting steadily worse every day. And I think they still are: the move to the cloud, disempowerment of the individual, surveillance technology, distraction technology. Tristan Harris is out there now hammering on some of these points. There's just a lot that is slow and fragile and bad and not fun to work with, and that loses your work product.

[00:06:51] Ben: Yeah, software as a service more generally.

[00:06:54] Peter: Yeah. And there are definitely advantages; it's not like people aren't rational actors. But something was lost. And so the idea was: go do a bit of research, figure out what the shape of the company is, then just start a company, get it all solved, and move on. And I think the biggest difference, aside from scale and actual knowledge, is the dawning realization at some point that there won't really be an end state to this problem. This isn't a transitional thing where you come in, do some research for a bit, figure out the answer, and then fold up the card table and move on to the next thing. It's: oh no, this thing has got to stick around, because these problems aren't going away, and when we get through this round of problems, we can already see what the next round is. That's probably going to go on for longer than any of us will be working. So the vision now, at least from my perspective as the current lab director, is much more: how can I get this thing to a place where it can sustain for 10 years, for 50 years, however long it takes? To become a place with a culture that can sustain, grow, and change as new people come in, but that can sustain operations indefinitely.

[00:08:07] Ben: Yeah. And so, to circle back to the jumping-off point for this: since it began, what have been some of the biggest changes in how you operate, or in the model more generally?

[00:08:30] Peter: The beginning was very informal, so maybe I'll skip over the first little period where we were just finding our footing. Around the time I joined, we were just four or five people, and we did one project, all of us together at a time. Someone would write a proposal for what we should do next, we would argue about whether it was the right next thing, and eventually we would pick a thing, go do that project, and bring in some contractors. We called it the Hollywood model (we still call it the Hollywood model) because it was structured like a movie production: to our little core team we'd bring in a couple of specialists, the equivalent of a director of photography or a casting director or whatever, the people you need to accomplish the task. Oh, we don't know how to do Bluetooth on the web? Okay, find a Bluetooth person. Oh, there's a bunch of cryptography stuff in this upcoming project? We'd better find somebody who knows the ins and outs of which cryptography algorithms to use, or how to build stuff in C# for the Windows platform or Surface, whatever the project required. Over time we got pretty good at that. And I think one of the biggest changes, after we figured out how to actually do the work, was the realization that writing about the work not only gave us a lot of leverage in terms of our visibility in the community and our ability to attract talent, but also that the more we put into the writing, the more we learned about the research. We used to do something, write a little internal report, and move on. But the process of taking the work we do and making it legible to the outside world, explaining why we did it, what it means, and how it fits into the bigger picture, being very diligent and thorough in documenting all of that, greatly increases our own understanding of what we did. That was a really pleasant and interesting surprise. One of my concerns as lab director is that we got really good at that, and we write all these obscenely long essays that people claim to read, and that Hacker News comments on extensively without reading. I always worry about the orthodoxy of doing the same thing too much and whether we're falling into patterns, so we're always tinkering with new project systems, new ways of working, new kinds of collaborations. That's ongoing. But the key elements of our system are these: we bring together a team that has both longer-term people with domain context about the research and any required specialists who understand the interesting or important technical aspects of the work; we have a specific set of goals to accomplish within a very strict time box; and when it's done, we write, and then we put it down. I think this avoids a number of the real pitfalls of more open-ended research. It has its own shortcomings, but one of the big pitfalls it avoids is meandering off and losing sight of what you're doing. You can get great results from that in a general research context.
But we're very much in an industrial research context: we're trying to connect real problems to specific directions for solving them. So the time box creates a kind of fear of death. You think: well, I don't want to run out of time and not have anything to show for it, so you get really focused on delivering things. Sometimes that comes at the cost of the breadth or ambition of a solution to a particular thing, but I think it helps us keep moving forward.

[00:12:21] Ben: Yeah. And you no longer have everybody in the lab working on the same projects, right?

[00:12:28] Peter: Yeah. Today the population of the lab fluctuates between about eight and 15 people, depending on whether we have a bunch of projects in full swing, or how you count contractors. At the moment we have three tracks of research: local-first software, programmable ink, and malleable software.

[00:12:54] Ben: Nice. I actually have questions both about the write-ups you do and about the Hollywood model. On the Hollywood model: do you think the Hollywood model working in an industrial research lab is particular to software, in the sense that in the software industry people change jobs fairly frequently, contracting is really common, and contractors are fairly fluid?

[00:13:32] Peter: You mean in terms of being able to staff and source people?

[00:13:35] Ben: Yeah. And people take long sabbaticals, right? It's not uncommon in the software industry for someone to take six months between jobs.

[00:13:45] Peter: I think it's very hard for me to generalize about the properties of other fields, so I want to be cautious in my evaluation here. What I would say is that the general principle, having a smaller core of longer-term people who think about and gain a lot of context on a problem, and pairing them with people who have fresh ideas and relevant expertise, does not require any particular industry structure. There are lots of ways of solving this problem. If you're in academia, go to another research organization and write a paper with someone from an adjacent field. If you're in a company, you can do a partnership, or hire. A lot of fields of science have much longer cycles: if you're doing materials science, it takes a long time to build test apparatus and to formulate chemistries.

[00:14:52] Ben: Yeah.

[00:14:52] Peter: So you might need someone for several years. That's fine: get a detachment from another part of the company, bring someone in as a secondment. The general principle of putting together a mixture of longer- and shorter-term people with the right set of skills stands; yes, we solve it a particular way in our domain, but I don't think that's unique to software.

[00:15:17] Ben: Would it be overreaching to map that onto professors, postdocs, and grad students, where the professor is the person who's been working on the program for a long time and has all the context, and then you have postdocs and grad students coming through the lab?

[00:15:38] Peter: Again, I need to be thoughtful about how I evaluate fields I'm less experienced with, but both my parents went through grad school and I've gotten to know a number of academics. My sense of the relationship between professors and their PhD students is that it's much more likely that the PhD students are given a piece of the professor's vision to execute.

[00:16:08] Ben: Yeah.

[00:16:09] Peter: And that's more about scaling the research interests of the professor. I don't mean this in a negative way, but I think it's quite different

[00:16:21] Ben: Different.

[00:16:22] Peter: than how DARPA works, or how Ink & Switch works with our research tracks. It's a bit more prescriptive, and a bit more of a mentor-mentee kind of relationship.

[00:16:33] Ben: Yeah. More training.

[00:16:35] Peter: Yeah, and that's great. Postdocs are a little different again, but I think that's different than, say, how DARPA works or other institutional research groups.

[00:16:49] Ben: Yeah. Okay. I wanted to see how far I could stretch the analogy.

[00:16:55] Peter: In academia there are famous stories about Erdős, who would turn up on your doorstep with a suitcase and a bottle of amphetamines and say "my brain is open," or something to that effect. And then you'd co-author a paper and pay his room and board until you found someone else to send him to. I think that's closer, in the sense that here's this great problem solver with a lot of domain skills, and he would parachute into a place where someone was working on something interesting and help them make a breakthrough with it.

[00:17:25] Ben: Yeah. The thing I want to figure out longer term is how to make those short-term collaborations happen. I think there's something Coasean there, in the sense of Ronald Coase on organizational boundaries, when you have people coming in and doing things in a temporary way.

[00:17:55] Peter: Yeah, academia is actually pretty good at this, with paper co-authors. Again, this is the pace layers thing.
When you have a whole bunch of people organized in an industry and a company around a particular outcome, you tend to have very specific goals and commitments that you're trying to execute against, and it's much harder to get that more fluid movement between domains.

[00:18:18] Ben: Yeah.

[00:18:21] Peter: That's why I left working in companies. I have run engineering processes and built products and teams, and someone comes to me with a really good idea and I'm like: that's potentially very interesting, but...

[00:18:33] Ben: But we...

[00:18:34] Peter: We've got customers with outages who are going to leave if we don't fix the thing; we've got users falling out of our funnel because we don't do basic stuff. You just really have a lot of work to do to make the thing go as a business. And my experience of research labs within businesses is that they're almost universally unsuccessful. There are exceptions, but I think they're more coincidental than designed.

[00:19:03] Ben: Yeah. And less and less successful over time, is my observation.

[00:19:11] Peter: Interesting.

[00:19:12] Ben: Yeah, there's a great paper I will send you, "The Changing Structure of American Innovation" by Ashish Arora. I actually did a podcast with him because I liked the paper so much. And so, going back to your amazing write-ups: you all have clearly invested quite a chunk of time and resources into internal infrastructure for making those really good, and I wanted to get a sense of how you decide when it's worth investing in internal infrastructure for a lab.

[00:19:58] Peter: Ooh, that's a fun question. At Ink & Switch it's always been sort of demand-driven. I wish I could claim to be more strategic about it, but we had all these essays, and at one point they were actually all hand-coded HTML. Real indie cred there. But it was a real pain when you needed to fix or change something, because you had to go edit all this HTML. So at some point, when we were doing a smaller project, I built a little Hugo templating thing just to do some lab notes. And I guess this is maybe a somewhat common pattern: you do one in a one-off way, and then, if it's promising, you invest more in it.

[00:20:46] Ben: Yeah.

[00:20:46] Peter: And it ended up being a bigger project to build a full-on... I mean, it's not really a CMS, it's sort of a CMS: a templating system that produces static HTML. It's what all our essays come out of. But there's also a big investment in just design and styling. Frankly, I think one of the things that sets Ink & Switch apart from other people doing similar work in the space is that we really put a lot of work into the presentation of our work. Beyond writing very carefully, we also care a lot about picking good colors, making sure that text hyphenates well, that the screencast has the right dimensions, all that little detail work. It's expensive in time and money, but I think the results speak for themselves. I think it's worth it.

[00:21:47] Ben: Yeah. I mean, if the ultimate goal is to influence what people do and what they think, which I suspect is at least some of the goal, then communicating it...

[00:22:00] Peter: It's much easier to change somebody's mind than to build an entire company.

[00:22:05] Ben: Yes. Well...

[00:22:06] Peter: It depends. Well, you don't have to change everybody's mind, right?
Changing an individual person's mind might be impossible. But if you can put the right ideas out there in the right way, and make them legible, then hopefully you'll change somebody's mind, and it will be the right somebody.

[00:22:23] Ben: Yeah, that is definitely true. Another thing I am exceedingly impressed by is Ink & Switch's thoughtfulness around how you structure your community and tap into it. Would you walk me through how you think about that, and the different layers of involvement?

[00:22:53] Peter: Okay. Maybe I'll work from the inside out, because that's the history of it. In the beginning there were just the people who started the lab. Over time they recruited me and Mark McGranaghan and some of our other folk to sign on for this crazy thing, and we started working with these wonderful contractors off and on. So the initial group was quite small and quite insular, and we didn't publish anything. What we found was that just that alone, the act of bringing people in and working with them, started to create the beginning of a community, because people would come into a project with us, they'd infect us with some of their ideas, and we'd infect them with some of ours. So you started to have this little bit of shared context with your past collaborators. And because we have this mix of longer-term people who stick with the lab and other people who come and go, you start to build up a pool of people you share ideas and language with. Over time we started publishing our work, and we began having what we call workshops, where we invite people to come and talk about their work at Ink & Switch. And by "at," I mean it's on a Discord now; back in the day it was a Skype or Zoom call or whatever. The rule in the early days was: if you want to come to the talk, you have to have given a talk or have worked at the lab. So there was a very good signal-to-noise ratio in attendance, because the only people on the Zoom call would be people you knew were grappling with those problems for real. No looky-loos, no audience, right? Over time, there were just too many really good, interesting people doing the work to fit in all those workshops, and actually scheduling workshops is quite tiring and takes a lot of energy. So we started to expand the community a little further, and now our principle is: if you're doing the work, you're welcome to come to the workshops. We still invite some people to give workshops sometimes, but now we have this small private chat group of really interesting folk. It's not open to the public, generally, because again, I don't want an audience; I want it to be a practitioners' space. Those people have been really influential on us as well. And that little inner circle, which is a few hundred people now: if you have a question to ask about something tricky, there's probably somebody in there who has tried it. But more significantly, the answer will come from somebody who has tried it, not from somebody who will call you an idiot for trying. You avoid all the "don't read the comments" problems. If anybody were like that, I would probably ask them to leave, but we've been fortunate not to have any of that in the community. I will say, though, that I struggle a lot with this, because it's hard to be both exclusive and inclusive. It's an exclusive community, deliberately, in the sense that I want it to be a practitioners' space, one where people can be wrong and it's not too performative: there are no investors watching, or your user base, or whatever.

[00:26:32] Ben: Yeah. Strangers.

[00:26:34] Peter: But at the same time, I want it to be an inclusive space where we have people who are earlier in their careers, or from non-traditional backgrounds, academically or culturally or so on. And it takes constant work to keep networking out, meeting new people, and inviting them into this space. So it's always an area to keep working on. At some point I think we will want to open the aperture further, but it's a delicate thing to build a community.

[00:27:07] Ben: Yeah. Frankly, the reason I'm asking is that I'm trying to figure out the same things, and you have done it better than basically anybody else I've seen. This is maybe getting too far down into the weeds, but why did you decide that Discord was the right tool for it? The reason I ask is that I personally hate streaming walls of text, and I find it very hard to seriously discuss ideas in that format.

[00:27:43] Peter: Yeah. I mean, I'm an old-school mailing-list guy. On some level it's just a pragmatic thing. We use Discord for our internal day-to-day operations: hey, did you see the PR? We've got a call in an hour with so-and-so, whatever. And we had a bunch of people in that community, and then we started having the workshops and inviting more people. So we created a space in that same Discord where people didn't have to get pinged when we had a lab call (we didn't want them turning up on the Zoom anyway). So it wasn't so much a deliberate decision to be that space.
I think there's a huge opportunity to do better and you know, frankly, what's there is [00:28:35] not as designed or as deliberate as I would like. It's more consequence of Organic growth over time and just like continuing to do a little bit here and there than like sort of an optimum outcome. And it could, there, there's a lot of opportunity to do better. Like we should have newsletters, there should be more, you know, artifacts of past conversations with better organizations. But like all of that stuff takes time and energy. And we are about a small little research lab. So many people you know, [00:29:06] Ben: I, I absolutely hear you on that. I think the, the, the tension that I, I see is that people, I think like texting, like sort of stream of texts. Slack and, and discord type things. And, and so there's, there's the question of like, what can you get people to do versus like, what creates the, the right conversation environment?[00:29:35] And, and maybe that's just like a matter of curation and like standard setting. [00:29:42] Peter: Yeah, I don't know. We've had our, our rabbit trails and like derailed conversations over the years, but I think, you know, if you had a forum, nobody would go there. [00:29:51] Ben: Yeah. [00:29:52] Peter: like, and you could do a mailing list, but I don't know, maybe we could do a mailing list. That would be a nice a nice form, I think. But people have to get something out of a community to put things into it and you know, you have to make, if you want to have a forum or, or an asynchronous posting place, you know, the thing is people are already in Discord or slack. [00:30:12] Ben: exactly. [00:30:13] Peter: something else, you have to push against the stream. Now, actually, maybe one interesting anecdote is I did experiment for a while with, like, discord has sort of a forum post feature. They added a while back [00:30:25] Ben: Oh [00:30:25] Peter: added it. Nobody used it. So eventually I, I turned it off again. 
Maybe it just needs revisiting, but it surprised me that it wasn't adopted, I guess is what [00:30:35] I would say. [00:30:36] Ben: Yeah. I mean, I think the problem is it takes more work. It's very easy to just dash off a thought. [00:30:45] Peter: Yeah, but I think if you have the right community, then those thoughts are likely to have been considered, and the people who reply will speak from knowledge, [00:30:55] Ben: Yeah. [00:30:56] Peter: and then it's not so bad, right? [00:30:59] Ben: It's [00:30:59] Peter: The problem is with Hacker News or Reddit or any of these open communities, the person who's most likely to reply is not the person who's most helpful to reply. [00:31:11] Ben: Yeah, exactly. Yeah, that makes a lot of sense. And sort of switching tracks yet again: remind me how long your projects are, like, how big is the time box? [00:31:28] Peter: The implementation phase for a standard Ink & Switch Hollywood project, which I can now call them standard, I think, cuz we've done, like, [00:31:35] ooh, let me look, 25 or so over the years. Let's see, what's my project count number at? I have a little tracker. Yeah, I think it's 25 today. So we've done some non-trivial number of these. 10 to 12 weeks of implementation is sort of the core of the project, and the idea is that when you hit that start date, at the beginning of that, you should have the team assembled, you should know what you're building, you should know why you're building it, and you should know what done looks like. Now, it's research, so inevitably, you know, you get two weeks in and then you take a hard left and, like, you know. But we write what's called the brief up front, which is: what is the research question we are trying to answer by funding this work, and how do we think this project will answer it?
Now, your actual implementation might change, or you might discover targets of opportunity along the way. But the idea is that by having a narrow time box, like a team [00:32:35] that has a clear understanding of what you're trying to accomplish, and the right set of people on board who already have all the necessary skills, you can execute really hard for that 10 to 12 weeks and get quite far in that time. Now, that's not the whole project, though. There's usually a month or two up front of what we call pre-infusion, kind of coming from the espresso idea that you make better espresso if you take a little time at low pressure first to get ready with the shot. And so, you know, and duration varies here, but there's a period before that where we're making technical choices. Are we building this for the web, or is this going on iPad? Are we gonna do this with Rust and WebAssembly, or is this TypeScript? Are we buying Microsoft Surface tablets for this because we like the ink behavior, right? So all those decisions we try and make up front, so when you hit the execution phase, you're ready to go. What kind of designer do we want to include in this project, and who's available, you know? All of that stuff we [00:33:35] try and square away before we get to the execution phase. [00:33:38] Ben: Right. [00:33:38] Peter: And when we get to the end of the execution phase, we try to be very strict with, like, last day, pencils down, and try to also reserve the last week or two for polish and cleanup and sort of getting things squared away. So it's really two to two and a half, sometimes three months, that is actually the time you have to do the work. And then after that, essays can take between, like, two months and a year or two to finally produce. But we try to have a good first draft within a month after the end of the project.
And again, this is a process that's probably not optimal, but basically someone on the team winds up being the lead writer, and we should be more deliberate about that, but usually the project lead for a given project ends up being the essay writer. And they write a first draft with input and collaboration from the rest of the group. And then people around [00:34:35] the lab read it and go, this doesn't make any sense at all, like, what? What do you do? And, you know, to varying degrees. And then it's sort of okay, right? Once you've got that kind of feedback, then you go back and you restructure it and go, oh, I need to explain this part more, or, oh, these findings don't actually cover the stuff that other people at the lab thought was interesting from the work, or whatever. And then that goes through an increasing, sort of, you know, standard of writing, right? You send it out to some more people, and then you send it to a bigger group, and, you know, we send it to people who are in the field whose input we respect. And then we take their edits and we debate which ones to take. And then eventually it goes in the HTML template, and then there's a long process of hiring an external copy editor and building nice quality figures and re-recording all your crappy screencasts to be really crisp with nice lighting and good, you know, pacing. And, you know, then finally at the end of all of that, we publish. [00:35:33] Ben: Nice. And [00:35:35] how did you settle on the 10 to 12 weeks as the right size time box? [00:35:42] Peter: Oh, it's clearly rationally optimal. [00:35:46] Ben: Ah, of course. [00:35:47] Peter: No, I'm kidding. It's totally just, it became a habit. I mean, I can give an intuitive argument, and we've experimented a bit. You know, two weeks is not long enough to really get into anything, [00:36:02] Ben: Right. [00:36:02] Peter: and a year is too long.
There's too much opportunity to get lost along the way. You go too long with no real deadline pressure, and it's very easy to kind of wander off into the woods. And bear in mind that the total project duration is really more like six months, right? And where we kind of landed is also that we often have grad students or, you know, people who are between other contracts or things. It's much easier to get people for three months than for eight months. And [00:36:35] just intuitively, if someone came to me with an eight-month project, I'm almost positive that I would be able to split it into two three-month projects, and we'd be able to find a good break point somewhere in the middle, and then write about that and do another one. And it's like, this is sort of a bigger-or-smaller-than-a-breadbox argument, but, you know, a month is too little and six months feels too long. So two to four months feels about right in terms of letting you really get into the meat of a problem. You can try a few different approaches, you can pick your favorite, and then spend a bit of time analyzing it and working out the kinks, and then you can write it up. [00:37:17] Ben: Thanks. [00:37:18] Peter: But, you know, there have been things that haven't fit in that, and we're doing some stuff right now that, you know, we've had a, like, six-month-long pre-infusion going this year already on some ink stuff. So it's not a universal rule, but that's the, that's the... [00:37:33] Ben: Yeah. No, I [00:37:35] appreciate that intuition. [00:37:36] Peter: And I think it also ties into being software again, right? Like, again, if you have to go and weld things and, like, [00:37:43] Ben: Yeah, exactly. [00:37:44] Peter: you know, [00:37:44] Ben: let some bacteria grow. [00:37:46] Peter: it's very much a domain-specific answer.
[00:37:51] Ben: Yeah. Something that I wish people talked about more was, like, characteristic time scales of different domains. And I think that's software, I mean, software is obviously shorter, but it'd be interesting to sort of dig down and be like, okay, what actually is it? So the last question I'd love to ask is: to what extent does everybody in the lab know what everybody else is working on? [00:38:23] Peter: So we use two tools for that. We could do a better job of this. Every Monday the whole lab gets together for half an hour only, [00:38:35] and basically says what they're doing. Like, what are you up to this week? Oh, we're trying to figure out what's going on with that, you know, stylus shape problem we were talking about at the last demo. Or, oh, we're in essay-writing mode, we're hoping to get the first draft done this week. Or, you know, just whatever high-level kind of objectives the team has. And then I always ask the question, like, well, do you expect to have anything for show and tell on Friday? And every week on Friday we have show and tell, or every other week, I'll talk a bit more about that. And at show and tell, it's whatever you've got that you want input on, or just a deadline for, you can share. Made some benchmark showing that this code is now a hundred times faster? Great, bring it to show and tell. Got that tricky, you know, user interaction running real smooth? Bring it to show and tell. Built a whole new prototype of a new kind of [00:39:35] notetaking app? Awesome, come and see. And different folks and different projects have taken different approaches to this. What has been most effective, I'm told by a bunch of people, in their opinion now, is kind of approaching it like a little mini conference talk. I personally actually err more on the side of a more casual and informal thing, and those can be good too.
Just from, like, a personal alignment, getting-things-done perspective. What I've heard from people doing research who want to get useful feedback is that when they go in having sort of rehearsed how to explain what they're doing, then how to show what they've done, and then what kind of feedback they want, not only do they get really good feedback, but also that process of making sure that the demo you're gonna do will actually run smoothly and be legible to the rest of the group [00:40:35] forces you, again, just like the writing, it forces you to think about what you're doing and why you made certain choices, and to think about which ones people are gonna find dubious, and tell them to either ignore that cuz it was a stand-in, or let's talk about that cuz it's interesting. And that little cycle is really good. And that tends to be, people often come every two weeks for that [00:40:59] Ben: Yeah. [00:41:01] Peter: when they're in active sort of mode. And so not always, but two weeks feels about like the right cadence to have something. And sometimes people will come and say, I got nothing this week, let's do it next week. It's fine. And the other thing we do with that time is we alternate what we call zoom-outs, because they're on Zoom and I have no sense of humor, I guess. But they're based on the old "You and Your Research" Hamming paper, where the idea is that, at least for a little while, every week [00:41:35] we all get together and talk about something bigger picture that's not tied to any of our individual projects. Sometimes we read a paper together, sometimes we talk about an interesting project somebody saw, you know, in the world. Sometimes it's skills sharing. Sometimes it's, you know, just, here's how I make coffee or something, right? Like, you know, just anything that is bigger picture or out of the day-to-day, philosophical stuff.
We've read Illich and Ursula Franklin. People love it. [00:42:10] Ben: I like that a lot. And one thing that I'm still wondering about is, on sort of a technical level, are there some parts of the lab working on things that other parts of the lab don't get? Like, they know, oh, this person's working on [00:42:35] ink, but they kind of have no idea how ink actually works? Or is it something where everybody in the lab can have a fairly detailed technical discussion with anybody else? [00:42:45] Peter: Oh no. I mean, okay, so there are interesting interdependencies. So some projects will consume the output of past projects or build on past projects. And that's interesting cuz it can create almost, like, industry-style production dependencies, where one team wants to go be doing some research, the local-first people are trying to work on a project, somebody else is using Automerge and they have bugs, and it's like, oh. But again, this is why we have those Monday sort of conversations, right? But I think the teams are all quite independent. They have their own GitHub repositories, they make their own technology decisions, they use different programming languages, they build on different stacks, right? Like, the ink team is often building for iPad, because that's the only place we can compile, like, [00:43:35] ink rendering code to get low enough latency to get the experiences we want. We've given up on the browser; we can't do it. But the local-first group, for various reasons, has abandoned Electron and all of these run times and mostly just builds stuff for the web now, because it actually works, and you spend way less calories trying to make the damn thing go if you don't have to fight Xcode and all that kind of stuff.
And again, so it really varies, and people choose different things at different times, but no, it's not like we are doing code review for each other or getting into the guts. It's much more high level. Like, you know, why did you make that, you know, what is your programming model for this canvas you're working on? How does this thing relate to that thing? Why does that lay out horizontally? It feels hard to parse the way you've shown that, you know, whatever. [00:44:30] Ben: Okay, cool. That makes sense. The reason I ask [00:44:35] is I am just always thinking about how related do projects inside of a single organization need to be. Like, is there sort of an optimum amount of relatedness? [00:44:50] Peter: I view them all as aspects of the same thing, and I think that's an important thing we didn't talk about. The goal of Ink & Switch is to give rise to a new kind of computing that is more user-centric, that's more productive, that's more creative in, like, a very raw sense. We want people to be able to think better thoughts, to produce better ideas, to make better art, and for computers to help them with that in ways that they aren't, and in fact are... [00:45:21] Ben: Yeah. [00:45:25] Peter: Whether you're working on ink, or local-first software, or malleable software, media canvases, or whatever domain you are working in, it [00:45:35] is the same thing. It is an ingredient, it is an aspect, it is a dimension of one problem. And so in some sense, all of this adds together to make something. Whether it's one thing or a hundred things, whether it takes five years or 50 years, you know, we're all going to the same place together, but on many different paths and at different speeds and with different confidence, right? And so in the small, these things can be totally unrelated, but in the large, they all are part of one mission.
And so when you say, how do you bring these things under one roof, when should they be under different roofs? It's like, well, when someone comes to me with a project idea, I ask, do we need this to get to where we're going? [00:46:23] Ben: Yeah, [00:46:24] Peter: And if we don't need it, then we probably don't have time to work on it because there's so much to do. And you know, there's a certain openness to experimentation and, [00:46:35] and uncertainty there. But that, that's the rubric that I use as the lab director is this, is this on the critical path of the revolution?…
1 MACROSCIENCE with Tim Hwang [Idea Machines #49] 57:19
A conversation with Tim Hwang about historical simulations, the interaction of policy and science, analogies between research ecosystems and the economy, and so much more.

Topics:
Historical simulations
Macroscience
Macro-metrics for science
Long science
The interaction between science and policy
Creative destruction in research
"Regulation" for scientific markets
Indicators for the health of a field or science as a whole
"Metabolism of science"
Science rotation programs
Clock speeds of regulation vs. clock speeds of technology

References:
Macroscience Substack
Ada Palmer's papal simulation
Think Tank Tycoon
Universal Paperclips (paperclip-maximizer HTML game)
Pitt Rivers Museum

Transcript

[00:02:02] Ben: Wait, so tell me more about the historical LARP that you're doing. [00:02:07] Tim: Oh, yeah. So this comes from something I've been thinking about for a really long time. You know, in high school I did Model UN and Model Congress, and, you know, actually, this is still on my to-do list, is to look into the back history of what it was in American history where we're like, this is going to become an extracurricular, we're going to model the UN. It has all the vibe of, like, after World War II, the UN is a new thing, we've got to teach kids about international institutions. Anyways, it started as a joke where I was telling my [00:02:35] friend, like, we should have model administrative agency. You know, kids should do, like, model EPA. Like, we're gonna do a rulemaking, kids need to submit, and, like, you know, there'll be Chevron deference and you can challenge the rule, and do that whole thing. Anyways, it kind of led me down this idea that our notion of simulation, particularly for institutions, is interestingly narrow, right?
And particularly when it comes to historical simulation, where like, well we have civil war reenactors, they're kind of like a weird dying breed, but they're there, right? But we don't have like other types of historical reenactments, but like, it might be really valuable and interesting to create communities around that. And so like I was saying before we started recording, is I really want to do one that's a simulation of the Cuban Missile Crisis. But like a serious, like you would like a historical reenactment, right? Yeah. Yeah. It's like everybody would really know their characters. You know, if you're McNamara, you really know what your motivations are and your background. And literally a dream would be a weekend simulation where you have three teams. One would be the Kennedy administration. The other would be, you know, Khrushchev [00:03:35] and the Presidium. And the final one would be the, the Cuban government. Yeah. And to really just blow by blow, simulate that entire thing. You know, the players would attempt to not blow up the world, would be the idea. [00:03:46] Ben: I guess that's actually the thing to poke, in contrast to Civil War reenactment. Sure, like you know how [00:03:51] Tim: that's gonna end. Right, [00:03:52] Ben: and it, I think it, that's the difference maybe between, in my head, a simulation and a reenactment, where I could imagine a simulation going [00:04:01] Tim: differently. Sure, right. [00:04:03] Ben: Right, and, and maybe like, is the goal to make sure the same thing happened that did happen, or is the goal to like, act? faithfully to [00:04:14] Tim: the character as possible. Yeah, I think that's right, and I think both are interesting and valuable, right? 
But I think one of the things I'm really interested in is, you know, I want to simulate all the characters, but I think one of the most interesting things reading, like, the historical record is just operating under deep uncertainty about what's even going on, right? Like, for a period of time, the American [00:04:35] government is not even sure what's going on in Cuba, and, like, you know, this whole question of, well, do we preemptively bomb Cuba? We don't even know if the warheads on the island are active. And I think I would want to create similar uncertainty, because I think that's where the strategic vision comes in, right? That you have the full pressure of, like, maybe there's bombs on the island, maybe there's not even bombs on the island, right? And kind of creating that dynamic. And so I think simulation is where there's a lot, but I think even reenactment for some of these things is sort of interesting. Like, we talk a lot about, oh, the Cuban Missile Crisis. Or the other joke I had was, we should do the Manhattan Project, but the Manhattan Project as, like, historical reenactment, right? And it's kind of like, you know, we have these very off-the-cuff or kind of stereotyped visions of how these historical events occur. And they're very stylized. Yeah, exactly, right. And so the benefit of a reenactment that is really in detail [00:05:35] is like, oh yeah, there's this one weird moment, you know, that ends up being a really revealing historical example. And so even if you can't change the outcome, I think there's also a lot of value in just doing the exercise. Yeah. [00:05:40] Ben: The thought of: in order to drive towards this outcome that I know actually happened, I wouldn't, as the character, have needed to do X. That's, like, a weird, nuanced, unintuitive thing, [00:05:50] Tim: right?
Right, and there's something I think about even building into the game, right, which is at the very beginning the Russian team can make the decision on whether or not they've even actually deployed weapons into Cuba at all, yeah, right. And so I love that kind of outcome, right, and I think that's great, because a lot of this happens on the background of, we know the history. Yeah. Right? And so I think having the US team put under some pressure of uncertainty, yeah, about, like, oh yeah, they could have made the decision at the very beginning of this game that this is all a bluff, doesn't mean anything. Like, it's potentially really interesting and powerful, so. [00:06:22] Ben: One precedent I know for this, completely different historical era, but there's a historian, Ada Palmer, who runs [00:06:30] Tim: a simulation of a papal election in her class every year. That's so good. [00:06:35] And [00:06:36] Ben: it's, there, you know, like, it is not a simulation. [00:06:40] Tim: Or, [00:06:41] Ben: sorry, excuse me, it is not a reenactment, in the sense that the outcome is indeterminate. [00:06:47] Tim: Like, the students [00:06:48] Ben: can determine the outcome. But... what tends to happen is that structural factors emerge, in the sense that there's always a war. Huh. The question is who's on which sides of the war? Right, right. And what do the outcomes of the war actually entail? That's right. Who [00:07:05] Tim: dies? Yeah, yeah. And I [00:07:07] Ben: find that it sort of gets at the heart of the great [00:07:12] Tim: man theory versus the structural forces theory. That's right. Yeah. Like, how much can these structural forces actually be changed? Yeah.
And I think that's one of the most interesting parts of the design that I'm thinking about right now: what are the things that you want to randomize to impose different types of structural factors that could have been in that event? Right? Yeah. So one of the really big parts of the debate at ExComm in the [00:07:35] early phases of the Cuban Missile Crisis is, you know, McNamara, who, right, he runs the Department of Defense at the time, his point is basically, look, whether you have bombs in Cuba or you have bombs in Russia, the situation has not changed from a military standpoint. Like, you can fire an ICBM; it has exactly the same implications for the U.S. And so that's basically his argument in the opening phases of the Cuban Missile Crisis. Yeah. Which is actually pretty interesting, right? Because that's true. But Kennedy can't just go to the American people and say, well, we've already had missiles pointed at us; some more missiles off, you know, the coast of Florida is not going to make a difference. Yeah. And so that deep politics, and particularly the politics of the Kennedy administration being seen as weak on communism, yeah, is a huge pressure on all the activity that's going on. And so it's almost kind of interesting thinking about the Cuban Missile Crisis not as, you know, us about to blow up the world because of a truly strategic situation, but more because the local politics make it so difficult to create, you know, situations where both sides can back down [00:08:35] successfully, basically. Yeah. [00:08:36] Ben: The one other thing that my mind goes to, actually, to your point about model UN in schools, huh, right, is: okay, what if you use this as a pilot, and then you get people to do these [00:08:49] Tim: simulations at [00:08:50] Ben: scale. Huh. And that's actually how we start doing historical counterfactuals. Huh.
Where you look at, okay, you know, a thousand schools all did a simulation of the Cuban Missile Crisis, and in those, you know, 700 of them blew [00:09:05] Tim: up the world. Right, right. [00:09:07] Ben: And I think that's the closest [00:09:10] Tim: thing you can get to, like, running the tape again. Yeah. I think that's right. And yeah, so I think it's a really underused medium in a lot of ways. And particularly, just talking pedagogically, it's interesting that it seems to me that there was a moment in American pedagogical history where, like, this is a good way of teaching kids about different types of institutions. But it [00:09:35] hasn't really matured since that point, right? Of course, we live in all sorts of interesting institutions now, and under all sorts of different systems that we might really want to simulate. Yeah. And so, yeah, there's this whole idea that there's lots of things you could teach if we kind of opened up this way of thinking about, like, educating about institutions. Right? So [00:09:54] Ben: that is so cool. Yeah, I'm going to completely, [00:09:59] Tim: Change. Sure. Of course. [00:10:01] Ben: So I guess, and the answer could be no, but is there a connection between this and your sort of newly launched Macroscience [00:10:10] Tim: project? There is and there isn't. Yeah, you know, I think the whole bid of Macroscience, which is this project that I'm doing as part of my IFP fellowship, yeah, is really the notion that, okay, we have all these sort of interesting results that have come out of metascience that kind of give us the beginnings of a shape of, like, okay, this is how science might work and how we might get progress to happen. And, you know, we've got [00:10:35] a bunch of really compelling hypotheses. Yeah.
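Ben's "a thousand schools ran it, 700 blew up the world" idea is, in effect, Monte Carlo aggregation over repeated runs of the same scenario. A minimal sketch of that aggregation step, where the simulation itself is a toy stand-in with entirely invented branch probabilities and outcome labels (a real classroom run would be the black box here):

```python
import random
from collections import Counter

def run_crisis_simulation(seed: int) -> str:
    """One toy run of a Cuban Missile Crisis simulation.

    Everything inside is hypothetical: the bluff probability, the
    escalation threshold, and the outcome labels are illustrative
    stand-ins for whatever a real simulation would produce.
    """
    rng = random.Random(seed)
    # Hypothetical opening decision: the Soviet team may be bluffing.
    bluff = rng.random() < 0.3
    escalation = rng.random()
    if bluff:
        return "peaceful resolution"
    if escalation > 0.7:
        return "nuclear exchange"
    return "negotiated withdrawal"

# "Running the tape again": aggregate outcomes across many independent
# runs to get an empirical counterfactual distribution.
outcomes = Counter(run_crisis_simulation(seed) for seed in range(1000))
for outcome, count in outcomes.most_common():
    print(f"{outcome}: {count}/1000")
```

The interesting design choice is that each run is independently seeded, so the outcome frequencies, not any single run, are the object of study.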
And I guess my bit has been, I kind of look at that and I squint and I'm like, we're actually kind of in the early days of, like, macro econ, but for science, right? Which is, okay, well, now we have some sense of the dynamics of how the science thing works. What are the levers that we can start pushing and pulling, and what are the dials we could be turning up and turning down? And, you know, I think there is this kind of transition that happens in macro econ, which is, we have these interesting results and hypotheses, but there's almost another generation of work that needs to happen into being like, oh, you know, we're gonna have this thing called the interest rate, yeah, and then we have all these ways of manipulating the money supply, and this is a good way of managing this economy. Yeah, right. And I think that's what I'm chasing after with this kind of Substack, but hopefully the idea is to build it up into a more coherent kind of framework of ideas about how do we make science policy work in a way that's better than just, like, more science now quicker, please? Yeah, right, which is where I think we're [00:11:35] very much at at the moment. Yeah. And in particular I'm really interested in the idea of chasing after science almost as, like, a dynamic system, right? Which is that the policy levers that you have, you would want to, you know, tune up and tune down, strategically, at certain times, right? Just like the way we think about managing the economy, right? Where you're like, you don't want the economy to overheat, you don't want it to be moving too slow either, right? Like, I am interested in those types of dynamics that need to be managed in science writ large. And so that's kind of the intuition of the project. [00:12:04] Ben: Cool.
I guess, like, looking at macro econ, [00:12:14] Tim: how did we even decide [00:12:21] Ben: that the things that we're measuring are the right things to measure? Right? Like, isn't it kind of a historical contingency that, you know, we care about GDP [00:12:27] Tim: and the interest rate? Yeah. I think that's right. I mean, in some ways there's a triumph of, like, it's a normative triumph, [00:12:35] right, I think is the argument. And, you know, you hear this argument, and it'll be like, ah, all econ is made up. But I don't actually think that that's the direction I'm moving in. It's true, a lot of the things that we selected are arguably arbitrary. Yeah. Right? Like we said, okay, we really value GDP because it's a very imperfect but rough measure of, like, the economy, right? Yeah. Or, like, oh, we focus on, you know, the money supply, right? And I think there's kind of two interesting things that come out of that. One of them is, there's this normative question of, okay, what are the building blocks that we think can really shift the financial economy writ large, right, of which money supply makes sense, right? But then the other one, which I think is so interesting, is there's a need to actually build all these institutions that actually give you the lever to pull in the first place, right? Like, without a Federal Reserve, it becomes really hard to do monetary policy. Right. Right? Like, without a notion of fiscal policy, it's really hard to do, like, Keynesian, demand-side stuff. Right. Right? And so I think there's another project, which is a [00:13:35] political project, to say: okay, can we do better than just grants? Like, can we think about this in a more holistic way than simply we give money to the researchers to work on certain types of problems?
And so this kind of leads to some of the stuff that I think we've talked about in the past, which is, you know, I'm obsessed right now with: can we influence the time horizon of scientific institutions? Like, imagine for a moment we had a dial where, on average, scientists are going to be thinking about a research agenda which is 10 years from now versus next quarter. Right. Like, and I think there's benefits and deficits to both of those settings. Yeah. But man, if I don't hope that we have a government system that allows us to kind of dial that up and dial that down as we need it. Right. Yeah. [00:14:16] Ben: The, perhaps, I guess a question of, like, where the analogy holds and breaks down that I wonder about is: when you're talking about the interest rate for the economy, it kind of makes sense to say [00:14:35] what is the time horizon that we want financial institutions to be thinking on. That's roughly what the interest rate is for. But, and maybe this is, like, I'm too, [00:14:49] Tim: I know, like, I'm too close to the macro, [00:14:51] Ben: but thinking about the fact that you really want people doing science on, like, a whole spectrum of timescales. And, like, this is an ill-phrased question, [00:15:06] Tim: but, like, I'm just trying to wrap my mind around it. Are you saying, basically, do uniform metrics make sense? Yeah, exactly. For, [00:15:12] Ben: like, timescale. I guess maybe it just is an aggregate thing. [00:15:16] Tim: Is that? That's right. Yeah, I think that's a good critique. And again, I think there's definitely ways of taking the metaphor too far. Yeah. But one of the things I would say back to that is: it's fine to imagine that we might not necessarily have an interest rate for all of science, right?
So, like, you could imagine saying, [00:15:35] okay, for grants above a certain size, like, we want to incentivize certain types of activity. For grants below a certain size, we want different types of activity. Right, another way of slicing it is for this class of institutions, we want them to be thinking on these timescales versus those timescales. Yeah. The final one I've been thinking about is another way of slicing it is, let's abstract away institutions and just think about what is the flow of all the experiments that are occurring in a society? Yeah. And are there ways of manipulating, like, the relative timescales there, right? And that's almost like, kind of like a supply-based way of looking at it, which is... all science is doing is producing experiments, which is like true macro, right? Like, I'm just like, it's almost offensively simplistic. And then I'm just saying like, okay, well then, yeah, what are the tools that we have to actually influence that? Yeah, and I think there's lots of things you could think of. Yeah, in my mind. Yeah, absolutely. What are some, what are some that you're thinking of? Yeah, so I think, like, the two that I've been playing around with right now, one of them is, like, the idea of changing the flow of grants into the system. So, one of the things I wrote about in Macroscience just the past week was to think [00:16:35] about, like, sort of what I call long science, right? And so the notion here is that, like, if you look across the scientific economy, there's kind of this rough, like, correlation between size of grant and length of grant. Right, where so basically what it means is that, like, long science is synonymous with big science, right? You're gonna do a big ambitious project. Cool.
You need lots and lots and lots of money. Yeah. And so my kind of, like, piece just briefly kind of argues, like, but we have these sort of interesting examples, like, you know, the Framingham Heart Study, which are basically low expense, taking place over a long period of time, and you're like, we don't really have a whole lot of grants that look like that. Yeah. Right? And so the idea is like, could we encourage that? Like, imagine if we could just increase the flow of those types of grants, that means we could incentivize more experiments that take place at low cost over the long term. Yeah. Right? Like, you know, and this kind of gets to this sort of interesting question, which is like, okay, so what's the GDP here? Right? Like, or is that a good way of cracking some of the critical problems that we need to crack right now? Right? Yeah. And it's kind of where the normative part gets into [00:17:35] it is like, okay. So, you know, one way of looking at this is the national interest, right? We say, okay, well, we really want to win on AI. We really want to win on, like, bioengineering, right? Are there problems in that space where, like, really long term, really low cost is actually the kind of activity we want to be encouraging? The answer might be no, but I think, like, it's useful for us to have that color in our palette of things that we could be doing. Yeah. In, like, shaping the, the dynamics of science. Yeah. Yeah. [00:18:01] Ben: I, I mean, one of the things that I feel like is missing from the the meta science discussion Mm-Hmm. is, is even just, what are those colors? Mm-Hmm. Like, what, what are the, the different and almost parameters of [00:18:16] Tim: of research. Yeah. Right, right, right.
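Tim's "long science" point is essentially a claim about a two-by-two: grant size versus grant duration, with the low-cost, long-duration quadrant (the Framingham-style grants) nearly empty. A toy sketch of the kind of portfolio check a funder could run — the sample grants and the cutoffs here are entirely hypothetical, just to make the quadrant idea concrete:

```python
from collections import Counter

# Hypothetical cutoffs; a real funder would pick these empirically.
def quadrant(grant, size_cutoff=1_000_000, years_cutoff=7):
    """Classify a grant by (size, duration) relative to arbitrary cutoffs."""
    big = grant["dollars"] >= size_cutoff
    long_ = grant["years"] >= years_cutoff
    return ("big" if big else "small") + "/" + ("long" if long_ else "short")

# Invented sample portfolio.
portfolio = [
    {"dollars": 5_000_000, "years": 10},  # classic big science: big and long
    {"dollars": 250_000, "years": 2},     # typical small grant: small and short
    {"dollars": 300_000, "years": 12},    # Framingham-style long science
]

counts = Counter(quadrant(g) for g in portfolio)
print(counts)
```

The "long science" argument is then just that, in the real grant economy, the `small/long` bucket is close to zero, and a funder could deliberately increase its flow.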
And I think, I don't know, one of the things I've been thinking about, which I'm thinking about writing about at some point, right, is like this, this view is gonna piss people off in some ways, because where it ultimately goes is this idea that, like, the scientist or [00:18:35] science is like a system that's subject to the government, or subject to a policy maker, or a strategist. Which like, it obviously is, right? But like, I think we have worked very hard to believe that, like, the scientific market is its own independent thing, and like, touching or messing with it is, like, not a thing you should do, right? But we already are. True, that's kind of my point of view, yeah exactly. I think we're in some ways like, yeah, I know I've been reading a lot about Keynes, I mean it is sort of interesting that it does mirror... like this kind of Great Depression era economic thinking, where you're basically like, the market takes care of itself, like, don't intervene. In fact, intervening is like the worst possible thing you could do because you're only going to make things worse. And look, I think there's definitely examples of, like, command economy science that don't work. Yes. But like, you know, I think most mature people who work in economics would say there's some room for at least, like, guiding the system. Right. And, like, keeping it in balance is like [00:19:35] a thing that should be attempted, and I think that's kind of like the, the, the argument that I'm making here. Yeah. Yeah. I [00:19:41] Ben: mean, I think that's, [00:19:42] Tim: that's like the meta meta thing. Right. Right. Is even [00:19:46] Ben: what, what level of intervention, like, like what are the ways in which you can, like, usefully intervene and which, and what are the things that are, that are foolish and kind of create the, the, [00:20:01] Tim: command economy. That's right. Yeah, exactly. Right. Right.
And I think, like, I think the way through is, is maybe in the way that I'm talking about, right? Which is like, you can imagine lots of bad things happen when you attempt to pick winners, right? Like maybe the policymaker, whoever we want to think of that as, like, is it the NSF or NIH or whatever, like, you know, sitting, sitting in their government bureaucracy, right? Like, are they well positioned to make a choice about who's going to be the right solution to a problem? Maybe yes, maybe no. I think we can have a debate about that, right? But I think there's a totally reasonable position, which is they're not in it, so they're not well positioned to make that call. Yeah. [00:20:35] Right? But, are they well positioned to maybe say, like, if we gave them a dial that was like, we want researchers to be thinking about this time horizon versus that time horizon? Like, that's a control that they actually may be well positioned to inform on. Yeah. As an outsider, right? Yeah. Yeah. And some of this I think, like, I don't know, like, the piece I'm working on right now, which will be coming out probably Tuesday or Wednesday, is, you know, some of this is also like encouraging creative destruction, right? Which is like, I'm really intrigued by the idea that, like, academic fields can get so big that they impede progress. Yes. Right? And so this is actually a form of, like, it's effectively intellectual antitrust. Yeah. Where you're basically like, basically, like, the role of the scientific regulator is to basically say these fields have gotten so big that they are actively reducing our ability to have good dynamism in the marketplace of ideas. And in this case, we will, we will announce new grant policies that attempt to break this up. And I actually think that, like, that is pretty spicy for a funder to do. But like, actually maybe part of their role, and maybe we should normalize that [00:21:35] being part of their role. Yeah. Yeah, absolutely.
[00:21:37] Ben: I, I'm imagining a world where there are, where this, like, sort of the macro science is as divisive as [00:21:47] Tim: macroeconomics. [00:21:48] Ben: Right? Because you have, you have your like, your, your like, hardcore free market people. Yeah. Zero government intervention. Yeah, that's right. No antitrust. No like, you know, like abolish the Fed. Right, right. All of that. Yeah, yeah. And I look forward to the day when there's, there's people who are doing the same thing for research. [00:22:06] Tim: Yeah, that's right. Yeah. Yeah, and I think that's actually, I mean, part of a lot of meta science stuff I think is this kind of interesting tension, which is that, like, look, politically a lot of those people in the space are pro free market, you know, like they're, they're, they're liberals in the little L sense. Yeah, like at the same time, like, it is true that kind of like laissez faire science has failed, because we have all these examples of, like, progress slowing down. Right? Like, I don't know. Like, I think [00:22:35] that there is actually this interesting tension, which is like, to what degree are we okay with intervening in science to get better outcomes? Yeah. Right? Yeah. Well, as, [00:22:43] Ben: as I, I might put on my hat and say, Yeah, yeah. Maybe, maybe this is, this is me saying laissez faire science has never been tried. Huh, right. Right? Like, that, that, that may be kind of my position. Huh. But anyways, I... And I would argue that, you know, since 1945, we have been, we haven't had laissez faire [00:23:03] Tim: science. Oh, interesting. [00:23:04] Ben: Huh. Right. And so I'm, yeah, I mean, it's like, this is in [00:23:09] Tim: the same way that I think [00:23:11] Ben: a very hard job for macroeconomics is to say, well, like, do we need [00:23:15] Tim: more or less intervention? Yeah. Yeah. [00:23:17] Ben: What is the case there? I think it's the same thing where.
You know, a large amount of science funding does come from the government, and the government is opinionated about what sorts of things [00:23:30] Tim: it funds. Yeah, right. Right. And you [00:23:33] Ben: can go really deep into that. [00:23:35] So, so I [00:23:35] Tim: would. Yeah, that's actually interesting. That flips it. It's basically like the current state of science. is right now over regulated, is what you'd say, right? Or, or [00:23:44] Ben: badly regulated. Huh, sure. That is the argument I would say, very concretely, is that it's badly regulated. And, you know, I might almost argue that it is... It's both over and underregulated in the sense that, well, this is, this is my, my whole theory, but like, I think that there, we need like some pockets where it's like much less regulated. Yeah. Right. Where you're, and then some pockets where you're really sort of going to be like, no. You don't get to sort of tune this to whatever your, your project, your program is. Yeah, right, right. You're gonna be working with like [00:24:19] Tim: these people to do this thing. Yeah, yeah. Yeah, and I think there actually is interesting analogies in like the, the kind of like economic regulation, economic governance world. Yeah. Where like the notion is markets generally work well, like it's a great tool. Yeah. Like let it run. [00:24:35] Right. But basically that there are certain failure states that actually require outside intervention. And I think what's kind of interesting in thinking about in like a macro scientific, if you will, context is like, what are those failure states for science? Like, and you could imagine a policy rule, which is the policymaker says, we don't intervene until we see the following signals emerging in a field or in a region. Right. And like, okay, that's, that's the trigger, right? Like we're now in recession mode, you know, like there's enough quarters of this problem of like more papers, but less results. 
You know, now we have to take action, right? Oh, that's cool. Yeah, yeah. That would be, that would be very interesting. And I think that's like, that's good, because I think like, we end up having to think about like, you know, and again, this is I think why this is a really exciting time, is like MetaScience has produced these really interesting results. Now we're in the mode of like, okay, well, you know, on that policymaker dashboard, Yeah. Right, like what's the meter that we're checking out to basically be like, Are we doing well? Are we doing poorly? Is this going well? Or is this going poorly? Right, like, I think that becomes the next question to like, make this something practicable Yeah. For, for [00:25:35] actual like, Right. Yeah. Yeah. One of my frustrations [00:25:38] Ben: with meta science [00:25:39] Tim: is that it, I [00:25:41] Ben: think is under theorized in the sense that people generally are doing these studies where they look at whatever data they can get. Huh. Right. As opposed to what data should we be looking at? What, what should we be looking for? Yeah. Right. Right. And so, so I would really like to have it sort of be flipped and say, okay, like this At least ideally what we would want to measure maybe there's like imperfect maybe then we find proxies for that Yeah, as opposed to just saying well, like here's what we can measure. It's a proxy for [00:26:17] Tim: okay. That's right, right Yeah, exactly. And I think a part of this is also like I mean, I think it is like Widening the Overton window, which I think like the meta science community has done a good job of is like trying to widen The Overton window of what funders are willing to do. Yeah. Or like what various existing incumbent actors are willing to [00:26:35] do. Because I think one way of getting that data is to run like interesting experiments in this space. Right? 
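The "recession mode" trigger Tim floats — don't intervene until enough consecutive quarters show more papers but fewer results — can be written down as a mechanical policy rule. Everything in this sketch is an assumption for illustration: the metrics, the series, and the three-quarter threshold are invented, not anything a real funder tracks.

```python
# Hypothetical policy trigger: flag a field for intervention after N
# consecutive quarters in which paper output rises while a results
# metric (replications, breakthroughs, whatever the funder chooses) falls.

def in_recession(papers, results, quarters=3):
    """papers, results: per-quarter series for one field, oldest first."""
    streak = 0
    for i in range(1, len(papers)):
        if papers[i] > papers[i - 1] and results[i] < results[i - 1]:
            streak += 1
            if streak >= quarters:
                return True  # the policymaker's dashboard lights up
        else:
            streak = 0  # one healthy quarter resets the clock
    return False

# An invented field pumping out more papers each quarter with declining results:
papers = [100, 110, 125, 140, 160]
results = [20, 18, 15, 11, 8]
print(in_recession(papers, results))
```

The design choice mirrors how recession rules work in macro: the trigger is deliberately dumb and transparent, so the debate happens over which indicators go on the dashboard, not over ad hoc interventions.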
Like I think one of the things I'm really obsessed with right now is like, okay, imagine if you could change the overhead rate that universities charge on a national basis. Yeah. Right? Like, what's that do to the flow of money through science? And is that like one dial that's actually, like, on the shelf, right? Like, we actually have the ability to influence that if we wanted to. Like, is that something we should be running experiments against and seeing what the results are? Yeah, yeah. [00:27:00] Ben: Another would be earmarking. Like, how much money is actually earmarked [00:27:05] Tim: for different things. That's right, yeah, yeah. Like, how easy it is to move money around. That's right, yeah. I heard actually a wild story yesterday about, do you know this whole thing, what's his name? There's apparently a very wealthy donor that has convinced the state of Washington's legislature to fund the UW CS department. It's, like, it's written into law that there's a flow of money that goes directly to the CS department. I don't think CS departments need more money. I [00:27:35] know, I know, but it's like, this is a really, really kind of interesting, like, outcome. Yeah. Which is like a very clear case of basically just, like, direct subsidy to, like, not, not just a particular topic, but, like, a particular department, which I think is, like, an interesting experiment. I don't, like, I don't know what's been happening there, but yeah. Yeah. Yeah. Natural, natural experiment. [00:27:50] Ben: Totally. Has anybody written down, I assume the answer is no, but it would be very interesting if someone actually wrote down a list of sort of just all the things you [00:28:00] Tim: could possibly [00:28:00] Ben: want to pay attention to, right? Like, I mean, like, speaking of CS, it'd be very interesting to see, like, okay, like, what fraction of the people who, like, get PhDs in an area, stay in this area, right?
Like, going back to the, the [00:28:15] Tim: health of a field or something, right? Yeah, yeah. I think that's right. I, yeah. And I think that those, those types of indicators are interesting. And then I think also, I mean, in the spirit of it being a dynamic system, like, so a few years back I read this great bio by Sebastian Mallaby called The Man Who Knew, which is, it's a bio of Alan Greenspan. So if you ever want to read, like, 800 pages about [00:28:35] Alan Greenspan, that's the book for you. It's very good. But one of the most interesting parts about it is that, like, there's a battle when Alan Greenspan becomes head of the Fed, where basically he's, like, extremely old school. Like, what he wants to do is he literally wants to look at, like, reams of data from, like, the steel industry. Yeah, because that's kind of how he got his start. And he basically is at war with a bunch of kind of, like, career people at the Fed who much more rely on, like, statistical models for predicting the economy. And I think what's really interesting is that, like, for a period of time, actually, Alan Greenspan has the edge, because he's able to realize really early on that there's just changes actually in, like, the metabolism of the economy that mean that what it means to raise the interest rate or lower the interest rate has, like, very different effects than it did, like, 20 years ago before he got started. Yeah. And I think that's actually something that I'm also really quite interested in in science, which is basically, like, when we say science, people often imagine, like, this kind of, like, amorphous blob. But, like, I think the metabolism is changing all the [00:29:35] time. And so, like, what we mean by science now means very different from, like, what we mean by science, like, even, like, 10 to 20 years ago. Yes. And, like, it also means that all of our tactics need to keep up with that change, right?
And so, one of the things I'm interested in, to your question about, like, has anyone compiled this list of, like, science health? Or the health of science, right? It's maybe the right way of thinking about it. Is that, like, those indicators may mean very different things at different points in time, right? And so part of it is trying to understand, like, yeah, what is the state of the, what is the state of this economy of science that we're talking about? Yeah. You're kind of preaching [00:30:07] Ben: to the, to the choir. In the sense that I'm, I'm always, I'm frustrated with the level of nuance that I feel like many people who are discussing, like, science, quote, making air quotes, science and research, are, are talking about in the sense that they very often have not actually, like, gone in and been part of the system. Huh, right. And I'm, I'm open to the fact that [00:30:35] you [00:30:35] Tim: don't need to have, like, [00:30:36] Ben: done, been, like, a professional researcher to have an opinion [00:30:41] Tim: or, or come up with ideas about it. [00:30:43] Ben: Yeah. But at the same time, I feel like [00:30:46] Tim: there's, yeah, like, like, do you, do you think about that tension at all? Yeah. I think it's actually incredibly valuable. Like, I think, so I think of, like, The Death and Life of Great American Cities, right? Which is like, the, the, the really, one of the really, there's a lot of interesting things about that book.
But like, one of the most interesting things is sort of the notion that, like, you had a whole cabal of urban planners that had this very specific vision about how to get cities to work right, and it just turns out that, like, if you are living in SoHo at a particular time, and you walk along the street, and you take a look at what's going on, like, there's always really actually super valuable things to know about. Yeah. That, like, are only available because you're at that, like, ultra, ultra, ultra, ultra micro level. And I do think that there's actually some potential value in there. Like, one of the things I would love to be able to set up, like, in the community of MetaScience or whatever you want to call it, right, [00:31:35] is the idea that, like, yeah, you, you could afford to do, like, very short tours of duty, where it's, like, literally, you're just, like, spending a day in a lab, right, and, like, to have a bunch of people go through that, I think, is, like, really, really helpful. And so I think, like, thinking about, like, what the rotation program for that looks like, I think would be cool. Like, you, you should, you should do, like, a six month stint at the NSF just to see what it looks like. Cause I think that kind of stuff is just like, you know, well, A, I'm selfish, like I would want that, but I also think that, like, it would also allow the community to, like, I think be, be thinking about this in a much more applied way. Yeah. Yeah. Yeah. [00:32:08] Ben: I think it's the, the meta question there for, for everything, right? Is how much in the weeds, like, like, what am I trying to say? It is possible both to be, like, too in the weeds. Yeah, right. And then also, like, too high level. Yeah, that's right. And in almost, like, what, what is the right amount, or, like, who, who should [00:32:31] Tim: be talking to whom in that? That's right.
Yeah, I mean, it's like what you were saying earlier, that, like, the [00:32:35] success of macro science will be whether or not it's as controversial as macroeconomics. It's like, I actually hope that that's the case. It's like people being like, this is all wrong. You're approaching it from a too high level, too abstract of a level. Yeah. I mean, I think the other benefit of doing this, outside of, like, the level of insight, is I think one of the projects that I think I have is, like, we need to, we need to be, like, defeating a love of meta science aesthetics versus, like, actual meta science, right? Like, I think a lot of people in meta science love science. That's why they're excited to not talk about the specific science, but, like, science in general. But like, I think that intuition also leads us to have very romantic ideas of, like, what science is and how science should look and what kinds of science we want. Yeah. Right. The mission is progress. The mission isn't science. And so I think, like, we have to be a lot more functional. And again, I think, like, the benefit of these types of, like, rotations, like, oh, you just are in a lab for a month. Yeah. It's like, I mean, you get a lot more of a sense of, like, oh, okay, this is, this is what it [00:33:35] looks like. Yeah. Yeah. I'd like to do the same thing for manufacturing. Huh. Right. [00:33:39] Ben: Right. It's like, like, and I want, I want everybody to be rotating, right? Huh. Like, in the sense of, like, okay, like, have the scientists go and be, like, in a manufacturing lab. That's right. [00:33:47] Tim: Yeah. [00:33:48] Ben: And be like, okay, like, look, like, you need to be thinking about getting this thing to work in, like, this giant, like, flow pipe instead of a [00:33:54] Tim: test tube. That's right, right. Yeah, yeah, yeah.
Yeah, [00:33:57] Ben: unfortunately, the problem is that we can't all spend our time, like, if everybody was rotating through all the [00:34:03] Tim: things they need to rotate, we'd never get anything done. Yeah, exactly. [00:34:06] Ben: And that's, that's, that's kind of [00:34:08] Tim: the problem. Well, and to bring it all the way back, I mean, I think you started this question on macroscience in the context of transitioning away from all of this, like, weird Cuban Missile Crisis simulation stuff. Like, I do think one way of thinking about this is like, okay, well, if we can't literally send you into a lab, right? Like, the question is, like, what are good simulations to give people good intuitions about the dynamics in the space? Yeah. And I think that's, that's potentially quite interesting. Yeah. Normalize weekend-long simulations. That's right. Like, I love the idea of basically [00:34:35] like, like, you, you get to reenact the publication of a prominent scientific paper. It's like kind of a funny idea. It's just like, you know, yeah. Or, or, or even trying to [00:34:44] Ben: get research funded, right? Like, it's like, okay, like, you have this idea, you want... yeah. [00:34:55] Tim: I mean, yeah, this is actually a project, I mean, I've been talking to Zach Graves about this. It's like, I really want to do one which is a game that we're calling Think Tank Tycoon, which is basically, like, the idea would be for it to be a strategy board game that simulates what it's like to run a research center. But I think, like, to broaden that idea somewhat, like, it's kind of interesting to think about the idea of, like, model NSF. Yeah, where you're like, you, you're in, you're in the hot seat, you get to decide how to do granting. Yeah, you know, give a grant to [00:35:22] Ben: a stupid thing.
Yeah, some, some, some congressperson's gonna come banging [00:35:26] Tim: on your door. Yeah, like, simulating those dynamics actually might be really, really helpful. Yeah, I mean, in the very least, even if it's not, like, a one for one simulation of the real world, just to get, like, some [00:35:35] common intuitions about, like, the pressures that are operating here. I [00:35:38] Ben: think you're, the bigger point is that simulations are maybe underrated [00:35:42] Tim: as a teaching tool. I think so, yeah. Do you remember the the paperclip maximizer? Huh. The HTML game? Yeah, yeah. [00:35:48] Ben: I'm, I'm kind of obsessed with it. Huh. Because, it, you've, like, somehow the human brain, like, really quickly, with just, like, you know, some numbers on the screen. Huh. Like, just like numbers that you can change. Right, right. And some, like, back end dynamic system, where it's like, okay, like, based on these numbers, like, here are the dynamics of the [00:36:07] Tim: system, and it'll give you an update. [00:36:09] Ben: Like, you start to really get an intuition for, for system dynamics. Yeah. And so, I, I, I want to see more just, like, plain HTML, like, basically, like, spreadsheet [00:36:20] Tim: backend games. Right, right, like the most lo-fi possible. Yeah, I think so. Yeah. Yeah, I think it's helpful. I mean, I think, again, particularly in a world where you're thinking about, like, let's simulate these types of, like, weird new grant structures that we might try out, right? Like, you know, we've got a bunch [00:36:35] of hypotheses. It's kind of really expensive and difficult to try to get experiments done, right? Like, does a simulation with a couple people who are well informed give us some, at least, inclinations of, like, where it might go or, like, what are the unintentional consequences thereof? Yeah. [00:36:51] Ben: Are there disciplines besides the military that use simulations [00:36:56] Tim: successfully? Not really.
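The "spreadsheet-backend game" Ben describes — a few numbers on screen driven by a simple dynamic system — really is easy to build. Here's a minimal, entirely invented model-NSF loop in that spirit: one state-update rule for splitting a budget between short- and long-horizon research, no UI beyond printed numbers. The payoff rates and delays are made up purely to illustrate the mechanic.

```python
# Minimal "spreadsheet-backend" simulation in the spirit of the
# paperclip-maximizer style games discussed above. The dynamics are
# invented: each tick, a grant budget is split between short- and
# long-horizon research, which pay off at different rates and delays.

def step(state, long_share):
    """Advance one tick. long_share: fraction of budget to long-horizon work."""
    budget = state["budget"]
    state["short_results"] += (1 - long_share) * budget * 0.10  # quick payoff
    state["pipeline"] += long_share * budget                    # slow payoff accrues
    matured = state["pipeline"] * 0.05   # 5% of the pipeline matures per tick
    state["long_results"] += matured * 0.30
    state["pipeline"] -= matured
    return state

state = {"budget": 100.0, "short_results": 0.0, "long_results": 0.0, "pipeline": 0.0}
for tick in range(20):
    step(state, long_share=0.5)  # the policymaker's "dial" from earlier
print(round(state["short_results"], 1), round(state["long_results"], 1))
```

Hooking a loop like this up to an HTML page with a slider for `long_share` is all a playable version would need; the point is the intuition the player builds about the dial, not the realism of the rates.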
And I think what's kind of interesting is that like, I think it had a vogue that like has kind of dissipated. Yeah, I think like the notion of like a a game being the way you kind of do like understanding of a strategic situation, I think like. Has kind of disappeared, right? But like, I think a lot of it was driven, like, RAND actually had a huge influence, not just on the military. But like, there's a bunch of corporate games, right? That were like, kind of invented in the same period. Yeah. That are like, you determine how much your steel production is, right? And was like, used to teach MBAs. But yeah, I think it's, it's been like, relatively limited. Hm. [00:37:35] Yeah. It, yeah. Hm. [00:37:38] Ben: So. Other things. Huh. Like, just to, [00:37:41] Tim: to shift together. Sure, sure, go ahead. Yeah, yeah, yeah, yeah. I guess another [00:37:44] Ben: thing that we haven't really talked about, but actually sort of plays into all of this, is thinking about better [00:37:50] Tim: ways of regulating technology. [00:37:52] Ben: I know that you've done a lot of thinking about that, and maybe this is another thing to simulate. [00:38:00] Tim: Yeah, it's a model OSTP. But [00:38:04] Ben: it's maybe a thing where, this is actually like a prime example where the particulars really matter, right? Where you can't just regulate. quote unquote technology. Yeah. Right. And it's like, there's, there's some technologies that you want to regulate very, very closely and very tightly and others that you want to regulate very [00:38:21] Tim: loosely. Yeah, I think that's right. And I think that's actually, you know, I think it is tied to the kind of like macro scientific project, if you will. Right. Which is that I think we have often a notion of like science regulation being like. [00:38:35] literally the government comes in and is like, here are the kind of constraints that we want to put on the system. Right. And there's obviously like lots of different ways of doing that. 
And I think there's lots of contexts in which that's like appropriate. But I think for a lot of technologies that we confront right now, the change is so rapid that the obvious question always becomes, no matter what emerging technology talking about is like, how does your clock speed of regulation actually keep up with like the clock speed of technology? And the answer is frequently like. It doesn't, right? And like you run into these kind of like absurd situations where you're like, well, we have this thing, it's already out of date by the time it goes into force, everybody kind of creates some like notional compliance with that rule. Yeah. And like, in terms of improving, I don't know, safety outcomes, for instance, it like has not actually improved safety outcomes. And I think in that case, right, and I think I could actually make an argument that like, the problem is becoming more difficult with time. Right? Like, if you really believe that the pace of technological change is faster than it used to be, then it is possible that, like, there was a point at which, like, government was operating, and it could actually keep [00:39:35] pace effectively, or, like, a body like Congress could actually keep pace with society, or with technology successfully, to, like, make sure that it was conformant with, sort of, like, societal interests. Do you think that was [00:39:46] Ben: actually ever the case, or was it that we didn't, we just didn't [00:39:50] Tim: have as many regulations? I would say it was sort of twofold, right? Like, I think one of them was you had, at least, let's just talk about Congress, right? It's really hard to talk about, like, government as a whole, right? Like, I think, like, Congress was both better advised and was a more efficient institution, right? Which means it moved faster than it does today. Simultaneously, I also feel like for a couple reasons we can speculate on, right? Like, science, or in the very least, technology. 
Right, like, moved slower than it does today. Right, right. And so, like, actually what has happened is that both, both dynamics have caused problems, right? Which is that, like, the organs of government are moving slower at the same time as science is moving faster. And, like, I think we've passed some inflection [00:40:35] point now where, like, it seems really hard to craft, you know, let's take the AI case, like, a sensible framework that would apply, you know, in, in LLMs, where, like, I don't know, like, I was doing a little recap of, like, recent interpretability research, and I, like, took a step back and I was like, oh, all these papers are from May 2023. And I was like, these are all big results. This is all a big deal. Right. It's like very, very fast. Yeah. So that's kind of what I would say to that. Yeah. I don't know. Do you feel differently? You feel like Congress has never been able to keep up? Yeah. [00:41:04] Ben: Well, I. I wonder, I guess I'm almost, I'm, I'm perhaps an outlier in that I am skeptical of the claim that technology overall has sped up significantly, or the pace of technological change. The pace of software change, certainly. Sure. Right. And it's like maybe software as a, as a fraction of technology has, has sped up. And maybe like, this is, this is a thing where, like, to the point of, of regulations needing to, to go into particulars, [00:41:35] right? Mm-Hmm. Right, right. Like, tuning the regulation to the characteristic timescale of whatever [00:41:40] Tim: technology we're talking about. Mm-Hmm. Right? [00:41:42] Ben: But I don't know, but like, I feel like outside of software, if anything, technology, the pace of technological change [00:41:52] Tim: has slowed down. Mm hmm. Right. Right. Yeah. [00:41:55] Ben: This is me putting on my [00:41:57] Tim: stagnationist bias. And would, given the argument that I just made, would you say that that means that it should actually be easier than ever to regulate technology?
Yeah, I get targets moving slower, right? Like, yeah, [00:42:12] Ben: yeah. Or it's the technology moving slowly because of the forms of [00:42:14] Tim: the regulator. I guess, yeah, there's like compounding variables. [00:42:16] Ben: Yeah, the easiest base case of regulating technology is saying, like, no, you can't have [00:42:20] Tim: any. Huh, right, right, right. Like, it can't change. Right, that's easy to regulate. Yeah, right, right. That's very easy to regulate. I buy that, I buy that. It's very easy to regulate well. Huh, right, right. I think that's [00:42:27] Ben: That's the question. It's like, what do we want to lock in and what don't we [00:42:31] Tim: want to lock in? Yeah, I think that's right and I think, you [00:42:35] know I guess what that moves me towards is like, I think some people, you know, will conclude the argument I'm making by saying, and so regulations are obsolete, right? Or like, oh, so we shouldn't regulate or like, let the companies take care of it. And I'm like, I think so, like, I think that that's, that's not the conclusion that I go to, right? Like part of it is like. Well, no, that just means we need, we need better ways of like regulating these systems, right? And I think they, they basically require government to kind of think about sort of like moving to different parts of the chain that they might've touched in the past. Yeah. So like, I don't know, we, Caleb and I over at IFP, we just submitted this RFI to DARPA. In part they, they were thinking about like how does DARPA play a role in dealing with like ethical considerations around emerging technologies. Yep. But the deeper point that we were making in our submission. was simply that like maybe actually science has changed in a way where like DARPA can't be the or it's harder for DARPA to be the originator of all these technologies. Yeah. 
So their, their place in the ecosystem, the [00:43:35] metabolism of technology, has changed, which requires them to rethink how they want to influence the system. Yeah. Right. And it may be more influence at the point of things getting out to market than it is at things like, you know, basic research in the lab or something like that. Right. At least for some classes of technology where a lot of it's happening in private industry, like AI. Yeah, exactly. Yeah. [00:43:55] Ben: No, I think the concept of the metabolism of science and technology is really powerful. I think in some sense it is, I'm not sure how you would map that to the idea of there being a [00:44:11] Tim: research ecosystem, right? Right. Is it, is it that there's like [00:44:17] Ben: the metabolic, this is incredibly abstract. Okay. Like, I guess if you're looking at the metabolism, does the metabolism sort of say, we're going to ignore institutions for now, and the metabolism is literally just the flow [00:44:34] Tim: of [00:44:35] like ideas and outcomes, and then maybe the ecosystem is [00:44:41] Ben: like, okay, then we sort of add another layer and say there are institutions [00:44:46] Tim: that are interacting with this, sort of like, yeah. I think like the metabolism view, or, you know, you might even think about it as a supply chain view, right, to move it away from just kind of gesturing at bio for no reason, right? I think what's powerful about it is that, you know, particularly in foundation land, which I'm most familiar with,
there's a notion of, like, we're going to field build, and what that means is we're going to name a field, and then researchers are going to be under this tent that we call this field, and then the field will exist. Yeah. And then the proper critique of a lot of that stuff is: researchers are smart, they just go where the money is, and they're like, you want to call it nanotech? I can pretend to be nanotech for a few years to get your money. Like, that's no problem. I can do that. And so there's kind of a notion that, like, if you take the economy of science as institutions at the very beginning, you actually miss the bigger [00:45:35] picture. Yes. Right? And so the metabolism view is more powerful because you literally think about the movement of an idea to an experiment to a practical technology to something that's out in the world. Yeah. And then we basically say, how do we influence those incentives, before we start talking about, like, oh, we announced some new policy that people just... cosmetically align their agendas to. Yeah. And if you really want to shape science, it's actually maybe arguably less about the institution and more about, yeah, the individual. Yeah, exactly. Like, I run a lab. What are my motivations? Right? And I think this is, again, micro macro, right? It's basically, if we can understand that, then are there things that we could do to influence at that micro level? Yeah, right. Which is, I think, actually where a lot of macro econ has moved. Right. Which is, how do we influence the individual firm's decisions. Yeah. To get the overall aggregate change that we want in the economy. Yeah. And I think that's potentially a better way of approaching it. Right. A thing that I desperately [00:46:30] Ben: want now is, uh-huh, a... I'm not sure what they're [00:46:35] actually called. Like the, you know, like the metabolic, like, like the [00:46:37] Tim: Krebs cycle.
Yeah, exactly. Like, like, like the giant diagram of, of like metabolism, [00:46:43] Ben: right. I want that for, for research. Yeah, that would be incredible. Yeah. If, if only, I mean, one, I want to have it on [00:46:50] Tim: my wall and to, to just get across the idea that. [00:46:56] Ben: It is like, it's not you know, basic research, applied [00:47:01] Tim: research. Yeah, totally. Right, right, right. When it goes to like, and what I like about kind of metabolism as a way of thinking about it is that we can start thinking about like, okay, what's, what's the uptake for certain types of inputs, right? We're like, okay, you know like one, one example is like, okay, well, we want results in a field to become more searchable. Well what's really, if you want to frame that in metabolism terms, is like, what, you know, what are the carbs that go into the system that, like, the enzymes or the yeast can take up, and it's like, access to the proper results, right, and like, I think that there's, there's a nice way of flipping in it [00:47:35] that, like, starts to think about these things as, like, inputs, versus things that we do, again, because, like, we like the aesthetics of it, like, we like the aesthetics of being able to find research results instantaneously, but, like, the focus should be on, Like, okay, well, because it helps to drive, like, the next big idea that we think will be beneficial to me later on. Or like, even being [00:47:53] Ben: the question, like, is the actual blocker to the thing that you want to see, the thing that you think it is? Right. I've run into far more people than I can count who say, like, you know, we want more awesome technology in the world, therefore we are going to be working on Insert tool here that actually isn't addressing, at least my, [00:48:18] Tim: my view of why those things aren't happening. Yeah, right, right. 
And I think, I mean, again, like, part of the idea is we think about these as, like, frameworks for thinking about different situations in science. Yeah. Like, I actually do believe that there are certain fields because of, like, ideologically how they're set up, institutionally how [00:48:35] they're set up, funding wise how they're set up. that do resemble the block diagram you were talking about earlier, which is like, yeah, there actually is the, the basic research, like we can put, that's where the basic research happens. You could like point at a building, right? And you're like, that's where the, you know, commercialization happens. We pointed at another building, right? But I just happen to think that most science doesn't look like that. Right. And we might ask the question then, like, do we want it to resemble more of like the metabolism state than the block diagram state? Right. Like both are good. Yeah, I mean, I would [00:49:07] Ben: argue that putting them in different buildings is exactly what's causing [00:49:10] Tim: all the problems. Sure, right, exactly, yeah, yeah. Yeah. But then, again, like, then, then I think, again, this is why I think, like, the, the macro view is so powerful, at least to me, personally, is, like, we can ask the question, for what problems? Yeah. Right? Like, are there, are there situations where, like, that, that, like, very blocky way of doing it serves certain needs and certain demands? Yeah. And it's like, it's possible, like, one more argument I can make for you is, like, Progress might be [00:49:35] slower, but it's a lot more controllable. So if you are in the, you know, if you think national security is one of the most important things, you're willing to make those trade offs. But I think we just should be making those trade offs, like, much more consciously than we do. 
And [00:49:49] Ben: that's where politics, in the term, in the sense of, A compromise between people who have different priorities on something can actually come in where we can say, okay, like we're going to trade off, we're going to say like, okay, we're going to increase like national security a little bit, like in, in like this area to, in compromise with being able to like unblock this. [00:50:11] Tim: That's right. Yeah. And I think this is the benefit of like, you know, when I say lever, I literally mean lever, right. Which is basically like, we're in a period of time where we need this. Yeah. Right? We're willing to trade progress for security. Yeah. Okay, we're not in a period where we need this. Like, take the, take, ramp it down. Right? Like, we want science to have less of this, this kind of structure. Yeah. That's something we need to, like, have fine tuned controls over. Right? Yeah. And to be thinking about in, like, a, a comparative sense, [00:50:35] so. And, [00:50:36] Ben: to, to go [00:50:36] Tim: back to the metabolism example. Yeah, yeah. I'm really thinking about it. Yeah, yeah. [00:50:39] Ben: Is there an equivalent of macro for metabolism in the sense that like I'm thinking about like, like, is it someone's like blood, like, you know, they're like blood glucose level, [00:50:52] Tim: like obesity, right? Yeah, right. Kind of like our macro indicators for metabolism. Yeah, that's right. Right? Or like how you feel in the morning. That's right. Yeah, exactly. I'm less well versed in kind of like bio and medical, but I'm sure there is, right? Like, I mean, there is the same kind of like. Well, I study the cell. Well, I study, you know, like organisms, right? Like at different scales, which we're studying this stuff. Yeah. What's kind of interesting in the medical cases, like You know, it's like, do we have a Hippocratic, like oath for like our treatment of the science person, right? 
It's just like, first do no harm to the science person, you know? [00:51:32] Ben: Yeah, I mean, I wonder about that with [00:51:35] research. Mm hmm. Should we have more heuristics about how we're [00:51:42] Tim: Yeah, I mean, especially because I think norms are so strong, right? Like, I do think that one of the interesting things, this is one of the arguments I was making in the long science piece, is like, well, in addition to funding certain types of experiments, if you proliferate the number of opportunities for these low scale projects to operate over a long period of time, there's actually a bunch of norms that might be really good that they might foster in the scientific community. Right. Which is, scientists learn the art of how to plan a project for 30 years. That's super important. Right. Regardless of the research results. That may be something that we want to put out into the open, so your median scientist has more of those skills. Yeah, right. Like, that's another reason that you might want to kind of percolate this kind of behavior in the system. Yeah. And so there are kind of these emanating effects from even one-offs that I think are important to keep in mind. [00:52:33] Ben: That's actually another, [00:52:35] I think, use for simulations. Yeah, I'm just thinking, well, it's very hard to get a tight feedback loop, right, about whether you planned a project for 30 years [00:52:47] well, right, [00:52:48] Ben: right. But perhaps there's a better way of sort of simulating [00:52:51] Tim: that planning process. Yeah. Well, and I would love to, I mean, again, to the question that you had earlier about what are the metrics here, right? I think a lot of the science metrics that we may end up on may have these interesting and really curious properties, like we have for the inflation rate. Right.
We're like, the strange thing about inflation is that we, we kind of don't like, we have hypotheses for how it happens, but like, part of it is just like the psychology of the market. Yeah. Right. Like you anticipate prices will be higher next quarter. Inflation happens if enough people believe that. And part of what the Fed is doing is like, they're obviously making money harder to get to, but they're also like play acting, right? They're like. You know, trust me guys, we will continue to put pressure on the economy until you feel differently about this. And I think there's going to be some things in science that are worth [00:53:35] measuring that are like that, which is like researcher perceptions of the future state of the science economy are like things that we want to be able to influence in the space. And so one of the things that we do when we try to influence like the long termism or the short termism of science It's like, there's lots of kind of like material things we do, but ultimately the idea is like, what does that researcher in the lab think is going to happen, right? Do they think that, you know, grant funding is going to become a lot less available in the next six months or a lot more available in the next six months? Like influencing those might have huge repercussions on what happens in science. And like, yeah, like that's a tool that policymakers should have access to. Yeah. Yeah. [00:54:11] Ben: And the parallels between the. The how beliefs affect the economy, [00:54:18] Tim: and how beliefs [00:54:19] Ben: affect science, I think may also be a [00:54:21] Tim: little bit underrated. Yeah. In the sense that, [00:54:24] Ben: I, I feel like some people think that It's a fairly deterministic system where it's like, ah, yes, this idea's time has come. And like once, once all the things that are in place, like [00:54:35] once, once all, then, then it will happen. And like, [00:54:38] Tim: that is, that's like how it works. 
[00:54:40] Ben: Which I, I mean, I have, I wish there was more evidence to my point or to disagree with me. But like, I, I think that's, that's really not how it works. And I'm like very often. a field or, or like an idea will, like a technology will happen because people think that it's time for that technology to happen. Right. Right. Yeah. Obviously, obviously that isn't always the case. Right. Yeah. Yeah. There's, there's, there's hype [00:55:06] Tim: cycles. And I think you want, like, eventually, like. You know, if I have my druthers, right, like macro science should have like it's Chicago school, right? Which is basically like the idea arrives exactly when it should arrive. Scientists will discover it on exactly their time. And like your only role as a regulator is to ensure the stability of scientific institutions. I think actually that that is a, that's not a position I agree with, but you can craft a totally, Reasonable, coherent, coherent governance framework that's based around that concept, right? Yes. Yeah. I think [00:55:35] like [00:55:35] Ben: you'll, yes. I, I, I think like that's actually the criteria for success of meta science as a field uhhuh, because like once there's schools , then, then, then it will have made it, [00:55:46] Tim: because [00:55:47] Ben: there aren't schools right now. Mm-Hmm. , like, I, I feel , I almost feel I, I, I now want there to be schools because. I want a, a better thing to, to craft my disagreements with people on. [00:55:56] Tim: Right. [00:55:56] Ben: Right. And be like, Oh, like, you know, right now it's, it's like individual people. That's right. Yeah. So it's like, I [00:56:02] Tim: want, I want some team. Yeah. I think, I don't know. I think so one of my favorite museums in the world is this museum called the Pitt Rivers Museum, which is in Oxford. It's like, it's preserved like many things at Oxford from like when it was first founded in whatever century it was. 
And what's great about it is that you walk into it and you're like, what is this? Like, it bills itself as a museum, but it's just a closet of stuff that this guy collected, and it's basically like this early... I'm like, yeah, this is the early phase of every science or every field. Yeah, it's like we're still in the phase of, that's interesting, I guess I'll put it in the [00:56:35] box. That's interesting, I guess I'll put it in my bag. And we're just collecting at the moment, right? Yeah, but I think, you know, you can only do that for so long, right? Ultimately, you have to have a point of view, because if it's gonna be more than a purely observational field, if it's gonna be a thing that actually should inform science policymaking, yeah, it has to come with some normative judgments that we're not gonna always have empirical results for. And part of it is these really hard to deal with questions, epistemologically, of: does science discover the idea immediately upon all the resources being available? Or are there lots of provisionalities to science that would require intervention? There's no way of proving that; it's a really hard thing to prove or disprove. It ends up being a matter of, like, what's the philosophy that will dominate how... like, science planners think about the issue. [00:57:35]…
Idea Machines with Nadia Asparouhova [Idea Machines #48] (55:34)
Nadia Asparouhova talks about idea machines on Idea Machines! Idea machines, of course, being her framework around societal organisms that turn ideas into outcomes. We also talk about the relationship between philanthropy and status, public goods, and more. Nadia is a hard-to-categorize doer of many things: In the past, she spent many years exploring the funding, governance, and social dynamics of open source software, both writing a book about it called “Working in Public” and putting those ideas into practice at GitHub, where she worked to improve the developer experience. She explored parasocial communities and reputation-based economies as an independent researcher at Protocol Labs and put those ideas into practice as employee number two at Substack, focusing on the writer experience. She’s currently researching what the new tech elite will look like, which forms the base of a lot of our conversation. Completely independently, the two of us came up with the term “idea machines” to describe the same thing — in her words: “self-sustaining organisms that contains all the parts needed to turn ideas into outcomes.” I hope you enjoy my conversation with Nadia Asparouhova.

Links
Nadia's Idea Machines Piece
Nadia's Website
Working in Public: The Making and Maintenance of Open Source Software

Transcript

[00:01:59] Ben: I really like your way of defining things and sort of bringing clarity to a lot of these very fuzzy words that get thrown around. So I'd love to just get your take on a few definitions to start off with. In your mind, what is tech? When we talk about tech and philanthropy, what is that entity? [00:02:23] Nadia: Yeah, tech is definitely a fuzzy term. I think it's best defined as a culture, more than a business industry.
And I think, yeah, I mean, tech has been [00:02:35] associated with startups historically, but I think it's transitioning from being this pure software industry to being more like a way of thinking. But personally, I don't think I've come across a good definition for tech anywhere. It's kind of, you know? [00:02:52] Ben: Yeah. Do you think you could point to some very characteristic mindsets of tech that you think really sort of set it apart? [00:03:06] Nadia: Yeah. Probably the best known would be, you know, failing fast and moving fast and breaking things. I think the interest in the sort of David and Goliath model of an individual that is going up against an institution or some sort of complex bureaucracy that needs to be broken apart. Like, the notion of disrupting, I think, is a very tech sort of mindset of looking at a problem and saying, how can we do this better? So in a [00:03:35] weird way, tech, I feel like, especially in contrast to crypto, is often about iterating upon the way things are, or improving things, even though I don't know that tech would like to be defined that way necessarily. But when I, yeah, sort of compare it to the crypto mindset, I feel like tech is kind of more about breaking apart institutions or, or doing, yeah, trying to do things better. [00:04:00] Ben: As opposed... so could you then dig into the crypto mindset, by contrast? I think that's a subtle difference that a lot of people don't go into. [00:04:10] Nadia: Yeah. I think the crypto mindset is a little bit more about building a parallel universe entirely. It's about, I mean, well, one, I don't see the same drive towards creating monopolies, in the way that, and I don't know if that was always a, you know, core value of tech, but I think in practice that's kind of what it's been.
You try to be the one thing that is dominating a market. Whereas with crypto, I think, [00:04:35] because they have sort of decentralization as a core value, at least at this stage of their maturity, it's more about building lots of different experiments or trying lots of different things, and enabling people to have their own little corner of the universe where they have all the tools that they need to build their own world. Whereas the tech mindset seems to imply that there is only one world, the world is sort of dominated by these legacy institutions, and it's tech's job to fix those problems. So it's very much engaged with what it sees as kind of that legacy world. [00:05:10] Ben: Yeah, I hadn't really thought about it that way, but that totally makes sense. And I'm sure other people have talked about this, but do you feel that is an artifact of sort of the nature of the technology that they're predicated on? Like the difference between, I guess, the internet of SaaS and servers, and then the [00:05:35] internet of blockchains and distributed things. [00:05:38] Nadia: I mean, it's weird. Cause if you think about sort of the early computing days, I don't really get that feeling at all. I'm not a computer historian or a technology historian, so I'm sure someone else has a much more nuanced answer to this than I do, but yeah. I mean, when I think of sixties computing or whatever, it feels really intertwined with creating new worlds. And that's why, I mean, because crypto is so new, we can maybe only really observe what's happening right now. I don't know that crypto will always look exactly like this in the future. In fact, it almost certainly will not.
So it's hard to know what its core distinct values are, but I just sort of noticed the contrast right now, at least. But probably, yeah, if you picked a different point in tech's history, sort of pre-startups, I guess, and pre, or like that commercialization phase or that wealth accumulation phase, it was also much more, I guess, pie in the sky. Right. But yeah, it feels like, at least with the startup mindset, or whenever that point of [00:06:35] history started, all the sort of big successes were really about overturning legacy industries. The, yeah, the term disruption was such a buzzword. It's about, yeah, taking something that's not working and making it better, which I think is very intertwined with the programmer mindset. [00:06:51] Ben: Yeah, it's true. And I'm just thinking about sort of my impression of the early internet, and it did not have that same flavor. So perhaps it's an artifact of the stage of a culture or ecosystem rather than the technology underlying it, I guess. [00:07:10] Nadia: And it's strange. Cause I feel like, I mean, there are people today who still sort of, maybe fetishize is too strong a word, but are embracing that sort of early computing mindset. But it almost feels like a subculture now or something. Yeah, I don't know. I don't find that that's the prevalent mindset in tech. [00:07:33] Ben: Well, it feels like the sort of [00:07:35] mechanisms that drive tech really do sort of center, I mean, this is my bias, but I feel like the way that tech is funded is primarily through venture capital, which only works if you're shooting for a truly massive result. And the way that you get a truly massive result is not to build a little niche thing, but to try to take over an industry. [00:08:03] Nadia: It's about arbitrage. [00:08:05] Ben: yeah.
Or, or like, or even not even quite arbitrage, but just like the, the, to like, that's, that's where the massive amount of money is. And, and like, [00:08:14] Nadia: This means her like financially. I feel like when I think about the way that venture capital works, it's it's. [00:08:19] Ben: yeah, [00:08:20] Nadia: ex sort of exploiting, I guess, the, the low margin like cost models. [00:08:25] Ben: yeah, yeah, definitely. And like then using that to like, take over an industry, whereas if maybe like, you're, you're not being funded in a way [00:08:35] that demands, that sort of returns you don't need to take as, as much of a, like take over the world mindset. [00:08:41] Nadia: Yeah. Although I don't think like those two things have to be at odds with each other. I think it's just like, you know, there's like the R and D phase that is much more academic in nature and much more exploratory and then venture capital is better suited for the point in which some of those ideas can be commercialized or have a commercial opportunity. But I don't think, yeah, I don't, I don't think they're like fighting with each other either. [00:09:07] Ben: Really? I, I guess I, I don't know. It's like, so can I, can I, can I disagree and, and sort of say, like, it feels like the, the, the stance that venture type funding comes with, like forces on people is a stance of like, we are, we might fail, but we're, we're setting out to capture a huge, huge amount of value and like, [00:09:35] And, and, and just like in order for venture portfolios to work, that needs to be the mindset. And like there, there are other, I mean, there are just like other funding, ways of funding, things that sort of like ask for more modest returns. And they can't, I mean, they can't take as many risks. They come with other constraints, but, but like the, the need for those, those power law returns does drive a, the need to be like very ambitious in terms of scale. 
[00:10:10] Nadia: I guess, like what's an example of something that has modest financial returns, but massive social impact that can't be funded through philanthropy and academia or through through venture capital [00:10:29] Ben: Well, I mean, like are, I mean, like, I think that there's, [00:10:35] I think that, that, that, [00:10:38] Nadia: or I guess it [00:10:39] Ben: yeah, I think the philanthropy piece is really important. Sorry, go ahead. [00:10:42] Nadia: Yeah. I guess always just like, I feel like it was like different types of funding for different, like, I, I sort of visualized this pipeline of like, yeah. When you're in the R and D phase. Venture capital is not for you. There's other types of funding that are available. And then like, you know, when you get to the point where there are commercial opportunities, then you switch over to a different kind of funding. [00:11:01] Ben: Yeah. Yeah, no, I, I definitely agree with that. I, I, I think, I think what we're like where, where, where I was at least talking about is like that, that venture capital is sort of in the tech world is, is like the, the, the thing, the go to funding mechanism. [00:11:16] Nadia: Yeah. Yeah. Which is partly why I'm interested in, I guess, idea machines and other sources of funding that feel like they're at least starting to emerge now. Which I think gets back to those kinds of routes that, I mean, it's actually surprising to me that you can talk to people in tech who don't always make the connection that tech started as an, [00:11:35] you know, academically and government funded enterprise. And not venture venture capital came along later. Right then and so, yeah, maybe we, we're kind of at that point where there's been enough wealth generated that can kind of start that cycle again. [00:11:47] Ben: yeah. And, and speaking of that another distinction that, that you've made in your writing that I think is really important is the difference between charity and philanthropy. 
Do you mind unpacking how you think about that? [00:12:00] Nadia: Yeah. Charity is more like direct services. So there's sort of a one to one: you put something in, you get something of similar, equal measure back out of it. And, I mean, with charity, you know, you can have emergency relief for disasters, or, yeah, just charitable services for people that need that kind of support. And to me, it's just sort of strange that it always gets lumped in with philanthropy, which is a different enterprise entirely. Philanthropy is more of the early stage pipeline [00:12:35] for it; it's more like venture capital, but for public goods. In the same way that venture capital is very early stage financing for private goods, philanthropy is very early stage financing for public goods. And if those public goods show promise or, yeah, need to be scaled, then you can go to government to get more funding to sustain it. Or maybe there are commercial opportunities, or, you know, there are multiple paths that can branch out from there. But yeah, philanthropy at its heart is about experimenting with really wild and crazy ideas that benefit public society, that could have massive social returns if successful. Whereas charity is not really about risk taking; charity is really about providing a stable source of financing for those who really need it in the moment. [00:13:21] Ben: And there's two things I want to poke at there. So you describe philanthropy as crazy risk taking. Do you think that most [00:13:35] philanthropists see it that way? [00:13:37] Nadia: Today? No. And yeah, philanthropy has had this very varied history. Let's say modern philanthropy in its current form has only really existed since the late 1800s, early 1900s. So we've got, whatever, like a hundred, hundred and fifty years.
Most adults today have really only grown up in the phase of philanthropy that you might call, to be a little cynical about it, late-stage modern philanthropy. And part of that has just come from, to give an abridged history of philanthropy: early on, in premodern philanthropy, the church played more of that role, as a force in both philanthropic experiments and direct services. Then, post Gilded Age, post Industrial Revolution, you had people with a lot of self-made wealth, and you had people experimenting with new ideas to provide public goods and services to society. Government at the time was not really playing a role in that, so all of it was coming from private citizens and private capital. So there was a time in which philanthropy was much more experimental in that way. But then as government stepped in around the mid-1900s to become the primary provider and funder of public services, that diminished the role of philanthropy. And then in the late 1960s, foundations became much more heavily regulated. I think that was the turning point where philanthropy went from being this highly experimental, aggressively risk-taking sort of enterprise to something much safer, because it was hampered by all these accountability requirements. So yeah, I think philanthropy today is not representative of what philanthropy has been historically or what it could be.

[00:15:31] Ben: And what are some of your favorite weird, risky, pre-regulation philanthropic things?

[00:15:40] Nadia: Oh, I don't do favorites, but...

[00:15:42] Ben: Oh, okay.
Well, what are some amusing examples of risky philanthropic projects?

[00:15:51] Nadia: One...

[00:15:52] Ben: Take a couple.

[00:15:54] Nadia: Probably the most famous example would be the Carnegie public libraries. Our public library system started as a privately funded experiment. For each library that was created, Andrew Carnegie would help fund its creation, and he would ask the local government or the local community to find a way to continue to sustain and support it over the years. So it was this nice public-private type of partnership. But then you also have scientific research and public health initiatives that were philanthropically supported and funded: Rockefeller's eradication of hookworm as a public health initiative, finding a cure for yellow fever. Those are some examples. The public school education system in the South did not exist until there was an initiative to ask, why aren't there public schools in the South, and how do we just create and fund them? And then also the state of American private universities, which were modeled after European universities at the time, but which also came about after private philanthropists funded research into understanding why American higher education was not very good; at the time it really wasn't, compared to the German university model. A bunch of research was produced from that, and then they set out to reform American universities. So there are just so many examples of people simply deciding to act. And one thing I do want to caveat: I'm not regressive in the sense of saying, wow, this thing worked really well a hundred years ago,
so why don't we just do the exact same thing again? I feel like that's a common pitfall in reading history. It's not that I think everything about the world is completely different today versus a hundred years ago, but...

[00:17:39] Ben: ...it was different in the past, and so it could be different in the future.

[00:17:41] Nadia: Exactly. That's the takeaway: where we're at right now is not a terminal state, or it doesn't have to be. Philanthropy has been through many different phases, and it can continue to have other phases in the future. They're not going to look exactly like they did historically, but yeah.

[00:17:56] Ben: That's such a good distinction, and it goes for so many things. I think I suffer from the same thing, where when you point to historical examples, it's not to say we should go back to this; it's to say it has been different, and it could be different.

[00:18:18] Nadia: Something I think about: take any adult today who's active in the workforce. We're talking about the span of, you know, a 30-year institutional memory or something. So anything we think of as possible or not possible is limited by our biological lifespans. All we ever know is what we've grown up with in, let's say, the last 30-ish years.
So the reason it's important to study history is to remind yourself that everything you know, say, what I think about philanthropy right now based on the inputs I've been given in my lifetime, looks very different once I study history and realize: oh, actually it's only been that way for a pretty short amount of time, only a few decades.

[00:19:06] Ben: Yeah, totally. And I guess people might disagree with this, but from my perspective there's been less institutional change within the lifetime of most people in the workforce, and especially most people in tech, which tends to skew younger, than there was in the past.

[00:19:30] Nadia: Yeah.

[00:19:32] Ben: Or, to put a finer point on it: there seems to have been less institutional change in the latter half of the 20th century than in the first part of it.

[00:19:44] Nadia: Yeah, I think that's right. It feels much more stagnant.

[00:19:49] Ben: Yeah. And the last thing, to pull us back to definitions real quick: how do you like to describe idea machines to people? If someone asked, "Nadia, what is an idea machine, besides this podcast?", how would you describe it?

[00:20:05] Nadia: I would point them to my blog post, so I don't have to explain it.

[00:20:08] Ben: Okay. Excellent. Perfect. Everybody.
[00:20:14] Nadia: If I had to explain a short version, I would say it's kind of like the modern successor to philanthropic foundations, maybe, depending on who I'm talking to. Or that it's a framework for understanding the interaction between funders and communities that are centered around a similar ideology, and how they turn ideas into outcomes. There's a whole bunch of soft social infrastructure it takes for someone to say, "Hey, I have an idea, why don't we do X?" and for that to actually happen in the world. So many different inputs come together to make that happen, and this was just my attempt at creating a framework for it.

[00:20:54] Ben: Yeah, I think it's a really good framework. One of the powerful things in it is that you say there are these five components: an ideology, a community, ideas, an agenda, and people who capitalize the agenda. And I'll caveat this for the listeners: in the piece you use effective altruism, or EA for short, as kind of a case study in idea machines, so it's very topical right now. I think what we'll try to avoid are the topical controversies around it, and instead use it as an object of study; I think it's actually a very good object of study for thinking about these things. One of the things that stood out to me about EA, as opposed to many other philanthropies, is that EA feels like one of the few places where the people who are capitalizing the agenda are willing to capitalize other people's agendas, as opposed to imposing their own. Do you get a sense of that?

[00:22:03] Nadia: Yeah. Yeah, it feels like there's
some sort of shift there. Think about someone who got super wealthy in, let's call it, the heyday of the 501(c)(3) foundation, say the fifties or something. Someone makes a ton of money, and at some point they end up setting up a charitable foundation. They appoint a committee of people to help them figure out, "What should my agenda be?" But it's all flowing from the donor, saying: I want to create this thing in the world, I want to fund this thing in the world, because it's my personal interest. Whereas I feel like we're starting to see some examples today where, sure, there has to be alignment between a funder's interest and a community's interest, but in some ways the agenda is being driven not just by the funder or the foundation staff, but by a community of people who are all talking to each other and saying: here's what we think is the most important agenda. So it feels in some ways much more organic. That's not to say the funder doesn't influence it, but it feels much more intertwined, like it could go in a lot of different directions. You see that with EA, which was the example I used, where the agenda is very strongly driven by its community. It's not like there's one foundation of people sitting in an ivory tower saying, "Here's what we think we should fund," who then just go off and do it. And I think that creates a lot more possibilities for serendipity around what kinds of ideas end up getting funded.

[00:23:38] Ben: Yeah.
And it also feels to me, and I'd be interested if you agree with this, like it makes for situations where you can actually pool capital more easily for larger projects. When there's no broader agenda, the funding gets very dispersed; whereas if there's a way for multiple funders to say, "Okay, this is an important thing," it becomes much easier to pool capital for bigger ideas.

[00:24:19] Nadia: Yeah, I think that's right. Within the world of philanthropy, things trend more naturally towards zero-sum games and competition over funding, because there's just less funding available, and because there is always this reputation or status aspect intertwined with it, where you want to be the funder that made something happen in the world. But I agree that the boundaries feel more porous when it's not just two distinct foundations or two distinct funders competing with each other, but multiple funders, bigger fish, smaller fish, or whatever, amplifying the agenda of a separate community that isn't even formally affiliated with any of them.

[00:25:08] Ben: Yeah. And do you have a sense of what the necessary preconditions are for that level of community to come about? Like EA: I think it's maybe under-appreciated how it has, you know, a hundred years of thinking behind it, different utilitarian and consequentialist philosophers really working out how to prioritize things.
So I guess, for creating new, powerful, useful idea machines, what are the bricks that need to be laid as groundwork for them?

[00:26:01] Nadia: Yeah. You've seen it come out in different ways. For EA, as you said, the community already existed before any major funders came in. First you have its historical roots in utilitarianism, which go way back, but even effective altruism itself started in Oxford and was an academic discipline at its outset. So there was already a seed of something there before the major funders arrived. But there are other types of idea machines where that community has to be actively nurtured, and I don't think there's anything wrong with that. I think people tend to underestimate how many communities had a lot of elbow grease put in to get them going. You need to create some initial momentum to build a scene. It's not always that a handful of people got together and decided to make a thing. I think that's the historical story that gets glorified: we like thinking about a bunch of artists and creatives hanging out at the same cafe, and then this scene starts to organically form. That's definitely a thing. But in many cases there are funders behind the scenes helping make these things happen. There are convenings that get organized; there are individual academics or creatives or writers being funded in order to help bring these sorts of ideas to the forefront of people's minds.
So yeah, as with starting anything, there's a lot of work that can go on behind the scenes to help these communities even start to exist. But then they start to have these compounding returns for funders, where it's like: okay, now, instead of hiring a couple of program officers for my foundation, I'm starting this community of people that becomes a beacon attracting other people I might not have even heard of, who are flocking to this cause. It's a talent well in itself.

[00:28:08] Ben: Yeah. To change tracks a little bit: with these new waves of potential philanthropists, in both the tech world and the crypto world, do you have any sense of risky philanthropic experiments that you would want to see people do? Any kind of wishlist?

[00:28:32] Nadia: I don't know if that's the role I'm trying to play, necessarily. But personally, the way I think about it is: what are the different components of the public sector, and which areas are being more or less covered right now? We see funders getting more involved in politics and policy. We see funders who are replicating, or trying to field-build in, academia. But media still feels strangely overlooked, or just this big enigma, to me at least, when I think about how funders influence different aspects of the public sector. And I don't think it's even necessarily a lack of interest, because I see a lot of
that tech mindset, and I guess I'm specifically thinking about tech right now: going back to tech wanting to break apart institutions, or tech being this antsy teenager railing against the institution. You see a lot of that, and there's a lot of tension between the tech industry and the media right now, so you see that chomping at the bit. But then it's not clear to me what they're doing to replace it. And some of that is maybe a more existential question about what the future of media should be. Is it this focus on individual media creators instead of going to the mainstream newspaper or the mainstream TV network? You're going to Joe Rogan, say, which is relevant today because I just saw that Mark Zuckerberg did an interview on Joe Rogan. So is that what the future looks like? Is that the vision of what tech wants media to look like? It's not totally clear to me what the answer is yet, and I also feel like I'm seeing a lack of interest in, and funding towards, that. So that's one area. And it's unsurprising to me, I guess, that tech is going to be interested in science or politics, and maybe tech is just not great at thinking about cultural artifacts. But in terms of my personal wishlist, or areas where I think there are deficiencies on the public sector checklist, that's one of them.

[00:30:49] Ben: Yeah, and I think the important thing is to flag these things, right? It's hard to know what the counterfactuals are, but media as a public good does seem kind of underrated as an idea. I don't know.
I think Sesame Street is really important, and that was publicly funded, right?

[00:31:17] Nadia: Mm-hmm. And even education is sort of a weird one. There's talk about homeschooling; there's talk about how universities aren't really adequate today. You have, you know, one effort to build a new university. But I'm still waiting for the really big, ambitious efforts in terms of tech people trying to rebuild either primary and secondary education or higher education. I just don't know.

[00:31:53] Ben: Yeah, that's a great example. It does not feel like there have been a lot of ambitious experiments there, in terms of anything along the lines of building all the public schools in the South.

[00:32:06] Nadia: Right, at that level. And this actually, and I think you and I may not agree on this topic, but I do genuinely wonder: at the same time that you have these cycles of wealth that come in and shape public society in different ways on a broader scale, you also have the hundred-year institutional cycle, where institutions are built, then mature, then start to stagnate and die down. What have we learned from the last hundred years of institution building? Maybe we learned that institutions are not as great as they seem, or that they inevitably decline, and maybe people are interested in ways to avoid that. In other words, do we need to build another CNN in the realm of media?
Or do we need to build another Harvard? Or is the takeaway maybe that institutions themselves are falling out of favor, and the philanthropically funded experiments might not look like the next Harvard but like some more broken-down version of that?

[00:33:05] Ben: Ooh.

[00:33:06] Nadia: I don't know. Yeah, I don't know.

[00:33:10] Ben: Sorry, go ahead.

[00:33:11] Nadia: Oh, I was just going to say: this is where I feel like history only has limited things to teach us. The copy-paste answer would be: there used to be better institutions, so let's just build new institutions. But I think this is actually where crypto is thinking more critically than tech. Crypto says: why are we just going to repeat the same mistakes over and over again? Let's do something completely different. And I think that is maybe part of the source of their disinterest in what legacy institutions are doing. They're saying: we're not even trying to replicate that, we want to rethink the concept entirely. In tech, I feel like there's still a bit of LARPing, without the critical question of: what did we take away from that? What did we do in the past that maybe wasn't so good?

[00:34:04] Ben: Yeah. Well, my response is: I definitely agree that institutions are not functioning as well as they have. The question is what conclusion to draw from that, and maybe the conclusion I draw is that we need different, newer institutions. I feel like there are different levels of implicitness or explicitness to an institution, but broadly, an institution is some way of coordinating people that lasts through time, right?
So even what people are doing in crypto, I would argue, is building institutions. They're just organized wildly differently than ones we've seen before.

[00:35:00] Nadia: Yeah. And again, the history is so short in crypto that it's hard to say what exactly anyone is trying to do until maybe we can understand it in retrospect. But I think there's probably some learning from open source, where I spent a lot of my brain space in the past. It was just an entirely different type of coordination model from centralized, cozy firms.

[00:35:34] Ben: Yeah.

[00:35:34] Nadia: There's some learning there, and crypto is modeling itself much more after open source projects than after Coase's theory of the firm. So I think there are probably some learnings there. In the world of open source, a lot of these projects don't last very long. You don't iterate upon existing projects; a lot of the time you just build a new project and then eventually try to get people to switch over to it. So you get these much shorter lifespans. I don't know what that looks like in terms of institutional design for the public sector or for social institutions, but I wonder about it. And I do see some experiments within the non-crypto tech world as well. I was just thinking about the Institute for Progress, a policy think tank in DC. I think one of the things they're doing well is trying to iterate upon the existing think tank model.
One of the things they acknowledge better than the stodgy older think tanks: if you go to one of those, your brand is the think tank. You are an employee of that place and you are representing their brand. Whereas my sense, at least with the Institute for Progress, is that their stance is more like: you are someone who is already an expert in your domain, you already have your own audience, you're someone who's already widely known, and we're kind of the infrastructure that supports you. I don't want to speak on their behalf; that's just the way I've been understanding it. So even outside of crypto, I think people are still contending with that whole atomization of the firm, et cetera: how do you balance individual reputation versus firm reputation? And maybe that's where my question about media plays out: are you trying to build another media institution, or is it just about supporting lots of individual influencers? But I wonder: are we sitting here waiting for new institutions to be built when actually there are no more? Maybe institutions, period, are dying, and that's the future. Then again, they do provide this history and memory that is useful. So I don't know.

[00:37:51] Ben: Yeah. It sounds to me, from what you're saying, like there's a much more subtle way to look at it, where there are a number of different sliders or spectra. Like how internalized versus externalized the institution is: think of your 1950s company, where people subsume themselves to it. That's one end of the spectrum.
And then on the other end of the spectrum there's, I don't know, YouTube, where YouTubers are technically all YouTubers, but beyond that they have no coordination or real connection. That's one axis. And then new institutions could come in, and maybe we're moving towards an era of history where there just is more externalization, but then explicitly acknowledging that, and figuring out how to do a lot of good and keep that institutional memory, in a world where everybody's a brand.

[00:39:09] Nadia: Yeah.

[00:39:10] Ben: So it's not necessarily that institutions are dead; it's that institutions live differently, are just structurally different.

[00:39:23] Nadia: Yeah. I've wondered: if we just embrace the fact that maybe we are moving towards having much shorter memories, what does a short-term-memory institution look like? I try to observe what is happening rather than insisting it should be different. So if that just is what it is, how do we design for it? And I think that actually gets to part of what crypto is trying to do differently: saying, okay, this is where we go trustless, where the rules are encoded into a protocol, where you don't need to remember anything because the network is remembering for you.

[00:40:03] Ben: Yeah. I'm just thinking, I haven't actually watched it, but do you know the movie Memento?

[00:40:09] Nadia: Yes.

[00:40:10] Ben: A guy who has, yeah, exactly, short-term memory loss, and tattoos notes all over his body.
So what is the institutional version of that? I guess, yeah, exactly, that's where the note-taking goes.

[00:40:25] Nadia: Your...

[00:40:27] Ben: Yeah, exactly. So, down another separate track, something that I've noticed: how do you think about what is and is not a public good? I ask because, in my experience talking to many people in tech, there's this attitude that public goods almost don't exist, that everything can be done by a for-profit company, and that if you can't capture the value of what you're doing, it might not be valuable.

[00:41:06] Nadia: Yeah, that's a frustrating one. Public goods have a very literal and simple economic definition: a good that is non-rivalrous and non-excludable. Non-excludable means you can't prevent anyone from accessing it, and non-rivalrous means that if someone uses the public good, it doesn't diminish anyone else's ability to use it. That stands in contrast to private goods and other types of goods. So there's that definition to start with, but of course real life is much more complex than that, and I notice a lot of assumptions get rolled up in it. Take open source code, for example. In the book that I wrote, I tried to break apart the idea that open source code is a public good, full stop. That framing carries a bunch of implications: if open source is freely accessible, if it's not excludable, then we should not prevent anyone from contributing to it, and that leads to all these management problems.
So I try to break that apart and say: the consumption of open source code, the actual code itself, can be a public good that is freely accessible, but the production of open source, who actually contributes to an open source community, can be more like a membership-style community where you do exclude people. That's just one example of how public goods are not as black and white as they seem. Another assumption I see is that public goods have to be funded by government. Government has, especially since the mid-1900s, been the primary provider of public goods, but there are also public goods that are privately funded. Roads, for example, can be funded through public-private partnerships or privately. The fact that something is a public good says nothing about how it has to be funded. And then, as you're saying, within tech: because the defining vehicle of change for the tech industry is startups, it's understandable why everything gets filtered through that lens of "why is it not a startup?" But as we both know, that minimizes tech's history. The reason we even got to the commercial era of startups is the years and years of academic and government funded research that led up to it. And the same goes for the open source work I was doing: all these companies developing software products, every single one of these private companies, are using open source code. They're relying on this public digital infrastructure to build their software.
So it's not quite as clean-cut. By some estimates, for any private software company, something like 70% of their code (it varies so much between companies, but certainly a majority of the code that is quote-unquote written) is actually shared public code. So it's not quite as simple as saying public goods have no place in tech. I think they still have a very, very strong place. [00:44:16] Ben: Yeah, and it's also worth thinking about the publicness of different things, right? There are profitable private schools. And yet [00:44:35] I think most people would agree that even if all schools were for-profit and private, it would probably still be a good thing to have government getting money into those schools. Even people who don't like public schooling still think it's worthwhile for the government to give money toward schools, right? [00:45:12] Nadia: Mm-hmm. [00:45:13] Ben: Is that [00:45:14] Nadia: Yeah. And this is a distinction, for the example of education: the concept of education might be a public good, but then education might get funded in different ways, including privately. [00:45:27] Ben: Yeah, exactly. [00:45:35] The concept of education as a public good.
Yeah, that's a good way of putting it. And I think there are fuzzier places where it's less clear to what extent something is a public good. Infrastructure may be one, where you could imagine a system where everybody who uses, say, a sewer line buys into it, versus having it be publicly funded. And I think research might be another one. [00:46:11] Nadia: I mean, even education, if you go far back enough, right? Not everyone went to public schools before. Not everyone got an education. It was something for privileged people to get; it was not just part of the public sector. So our notions of what the public sector even is, or what's in and out of it, have definitely evolved over the years. [00:46:32] Ben: Yeah, that's a really good point. So [00:46:35] that again is where it's complicated: it's not just some attribute of the world. It's some kind of social consensus [00:46:45] Nadia: Right. [00:46:46] Ben: around public goods. And something I also wanted to talk about: I know you've been thinking a lot about the relationship between philanthropy and status. Do you have a sense of why (and it's different for everybody) people do philanthropy now, when you don't have a religious mandate to do it? [00:47:21] Nadia: I actually think this question is more complicated than it seems, because there are so many different types of philanthropists. The old adage: if you've met one philanthropist, you've met one philanthropist.
And so there are [00:47:35] a lot of different motivations, and also some spectrum here that I still kind of lack the vocabulary for. A lot of philanthropy, if you just look at the numbers, is done at the local level, or within a philanthropist's local sphere. When you think about philanthropy, you think about the biggest billionaires in the world, Bill Gates or Warren Buffett or whoever. But we forget that there are a lot of people who are wealthy who just aren't part of the quote-unquote global elite, right? One example I think about is the Koch family. We all know the Koch brothers, but they were not the original philanthropists in their family. Their father was, and their family foundation originally just focused on doing local philanthropy in their local area. It was only with the next generation that they expanded into this more global focus. So when we ask what the motivations of a philanthropist are, it really [00:48:35] depends on who you're talking about. But I do think one aspect that gets really under-discussed or underappreciated is the kind of cohort nature of at least the philanthropy that operates on a more global scale. And I don't mean literally global in the sense of international, I just mean, I don't know what the right term is for this, but nonlocal, right? [00:48:59] Ben: Yeah. [00:49:00] Nadia: And yeah, I don't know. That feels unsatisfying too.
I don't really know what the term is, but there is a distinction there. I have one open question: what makes a philanthropist convert from the more local focus to some expanded quote-unquote global focus? And I think when people talk about the motivations of philanthropists, they tend to focus on the individual motivations of that person. So the classic answer to why people give philanthropically is always something about altruism and wanting to give back, or it's the edgy self-interested model of people being motivated by status and wanting to look good. Those answers are just not fully satisfying to me. I think there's this aspect of maybe a more power-relational theory that is under-discussed or underappreciated: if you think about wealth generations, rather than just individuals who are wealthy, you can see these cohorts of people who all became wealthy in similar sorts of ways. So you have Wall Street wealth, you have tech wealth, you have crypto wealth. These are very large buckets, but you can group people together based on the fact that they got wealthy because they had some unique insight that the previous paradigm did not have. And I think [00:50:35] there are these cycles that wealth moves in, where first you're the outcast working out of your garage (to use the startup example). No one really cares about you. You're very counterculture.
Then you become more popular, but you're still counterculture for people who are in the know. You're showing traction, you're showing promise, whatever. And then there's some explosion into the mainstream, this frenzied period where everyone wants to do startups, or join a startup, or start a startup. And then there's the crash, right? This mirrors Carlota Perez's Technological Revolutions and Financial Capital, where she talks about how technological innovations influence financial markets, and about these cycles that we move in. And then after the crash there's a backlash, a reckoning where the public says, how could we have been misled by these crazy new people, or whatever. But that moment is actually the moment in which the new paradigm starts to cement its power and starts to become the dominant force in the field. It needs to start [00:51:35] switching over and thinking about its public legacy. One of the learnings we can take from looking at startup wealth now is how interesting it is that in the last couple of years, suddenly a lot of people in tech are starting to think about culture building and institution building and their public legacies. That wasn't true 10 years ago. What actually changed? I think a lot of it really was influenced by the tech backlash that was experienced around 2016. And so you look at these initiatives now: there are multiple examples of philanthropic initiatives happening, and I don't find it satisfying to just say, oh, it's because these individuals want a second act in their career, or because they're motivated by status.
I think those are certainly components of it, but it doesn't really answer the question of why so many people are doing it together right now. Not literally coordinated together, but happening independently in a lot of different places. And so I feel like we need some kind of cohort analysis or cohort explanation to say: I actually think this is a defense mechanism, because you have this [00:52:35] clash between a rising new paradigm and the incumbents, and the new paradigm needs to find ways to wield its influence in the public sector, or else it's just going to be regulated out of existence, or it's going to be facing this hostile media landscape. They need to learn how to actually put their fingers into that and grapple with that role. It's this coming of age for a counterculture: tech is used to being in this safe enclave in Silicon Valley and is now being forced to reckon with the outside world. So that is one answer for me to why philanthropists do these things. We can talk about individual motivations for any one person, but in my particular area of interest, trying to understand why tech wealth is doing this, or what crypto wealth will be doing in the future, I find that kind of explanation helpful. [00:53:25] Ben: Yeah. I feel like that has a very Peter Turchin vibe, in the good way, in the sense of identifying [00:53:35] patterns. I don't think that history is predictive, but I do think there are patterns that repeat, and I've never heard anybody point out that pattern, but it feels really truthy to me.
I think the really cool thing to do, as you dig into this, would be to set up some kind of bet with yourself on: what are the conditions under which crypto people will start heavily going into philanthropy? [00:54:09] Nadia: Yes, totally. I think about this now. To me, crypto wealth is the specter in the future, but they're not actually in the same boat that tech wealth is in right now. They're not yet really motivated to deal with this stuff, because that moment, if I had to make a bet on it, is going to be the moment when crypto really faces a public [00:54:35] backlash. Right now I think they're still in the "we're counterculture, but we're cool" moment. They had a little bit of this frenzy and the crash, but yeah, I think it's still [00:54:44] Ben: for tech, right? Or 2000. [00:54:46] Nadia: Yeah, exactly. And despite that, same as in 2001, when people were like, ah, Pets.com, it was all a scam, this was all bullshit. Oh, sorry, I don't know if I can say that. [00:54:57] Ben: You can say that. [00:54:57] Nadia: But then startups had a whole other renaissance after that; it was far from being over. And people still by and large love crypto. There are the loud, negative people criticizing it, the same way people criticized startups in 2001. But by and large, a lot of people are still engaging with it and are interested in it. So I don't feel like it's hit that public backlash moment yet the way that startups did in 2016.
So I feel like once it gets to that point, and then the kind of reckoning after that, is the point where crypto wealth will be motivated to act philanthropically in this larger cohort [00:55:35] kind of way. [00:55:36] Ben: Yeah. And I don't think the time scales will be the same, but the time scale for that in tech, if we map it onto the 2000 crash, is something like 15 years. So that'd be 2037, when we need to check back in and see, okay, is this right? [00:55:56] Nadia: It's going to be faster, so I'm going to cut that in half or something. I feel like the cycles are getting shorter and moving faster. [00:56:01] Ben: That definitely feels true. Looking to the future is a good place for us to wrap up. I really appreciate this.
Institutional Experiments with Seemay Chou [Idea Machines #47] (1:13:50)
Seemay Chou talks about the process of building a new research organization, ticks, hiring and managing entrepreneurial scientists, non-model organisms, institutional experiments, and a lot more! Seemay is the co-founder and CEO of Arcadia Science, a research and development company focusing on under-researched areas in biology, specifically new organisms that haven't been traditionally studied in the lab. She's also the co-founder of Trove Biolabs, a startup focused on harnessing molecules in tick saliva for skin therapies, and was previously an assistant professor at UCSF. She has thought deeply not just about scientific problems themselves, but about the meta-questions of how we can build better processes and institutions for discovery and invention. I hope you enjoy my conversation with Seemay Chou. Links Seemay on Twitter (@seemaychou) Arcadia's Research Trove Biolabs Seemay's essay about building Arcadia Transcript [00:02:02] Ben: So since a lot of our conversation is going to be about it, how do you describe Arcadia to a smart, well-read person who has never actually heard of it before? [00:02:12] Seemay: Okay, I actually don't have a singular answer to this. Smart and educated in what realm? [00:02:19] Ben: Oh, good question. Let's assume they have taken some undergraduate science classes, but perhaps are not deeply enmeshed in academia. So, [00:02:31] Seemay: Enmeshed in the meta-science community? [00:02:35] Ben: No, no. They're aware that it's a thing, but [00:02:40] Seemay: Yeah. Okay. So for that person, I would say we're a research and development company that is interested in thinking about how we explore under-researched areas in biology, new organisms that haven't been traditionally studied in the lab. And we're thinking from first principles about all the different ways we can structure the organization around this to also yield outcomes around innovation and commercialization.
[00:03:07] Ben: Nice. And how would you describe it to someone who is enmeshed in the meta-science community? [00:03:13] Seemay: In the meta-science community, I would say Arcadia is a meta-science experiment on how we enable more science in the realm of discovery, exploration, and innovation. That's where I would start, and then there's so much more we could click into on that. [00:03:31] Ben: And we will absolutely do that. But before we get there, I'm actually really [00:03:35] interested in Arcadia's backstory, because when we met, I feel like you were already well down the path of spinning it up. There's always a good story there. What made you want to go do this crazy thing? [00:03:47] Seemay: So the backstory of Arcadia is actually Trove. Trove was my first startup, which I spun out together with my co-founder, Kira Poskanzer. It started from a point of frustration around a set of scientific questions that I found challenging to answer in my own lab in academia. We were very interested in my lab in thinking about all the different molecules in tick saliva that manipulate the skin barrier when a tick is feeding, but the ideal form of a team for this was a very collaborative, highly skilled team, a strike team for biochemical fractionation, mass spec, developing itch assays to get this done. It was [00:04:35] not a PhD-style project of one person open-endedly exploring a question. So I was struggling to figure out how to get funding for this, but that wasn't even the right question, because even with the right money, it's still very challenging to set up the right team for this in academia. And so it was during this frustration that I started exploring with Kira what the right way to solve this problem even is, because it's not going to be through writing more grants.
There's a much bigger problem here, right? And so we started actually talking to people outside of academia: here's what we're trying to achieve, and actually the outcome we're really excited about is whether it could yield information that could be acted on for an actually commercializable product. There are skin diseases galore that this could potentially be helpful for. So I think that transition was really important, because it went from a passive idea to: oh wait, how do we act as agents to figure out how to set this up correctly? [00:05:35] We started talking to angel investors, VCs, people in industry. And that's how we learned that itch is a huge area, an unmet need, and that we had tools at our disposal to potentially explore it. So that's how Trove started. And that, I think, was the beginning of the end, or the start of the beginning, however you want to think about it. Because the process of starting Trove was so fun, and it was not at all in conflict with the way I was thinking about my science. The science that was happening on the team was extremely rigorous, and I experienced a different structure. That was the light bulb in my head: not all science should be structured the same way. It really depends on what you're trying to achieve. And then I went down this rabbit hole of trying to study the history of what you might call meta-science: what are the different structures and iterations of this that have happened over the history of even the United States? It hasn't always been the same, right? And then I think, [00:06:35] as a scientist, once you grapple with the fact that the way things are now is not how they always have been, suddenly you have an experiment in front of you. And so that is how Arcadia was born, because I realized.
Couched within this Trove experiment were so many things that I've been frustrated about. I didn't feel like I'd been maximized as the type of scientist that I am, and I really want to think in my career now not about how I fit into the current infrastructure, but about what other infrastructures are available to us, right? [00:07:08] Ben: Nice. [00:07:09] Seemay: Yeah. So that was the beginning. [00:07:11] Ben: And so, I'm just going to extrapolate one more step: you looked at the type of work that you really wanted to do and determined that the structure of Arcadia that you've built is perhaps the right way to go about enabling that. [00:07:30] Seemay: Okay, so a couple things. I don't even know yet if Arcadia is the right way to do it, so I [00:07:35] feel like it's important for me to start this conversation there: I actually don't know. It's a hypothesis. And I would also say that that is a beautiful summary, but it was still a little clunkier than the way you described it. There was this gap of, okay, what is the optimal place for me to do my science? How do we experiment with this? And I was still acting in a pretty passive way. I was around people in the Bay Area thinking about new orgs, and I had heard about this from, like, ju and Patrick Collison and others, people very interested in funding and experimenting with new structures. So I thought: oh, if I could find someone else to create an organization that I could maybe help advise and be a part of. And so I started writing up this proposal that I was trying to pitch to other people: would you be interested in leading something like this?
[00:08:35] And the more that went on (I had lots and lots of conversations with other scientists in academia, trying to find who would lead this), it took probably about six months for me to realize: oh, in the process of doing this, I'm actually leading this, and I'm trying to find someone to hand the keys over to when I seem to be the most invested so far. So I wrote up this whole proposal trying to find someone to lead it, and it came down to: oh, I've already done this legwork, maybe I should consider myself leading it. And I've definitely asked myself a bunch of times, was that some weird internalized sexism on my part? Because I was looking for some other dude or something to actually be in charge here. So that's actually how it started. And a couple of people started suggesting to me: if you feel so strongly about this, why aren't you doing this? And I know [00:09:35] it's always an important question for a founder to ask themselves. [00:09:38] Ben: Yeah, that's really clutch. I appreciate you going into the not-straight paths of it, because when we put these things into stories, we always like to make them nice and linear: then this happened, and this happened, and here we are. But in reality, there's always that ambiguity. Can I actually ask two questions based on that story? One is: you mentioned that in academia, even if you had the money, you wouldn't be able to put together the strike team that you thought was necessary. Can you unpack that a little bit? [00:10:22] Seemay: Yeah.
I mean, I think there are a lot of reasons. One of the important ones, which is absolutely not a criticism of academia (in fact, it's maybe my support of the [00:10:35] mission of academia), is around training and education: part of our job as PIs, in the research projects we set up, is to provide an opportunity for a scientist to learn how to ask questions, how to answer them, how to go through the whole scientific process. And that requires a level of openness and willingness to let the person take the reins, which I think is very difficult if you're trying to hit very concrete, aggressive milestones with a team of people, right? Another challenge is the way we set up incentive structures around publishing: we don't set up the way we publish articles in journals to be as collaborative as you would want in this scenario. At the end of the day, there's a first author and there's a last author, and that is just a reality we all struggle with, despite everyone's best intentions. So that inherently sets up [00:11:35] another situation where you're trying to figure out how to weave this collaborative effort into that reality, and even in the best-case scenario, it doesn't always feel great. It just makes it harder to do the thing. And then finally, for the way we fund projects in academia: this wasn't a very hypothesis-driven project. It's very hard to lay out specific aims for it beyond just the things we're going to be trying, the process we can lay out. [00:12:08] Ben: Yeah, it's a [00:12:09] Seemay: I can't tell you what the outcomes are going to be. So I did write grants on that, and that was repeatedly the feedback.
And then finally, there's this other thing, which is that we didn't want to accidentally land on an opportunity for innovation. We explicitly wanted to find molecules that could be engineered for products. That was [00:12:35] our hypothesis: if there are any, then by borrowing the innovation from ticks, which have evolved to feed for days, sometimes over a week, we are skipping steps to figure out the right natural product for manipulating processes in the skin that have been so challenging to solve. We didn't want it to be an accident. We wanted to be explicitly translational, quote-unquote. So that again poses another challenge within an academic lab, where you have a different responsibility, right? [00:13:05] Ben: Yeah. And there's that tension between setting out to do that and setting out to do something that is publishable, right? [00:13:14] Seemay: Mm-hmm. Yeah. And I think one of the hard things I'm always trying to think about is: out of the things I just listed, what is appropriately different about academia, and what is maybe worth a second look? [00:13:31] Ben: Mm. [00:13:32] Seemay: Some of those things might actually be holding us back even [00:13:35] within academia. So the first thing I would say is non-negotiable: there's a training responsibility. That has to be true, but it's not necessarily mutually exclusive with also having the opportunity for this other kind of team. For example, we don't really have great ways in academia to properly support staff scientists at a high level; there's very limited opportunity for that. And I'm not arguing with people about the millions of reasons why that might be. That's just a fact, so that's not my problem to solve.
I just see that as a challenge. And of course publishing, right? I think [00:14:13] Ben: Yeah, [00:14:14] Seemay: in a best-case scenario, science should be in the driver's seat, and publishing should be supporting those activities. I think we do see (and I know there's a spectrum of opinions on this) more and more cases now where publishing seems to be in the [00:14:35] driver's seat, [00:14:36] Ben: Yeah, [00:14:36] Seemay: dictating how the science goes on many levels. And I can only speak for myself: I felt that to be increasingly true as I advanced in my career. [00:14:47] Ben: Yeah. And just to make it really explicit: publishing is driving because that's how you make your tenure case, that's how you build any sort of credibility. Everybody's going to be judging you based on what you're publishing, as opposed to anything else. [00:15:08] Seemay: Right. And actually, the reason it felt increasingly heavy as I advanced in my career was not even those reasons, to be honest. It was because of my trainees. [00:15:19] Ben: Hmm. [00:15:20] Seemay: If I want to be out doing my crazy thing, I have a huge responsibility now to my students, and that is something I'm not willing to take a risk on. So now my hands are tied in this other way. Their [00:15:35] careers are important to me, and if they want to go into academia, I have to safeguard that. [00:15:40] Ben: Yeah. It suggests a distinction, regardless of academia or not, between training labs and maybe focused labs. And you could say, yes, you want trainees to be exposed to focused research. But at least thinking about those differences seems really important. [00:16:11] Seemay: Yes. Yeah.
And in fact, because I don't like to spend too much time criticizing people in academia: we even grapple with this internally at Arcadia. [00:16:25] Ben: Yeah. [00:16:25] Seemay: There is a fundamentally different phase of a project where we're talking about creating new ideas, [00:16:35] exploring, de-risking, and then some transition happens where it becomes a strike-team effort: how do you expand on this? How do you make sure it's executed well? And there are probably many more buckets than just the two I said, but it's worthy of a little more thought around the way we set up approvals and budgets and management, because they're two fundamentally different things, you know? [00:17:01] Ben: Yeah, that's actually something I wanted to ask about more explicitly, and this is a great segue: where do ideas come from at Arcadia? There's some spectrum from everybody working on their own thing to you dictating everything, and everything in between. Can you go more into how that flow works? [00:17:29] Seemay: So I might reframe the question a little, [00:17:35] to not "where do ideas come from?" but "how do ideas evolve?" Because it's [00:17:39] Ben: Please. Yeah. That's a much better reframing. [00:17:41] Seemay: Because it's rarely the case, regardless of who the idea is coming from at Arcadia, that it ends where it starts. And I think that fluidity is the magic sauce. By and large, the ideas tend to come from the scientists themselves. Occasionally, of course, I will have a thought, or Prachee will have a thought, but I see our roles as much more being there to shepherd ideas in the most strategic and productive direction.
And so I spend a lot of time thinking about what kind of resources an idea would take, and Prachee definitely thinks about that piece as well, along with what the impact would actually be if it worked, in terms of both our innovation and the knowledge base outside of Arcadia. Practically speaking, something we've started doing has been really helpful (we've already gone through different iterations of this too). We [00:18:35] started out with, oh, let's put out a Google survey people can fill out where they pitch a project to us. And that fell really flat, because there's no conversation to be had there, and they're basically writing a proposal. More streamlined, but not a qualitatively different process. So then we started doing these things called sandboxes, which I'm actually really enjoying right now. Every Friday we have an hour-long session. The entire company goes, and someone's up at the dry-erase board; we call it throwing them in the sandbox. They present some idea, or set of ideas, or even something they're really struggling with, for everybody to basically converse with them about. And this has been a much more productive way for us to source ideas, and also for me to think collaboratively with them about the right level of resources, and the right inflection points for when we decide go or no-go on things. So that's how we're currently doing it. We're [00:19:35] just shy of about 30 people. This process will probably break again once we hit 50 people or something, because it's logistically a lot of people to cram into a room, and there's a level of formality that starts to happen when there are that many people in the room.
So we'll see how it goes, but that's how it's currently working today.
[00:20:00] Ben: That's really cool. And so then let's keep following the evolutionary path, right? So an idea gets sandboxed, and you collectively come to some conclusion that, okay, this idea is well worth pursuing. Then what happens?
[00:20:16] Seemay: So then, and actually we're very much still under construction right now around this, we're trying to figure out how we think about budget and stuff for this type of step. But then presumably, okay, the person starts working on it. I can tell you where we're trying to go; I'm not sure we're there yet. Where we're trying to go is turning our [00:20:35] publications into a way to actually integrate into this process. Like, ideally, I would love it as CEO if I can be updated on what people in the org are doing through our pub site.
[00:20:49] Ben: Oh.
[00:20:50] Seemay: And I'm not saying they publish every single thing they do every day. Of course, that's crazy talk. But that it's somewhat in line with what's happening in real time, that it is an appropriate place for me to catch up on what they're doing and think about high-level decisions and get feedback, and see the feedback from the community as well, because that matters, right? Like, if our goal is to either generate products in the form of actual products in the world that we commercialize, versus knowledge products that are useful to others and can stimulate either more thought or be used by others directly, I need to actually see that data in the form of the outside world interacting with their releases. Right? [00:21:35] So that's what we're trying to move towards, but there are a lot of challenges associated with that.
Like, if a scientist needs to publish very frequently, how do we make sure we have the right resources in place to help them with that? There may be some aspects of that that anyone can help with, like formatting or website issues, or even schematic illustrations, to try to reduce the amount of friction around this process as much as possible.
[00:22:00] Ben: And I guess my concern with publishing everything openly very early, and this is almost where I disagree with some people, is that there's what I believe Safi Bahcall called the warty baby problem, where ideas, when you're first poking at them, are just really ugly, and you can barely justify them to [00:22:35] anybody on your team who trusts you, let alone people who don't have any insight into the process. And so do you worry at all about people almost just being completely demoralized? Right? It's just so much easier to point out why something won't work early on than why it will.
[00:22:56] Seemay: Yeah, totally. Yeah.
[00:22:59] Ben: How do you
[00:22:59] Seemay: Well, I mean, yeah, no, I think that's a hard challenge. And I would say at a meta level, I get a lot of that too, like people pointing out all the ways Arcadia
[00:23:09] Ben: Yeah, I'm
[00:23:10] Seemay: is failing or potentially going to fail. So, a couple things. I mean, I think one is that, of course, I'm not asking our scientists to have a random thought in the shower and put that out into the world, right? There's of course some balance. Like, go through some amount of thinking and feedback with their most local peers on it.
More than anything, just to make sure that by the time it goes out into the world, [00:23:35] you're capturing precious bandwidth strategically. Right?
[00:23:41] Ben: Yeah.
[00:23:41] Seemay: On the other hand, though, while we don't want that totally raw thing, we are so far on the other end of the spectrum right now in terms of forgiveness of some warts. And it also ignores the fact that it's the process, right? Like, ugly baby? Great. The uglier the better. Put it out there, because you want that feedback. You're trying to get to some ground truth here. And rigor happens through lots of feedback throughout the entire process, especially at the beginning. And it's not even that that rigor doesn't happen in our current system. It's just that it doesn't make it out into the public space. People do share their thoughts with others. They do it at the dry erase board. They share proposals with each other. There's a lot of this happening. It's just not visible. And the other thing, just culturally, what I've been trying to emphasize at [00:24:35] Arcadia is process, not outcomes. We talk about it directly, and we also did an exercise in the beginning of thinking about what is the correct level of, quote unquote, failure rate, and what's productive failure. And just, if we are actually doing high-risk, interesting science that's worth doing, fundamentally there's gotta be some inherent level of failure built in that we expect. Otherwise, we are answering questions we already know the answer to, and then what's the fucking point? Right?
[00:25:05] Ben: Yeah.
[00:25:06] Seemay: So it almost doesn't matter what the answer to that question is. Some people said like 20%, some people said 80%. There's a very wide range in people's heads.
Cuz this isn't a precise question, right? So there aren't gonna be precise answers. But the point is the acceptance of that fact. Right?
[00:25:24] Ben: Yeah. And also, I think, I'm not sure if you would agree with this, but I feel like even failure is a very fuzzy concept in this context, [00:25:35] right?
[00:25:35] Seemay: Totally. I actually really hate that word. We are trying to rebrand it internally to pivots.
[00:25:42] Ben: Yeah. Yeah. I like that. I also hate, in this context, the idea of risk, right? Like, risk makes sense when you're getting cash-on-cash returns, but
[00:25:54] Seemay: Right.
[00:25:54] Ben: when
[00:25:55] Seemay: Yeah. Yeah. I mean, you can redefine that word in this case to say it's extremely risky for you to go down this safe path, because you will very likely be uncovering boring things. That's a risk, right?
[00:26:13] Ben: Yeah. And then, just in terms of process, I wanna go one step further into the strike teams around an idea. Is it something where people just volunteer? How do you actually form those teams?
[00:26:30] Seemay: Yeah. So far there has not been sort of top-down forcing of people into things. I [00:26:35] mean, we are a small org at this point, but personally, my philosophy is that people do their best work when they feel agency and sort of their own deep, inner inspiration to do it. And so I try to make things more ground-up because of that. Not just because of some fuzzy feeling, but because I actually think you'll get the best work from people if you set it up that way. Having said that, you know, there are starting to be situations where we see an opportunity for a strike team project where we need to hire someone to come in,
[00:27:11] Ben: Mm-hmm.
[00:27:12] Seemay: because no one existing has that skill set. So that's a level of flexibility that not everybody has in other organizations, right? That you have an idea, and now you can hire more people onto it. So I mean, that's obviously a huge privilege we have, to be able to do that, where now we can just transparently be like, here's the thing, who wants to do it? You know?
[00:27:32] Ben: Yeah, yeah. [00:27:35] That's very cool.
[00:27:36] Seemay: One more thing. Can I just say one more thing about that?
[00:27:39] Ben: Of course. You can say as many things as you want.
[00:27:40] Seemay: Yeah. Actually, the fact that that's possible, I feel, really liberates people at Arcadia to think more creatively, because something very different happens when I ask people in the room: what other directions do you think you could go in, versus what other directions do you think this project could go in that we could hire someone from the outside to come do? Because now they're like, oh, it doesn't have to be me. Maybe it's because they don't have the skill set, or maybe they're attached to something else that they're working on. So making sure that in their mind it's not framed as an either-or but as an if-and, and that they can stay in their lane with what they most wanna do if we decide to move forward on that, you know? Cause I think that's often something that, in academia, we don't get to think about.
[00:28:30] Ben: Yeah, absolutely. And then the people that you would hire onto a [00:28:35] project, say the project then ends, it reaches some endpoint. Do they then sort of go back into the pool of people who are sandboxing? How does that work?
[00:28:49] Seemay: So we haven't had that challenge on a large scale yet.
I would say, from a human perspective, I would really like to avoid a situation like standard biotech companies, you know, where if an area gets closed out, there's a bunch of layoffs. It would be nice to figure out how we can sort of reshuffle everybody. One of the ways this has happened, though it's not a problem yet, is that we have these positions called arcade scientists, which are kind of meant for this, to allow people to move around. So there are actually a couple of scientists at Arcadia that are, quote unquote, arcade. It's meant to be a playful term for someone who's a generalist in some area, like a biochemistry [00:29:35] generalist, a computational generalist, something like that, where their job is literally to just work on the first few months of any project.
[00:29:44] Ben: Oh.
[00:29:45] Seemay: And help kind of de-risk. Like, they're really tolerant of that process. They like it. They like trying to get something brand new off the ground. And then once it becomes more mature, with clear milestones, then we can hire someone else, and they move on to the next thing. I think this is a skill in itself that doesn't really get highlighted in other places. And I think it's a skill set that actually resonates with me very much personally, because if I were applying to Arcadia, that is the position that I would want.
[00:30:14] Ben: I think I'm in the same boat. Yeah, and that's critical. There aren't a lot of organizations where you sort of get to come in for a stage of a project. In research, it's generally like, you're on this project.
[00:30:29] Seemay: And how often do you hear people complain about that in science? Like, oh, so-and-so, they're [00:30:35] really great at starting things but not finishing things. It's like, well, how do we capitalize on that then?
[00:30:39] Ben: Yeah. Make it a feature and not a bug.
Yeah, no, it's sort of like having different positions on a sports team, for example. And I was thinking the other day that analogies between research organizations and sports teams are sort of underrated, right? Like, you don't expect the goalie to be going and scoring. Right? You don't say, oh, you're an underperforming goalie, you didn't score any goals.
[00:31:08] Seemay: Right. That's so funny. I literally just had a call with Sam Aman before this, where we were talking about this a little bit, in a slightly different context, about a role that I feel is important in our organization: someone to help connect the dots across the different projects. What we were sort of conceptualizing in my call with him was the cross-pollinators, like the bees in the organization that [00:31:35] get in the mix, know what everyone's doing, and help everybody connect the dots. And I feel like this is some sort of a supportive role that's better understood on sports teams. Like, there's always someone that's the glue, right? Maybe they're not the MVP, but they're the other guy, or, you know, girl, whatever, un-gendered, but very important. Everybody understands that, and it's celebrated, you know?
[00:31:58] Ben: Yeah. Yeah. And the trick is really seeing it more like a team, right? So that's the overarching thing.
[00:32:07] Seemay: And then, I don't know, just to highlight again, though, these realities that you and I are talking about, which I think are actually very well accepted across scientists. We all understand these different roles. Those don't come out in the very hierarchical authorship byline of publications, which is the main currency of the system.
And so, yeah, that's been fascinating to sort of relearn, because when we started this publishing experiment, [00:32:35] I was primarily thinking about the main benefit being our ability to do different formats in a very open way. But now I see that there's this whole other thing that's probably had the most immediate impact on Arcadia Science, which is the removal of the authorship byline.
[00:32:52] Ben: Mm. So you don't say who wrote the thing at all?
[00:32:57] Seemay: We do. It's at the bottom of the article, first of all. And then it's listed in a more descriptive way of who did what. It's not this line that's hierarchical, whether implicitly or explicitly. And from my conversations with the scientists at Arcadia, that has been a really wonderful release for them in terms of thinking about how they contribute to projects and interact with each other, because it doesn't matter anymore. That currency is off the table.
[00:33:27] Ben: Yeah. That's very cool. And can I change tracks a little bit and ask you about model organisms?
[00:33:34] Seemay: Sure.
[00:33:34] Ben: [00:33:35] So, and this is coming really from my naivete, but what are model organisms? And why is having more of them important?
[00:33:47] Seemay: So, this is super important for me to clarify: there's model organisms and there's non-model organisms, but there are actually two different ways of thinking about non-model organisms. Okay. So let me start with model organisms. A model organism is some organism that provides an extremely useful proxy for studying typically either human biology or some conserved element of biology. So, you know, the fact that we have very similar genetic makeup to mice or flies.
There are some shortcuts you can take in these systems that allow you to quickly ask experimental questions that would not be easy to ask in a human being. Right? We obviously can't do those kinds of experiments there. [00:34:35] And the same is true for, like, Arabidopsis, which can be a model for plants or for biology more generally. And so these are really, really useful tools, especially if you think about historically how challenging it's been to set up new organisms. Think about the fifties, before we could sequence genomes as quickly or something, you know. You really had to band together to build some tools in a few systems that give you useful shortcuts in general, as proxies for biology.
[00:35:11] Ben: Can I just double-click right there? What does it mean to set an organism up?
[00:35:18] Seemay: Yeah. I mean, there's basically anything from culturing, right? You have to learn how to cultivate the organism, grow it, proliferate it. You gotta learn how to do basic processing of it, whether it's dissections or [00:35:35] isolating cell types or something. Usually some form of genetics is very useful, so you can perturb the system in some controlled way and then ask precise questions. So those are kind of the range of things that are typically challenging to set up in different organisms. You can think of them like video game characters: they have different strengths, right? Different bars. Some are
[00:35:56] Ben: Yeah.
[00:35:59] Seemay: fantastic for some other reason, you know, whether it's cultivation or maybe something related to their biology. And so that's model organisms, and I am very much pro model organisms. Our interest in non-model organisms is in no way in conflict with my desire to see model organisms flourish, right? That fulfills an important purpose.
And we need more, I would say, non-model organisms. Now, this is where it gets a little murky with the semantics. There are at least two ways you could think about it. One is that these are organisms that haven't quite risen to the level of the [00:36:35] canonical model organisms in terms of tooling and the sort of community effort around them. And so they're on their way to becoming model organisms; they're just kind of hipster model organisms. Maybe you could think about it like that. There's a totally different way to think about it, which is actually how Arcadia's thinking about it, which is to not use them as a proxy for shared biology at all, but to focus on the biology that is unique about that organism, that signals some unique biological innovation that happened for that organism or clade of organisms or something. So, for example, ticks releasing a bunch of crap in their saliva into your skin. That's not a proxy for us feeding on other vertebrates. That is an innovation that happened because ticks have this enormous job they've had to evolve to learn to do well, which is to manipulate everything about your [00:37:35] circulation, your skin barrier, to make sure its one blood meal at each of its life stages happens successfully, and can happen for days to over a week. It's extremely prolonged. It can't be detected. So that is a very cool facet of tick biology that we could now leverage to learn something different that could be useful for human biology. But it's not a proxy, right?
[00:37:58] Ben: Yeah. And so I was gonna ask you why ticks are cool, but I think that's sort of self-explanatory.
[00:38:05] Seemay: Oh, they're wild. Like, they have this one job to do, which is to drink your blood and not get found out.
[00:38:15] Ben: And I guess, so with ticks, I'm trying to frame this: is there something useful in comparing ticks and mosquitoes? Do they work by the same mechanisms? Are they completely different?
[00:38:30] Seemay: Yeah. There's definitely something interesting here to explore, because blood [00:38:35] feeding as a behavior in some ways is a very risky behavior, right? Any sort of parasitism like that. And actually, blood
[00:38:42] Ben: That's trying to drink my blood.
[00:38:44] Seemay: Yes. That's the appropriate response. Blood feeding actually emerged multiple times over the course of evolution in different lineages, and mosquitoes, leeches, and ticks are in very different clades of organisms, and they have different strategies for solving the same problem that they've evolved independently. So there's some convergence there, but there's a lot of divergence there as well. If you think about mosquitoes, leeches, and ticks, this is a great spectrum, because what's critically different about them is the duration of the blood
[00:39:18] Ben: Mm.
[00:39:19] Seemay: feed. Mosquitoes feed for a few seconds, if they're lucky maybe in the range of minutes. Leeches are minutes to hours. Ticks are days to over a week. Okay? So temporally they have to deal with very different things. Mosquitoes tend to focus on [00:39:35] immediately numbing the local area and getting the blood out, right, undetected. Leeches, they're there for a little bit longer, so they have very cool molecules around blood flow, like vasodilation, speeding up the amount of blood that they can take in during that period. And then ticks have to deal with not just the immediate response but also the longer-term response: inflammation, wound healing, all these other sensations that happen.
Imagine if you stuck a needle in yourself for a week. A lot more is going on, right?
[00:40:08] Ben: Yeah. Okay. That makes a lot of sense. And so they really are sort of unique in that temporal sense, which is actually important.
[00:40:17] Seemay: Yeah. And whether it's causal or not, it does seem to track that the duration of the blood meal at least correlates with the molecular complexity in terms of saliva composition from each of these different sets of organisms. So there are way more proteins and other molecules that [00:40:35] have been detected in tick saliva as opposed to mosquito saliva.
[00:40:39] Ben: And so one of your high-level things is figuring out which of those are important, what mixture of them is important, and how to replicate that for useful purposes?
[00:40:51] Seemay: Yeah. Right, exactly. Yeah.
[00:40:54] Ben: And are there other, I mean, I guess we can imagine farther into Arcadia's future and think about, do you have almost a wishlist or roadmap of what other really weird organisms you want to start poking at?
[00:41:13] Seemay: So actually, that is originally how we were thinking about this problem for non-model organisms: which organisms, which opportunities. And that itself has evolved in the last year, in part because of our just total paralysis around this decision, because [00:41:35] what we didn't wanna do is say, okay, now Arcadia's basically decided to double down on these other five organisms; we've increased the canon by five now. Great. Okay. But actually, that's not what we're trying to do, right? We're trying to highlight a totally different way you could think about capitalizing on interesting biology, and our impact will be felt more strongly if it happens not just in Arcadia but beyond Arcadia, for this to become a more common way.
And I think, like, Symbio is really pushing for this as a field in general. So we've gone from sort of which organisms to thinking that maybe one of our most important contributions is to ask the question: how do you decide which organism? What is even the right set of experiments to help you understand that? What is the right set of data that you might wanna collect that would help you decide? Let's say, for example, cuz this is an actual example: we're very interested in protists, diatoms, algae, other things. Which [00:42:35] species should you settle on? I don't know. There are so many, right? So then we started collecting as many as we could get our hands on, through publicly available databases or culture collections. And now we are asking the meta question of, okay, we have these, what experiments should we be doing in a high-throughput way across all of these to help us decide? And that process, that engine, is something that I think could be really useful for us to share with the world, and that is hard for an individual academic lab to think about. It is not aligned with the realities of grants and journal publications and stuff. And so, yeah, is it RNA-seq datasets? What kind of phenotypic assays might you want to collect? We now broadly call this our organismal onboarding process. What do you need in the profile of the different organisms? Is it phenomics? Now there are structural [00:43:35] prediction pipelines that we could be running across these different genomes. Depending on your question, it may also be a different set of things. But wouldn't it be nice to sort of turn the serendipity around, like, you know, what was around you, versus: can we go in and actually systematically ask this question and get a little closer to something that is useful? You know?
[00:43:59] Ben: Yeah.
[00:43:59] Seemay: And I think the amazing thing about this is,
you know, I don't wanna ignore the fact that there's been tons of work on this front from the field of integrative biology and evolutionary biologists. There's so much cool stuff that they have found. What I wanna do is couple their thinking and their efforts with the latest and greatest technologies, to amplify it and broaden the reach of the way they ask those questions. And the thing that's awesome about biology is, even if you didn't do any of this and you grabbed a random butterfly, you would still find extremely cool stuff. So that's the
[00:44:35] Ben: Right. Yeah.
[00:44:36] Seemay: like, where can we go from here now that we have all these different technologies at our disposal?
[00:44:41] Ben: Yeah. No, that's extremely cool. And I wanted to ask a few questions about Arcadia's business model. It's a public fact that, unlike a lot of research organizations, Arcadia is a for-profit organization. Now, of course, you and I know that that's a legal designation, and I almost think of there being some multidimensional space where, on the one hand, you have something like the Chan Zuckerberg Initiative, which is nominally a for-profit, right, in the sense of
[00:45:12] Seemay: Yeah.
[00:45:13] Ben: it's not a non-profit organization. And on the other end of the spectrum, you have maybe something like a hedge fund, where the only purpose of the organization in the world is to turn money into more money, right? And so I guess I'd love to know how you think about where in that domain you sit
[00:45:34] Seemay: [00:45:35] Yeah. Yeah. So, okay. This
[00:45:38] Ben: and how you sort of came to that.
[00:45:41] Seemay: Yeah.
This was not a straightforward decision, because actually I originally conceived of Arcadia as a non-profit entity. And I think there were a lot of assumptions and also some ignorance on my part going into that. So, okay, lemme try and think about the succinct way to tell all this.
[00:45:58] Ben: Take your time.
[00:46:00] Seemay: Okay. I started talking to a lot of other people at organizations, like new-science types of organizations, and I'll refrain from naming names here out of respect for people. But they ran into a lot of issues around being a nonprofit. You know, for one, it impacted just the operational aspects of maintaining a nonprofit, which, if you haven't done it before, and I learned by reading about all this: maintaining that status is in and [00:46:35] of itself an effort. It requires legal counsel. It requires boards. It requires oversight. It requires reporting. There's a whole level of operations.
[00:46:45] Ben: Yeah. And you always sort of have the government looking over your shoulder, being
[00:46:49] Seemay: Yep. And you have to go into it prepared for that. So it also introduces some friction around how quickly you can iterate as an organization on different things. The other thing is that, let's say we started as a nonprofit and we realized, oh, there's a bunch of for-profit-type activities we wanna be doing. The transition of converting a nonprofit to a for-profit is actually much harder than the other way around.
[00:47:16] Ben: Mm.
[00:47:17] Seemay: And so that sort of reversibility was also important to me, given that I didn't know exactly what Arcadia would ultimately look like, and I still dunno.
[00:47:27] Ben: Yeah. So it's just more optionality.
[00:47:29] Seemay: Yeah. And another point is that I do have explicit for-profit interests for [00:47:35] Arcadia.
This is not like, oh, maybe. No, we really want to commercialize some of our products one day. And it's not because we're trying to optimize revenue; it's because it's very central to our financial experiment, where we're trying to think about new structures where basic scientists and basic science can capture their own value in society a little bit more efficiently. And so, if we believe the hypothesis that discovering new biology across a wide range of organisms could yield actionable lessons that could then be translated into real products, then we have to make a play for figuring out how to make all this work. And I also see an opportunity to figure out how I can make it work such that, if we do have revenue, I make sure our basic scientists get to participate in that. You know, because that is a huge frustration for me as a basic scientist, that we haven't solved this problem. [00:48:35] Basic science is the bedrock for all downstream science, yet somehow we have to be siloed away from it. We don't get to play a part in it. And also, the scientists at Arcadia, I would say, are not traditional academic scientists. My estimate would be that at least a third of them have an intentional, explicit interest in being part of a company one day that they helped found or spin out. And so that's great. We have a lot of very entrepreneurial scientists at Arcadia. So I'm not shying away from the fact that we are interested in a for-profit mission. Having said all of that, I think it's important to remember that mission and values don't stem from tax structure, right? There are nonprofit organizations that have rotten values, and there are also for-profit organizations that have rotten values. That is not the [00:49:35] dividing line for this.
And so I think it puts the onus on us at Arcadia to continuously be rigorous with ourselves, accountable to ourselves, to define our values and mission. But I don't think that they are necessarily reliant on the tax structure, especially in a for-profit organization where there are only two people on the cap table, and their original motivating reason for doing this was to conduct a metascience experiment. So we have a unique alignment with our funders on this that I think also makes us different from other for-profit orgs. We're not a C corp; we're an LLC. And actually, we're going through the process right now of exploring B Corp status, which means that you have a fundamental mix of mission and for-profit.
[00:50:21] Ben: Yeah. That was actually something that I was going to ask about, just in terms of, I think, what's sort of implicit. One of the reasons that people wonder about [00:50:35] the mixture of research and for-profit status is that the time scales of research are just long, right? Research takes a long time and is expensive. And if you're answering to investors who are really primarily looking for a return on their investment, I feel like that, at least in my experience and my thinking about this, that's my worry about it. So having a small number of really aligned investors seems pretty critical to being able to stick to your values.
[00:51:18] Seemay: Yeah, no, it's true. I mean, there were actually other people interested in funding Arcadia, and every once in a while I still get reached out to, but me, Jud, and Sam, and Che, like we went through the wringer together. We went on this journey together to get here, to [00:51:35] decide on this.
And I think there is a built-in understanding that there's a chance this will fail, financially and otherwise. But the important case to consider, which we discussed, is: what would happen if we are a scientific success but a financial failure? What would each of you be interested in doing? And that's such an important question, right? So for both of them, the answer was: we would consider the option of endowing this into a nonprofit, but only if the science is interesting. And I'm not saying that we're going to target that end goal, I'm going to fight with all my might to figure out another way, but that is a super informative answer, right? Because it's [00:52:27] Ben: yeah, [00:52:27] Seemay: delineating what the priorities are. The priority is the science; the revenue is [00:52:35] subservient to that. And if it doesn't work, fine, we will still iterate on that top priority. [00:52:42] Ben: Yeah, that would be cool. It would also be cool, I mean, everybody thinks about growing forever, but I think it would be incredibly cool if you all just managed to make enough revenue that you can keep the cycle going. [00:52:58] Seemay: Yeah. It also opens us up to a whole new pocket of investments that is difficult in more standard, LP-funded situations. So, you know, given that our goal is sustainability, things that are two to five X ROI are totally on the table. [00:53:22] Ben: Yeah. Yeah, yeah. [00:53:24] Seemay: That actually opens up a huge competitive edge for us in an area of tools or products that are not really that interesting to [00:53:35] LPs that are looking to achieve something else. [00:53:38] Ben: Yeah, compared with a normal startup. And I think that's really important.
I think that is a big deal, because there are so many things that I see. And it's the two to five X on the amount of money that you could capture, right? But the amount of value that you create could be much, much larger than that. And this is the whole problem. The thing that I always run into is, you look at the ability of people to capture the value of research, and it's just very hard to capture the whole thing. And often when people try to do that, it ends up constraining it. So if you're okay with getting a reasonable return, it just lets you do so many other cool things. [00:54:27] Seemay: Yeah. I think that's the vibe. [00:54:32] Ben: That is an excellent vibe. And speaking [00:54:35] of the vibe, and you mentioned this, I'm interested in both how you find, and then convince, people to join Arcadia. Because you are, to some extent, asking people to play a completely different game, right? You're asking people who have been in this citations-and-papers game to say, okay, you're going to stop playing that and play this other thing. [00:55:04] Seemay: Yeah. It's funny, I get asked this all the time: how do you protect the careers, or whatever, of people that come to Arcadia? And the solution is actually pretty simple, even though people don't think of it, which is: you don't. You don't try and convince people to come. We are not trying to grow into an infinitely large organization. I don't even know if we'll ever reach that number, 150. I was just talking to Sam about how we may break before that point. That's just sort of my cap. We may find that [00:55:35] 50 people is the perfect number, or 75.
And you know, we're actually just trying to figure out: what are the right ingredients for the thing we're trying to do? And so we don't need everybody to join. We need the right people to join, and we can't absorb the risk of people who ultimately see a career path that is not well supported by Arcadia. If we absorb that, it will pull us back to the mean, because we don't want anyone at Arcadia to be miserable. We want scientists to succeed. So actually the easiest way to do that is to not try and convince people to do something they're not comfortable with, and to find the people for whom it feels like a natural fit. I think I saw on Twitter someone ask this question in your thread, about an important question you ask during interviews. One of the most important questions I ask someone is: where else have you applied for jobs? [00:56:35] And if they literally haven't applied anywhere outside of academia, that's an opportunity for me to push. [00:56:43] Ben: Mm. [00:56:44] Seemay: I'm very worried about that. I don't want them to be, quote unquote, making a sacrifice that doesn't resonate with where they're trying to go in their career, because I can't help them after they come. Arcadia has to evolve like its own organism, and sometimes that means things that are not great for people who want to be in academia, including the publishing and journal bit. And so what I tell them is: look, you have two jobs at Arcadia, and both have to be equally exciting to you. And you have to fully understand that they're both your responsibility. Your job is to be a scientist and a metascientist, and those two things have to be equal.
Do you understand what that second thing is? Your job is to evolve with me, provide me with feedback on what is working and not working [00:57:35] for you, and actively participate in all the metascience experiments that we're doing around publishing, translation, technology, all these things, right? It can't be passive. It has to be active. If that sounds exciting to you, this is a great place for you. If you're trying to figure out how you're going to have your cake and eat it too, and still have a CV that's competitive for academia in case, in a year, you go back, this is not the place for you. And I can't, as a human being, absorb that, because I can't help but have some empathy for you once you're here as an individual. I don't want you to suffer. And so we need to have those hard conversations early, before they join. And there have been a few times where I think I sufficiently scared someone away, and I think it was better for them. It's better [00:58:25] Ben: Yeah, totally. [00:58:25] Seemay: if that happens. Yeah, it's harder once they're here. [00:58:29] Ben: And so they tend to be people who sort of already [00:58:35] have one foot out the door of academia, in the sense that they're already exploring that possibility. So you don't have to get them to that point. [00:58:48] Seemay: Right. Yes. Because that's a whole journey they need to go on on their own, because there are so many reasons why someone might be excited to leave academia and go to another organization like this. I mean, there's push and pull, right?
So I think that's a challenge: separating out what is just push, because they're upset with how things are going there, versus, do they actually understand what joining us will entail? And do they have the optimism and the agency to help me do this experiment? It does require optimism, right? [00:59:25] Ben: Absolutely. [00:59:25] Seemay: So sometimes I push people: where else have you applied for jobs? And if they can't seem to answer that very well, I say, okay, let me change [00:59:35] this question. You come to Arcadia and I die. Arcadia dissolves. It's an easier way of framing it, because I can own it: okay, I died, and me and Che and Jed die. Okay, now what are you going to do with your career? It's a silly question, but it's kind of a serious question. How does this fit into the context of how you think about your career, and is it actually going to move you towards where you're trying to go? Because, I mean, that's another problem we're trying to solve: scientists need to feel more agency, and they won't feel agency by just jumping to another thing that they think is going to solve problems for them. [01:00:15] Ben: Yeah, that's a really good point. And so, this is almost a selfish question, but where do you find these people? You seem to be very good at it. [01:00:26] Seemay: Yeah, I actually don't know the answer to that question fully, because we [01:00:35] only just recently said, oh my God, we need to start collecting some data through voluntary surveys from applicants about how they heard about us. It seems to be a lot of word of mouth, social media; maybe they read something that I wrote, or that Che wrote, or something.
And while that's been fine so far, we also want to think about how we broaden that reach further. For the most part, it's definitely not through their institutes or PIs, as far as I know. [01:01:03] Ben: Yeah. But it sounds like it does tend to be inbound, right? It tends to be people reaching out to you, as opposed to the other way around. [01:01:16] Seemay: Yeah. And that's not for lack of effort. There have definitely been times where we have proactively gone out and tried to scout people, but it does run into that problem that I just described before: [01:01:29] Ben: Yeah. [01:01:30] Seemay: if you find them yourself and you're trying to pull them in, have they gone through their own [01:01:35] journey yet? And so in some of those cases, while we entertained conversations for a while with a couple of candidates we tried to scout, ultimately that's where it ended: they needed to go off on their own and fully explore for a bit, because this would be a bit risky. It hasn't all been a failure like that, but it happens a lot. [01:01:58] Ben: Yeah, no, frankly that squares with my experience roughly trying to find people who fit a similar mold. And that suggests a strategy, right? Be good at setting up some kind of lighthouse, which you seem to have done. [01:02:17] Seemay: The only challenge with this, I would say, and we are still grappling with this, is that that sort of approach does make it hard to reach candidates that are historically underrepresented, because they may not see themselves as strong candidates for such and such.
And [01:02:35] so now we have this other challenge to solve: how do we make sure people have gone through their own process on their own, but also make sure that the opportunity is getting communicated to the right people, and that everybody understands that they're a candidate? [01:02:53] Ben: Yeah. And I guess, as long as we're recording this podcast: what does that process even look like? If you were talking to someone who asked, what would I start doing, what would you tell them? [01:03:08] Seemay: Oh, to explore a role at Arcadia? [01:03:11] Ben: Yeah. Or just to start going through that process. [01:03:16] Seemay: Yeah. I mean, I guess there are probably a couple of different things. One is just some deep introspection on: what are your priorities in your life? What are you trying to achieve in your career, beyond just the sort of ladder thing? What are the most important north stars for you? And I think, [01:03:35] for a place like Arcadia or any of the other metascience experiments, that has to be part of it somehow, right? Being really interested and passionate about being part of finding a solution, and being one of the risk takers for them. I think the other thing is very pragmatic: just literally go out there and explore other jobs, please. What is your market value? [01:04:05] Ben: Don't, don't... [01:04:05] Seemay: Yeah. Go get that information for yourself. And then you will also feel a sense of security, because even if I die and Arcadia dissolves, you will realize through that process that you have a lot of other opportunities, and your skillset is highly valuable.
And so there is solid ground underneath you regardless of what happens here. They need to absorb that. And then also, trust me, your negotiations with me will go way better if you come in [01:04:35] armed with information. One of my goals with compensation, for example, is to be really accurate about making sure we're hitting the right market value for you, and being equitable across the organization at Arcadia. So the more information you can present me with about real market data, the better and easier that conversation will be. [01:04:55] Ben: No, that's really good. I think it's important for people to think about that more. And I guess, to start to bring things more to a close, Elon ger pointed out a really good question on Twitter. And I'm sure you don't have a really clear answer, so let's reflect on it together: how do we create, encourage, train more of you? Deeply technical researchers who take the initiative to step out of their comfort zones and build or join new research institutions. Do you have any sense of [01:05:35] that? What would you say? [01:05:36] Seemay: There's so much in there. By younger me, I mean, I've always been sort of like this. I thought I saw Ethan reply to it too, about how that's the founder mentality, basically. And something he said in there, I was like, oh, that's totally true: I'm a definite addict of chaos and disruption, you know? So maybe there are certain elements of this that are just naturally more comfortable to some than others. But I do think there's an important step we need to start taking in the general scientific ecosystem, which is to just stop gaslighting each other. Right? Because that's step number one.
When you realize that your challenges are real and potentially generalizable and worthy of solving, and not just something you need to absorb because of something wrong with you: that seems like the critical first step that has to happen intellectually before anything can change, and before [01:06:35] people feel some agency to be agents of that change. Because that is what happened for me. When I started realizing, oh, holy shit, structures have changed before in history; what we're doing right now isn't this immutable thing. And then I started having conversations with other scientists. That was key. I probably had like a hundred Zooms or something to convince myself that, oh, this is not just me complaining, me struggling with this; these are generalizable systemic problems. We should stop gaslighting ourselves. And then: what's the solution? [01:07:12] Ben: Yeah. [01:07:12] Seemay: That is agency, and then optimism around that. Right? I don't know if Arcadia's going to work. The most important thing is that we try. [01:07:21] Ben: Yeah. [01:07:22] Seemay: And we need to get that across to scientists in the next generation. [01:07:27] Ben: And do you have, I mean, it's a very valid answer to just say it's an innate trait in you that [01:07:35] comes from wherever, but do you have any sense of what instilled both that agency and optimism in you? How do we encourage more agency and optimism in people? I mean, I have no idea, but that seems really cruxy, right? [01:07:53] Seemay: Yeah. I don't know. I mean, I think one thing we cannot ignore is that there's a huge amount of privilege here. [01:08:02] Ben: Oh. [01:08:02] Seemay: I have access to resources.
Both throughout my life, as well as in my relationship with Jed, that allows me to have a broader solution space to consider. So that's very important to remember. On a more personal note, I've thought about this a little bit. Just the fact that I grew up in a very, very religious family, and went through a process of leaving that religion and that culture, was probably my first formative experience of [01:08:35] questioning a system and then deciding to [01:08:38] Ben: Taking... [01:08:39] Seemay: you know, step away from it, or explore around it, or something. And, I don't know, I guess if you're willing to leave God, you're very open to leaving other things. That's probably something we need to instill in people at an even younger age: more thoughtful questioning about our systems, and also providing them immediately with tools to think productively about that, not just wallowing in it. And that is where the privilege does come in. And I do want to think more about how we democratize this a little bit more through resource distribution. [01:09:23] Ben: Yeah. A thought that actually just came to me, and I'd be interested in your response to it: it's not just encouraging people to be agentic, but then [01:09:35] rewarding that agency. I feel like there's right now almost not a strong correlation between people acting agentically and being supported. And so I'm imagining some system where you have people just sort of watching and being like, oh, that person's being really agentic; instead of making them apply for a grant or whatever, it's just, oh, you're being agentic, good job, keep doing that. [01:10:07] Seemay: Yeah, I know.
So I struggle with that, and I've thought about that before. The reason I struggle with it is basically that anytime you start putting metrics to something, or rewarding a behavior, you may accidentally corrupt the ability to source genuine behaviors in that regard. Right? And as someone who's more entrepreneurial in their thinking, or more of a disruptor, the [01:10:35] greatest reward you can give me is to not sit there and obsess over this concept. If I didn't build Arcadia, I would be nightly insomniac, anxious, looping around this, you know? And that is what drives me to do it, not some other external reward. So I don't know what the right balance is around that, I guess. [01:10:56] Ben: Yeah, totally. [01:10:59] Seemay: Yeah. [01:10:59] Ben: Well, this is awesome. I really appreciate you going into this and being really straightforward about the tensions and the thought process. And I guess something that I like to ask people is: what is something that you think people should be thinking about more that they're not? [01:11:23] Seemay: I think, in the metascience space, they should be thinking more about how to make more of the building process visible. Actually, this relates to a question that happened on your [01:11:35] thread that I was like, oh my God, I want to answer that, but not for the reasons that person probably thinks. They were basically like, why do you think you'll succeed when Calico didn't? And I would love to answer that question with some level of precision, but I can't, because I have no idea what Calico is doing. So if someone can help [01:11:56] Ben: We can't even benchmark against it. [01:11:58] Seemay: me to compare, I would love to avoid the pitfalls.
I mean, I think there are some obvious differences between us, but the larger point is that these experiments have to happen openly. And I'm actually in the process of trying to figure this out with the Institute for Progress: how do we make sure that, for all the different things that are happening right now, the information is available to others, so that when we win or lose... I don't even think about it in that way at Arcadia, right? It's not about winning or losing, it's about [01:12:25] Ben: Yeah. [01:12:27] Seemay: learning. And we can't learn together if we can't talk about it. So I would love to answer that question somehow, but I can't. [01:12:34] Ben: I [01:12:35] am so on board with that. Let's figure it out. [01:12:39] Seemay: Awesome. [01:12:41] Ben: All right. Well, Seemay, thank you so much for being on the podcast. I'm deeply... [01:12:46] Seemay: Yeah, you're welcome. Hi, it's Ben again. As an experiment, I'm going to try giving you a few pointers towards places that you might want to go if you found this podcast compelling. I know I rarely look at show notes when I'm listening to a podcast, so I'm going to verbally highlight some things you might want to look into. Seemay is active on Twitter; she's @seemaychou, just her name. We even sourced some excellent questions for this podcast there, so it's a good place to ask questions and engage. Seemay wrote a piece on Medium about why she was building Arcadia that expands on some things we talked about; I'll link to that in the show notes. If you want to see some of the open publications that Seemay talked about during the podcast, you can go to research.arcadiascience.com. And if you [01:13:35] liked this episode of the podcast in particular, you might want to listen to the ones that I've done with Arthur and Ilan ger.
And if you liked this experiment, or any of the other ones that I've done, or have ideas for things to try in the podcast format, or have any other feedback, just let me know.
Idea Machines


DARPA and Advanced Manufacturing with William Bonvillian [Idea Machines #46] 48:26
William Bonvillian does a deep dive about his decades of research on how DARPA works and his more recent work on advanced manufacturing. William is a Lecturer at MIT and the Senior Director of Special Projects at MIT's Office of Digital Learning. Before joining MIT he spent almost two decades as a senior policy advisor for the US Senate. He's also published many papers and a detailed book exploring the DARPA model. Links William's Website The DARPA Model for Transformative Technologies Transcript [00:00:35] In this podcast, William Bonvillian and I do a deep dive about his decades of research on how DARPA works and his more recent work on advanced manufacturing. William is a Lecturer at MIT and a Senior Director of Special Projects at MIT's Office of Digital Learning. Before joining MIT, he spent almost two decades as a senior policy advisor for the US Senate. He's published many papers and a detailed book exploring the DARPA model. I've wanted [00:01:35] to compare notes with him for years, and it was a pleasure and an honor to finally catch up with him. Here's my conversation with William. [00:01:42] Ben: The place that I'd love to start off is: how did you get interested in DARPA and the DARPA model in the first place? You've been writing about it for more than a decade now, and you're probably one of the foremost people who've explored it. So how'd you get there in the first place? [00:01:58] William: You know, I worked for the US Senate as an advisor for about 15 years before coming to MIT. I worked for a US Senator who was on the Armed Services Committee, and so I began doing a substantial amount of that staffing, given my interest in science, technology, and R&D. I got early contact with DARPA, with some of DARPA's program managers and DARPA directors, and kind of got to know the agency that way, spent some time with them over in their [00:02:35] offices.
You know, I really kind of got to know the program and began to realize what a dynamic force it was. And we're talking 20-plus years ago, when frankly DARPA was a lot less known than it is now. So yeah, just kind of suddenly finding this jewel box buried in there, it was a real discovery for me, and I became very, very interested in the kind of model they had, which was so different than the other federal R&D agencies. [00:03:05] Ben: Yeah. And actually, in your mind, for people who I think tend to see different federal agencies that give money to researchers as all being in the same bucket, what would you describe the difference between DARPA and the NSF as being? [00:03:24] William: Well, I mean, there's a big difference. So the NSF model is to support basic research. And they have, you know, the equivalent of project [00:03:35] managers there, and they don't do the selecting of the research projects. Instead they queue up applicants for funds, and then they supervise a peer review process of experts, largely from academia, who evaluate a host of proposals in a given R&D area, and make evaluations as to which ones qualify, which are the best, most competitive applicants for NSF's basic research. So DARPA's got a different project going on. It doesn't work from the bottom up. It has strong program managers who are in effect empowered to go out and create new things. So they're not just responding to grant applications for basic research. They come into DARPA and develop a [00:04:35] vision of a new breakthrough technology area they want to stand up. And there's no peer review here. It's really: you hire talented program managers, and you unleash them, you turn them loose, you empower them to go out and find the best work that's going on in the country.
And that can be from universities, and often is in the breakthrough technology area they've identified. But it also could be from companies, often smaller companies, and typically they'll construct kind of a hybrid model where they've got academics and companies working on a project. The companies are always oriented to getting the technology out the door, right, because they have to survive, but the researchers are often in touch with some of the more breakthrough capabilities behind the research. So bringing those two together is something that the program manager at DARPA does. So while at [00:05:35] NSF the program manager equivalent's big job is getting grants out the door and supervising a complex selection process by committee, for the DARPA program manager, selecting the award winners is just the beginning of the job. Then in effect you move into their home, right? You work with them on an ongoing basis. DARPA program managers are spending at least one third of their time on the road, linking up with their grantees, the folks they've contracted with, sort of helping them along in the process. And then, where they typically fund a group of research awards in an area, they'll also work on putting together kind of a thinking community amongst those award winners and contract winners, so that they begin to share their best ideas. And that's not easy, right? Yeah. If you're an academic [00:06:35] or a company, trading ideas is a complicated process, but that's one of the tasks that the DARPA program manager has: to really build these thinking communities around problems. And that's what they're driven to do. So it's a very, very different situation.
This is a different world here that DARPA has created. [00:07:01] Ben: And actually, to click on how DARPA program managers interact with ideas: do you have a sense of how they incentivize that idea sharing? Is it just the concept that if you share these ideas, they might get funded in a way that they wouldn't otherwise? How do they construct that trust, so that people could actually be sharing those ideas? [00:07:28] William: Yeah. In some ways it starts out at an early stage. So before a new [00:07:35] program manager arrives at DARPA, and often, I mean, this could be ARPA-E, it could be IARPA, which work in slightly different ways but take a similar kind of approach. ARPA-E is the energy DARPA, IARPA is the intelligence DARPA, right? And then soon we'll have a health DARPA, which has now been funded. Yeah. [00:07:55] Ben: I wanna get your opinion on that later. [00:07:57] William: Okay. Well, we're working away on this model here. You know, you hire a program manager, and you hire somebody who's going to be talented and dynamic and kind of entrepreneurial in standing up a new program. They get to DARPA and they begin to work on this new technology area. And a requirement of DARPA is that it really be a breakthrough. They don't want to fund incremental work that somebody else may be doing. They want to find new territory. That's their job: revolutionary breakthroughs. To get there, they'll often convene workshops, one, two, three workshops, with some of the best thinkers around the country, including [00:08:35] people who may be applying for the funding. They'll look for the best people, bring them together, and get, you know, a day-long process going, often in several different locations, to kind of think through the technology advance opportunity: how it might shape up, what might contribute, how you might organize it.
What research might go into it, what research areas; and that kind of begins the thinking process of building a community around a problem. And then they'll make grant awards. And then, similarly, they're going to be frequently convening this group. And everybody could sit on their hands and keep their mouths shut, but you know, that's not often the way technologists work. They'll get into a problem and start wanting to share ideas and brainstorm. And that's typically what then takes place, and part of the job of the program manager at DARPA is to really encourage that kind of dialogue, get a lot of ideas on the table, and really promote it. Yeah. [00:09:34] Ben: [00:09:35] And then also, with those ideas, do you have a sense, having looked at this so much, of how much there's this tension: people generally do the best research when they feel a lot of ownership over their own ideas and feel like they're really working on the thing that they want to work on. But at the same time, for a project to play into a broader program, you often need to adjust ideas towards a bigger system or a bigger goal. Do you have an idea of how much program managers shape what people are working on, versus just enabling people to work on things that they would want to work on otherwise? [00:10:24] William: Yeah. The program manager works in communication with DARPA's office directors and director, right? So it's a very flat organization. You know, there'll [00:10:35] be an office director and a number of program managers working with that office director, for example in the field of biological technologies, a fairly new DARPA office set up about a decade ago. Yeah.
You know, there'll be a group of DARPA program managers with expertise in that field, and they'll often have a combination of experiences — some company experience as well as some academic research experience, so they've kind of walked on both sides. They'll come into DARPA often with some ideas about things they want to pursue, and then they'll start the whittling-down process to get to what they really want to do. That's a very critical stage. They'll do it often in dialogue with fellow program managers at DARPA, who will contribute ideas, and often with their office director, who kind of oversees the portfolio and can feed that DARPA program manager into other areas of expertise around DARPA. So you come up with a big breakthrough idea, then [00:11:35] you test it out in these workshops, as I mentioned, as well as in dialogue with your colleagues at DARPA. And if it looks like it's going to work, then you can move it rapidly through the approval process. But DARPA is, you know, what its name says — it's an advanced research projects agency. So it's not just doing research; it very much wants to do projects. And it's an agency — a defense agency — so the projects are going to have to be related to the defense sector, although there's often spillover into huge areas of the civilian economy, like in the IT world, where it really pioneered a lot. But essentially, the big idea to pursue is developed and refined by the program manager. And then they'll put out what's often called a broad agency announcement, a BAA: we want a technology that will do this. [00:12:35] Give us your best ideas. They put this broad agency announcement out and get people to start applying.
And if the area is somewhat iffy, they can proceed with smaller awards — kind of seedlings they'll plant — to see how it tests out, rather than going into a full, larger award process. So there's a variety of mechanisms that DARPA uses, but getting that big breakthrough, revolutionary idea is the key job of a program manager. And then they're empowered to go out and do it. And look, DARPA's very cooperative — the program managers really work with each other. But in addition, it's competitive, and everybody knows whose technology is getting ahead, whose technology is moving out and what breakthroughs it might lead to. So there's a certain amount of competition amongst the program managers, too, as to how their revolution is coming along. Nice. [00:13:28] Ben: And then, to go one level down the hierarchy, if you will: when [00:13:35] they put out these BAAs, do you have a sense of how often the performers will shift their focus toward an ARPA program, or how much haggling there is between the performer and the program manager in finding the balance between work that supports the broader program goals and work that supports a researcher's already-existing agenda? Because, you know, people in their labs have the things that they're pursuing, and maybe those are roughly in the same direction as a program, but need to be shifted. [00:14:20] William: Yeah. The role of the program manager is to put out a new technological vision — some kind of new breakthrough territory that's going to be a very significant [00:14:35] advance that can be implemented. It's going to be applied. It's not discovery; it's implementation that they're oriented to. They want to create a new thing that can be implemented.
So they're going to put the vision out there, and the evaluation process is going to look hard at exactly the question you're raising — whether the applicant researcher is kind of doing their own thing or can actually contribute to the implementation of the vision. That's going to be the cutoff: will it serve the vision or not? If not, it's not going to get the award. So look, that's an issue with DARPA — DARPA is going after its particular technology visions. NSF's funding, by contrast, is driven by the applicants: they think of ideas they want to pursue and see if they can get NSF funding for them. At DARPA it's the other way around — the program manager has a vision [00:15:35] and then sees who's willing to pursue that vision with him or her. So it's more of a — I won't say top-down, because DARPA's very collaborative — but it's more of a top-down approach, as opposed to NSF, which is bottom-up. They're going for technology visions, not to see what neat stuff is out there. [00:15:56] Ben: Yeah. And just to shift a little bit — you mentioned IARPA and ARPA-E as other government agencies that use the same model. You wrote an article in 2011 about ARPA-E, and I'm interested in how you think that has played out over the past decade. How well do you think they have implemented the model? Does it work there? And, I guess, do you have a sense of how to know whether the DARPA [00:16:35] model is applicable to an area more broadly? [00:16:39] William: Yeah. I mean, look, that's kind of a key question: if you want to do a DARPA-like thing, is it going to work in the territory you want to work in? But let's look at this energy issue.
You know, I was involved in some of the early discussions about creating an ARPA for energy, and the net result was that a congressman named Bart Gordon led an effort on the House Science Committee to really create an ARPA for energy. That approach had been recommended by a National Academies committee, and it seemed to make a ton of sense. So what was going on in energy at the time this was being formulated — the 2007, 2008 rough time period? What was happening was that a significant amount of venture capital investment was moving [00:17:35] toward new energy, clean-tech technologies. The venture capital sector in that 2006, 2007 time period was ramping up its venture funding in cleantech. And that's when ARPA-E was being proposed and considered. So it looked to us — it looked to everybody — like there would be a way of doing the scale-up. In other words, it's not enough just to have cool things that come out of an agency; you need to implement the technology. So who's going to implement it? Who's going to do that scale-up into actual implementation? That's a very key underlying issue to consider when you're trying to set up a DARPA model. DARPA has the advantage of a huge defense procurement budget. So it can formulate a new technology breakthrough — [00:18:35] say, stealth, or UAVs and drones — and then it can turn to the Defense Department, which will spend procurement money to actually stand up the model, on a good day. Because that doesn't always happen; it doesn't always go right. But it's there. What's the scale-up model going to be for energy? Well, we thought there was going to be venture capital money to scale up cleantech. And then the bottom fell out of the cleantech venture funding side in the 2008, 2009 time period, and venture money really pulled out.
So, you know, 2009 is when ARPA-E first received its significant early funding — $400 million had been authorized through the science committee, and then it got an appropriation. And there was a big risk there. So look, ARPA-E was then created, and it had a very dynamic leader named Arun Majumdar, who's now at Stanford leading the energy initiatives there. Arun [00:19:35] saw the challenge, and frankly, he rose to it. If they weren't going to get these technologies scaled up through venture capital, like everybody assumed would work, how were they going to do scale-up? So Arun did a whole series of very creative things. There was some venture money left, so they maintained good relations with the venture world — but also with the corporate world, because there were a lot of corporations interested in moving in some of these directions, if these new technologies complemented technologies they were already pursuing. So Arun created this annual ARPA-E summit, where all of its award winners would present their technologies — fabulous presentations and booths all around this conference. It rapidly became the leading energy technology conference in the US, widely attended by thousands of people. Venture capital may not have been funding much, but they were there. More importantly, [00:20:35] companies were there, looking at what these technologies were to see how they could get stood up. So that was a way of exposing what ARPA-E was doing in a really big way. Another approach they tried, very successfully, was to create what they call the tech-to-market group. So in addition to your program manager at ARPA-E, when you stand up a new project, assigned to that project would be somebody with expertise in the commercialization of technology, by whatever route the financing might be obtained.
And they brought in a series of experts who had done this — who knew venture, who knew startups, who also knew federal government contracting in case the feds were going to buy this stuff, particularly the DOD. And this tech-to-market group became part of the discipline of standing up a project: really making sure there was going to be a pathway to commercialization. In fact, that approach [00:21:35] was so successful that DARPA, a number of years later, hired away ARPA-E's tech-to-market director to set up and run its own tech-to-market program — the new child just taught the parent a lesson here, was the point. So there's now a tech-to-market group at DARPA as well. Another approach: there's a substantial amount of other R&D funding, more incrementally oriented, at the Department of Energy — the EERE program, but also other programs in different energy technology areas that will support company research as well as academic research. So ARPA-E built very good ties with EERE, the applied research wing for renewable energy, and with other applied research arms of the Department of Energy, so that they could provide the next stage in funding. You do the [00:22:35] prototyping through ARPA-E, and then some of the scale-up could occur through some of the applied agencies within the Department of Energy. There were other things they attempted as well, but those were some of the most creative, and they got around this problem. Now, there's an underlying issue in energy technology, and it's true for many DARPA-like approaches: the technologies don't stand up overnight. In other words, you don't do your applied work, end up with an early prototype, and expect it to become a major business within two weeks.
That process can take 10 or 15 years, particularly in the hard-tech area — anything that requires manufacturing. An energy technology stand-up, that's a 10-to-15-year process in the United States. And ARPA-E's only been around what, 11, 12 years, something like that? Their technologies are still emerging. They have made a lot of [00:23:35] technology contributions in a lot of areas that have helped expand opportunity spaces — many interesting areas. So they've really helped, I believe, in identifying new territories where there can be advances. But have we transformed the world and solved climate change because of ARPA-E yet? No — that's a longer-term project. So you have to have that expectation when you look at these fields. It's a different story for software and some IT sectors, where DARPA played a huge role in the evolution; those can be shorter. But anything really in the hard-tech area is going to take a much more extended period. So you have to be patient. The politicians can't expect change in two weeks or two years; they're going to have to be a little more patient. [00:24:24] Ben: And another issue — one that I'm not sure is a real thing, but that I've noticed — is a difference between DARPA and ARPA-E: [00:24:35] with DARPA, when you have the DOD acquiring technologies, they can gather together all the different projects within a program and integrate them into an entire system, whereas when an ARPA-E program ends, there are a number of different projects, but there isn't a great way of integrating all the different pieces of the program. Is that an accurate assessment, or am I off base on that? [00:25:07] William: No, Ben, I think that's accurate. I mean, the Department of Energy doesn't have a procurement budget.
Like the Defense Department does — it's not spending $700 billion a year to make things. So it can't play that system scale-up kind of role the way the Defense Department does. Now look, I don't want to overstate this, because DARPA has definitely stood up technologies outside of defense procurement. [00:25:35] Most of its IT revolution work — where it played a big role, for example, in the development of desktop computing, and a huge role in supporting the development of the internet — those got stood up not particularly through DOD; they got stood up in the civilian sector. So DARPA works on both sides of the street here. If it appears advantageous to stand something up on the civilian side, it'll let it scale up there, and then DOD can buy it. But on the other hand, there are very critical areas where defense is going to have to be the lead — like GPS, for example — and really scale up the system, and then it can be shifted over to serve a dual use. [00:26:22] Ben: And then, looking forward to the future: how do you see all these considerations playing out with ARPA-H, the health ARPA, which I think has been approved [00:26:35] but hasn't actually started doing anything yet? [00:26:39] William: Yeah, it's got money appropriated, and it's a priority of the current administration, so I believe it's going to happen. I mean, look, there are some things that just need to be in place for a DARPA model to work well. Scale-up is one that we've talked about, and there is a pathway to scale-up for new breakthroughs in biomedicine and medical devices — we've got strong venture capital support in that area, for a series of historical reasons.
So that follow-on pickup is going to be available in many biomedical kinds of fields. But there are issues. There was a big debate about an issue that I'll call island/bridge. What you want to do [00:27:35] with your DARPA is put your DARPA team on an island. You want to protect that island and keep the bureaucracy away from it — let them do their thing out there and do great stuff, and don't let the bureaucracy, the suits, interfere with them. On the other hand, they really need a bridge back to the mainland to get their technologies scaled up. So DARPA, for example, reports in effect to the secretary of defense, and can undertake projects that the secretary of defense can then in effect force the military services to pick up — or use budgeting authority to encourage the military services to pick up. DARPA has its island: it's got a separate building about five miles from the Pentagon, it's got its team there, it's got its own established culture. But then it's got a bridge back to the mainland, through the secretary of defense, into the defense procurement system. What's ARPA-H's [00:28:35] relationship going to be there? There's been a lot of debate about where to put ARPA-H. Do you put it in NIH, which is, like NSF, another peer-review basic research agency — by far the biggest — with its own culture? And that culture, frankly, is not a DARPA culture. It's not a strong program manager culture; it's a peer-review culture. Do you really want to put your DARPA-like thing within NIH, and within that NIH culture? On the other hand, where else are you going to put it? So at the moment we've got a compromise: ARPA-H is going to report to the secretary of HHS. But the secretary of HHS doesn't have money to scale up new technologies, to speak of.
There is an assistant secretary of health who oversees BARDA and some other entities, so that's a possibility. And NIH has a lot of ongoing research; there could be a lot of follow-on research coming out of NIH. So this is a challenge — a challenge to set up the right kind of island/bridge model for this new ARPA-H. We've kind of got a compromise there at the moment: it will be located somewhere on the NIH campus, [00:29:35] hopefully in a separate building or location, and then report to the secretary of HHS. But how is this scale-up going to work? What's the bridge to the mainland going to be, and will it be protected enough from a very different culture at NIH — with, look, lots of jealousies? You know, when ARPA-E was created for energy, the labs — there are 14 major energy labs, right? — saw ARPA-E as a big competitor for funding that was going to take money away from the labs. It took a long time to build those relationships so that the labs saw ARPA-E not as a competitor, but as a way their stuff could move ahead. [00:30:35] And that took a while to sort out. So there's a series of these issues that are going to have to get well thought through for this new ARPA-H. That opening culture is absolutely critical. Ben: Say more about that. William: Yeah — in other words, the culture of strong program managers who are empowered and ready to pursue breakthrough technologies. That's the heart of the DARPA culture, and that culture locks in in the opening months. If you get it wrong, it's very hard to fix later; you really can't go back. So hiring the right people — having an ARPA-H director who really understands the DARPA model and how to implement it — is going to be key in setting that culture up right from the start. [00:31:23] Ben: Yeah.
And you've mentioned a couple of times the effect of physical location on culture. Have you seen that where [00:31:35] people are physically located really has an effect on the resulting cultures? [00:31:41] William: Yeah. I mean, look, obviously post-pandemic we're exploring remote work a lot. But there's a lot to be said for getting your thinking team in one place, where they're bouncing ideas off each other all the time, where they're exposed and critiqued and evaluated, and they can see each other and remind each other kind of all the time. So creating that island, with your talent on it, so that they can interact and inevitably work pretty intensively together — I think that's something of a prerequisite to getting these kinds of organizations going. You've got to build that early esprit de corps and that early culture that's very empowered. [00:32:30] Ben: And so, just to take a right turn [00:32:35] and talk a little bit about your work on advanced manufacturing — this is an area I personally know much less about. I guess one basic thing is, I think a lot of people don't have a good sense of what advanced manufacturing actually means. What does "advanced" actually entail in this situation? [00:33:00] William: Yeah, let me tell you a little bit of a story here. Ben: Yeah, please. William: There's a suite of new technologies and corresponding processes that are emerging — some have emerged, some are at an earlier stage — in areas like robotics, 3D printing and additive manufacturing, and obviously digital production technologies, where IT is built into every stage: all of your factory-floor equipment is [00:33:35] linked.
You're doing continuous analytics on each machine, but then you're able to connect them to see the processes as a whole — that's the IT-revolution side. Then there's a whole series of advances in critical materials that will enable us to do designer materials in a way we've never done before, because we can now really operate at the molecular level in designing materials. So in the cleantech space or the automotive space, for example, we can have much lighter, much stronger materials. And in a related area, composites are now an emerging opportunity space for a lot of new manufacturing. We may be able to do photonics — a whole new generation of electronics based on light, with a whole range of new speeds for electronics as a result, and new efficiencies. So there are a lot of technologies that are [00:34:35] available. Some are starting to enter; some are further back — like photonics, for example. But they could completely transform the way in which we make things. And that's what advanced manufacturing is: can we move to these new technologies, and the processes that go with them, in completely transforming the way in which we make things? [00:34:57] Ben: Yeah. I'm very interested in this, and it feels like answering that question involves real research, right? Because you need to rethink processes, you need to rethink how you do design. But at the same time, there aren't a lot of institutions that are organized to do that sort of research. [00:35:23] William: Yeah, look, this has been a big gap in our R&D portfolio in the United States. At the end of World War II, Ben, Vannevar Bush designs the postwar [00:35:35] system for science, right? We had done this amazing connected system in World War II.
We had industries working with universities, working with government, all closely tied. We did incredible advances that led to the electronics industry, to the whole aerospace industry at the kind of scale we have now, to nuclear power. Amazing stuff comes out of World War II, and we had a very connected system. Then we dismantled the military at the end of the war, because we thought — mistakenly — there was going to be world peace, and all those 16 million soldiers, sailors, and airmen overseas start to come home. And Vannevar Bush steps in and says, wait a minute, let's hang on to some of this. We built this amazing R&D capability in the course of the war; let's hold on to some of it. So he says, let's support basic research — that's the cheapest stage, right? Applied research costs a lot more. So we decided, let's hang on to that. [00:36:35] And during the war we had begun, really for the first time, a lot of federal research funding at universities. My school, MIT, got 80 times the amount of federal research funding in four years of World War II as it did in all of its previous 80 years of history. Wow. That's happening at a whole bunch of schools. We're creating this incredible jewel in the American system: the federally funded research university. So it leads to that, which is a big positive. But Vannevar Bush's basic-research model leaves out the applied side. The assumption he's got is what others refer to as a pipeline model: the federal government's role is to dump basic research into one end of the innovation pipeline, hope that mysterious things occur and great products emerge, and it's the job of industry to handle that interim stage. That's kind of the model. It's [00:37:35] crossing your fingers, hoping something is going to happen in that pipeline.
Whereas in World War II, every stage of that pipeline was pretty well organized in a coordinated kind of way. So we move away from that World War II connected system to a very disconnected system. We in effect institutionalized the valley of death: there's going to be a gap, with the research side on one side of the valley and the actual technology implementation — the late-stage applied side — on the other, with a big gap in between and very few bridging mechanisms across. So we built that into our system. And look, Vannevar Bush was worried about science — how are we going to fund basic science? That's his worry. The US wasn't the science leader going into World War II; [00:38:35] Germany and Britain were. We managed to bring over lots of immigrants to help lead science in the US, and they took up the reins, and we trained a lot of great talent here in the course of the war. And we got ourselves into a position where the US was the science leader by the end of the war. We went into the war as the world manufacturing leader — we weren't the science leader, we were the world manufacturing leader. We had built a system of mass production that nobody else had ever seen. We went into the war with eight times the production capacity of Japan and four times the production capacity of Germany. You can only imagine what we were coming out of the war with. Yeah, exactly. So the last thing on Vannevar Bush's mind was manufacturing — that's in great shape. [00:39:24] Ben: He sort of took that as a given. [00:39:25] William: Almost, right. That's a given — we're always going to have that. But he was wrong. We weren't always going to have that. And Japan taught us that; [00:39:35] it ended up costing the US its leadership in the electronics sector and its leadership in the auto sector — two industry sectors that we had completely dominated.
And then along comes China, and we have further erosion as well. So here's the reason advanced manufacturing is important: we've got two moves to compete with China, which is lower-wage, lower-cost. We can lower our wages to Chinese wage levels — that's probably not going to happen, although we've been working on it, because we've definitely stagnated wages in US manufacturing, believe me. Or, secondly, we can get much more efficient, much more productive: we can apply our innovation system to manufacturing. NSF doesn't have an R&D portfolio related to manufacturing. DARPA doesn't have an R&D portfolio that's terribly related to manufacturing either. NIH certainly [00:40:35] doesn't. We don't do manufacturing — we don't do these manufacturing technologies and processes in our R&D system. Let's get that very talented, still very able US innovation system onto manufacturing. So that's the basic idea, and that's the way we're going to have to compete. We've sort of got no other move, right? We can just have continued erosion, with all kinds of social disruption and a real decline in the American working class — we can continue to do that, and we watch what that's doing to our democracy — or we can get our act together and do advanced manufacturing. [00:41:12] Ben: Yeah. And I guess, what are some of the most promising efforts in that area that you've seen? [00:41:21] William: Well, there's amazing work going on that we already see in a whole new kind of robotics. You know, the old industrial robots weighed tons. They're very dangerous — you have to put cages around them and make sure the workers don't go near them. [00:41:35] And they'll lift up something heavy and do one perfect spot weld, and then move to the next piece coming down the assembly line. That's the old kind of robotics.
The new kind of robotics is lightweight, collaborative robotics. Just as we're talking on cell phones — it's like the relationship between me and the cell phone. It's a big enabler for me; it helps me. I can give voice commands to the robot, and it can work in a precision kind of way, but it also knows me, works around me, doesn't endanger me. It's a helper, not a caged beast that has to be behind a fence. So we're moving to that kind of new robotics — that's a whole sea change in manufacturing. We're doing 3D printing: instead of subtractive manufacturing — where you cut away a huge piece of metal [00:42:35] and end up with a smaller part, with real limits on what the shape and dimensions and content of that part can be — additive enables you to build a part from scratch with powders and shape it to exactly the role you want, often with new materials. And we're moving into metal 3D printing, so it's no longer plastics and resins only; it's a whole new kind of metal production. And look, we haven't figured out yet how to get volumes similar to mass production for 3D printing, but there are plenty of product lines where you're making limited numbers that have to be extremely precise — like jet engines. You're not turning out millions of jet engines every day; you're turning out small numbers. But the precision that additive [00:43:35] can bring — potentially with new materials, like ceramics, to creating those turbine blades — is really quite dramatic. So there's a whole series of industrial sectors that will be suited to additive, and it's already moving in on some of them. And we're learning how to use all kinds of new materials for additive, particularly on the metals side and the new-materials side. So that's another huge territory of opportunity to transform the way we actually make things.
[00:44:03] Ben: And something that I'm particularly interested in: you could think of many of these new technologies as components in a broader system. And what I don't personally see a lot of is the kind of process research work to really rethink the entire — call it a manufacturing line, or the entire system — and ask, how would you redesign the product around how you're making it? Have you seen any [00:44:35] institutions that are trying to do that sort of work? [00:44:40] William: Yeah. I mean, for a long time the design had to fit the manufacturing, right? So we moved to design for manufacturing, to make things easily manufacturable. But now the manufacturing can be much more embedded in the design process, because you can come up with a whole new suite of capabilities that will effectuate new design opportunities. So rather than manufacturing being a limiting factor on design, it's now an enabler of design — and additive manufacturing is an example of that. A whole new relationship between the production process and the design process is really possible here with these new technologies. And then, getting back to your systems point: now we've got the opportunity, through digital [00:45:35] technologies, to really look at a production operation not as a series of isolated machines where material has to be carted from one machine suite to the next, but with the ability to integrate them in ways we have never had before — running data analytics on the performance of each machine, but also running a new level of analytics on the system itself. So we're now in a position to really collect the metrics.
at a very fine scale and level on the production process itself, in a way that we've never really had before. So the opportunities for efficiencies here, I think, are quite dramatic, and I think that's the way we're going to have to compete. But a lot of people worry, you know: are we going to eliminate all work, right? Are the robots going to displace the workers? But the reality of advanced manufacturing is actually something [00:46:35] of the opposite. You know, the robot will displace some jobs, but much more frequently the robot will create all kinds of new possibilities within existing jobs. Yeah. And then thirdly, there will be jobs that get created because we need to make robots, right, and operate and program them. So there are going to be a lot of jobs. So the net job loss problem, I just don't think, is real. Right. Yeah. Instead we get these new possibilities for moving ahead. And look, at the center of these kinds of new factory systems are going to be people, right? Yeah. People are the ones that have ideas. You know, software and AI and robotics just can't do a whole lot of things that people are able to do. They don't have the kind of conceptual frameworks and the ability to intuit [00:47:35] change that people have got. So I think in a way the new manufacturing system is going to be, you know, more people-centric than it's been before. Instead [00:47:47] Ben: of people just acting like robots. [00:47:49] William: Yeah, a lot of people acting like robots. It's people, you know, doing the organizing and designing and management of the systems and the programming and the processes in the way that we're going to need. Yeah. [00:48:07] Ben: This was awesome. I'm so grateful. And now a quick word from our sponsors.
If you listen to podcasts, you've surely heard advertisements for all sorts of amazing mattresses: ones that can get hot or cold, firmer or softer. But now, with the pod, you can sleep in a tank of hydrostatic fluid. Make gravity while you sleep a thing of the past. [00:48:35]…
Philanthropically Funding the Foundation of Fields with Adam Falk [Idea Machines #45] 1:05:27
In this conversation, Adam Falk and I talk about running research programs with impact over long timescales, creating new fields, philanthropic science funding, and so much more. Adam is the president of the Alfred P. Sloan Foundation, which was started by its eponymous founder, the longtime president of General Motors, and has been funding science and education efforts for almost nine decades. They've funded everything from IPython notebooks to the Wikimedia Foundation to an astronomical survey of the entire sky. If you're like me, their name is familiar from the acknowledgement part of PBS science shows. Before becoming the president of the Sloan Foundation, Adam was the president of Williams College and a high energy physicist focused on elementary particle physics and quantum field theory. His combined experience in research, academic administration, and philanthropic funding give him a unique and fascinating perspective on the innovation ecosystem. I hope you enjoy this as much as I did. Links - The Sloan Foundation - Adam Falk on Wikipedia - Philanthropy and the Future of Science and Technology Highlight Timestamps - How do you measure success in science? [00:01:31] - Thinking about programs on long timescales [00:05:27] - How does the Sloan Foundation decide which programs to do? [00:08:08] - Sloan's Matter to Life program [00:12:54] - How does the Sloan Foundation think about coordination? [00:18:24] - Finding and incentivizing program directors [00:22:32] - What should academics know about the funding world and what should the funding world know about academics? 
[00:28:03] - Grants and academics as the primary way research happens [00:33:42] - Problems with grants and common grant applications [00:44:49] - Addressing the criticism of philanthropy being inefficient because it lacks market mechanisms [00:47:16] - Engaging with the idea that people who create value should be able to capture that value [00:53:05] Transcript [00:00:35] In this conversation, Adam Falk and I talk about running research programs with impact over long timescales, creating new fields, philanthropic science funding, and so much more. Adam is the president of the Alfred P. Sloan Foundation, which was started by its eponymous founder, the longtime president of General Motors, and has been funding science and education efforts for almost nine decades. They've funded everything from IPython [00:01:35] notebooks to the Wikimedia Foundation to an astronomical survey of the entire sky. If you're like me, their name is familiar from the acknowledgement part of PBS science shows. Before becoming the president of the Sloan Foundation, Adam was the president of Williams College and a high energy physicist focused on elementary particle physics and quantum field theory. His combined experience in research, academic administration, and philanthropic funding give him a unique and fascinating perspective on the innovation ecosystem. I hope you enjoy this as much as I did. [00:02:06] Ben: Let's start with, like, a really tricky thing that I'm myself always thinking about, which is that, you know, it's really hard to measure success in science, right? Like, you know this better than anybody. And so at the foundation, how do you think about success? Like, what does success look like? What do success and failure mean to [00:02:34] Adam: you? [00:02:35] I mean, I think that's a really good question.
And I think it's a mistake to think that there are some magic metrics, that if only you were clever enough to come up with them, built out of citations and publications, you could get some fine-tuned measure of success. I mean, obviously if we fund in a scientific area, we're funding investigators who we think are going to have a real impact with their work, individually and then collectively. And so of course, you know, if they're not publishing, it's a failure. We expect them to publish. We expect people to publish in high-impact journals, but we look for broader measures as well if we fund a new area. So for example, a number of years ago we had a program in the microbiology of the built environment, kind of studying all the microbes that live inside, which turns out to be a very different ecosystem than outside. When we started that program, there were a few investigators interested in this question, and there weren't a lot of tools that were good for studying it. [00:03:35] By 10 years later, when we'd left, there was a journal, there were conferences, there was a community of people who were doing this work. And that was another really tangible measure of success: we entered a field that needed some support in order to get going, and by the time we got out, it was going strong, and the community of people doing that work had an identity and funding paths and a real future. Yeah. [00:04:01] Ben: So I guess one way that I've been thinking about it is, it's almost like counterfactual impact, right? Whereas if you hadn't gone in, then it wouldn't be [00:04:12] Adam: there. Yeah, I think that's the way we think about it. Of course that's hard to measure. Yeah. But I think that, since a lot of the work we fund is not close to technology, right, we don't have available to ourselves, you know, did we spin out products? Did we spin out?
Companies? Did we do a lot of the things that might directly connect that work to [00:04:35] activities that are outside of the research enterprise, which in other fields you can use to measure impact? So the impact is pretty internal. That is, for the most part it is, you know: has it had an impact on other parts of science that, again, we think might not have happened if we hadn't funded what we funded? As I said before, have communities grown up? Another interesting measure of impact, from a project that we've funded for about 25 years now, the Sloan Digital Sky Survey, is in papers published, in the following sense. One of the innovations when the Sloan Digital Sky Survey launched in the early 2000s was that the data that came out of it, which was all, for the first time, digital, was shared broadly with the community. That is, this was a survey of the night sky that looked at millions of objects, so these are very large databases. And the investigators who built the [00:05:35] telescope certainly had first crack at analyzing that data. But there was so much richness in the data that the decision was made, at Sloan's urging, early on, that this data should be made public after a year. 90% of the publications that came out of the Sloan Digital Sky Survey have not come from collaborators but have come from people who used that data after it was publicly released. Yeah. So that's another way of seeing the impact and success of a project: it's reached beyond its own borders. [00:06:02] Ben: And you mentioned just that timescale, right? That 25 years. Something that I think is really cool about the Sloan Foundation is how long you've been around and your capability of thinking on a quarter-century timescale. And I guess, how do you think about timescales on things? Right.
Because, on the one hand, obviously, science can take [00:06:35] 25 years; on the other hand, you know, you can't just do nothing for 25 years. [00:06:44] Adam: So if you had told people back in the nineties that the Sloan Digital Sky Survey was going to still be going after a quarter of a century, they probably never would have funded it. So, you know, I think that you have an advantage in the foundation world, as opposed to federal funding, which is that you can have some flexibility about the timescales on which you think. And so you don't have to simply go from grant to grant, and you're not at the mercy of a Congress that changes its funding commitments every couple of years. We at the Sloan Foundation tend to think that it takes five years at a minimum to have an impact in any new field that you go into. When we enter a new science field, as we just did, with our new program Matter to Life, which we can talk about, [00:07:35] that's initially a five-year commitment to put about $10 million a year into this discipline, understanding that if things are going well, we'll re-up for another five years. So we kind of think of that as a decadal program. And I would say the timescale we think on for programs is decades. The timescale we think of for grants is about three years, right? But a program itself consists of many grants and a large number of investigators, and that's really the timescale where we think you can have an impact. But we're constantly re-evaluating. I would say the timescale for rethinking a program is shorter; that's more like five years. So in our ongoing programs, about every five years we'll take a step back and do a review.
You know, we'll look at whether we're having an impact with the program, get some outside perspectives on it, and decide whether we need to keep it going exactly as it is, adjust in some [00:08:35] interesting ways, or shut it down and move the resources somewhere else. So [00:08:39] Ben: I like that you almost have a hierarchy of timescales, right? You have multiple going at once. I think that's underappreciated. And so one thing I want to ask about, and maybe the Matter to Life program is a good case study in this, is: how do you decide what programs to do, right? Like, you could do anything. [00:09:04] Adam: So that is a terrific question, and a hard one to get right. And we just came out of a process of thinking very deeply about it, so it's a great time to talk about it. Let's do it. So, to frame the problem in the largest sense: if we want to start a new grantmaking program where we are going to allocate about $10 million a year over a five-to-ten-year period, which is typical for us, the first thing you realize is that that's not a lot of money on the scale that the federal government [00:09:35] invests. So if your first thought is, well, let's figure out the most interesting science that people are doing, you quickly realize that those are things where there's already a hundred times that much money going in, right? I mean, quantum materials would be something that everybody is talking about. The Sloan Foundation putting $10 million a year into quantum materials is not going to change anything interesting. So you start to look for structural reasons that there's a field, or an emerging field, and I'll talk about what some of those might be, where an investment at the scale that we can make can have a real impact. And so what might some of those areas be?
There are fields that are very interdisciplinary in ways that make it hard for individual projects to find a home in the federal funding landscape. And one overly simplified, but maybe helpful, way to think about it is that the federal funding landscape [00:10:35] is organized largely by disciplines. If you look at the NSF, there's a division of chemistry, and one of physics, and so forth, each with its own director. But many questions don't map well onto a single discipline. And sometimes questions, such as some of the ones we're exploring in the Matter to Life program, which I can explain more about, require collaborations that are not naturally fundable in any of the silos the federal government has. So very interdisciplinary work is one area. Second is emerging disciplines. And again, that often couples to interdisciplinary work, in that disciplines often emerge in interesting ways at the boundaries of other disciplines. Sometimes the subject matter is the boundary. Sometimes it's a situation where techniques developed in one discipline are migrating to being used in another discipline. And that often happens with physics: the [00:11:35] physicists figure out how to do something, like grab the end of a molecule and move it around with a laser, and suddenly the biologists realize that's a super interesting thing for them, and they would like to do that. So then there's work that's at the boundary of those disciplines. You know, a third way in which that can happen is that you can have scale issues, where work needs to happen at a certain scale that is too big for a single investigator but too small to qualify for the kind of big project funding that you have in the federal government. And you could also certainly find things that are not funded because they're not very interesting.
And those are not the ones we want to fund, but you often have to sift through quite a bit of that to find something. So that's what you're looking for. Now, the way you look for it is not that you sit in a conference room and get real smart and think that you're going to see [00:12:35] things other people aren't going to see. Rather, you source it out in the field, right? And so we had an 18-month process in which we invited proposals for what you could do in a program at that scale from major research universities around the country. We had more than a hundred ideas. We had external panels of experts who evaluated these ideas, and that's what led us in the end to this particular framing of the new program that we're starting. And that process was enough to convince us that this was interesting, that it was, you know, emergent as a field, that it was hard to fund in other ways, and that the people doing the work are truly extraordinary. Yeah. And that's what you're looking for. And I think in some ways there are pieces of that in all of the programs, particularly the research programs. [00:13:29] Ben: And so actually, could you describe the Matter to Life program and [00:13:35] sort of highlight how it fits into all of those buckets? [00:13:38] Adam: Absolutely. So the Matter to Life program is an investigation into the principles, particularly the physical principles, that matter uses in order to organize itself into living systems. The first distinction to make is that this is not a program about how life evolved on Earth, and it's actually meant to be a broader question than how life on Earth is organized. The idea behind it is that life on Earth is a particular example of some larger phenomenon, which is life. And I'm not going to define life for you. That is, we know what things are living and we know what things aren't living, and there's a boundary in between.
And part of the purpose of this program is to explore that. Think of it as being out there in the field, mapmaking: over here is, you [00:14:35] know, a block of ice, and that's not alive; and over here is a frog, and that's alive; and there are all sorts of intermediate spaces in there. And there are interesting ideas out there about, for example, at the cellular level, how information gets communicated around a cell. What might the role of things like non-equilibrium thermodynamics be? Can systems that are non-biological be induced to evolve in interesting ways? And so we're studying both biotic and non-biotic systems. There are three strands in this. One is building life. It was said by, I think, Feynman that if you can't build something, you don't understand it. And so the idea... and there are people who want to build an actual cell. I think that's a hard thing to do, but we have people who are building little biomolecular machines in the laboratory and understanding how that might [00:15:35] work. We fund people who are constructing protocells, thinking about the ways that liquids separating might provide divisions between inside and outside within which chemical reactions could take place. We've funded people who have made tiny little, you know, micron-scale magnets, where you mix them together and you can get them to organize themselves in interesting ways. Yeah. And what are the ways in which emergent behaviors couple into this? So that's building life: can you build systems that have features that feel essential to life, and by doing that, learn something general about, say, the reproduction of DNA, or something simple about how inside gets differentiated from outside?
The second strand is principles of life, and that's a little bit more around: are [00:16:35] there physics principles that govern the organization of life? And again, are there ways in which the kinds of thinking that informed thermodynamics, which is kind of the study of piles of gas and liquid and so forth, those kinds of thinking about bulk properties and emergent behavior, can tell us something about the difference between matter that's alive and matter that's not alive? And the third strand is signs of life. And, you know, we have all of these telescopes out there now discovering thousands of exoplanets, and of course the thing we all want to know is: is there life on them? We're never going to go to them, or maybe if we go, we'll never come back. And yet we can look and see the chemical composition of these planets; we're just starting to be able to see that. As they transit in front of a star, the atmospheres of these planets absorb light from the star, and the light that's absorbed tells you something about the chemical composition of the atmosphere. [00:17:35] So there's a really interesting chemical question: are there elements of the chemical composition of an atmosphere that would tell you that life is present there, and life in general? Right? You know, if you're going to look for DNA or something, that might be way too narrow a thing to look for, right? So we've made a very interesting grant to a collaboration that is trying to understand the general properties of atmospheres of rocky planets. And if you knew all of the things that an atmosphere of an Earth-like planet might look like, and then you saw something that isn't one of those, you'd think: well, something other might have done that. Yeah. So that's a bit of a flavor. What I'd say about the nature of the research is that it is, as you can tell, highly interdisciplinary. Yeah. Right.
So this last project I mentioned requires geoscience and astrophysics and chemistry and geochemistry and volcanology and ocean science [00:18:35] and... who's going to fund that? Yeah. Right. It's also a very emerging area, because it comes at the boundary between geoscience, the understanding of what's going on on Earth, and absolutely cutting-edge astrophysics, the ability to look out into the cosmos and see other planets. So people are working at that boundary, and that's where interesting things often happen. [00:18:59] Ben: And you mentioned that when you're looking at programs, you're looking for things that are bigger than, like, a single PI. And how do you think about the different individual projects within a program becoming greater than the sum of their parts? Like, there's one end of the spectrum where you just say, go do your things, and everybody runs off. And then there's another end of the spectrum where you very explicitly tell people who should be working on what and [00:19:35] how to collaborate. So how do you... [00:19:37] Adam: So one of the wonderful things about being at a foundation is you have convening power. Yeah. In part because you're giving away money, people will want to come gather when you say let's come together, you know? And in part because you just have a way of operating that's a bit independent. And so the issue you're raising is a very important one. You know, in an individual program, say a science grantmaking program, we will fund a lot of individual projects, which may be a single investigator or may be big collaborations, but we are also thinking from the beginning about how to help create a field, right? And it may not always be obvious how that's going to work.
I think with Matter to Life we're early on, and we're not sure: is this a single field? Are there subfields here? But we're already thinking about how to bring our PIs together to share the work they're doing and to share perspectives. I can give you another example from a program we recently [00:20:35] closed, which was the chemistry of the indoor environment, which we funded coming out of our work in indoor microbiology. It turns out that there's also very interesting chemistry going on indoors, which is different from the environmental chemistry that we think about outdoors. Indoors, there are people and all the stuff that they exude, and there's an enormous number of surfaces, so surface chemistry is really important. And again, there were people who were doing this work in isolation, interested in these kinds of topics, and we were funding them individually. But once we had funded a whole community of people doing it, they decided it would be really interesting to do a project, which they called HOMEChem, where they went to a test house, did all sorts of indoor activities, like cooking Thanksgiving dinner, and studied the chemistry together. And this is an amazing collaboration. So many of our grantees came together in one [00:21:35] place, around one experiment, or one experimental environment, and did work that could really speak to each other, right? They did experiments that were similar enough that the people who were studying one aspect of the chemistry and another could do that in a more coherent way. And I think that never would have happened without the Sloan Foundation having funded this chemistry of indoor environments program, both because of the critical mass we created, but also because of the community of scholars that we helped foster.
[00:22:07] Ben: So it's like you're playing a very important role, but then it is a very bottom-up sort of thing, almost like saying, oh, you people all actually belong together, and then they look around and say, oh yeah, [00:22:24] Adam: we do. I think that's exactly right. And yeah, you don't want to be too directive, because, you know, we're just a foundation. We've got some program directors, and, you know, [00:22:35] we do know some things about the science we're funding, but the real expertise lives with these researchers who do this work every day, right? And so when we think we can see some things that they can't, it's not going to be in the individual details of the work they're doing. But it may be that from up here on the 22nd floor of Rockefeller Center, we can see the landscape a little bit better, and we're in a position to make connections that will then be fruitful. You know, if we were right, they'll be fruitful, because the people on the ground doing the work, with the expertise, believe that they're fruitful. Sometimes we make a connection and it's not fruitful, in that it doesn't fruit, and that's fine too. You know, we're not always right about everything either, but we have an opportunity to do that that comes from the particular and special place that we happen to sit. Yeah. [00:23:28] Ben: Yeah. And speaking of program directors, how do you think about... I mean, like, [00:23:35] you're sort of in charge, and so how do you think about directing them? How do you think about setting up incentives so that they do good work on their programs? And how much autonomy do you give them? How does all of that work? [00:23:56] Adam: Absolutely. So I spent most of my career in universities and colleges.
My own background is as a theoretical physicist, and I spent quite a bit of time as a dean and a college president. And I think the key to being a successful academic administrator is understanding, deep in your bones, that the faculty are the heart of the institution. They are the intellectual heart and soul of the institution, and you will have a great institution if you hire terrific faculty and support them. They don't require a lot of telling them what to do, but the [00:24:35] leadership role does require a lot of deciding where to allocate the resources, and figuring out how, and in what ways, and at what times you can be helpful to them. Yeah. The program directors at the Sloan Foundation are very much like the faculty of a university. We have six right now: five PhDs and a Rhodes Scholar, right? And they are, each of them, truly deeply respected intellectual leaders in the fields in which they're making grants, right? And my job is, first off, to hire and retain a terrific group of program directors who know way more about the things they're doing than I do, and then to help them figure out how to craft their programs. And, you know, there are different kinds of help that different program directors need. Sometimes they just need resources. Sometimes they need, you know, a collaborative conversation. You know, [00:25:35] sometimes we talk about the ways in which their individual programs are going to fit together into the larger programs at the Sloan Foundation. Sometimes what we talk about is ways in which we can and should, or shouldn't, change what we do in order to build a collaboration elsewhere. But I don't do much directing of the work of the program directors, just like I didn't ever do much directing of the work that the faculty did.
And I think what keeps a program director engaged at a place like the Sloan Foundation is the opportunity to be a leader. Yeah. [00:26:10] Ben: Actually, to double-click on that, on hiring program directors: I would imagine that it is sometimes tough to get really, really good program directors, because people who would make good program directors could probably have their pick of [00:26:35] amazing roles. And to some extent they do get to be a leader, but to some extent they're not directly running a lab, right? They don't have that direct power. And they're not making as much money as they could be, you know, working at Google or something. And so how do you both find and then convince people to come do that? [00:26:57] Adam: So that's a great question. I mean, people who are meant to be program directors, at least at a place like the Sloan Foundation, and different foundations work differently, right, but in our case, are not people who would otherwise rather be spending their time in the lab. Yeah. Right. And many of them have spent time as serious scholars in one discipline or another, but much like faculty who move into administration, they've come to a point in their careers, whether that was earlier or later in their [00:27:35] career, where the larger scope that's afforded by being a program director compensates for the fact that they can't focus in the same way on a particular problem, the way a faculty member or a researcher does. Yes. So the other thing you have to feel, really in your bones, which is again much like being an academic administrator, is that there's a deep reward in finding really talented people and giving them the resources they need to do great things. Right.
And in the case that you're a program director, what you're doing is finding grantees. And when a grantee does something really exciting, we celebrate that here at the foundation as a success of the foundation. Not that we're trying to claim their success, but because that's what we're trying to do: we're trying to find people who can do great things and give them the resources to do those great things. So you have to get a great kind of professional satisfaction from that. So there are people who have a [00:28:35] broader view, or want to move into a time in their careers when they can take that broader view about a field or an area that they already feel passionate about, and who have the disposition that wanting to help people is deeply rewarding to them. And, you know, you say: how do you find these folks? It's just like it's hard to find people who are really good at academic administration. You have to look really hard for people who are going to be great at this work, and you persuade them to do it precisely because they happen to be people who want to do this kind of work. Yeah. [00:29:09] Ben: And actually, so you're highlighting a lot of parallels between academic administration and your role now. But at the same time, I think that there are many things that academics don't understand about science funding and that world, and there are many things that it seems like science funders don't understand about [00:29:35] research, and you're one of the few people who've done both. And so, just as a very open-ended question: what do you wish that more academics understood about the funding world and the things you have to think about here? And what do you wish more people in the funding world understood about research?
[00:29:54] Adam: That is a great question. I can give you a couple of things. At a high level, I always wish that on both sides of that divide there was a deeper understanding of the constraints under which people on the other side are operating, both material constraints and what I might call intellectual constraints. So there's a parallelism here. From the point of view of a foundation president, what do I wish academics really understood? I'm always having to reinforce to people that we really do mean it when we say we fund X and we don't fund Y. Please don't spend time trying to persuade me that the Z you do is really close enough to X that we should fund it, and don't get offended when I tell you that's not what we fund. We say no to a lot of things that are intrinsically great but that we're not funding, because it's not what we fund. We make choices about what to fund, and about what areas to fund in, that are very specific, so that we can have some impact, and we don't make those decisions lightly. For almost any work someone is doing, we're not the only foundation who might fund it. So if you're not fitting our program, move on to someone else rather than arguing with us, and just understand why it is that we do that. I come across that a lot. There's a total parallel, which I think is very important for people in foundations who have very strong ideas about what they should fund to understand: academics are not going to drop what they're doing and start doing something else because there's a little bit of money available. As an academic, of course, you're trying to make your questions into things funders can support, but you're usually driven because some question is really important to you.
And if some foundation comes to you and says, well, stop doing that and do this instead, I'll fund it: unless you're pretty desperate, you're not going to do that. So the best program directors spend a lot of time looking for people who are already interested in the thing the foundation is funding, and they really understand that you can't bribe people into doing something they otherwise wouldn't do. So I think those are very parallel: both sides need to understand the set of commitments the other is operating under. The other thing I think is really important for foundations to understand about universities and other institutions is that these institutions are not just platforms on which one can do a project. They are institutions that require support in their own right. Somebody has to pay the debt service on the building, take out the garbage, cut the grass, clean the building, hire the secretaries, and do all of the infrastructure work that makes it possible for a foundation such as Sloan to give somebody $338,000 to hire some postdocs and do some interesting experiments. Somebody is still turning on the lights, and that's where the overhead goes. The overhead is really important, and it is not some kind of profit that universities are taking. It is the money they need in order to operate in ways that make it possible to do the grants. And there's a longer story here. Even foundations like Sloan don't pay the full overhead, and we can do that because we typically are a very small part of the funding stream. But during the pandemic, we raised our overhead rate permanently from the 15% we used to pay to the 20% we pay now, precisely because we felt it was important to signal our support for the institutions. And some of those aren't universities; some of those are other kinds of nonprofits.
Other kinds of nonprofits that were housing the activities we were interested in funding. I just think it's really important for foundations to understand that. And I do think that my own time as a dean and a college president, when I needed that overhead in order to turn on the lights so some chemist could hire the postdocs, has made me particularly sensitive to that.

[00:34:16] Ben: Yeah, that's a really good point that I don't think about enough, so I really appreciate that. Implicit in our conversation have been two core assumptions: one, that the way you fund work is through grants, and two, that the primary people doing the research are academics. I guess the actual question is: do you think that is the best way of doing it? Have you explored other ways? It feels like both have just been the way people have done it for a long time.

[00:35:04] Adam: There are two answers to that question. The first is just to acknowledge that at the Sloan Foundation, probably 50 out of the $90 million a year in grants we make are for research, and almost all of that research is done at universities. I think that's primarily because we're really funding basic research, and that's where basic research is done. If we were funding other kinds of research, use-inspired research closer to technology, we might be funding people who worked in different spaces. But for the kind of work we fund, that's really where it's done.
But we have another significant part of the foundation that funds things that aren't quite research: the public understanding of science and technology, and diversity, equity, and inclusion in STEM higher ed. Of course, much of that money goes into universities, but also into other institutions that are trying to bring about badly needed cultural change in the sciences. And then there's our technology program, which looks at modern technologies that support scholarship, such as software and scholarly communication, but which has increasingly come to support modes of collaboration and other more social-science aspects of how people do research. A lot of that funding is not given to universities; a lot of it goes to other sorts of institutions (nonprofits, always, because as a foundation we can only fund nonprofits) that go beyond the institutional space universities occupy. We're not driven by a sense of who we should fund followed by what we should fund. We're interested in funding problems and questions, and then we look to see who is doing that work. So in public understanding, some of that is in the universities, but most of it isn't.

[00:37:00] Ben: To go back to something earlier: if you primarily want to find people who are already doing the sort of work that is within scope of a program, doesn't that raise a chicken-and-egg problem? What if there's an area where people really should be doing work, but nobody is doing that work because there is no funding for it? This is just something I've struggled with. How do you bootstrap the thing?
[00:37:46] Adam: I think the way to think about it is that you work incrementally. You're quite right that, in some sense, we are looking for areas that are under-inhabited scientifically because people aren't supporting that work. That's another way of saying what I said at the beginning about how we're looking for interdisciplinary fields that are hard to support; one way you can tell they're hard to support is that people aren't doing the work. But typically you're working in from the edges. There are people on the boundaries of those spaces champing at the bit, and the question you're asking them is: what is the work you can't do now that you would do if you had some funding, and why is it super interesting? That's the question that drives what we talked about before, which is how you identify a new area. And to your point precisely, it's not the area where everybody already is, because there's already a lot of money there. If you really had to bootstrap it out of a vacuum, you would need insights we don't pretend to have: the ability to look out into the void, conjure something that should be there, conjure who should do it, and have the resources to start the whole thing. That's not what the Sloan Foundation does; we don't operate at that scale. But there's another version that is more incremental: it recognizes the exciting ideas of researchers who are adjacent to an underfunded field, the excitement they have to go into a new area just adjacent to where they are, and it is responsive to that.

[00:39:39] Ben: That ties back, in my mind, to why you need to run programs on that ten-year timescale, right?
The first three years you go a little bit in, the next three years a little further, and by the end of the ten years you're actually in that new area.

[00:39:59] Adam: I think that's exactly right. And the other thing is you can be more risky, or more speculative. I like the word speculative better than risky. Risky makes it sound like you don't know what you're doing; speculative is meant to say you don't know where you're going to go. I don't ever think the grants we're funding are particularly risky in the sense that the projects will fail. They're speculative in the sense that you don't know whether they're going to lead somewhere really interesting. And this is where the current federal funding landscape is really challenging: the competition for funding is so high that you really need to be able to guarantee success, which doesn't just mean guaranteeing that your project will work, but that it will contribute in some really meaningful way to moving the field forward. Which means that you actually have to have done half the project already. That's what's called preliminary data. As far as I'm concerned, preliminary data means I already did it, and now I'm just going to clean it up with this grant. That's a terrible constraint, and we're not bound by that kind of constraint in funding things. So we can have failures, in the sense that something didn't turn out to be as interesting as we hoped it would be.

[00:41:17] Ben: I love your point on the risk. Especially with science, what is the risk? You're going to discover something. You might discover that the phenomenon we thought was a phenomenon is not really there, right?
But it's still not risky, because you weren't investing for an ROI.

[00:41:43] Adam: Can I give you another example? I think it's a really good one. In the Matter-to-Life program, we made a grant to a guy named David Baker at the University of Washington; you may have heard of him. David Baker builds these little nanoscale machines, and he has an enormous institute for doing this. It's extraordinarily exciting work, and almost all of the work he is able to do is directed toward applications, particularly biomedical applications. Totally understandable: there's a lot of money there, there's a lot of need there. Everybody wants to live forever. I don't, but everybody else seems to want to. So why would we think that we should fund him, with all of the money that's in the Institute for Protein Engineering, which I think is what it's called? It's because we actually funded him to do some basic science: to build machines that don't have an application, but to learn something about the kinds of machines, and the kinds of machinery inside cells, by building something that has an interesting basic-science component to it. And that's a real impact. It was a terrific grant for us, because all of this architecture has already been built, but here is a new direction he can go with his colleagues that, for all of the funding he has, he can't pursue under the umbrella of biomedicine. So that's another way in which things can be more speculative. It's speculative in that he doesn't know where it's going; he doesn't know the application it will lead to. And so even for him, that's a lot harder to do unless something like Sloan steps in and says, well, this is more speculative. It's certainly not risky.
I don't think it's risky to fund David Baker to do anything, but it's speculative where this particular project is going to lead.

[00:43:36] Ben: Yeah, I like that: more speculation. And, on a slight tangent, you mentioned that Sloan operates at a certain scale. Do you ever team up with other philanthropies? Is that a thing?

[00:43:51] Adam: Yeah, we do, and we love co-funding. We've done that in many of our programs. In the technology program we co-funded with the Moore Foundation on data science. We have a tabletop physics program, which I haven't talked about: basically measuring fundamental properties of the electron in a laboratory the size of this office, rather than a laboratory the size of the Jura mountains, like CERN. That was a partnership with the National Science Foundation and also with the Moore Foundation. In our energy and environment program we've partnered with the Research Corporation, which runs a fascinating program called Scialog, where they bring young investigators out to Tucson, Arizona (or onto Zoom lately), mix them up together around an interesting problem for a few days, and then fund small pilot projects out of that. We've worked with them on negative emissions science and on battery technologies. Really interesting science projects, and we come in as a co-funder with them there. To do that, you really need an alignment of interests: you both have to be interested in the same thing, and you have to be a little bit flexible about the ways in which you evaluate proposals and put together grants, so that you don't drive the PIs crazy by making them satisfy two foundations at the same time. But where that is productive, it can be really exciting.
[00:45:24] Ben: I'm sure you're familiar with the Common Application for college. One of my biggest criticisms of grants in general is that you sort of need to be sending them everywhere, and there's the well-known issue where PIs spend some ridiculous proportion of their time writing grants. A philanthropic network where a proposal just got routed to the right people, with a lot happening behind the scenes, seems like it could be really powerful.

[00:46:03] Adam: That would be another level of collective collaboration, like the common app. I love the idea. I have to say it's probably hard to make it happen, for a couple of reasons that don't make it a bad idea but are just what planet Earth is like. One is that we have these very specific programs, so almost any grant has to be a little bit re-engineered to fit into a new foundation's program, because the programs are so specific. The second is that we at the Sloan Foundation are certainly very finicky about what review looks like, and foundations have different processes for assuring quality. The hardest work I find in a collaboration is aligning those processes, because we get very attached to them. It's a little like the tenure review processes at universities. Every single university has its own tenure process, and they think it was crafted by Moses on Mount Sinai and can never be changed, that it's the best it could possibly be. Then you go to another institution, the process is different, and they feel exactly the same way. That is a feature, or really a bug, of foundations, but it's part of the reality.
And certainly, if what we really need in order for there to be more collaboration, I strongly feel, is for everyone to adopt the Sloan Foundation's grant proposal guidelines and review practices, then all this collaboration stuff would be a piece of cake.

[00:47:35] Ben: It's like standards anywhere, right? Of course I'm willing to use the standard, as long as it's exactly mine.

[00:47:41] Adam: We have a standard; we're done. If you would just recognize that we're better, this would be so much simpler. It's the way you make a good marriage work.

[00:47:51] Ben: Speaking of foundations and philanthropic funding more generally: one of the criticisms leveled against foundations, especially in Silicon Valley, is that because there's no market mechanism driving the process, it can be inefficient and all that. I personally don't think market mechanisms are good for everything, but I'd be interested in your response to that.

[00:48:23] Adam: Let me broaden that criticism, because I think there's something there that's really important. The enormous discretion that foundations have is both their greatest strength and, I think, their greatest danger. There is no discipline forcing them to make certain sets of choices in a certain structure, whether that's markets or some other kind of disciplining force, and that much freedom can lead to decision-making that is idiosyncratic, inconsistent, and inconstant.
A more direct way to say it is that if no one constrains what you do and you just do what you feel like, maybe what you feel like isn't the best guide for what you should do. You need to be governed by a context that assures strategic consistency, strategic alignment with what is going on at other places in ways that serve the field, a commitment to quality, and other commitments that make sure your work as a funder is having high impact. And those don't come from the outside, so you have to come up with ways, internally, to keep yourself on the straight and narrow. There's a similar consideration, which goes beyond science funding and philanthropy, about the necessity of doing philanthropic work for the public good. That's a powerful ethical commitment we have to have. The money we have at the Sloan Foundation, or that the Ford Foundation and the Rockefeller Foundation have: I didn't make that money. What's more, Alfred P. Sloan, who left us this money, made it in a context in which lots of people did a lot of work and don't have that money, a lot of people working at General Motors plants, and he made it in a society that supported the accumulation of that fortune. And it's all tax-free, so the federal government is subsidizing this implicitly; society is subsidizing the work we do because it's tax-exempt. That imposes on us, I think, an obligation to develop a coherent idea of what using our funding for the public good means. Not every foundation will have the same definition, but we have an obligation to develop that sense in a thoughtful way, and then to follow it. That is one of the governors on simply following our whims.
So we think about that a lot here at the Sloan Foundation: the ways in which our funding is justifiable as producing a positive good that attaches to the science we fund, or to society in general. And if we don't see that, we think really hard about whether we want to do that grantmaking.

[00:51:47] Ben: I think about things in terms of systems engineering, so it's like you have these self-imposed feedback loops. While it's not an external market giving you that feedback, you can still set up the loops yourself.

[00:52:09] Adam: My colleague Evan Michelson, one of the program directors here, has written an entire book on science philanthropy, applying a framework that was developed and is used largely in Europe but is also known here in the States, called responsible research and innovation. It provides a particular framework for asking these kinds of questions about who you fund and how you fund, what sorts of funding you do, what sorts of communities you fund into, and how to think about doing that in a responsible way. It's not a book that provides answers, but one that provides a framework for thinking about the questions, and I think that's really important. As I say, and I'll say it again: I think we have an ethical imperative to apply that kind of lens to the work we do. We don't have an ethical imperative to come up with any particular answer, but we have an ethical imperative to do the thinking. And I recommend Evan's book to all.

[00:53:06] Ben: I will read it; recommendation accepted. And broadly, and this is something I think about somewhat selfishly: there are a lot of people who have made a lot of money, especially in technology.
It's interesting, because you can think of Alfred P. Sloan and the Rockefellers and the Carnegies as people who made a lot of money and then started these foundations, but you don't see as much of that now. You have some, but the sentiment I've engaged with a lot is, again, one that prioritizes market mechanisms: an implicit idea that anything valuable should be able to capture that value. Have you talked to people about that?

[00:54:08] Adam: Yeah, that's a really interesting observation, and something I think about a lot: the differences in the ways that today's newly wealthy business people, particularly the tech entrepreneurs, think about philanthropy as it relates to the way they made their money. Look at Alfred P. Sloan. He basically built General Motors. He was a brilliant young engineer who manufactured the best ball bearings in the country for about 20 years, which mattered enormously to the nascent automobile industry; as you can imagine, reducing friction is incredibly important, and ball bearings were incredibly important, and he made the best ball bearings. There's nothing sexy about ball bearings, but that is the perspective it gives you on auto manufacturing: the little parts need to work really well in order for the whole thing to work. And he built a big, complicated institution. General Motors is the case study in American business about how you build a large business out of semi-autonomous parts as a way of getting to scale.
How do you get General Motors to scale? You have Chevy, and Buick, and Pontiac, and Olds, and Cadillac, and GMC. He was a relentlessly practical and institutional thinker across a big institution, and the big question for him was: how do I create stable institutional structures that allow individual people to exercise judgment and intelligence, so they can drive their parts of the thing forward? He didn't believe that people were cogs in some machine; he believed that the structure of the machine needed to enable the flourishing of the individual. That's how he built General Motors. That does not describe the structure of a tech startup. Those are "move fast and break things"; that is the mantra there. You have an idea, you build it quickly, you don't worry about all the details, you get to scale as fast as you can with as little structure as you can. You don't worry about the collateral damage or, frankly, much about the people who maybe are the collateral damage. You just get to scale and follow your single-minded vision, and people can build some amazing institutions that way. It's been very successful over the last decades at building this incredible tech economy, so I don't fault people for thinking about their businesses that way. But when you turn that thinking to funding science, there's a real mismatch between that view, in which institutions don't matter, the old ones are broken, and new ones can be created immediately, and the fact that real research, while it often requires individual leaps forward and acts of brilliance, requires a longstanding, functioning community.
It requires institutions to fund that research and to host that research. The best research is actually done by people who have been engaged in decades-long careers doing a certain thing; it takes a long time to build expertise, and as brilliant as you are, you need people around you with expertise and experience. There's a real mismatch. So there can be a reluctance to fund, a reluctance to commit to those timescales, a reluctance to invest in institutions. There has developed, I think, a sense that we should fund projects rather than people and institutions. That's really good for solving certain kinds of problems, but it's a real challenge for basic research and for moving basic research forward. So I think there's a lot of opportunity to educate people, and these are super smart people in the tech sector, about the differences between universities, which are very important institutions in all of this, and tech startups. They really are different sorts of institutions. I think that's a challenge for us in this sector right now.

[00:58:48] Ben: What I'd like to do is tease apart why it's different. Why can't you just put in more nights on your research and come out with the brilliant insight faster?

[00:59:01] Adam: Well, these are people who are already working pretty hard, I would say. And as you know, different parts of science work on different sorts of problems. There are problems with a much more immediate goal of producing a usable, applicable technology, and those require organizing efforts in different ways.
And as you well know, the private laboratories like Bell Labs and Xerox PARC and so forth played a really important role in doing basic research that was inspired by a particular application, and they sat in the ecosystem in a somewhat different way than the basic research done in the universities. You need both. So it's not that everyone should fund science the way the Sloan Foundation does; if everybody only funded science that way, that would not be good. But the big money coming out of the newly wealthy has the opportunity to have a really positive impact on basic science, and only if it can be deployed in ways that are consistent with the way basic science is done. I think that requires some education.

[01:00:22] Ben: Speaking of institutions: as I know you're aware, there's this sort of Cambrian explosion of people trying stuff. In addition to your thoughts on that, I'm interested in whether you see gaps that people aren't trying to fill, but that you would want to shine a spotlight on from your overview position.

[01:00:52] Adam: That's a great question. I'm not going to be able to give you any interesting insight into what we need to do, but I'm greatly in favor of trying lots of things. I love what's going on right now, that people are running different experiments in how to fund science. I have a couple of thoughts, though. I do think most of them will fail, because in a Cambrian explosion most things fail. That's fine; if they all succeeded, people wouldn't be trying interesting enough things.
I do think there's a danger in too much reinventing of the wheel. One of the things I notice is that some of the new organizations are set up as hybrid organizations: they do some funding, but they also want to do some advocacy; they're not 501(c)(3)s; they maybe want to monetize the thing they're doing. And if you want to set up a Bell Labs, set up a Bell Labs. There are no magic bullets, no magic hybrid organization that will span research all the way from basic science to products and mysteriously solve the problem of plugging all of the holes in the research ecosystem. So I think it's great that people are trying a lot of different things, but I hope people are also willing to invest in the sorts of institutions we already have, and that there's a balance. There's a bit of a narrative you start to hear that runs down the way we're doing things now, that takes the perspective that everything is broken. I don't think everything is broken in the way we do things now. I don't think the entire research institution needs to be reinvented. I think interesting ideas should be tried; there's a distinction between those two things, and I would hate to see the money going disproportionately into inventing new things. I don't know what the right balance is, and I don't have a global picture of how it's all distributed. I would like to see both of those things happening. But I worry a little that if the tech billionaires all start to buy into a narrative that the system is broken and they shouldn't invest in it,
then it will be broken, and we'll [01:03:35] miss a great opportunity to do really great things. What Carnegie and Rockefeller left behind were great institutions that have persisted long after Carnegie and Rockefeller were gone, and in forms that Carnegie and Rockefeller could never have imagined. I would like that to be the aspiration and the outcome for the newly wealthy tech billionaires: the idea that you might leave something behind that, fifty or a hundred years from now, you wouldn't recognize, but that is doing good long past your own ability to direct it. That requires a long-term sense of your investment in society, and trust in other people to carry something on after you. To think more institutionally, and less about what's wrong with institutions, would be a [01:04:35] helpful corrective to much of the narrative that I see. And that is not inconsistent with trying exciting new things; it really isn't, and I'm all in favor of that. But the system we have has actually produced more technological progress than any other system at any other point in history, by a factor that is absolutely incalculable. So we can't be doing everything wrong. [01:04:58] Ben: I think that is a perfect place to stop. Adam, thanks for being part of Idea Machines. And now a quick word from our sponsors. Is getting into orbit a drag? Are you tired of the noise from rockets? Well, now with Zipple, the award-winning space elevator company, you can get a subscription service for only $1,200 a month. Just go to zipple.com/ideamachines for 20% off your first two months. That's zipple.com/ideamachines.
Managing Mathematics with Semon Rezchikov [Idea Machines #44] 57:16
In this conversation, Semon Rezchikov and I talk about what other disciplines can learn from mathematics, creating and cultivating collaborations, working at different levels of abstraction, and a lot more! Semon is currently a postdoc in mathematics at Harvard where he specializes in symplectic geometry. He has an amazing ability to go up and down the ladder of abstraction — doing extremely hardcore math while at the same time paying attention to *how* he's doing that work and the broader institutional structures that it fits into. Semon is worth listening to both because he has great ideas and also because in many ways, academic mathematics feels like it stands apart from other disciplines. Not just because of the subject matter, but because it has managed to buck many of the trends that other fields experienced over the course of the 20th century. Links Semon's Website Transcript [00:00:35] Welcome back to Idea Machines. Before we get started, I'm going to do two quick pieces of housekeeping. I realize that my updates have been a little bit erratic; my excuse is that I've been working on my own idea machine. That being said, I've gotten enough feedback that people do get something out of the podcast, and I have enough fun doing it, that I am going to try to commit to a once-a-month cadence, probably releasing on the first or second [00:01:35] day of the month. Second thing is that I want to start doing more experiments with the podcast. I don't hear enough experiments in podcasting, and I'm in this sort of unique position where I don't really care about revenue or listener numbers. I don't actually look at them, and I don't make any revenue. So with that in mind, I want to try some stuff. The podcast will continue to be a long-form conversation; that won't change. But I do want to figure out if there are ways to experiment: maybe something like fake commercials for lesser-known scientific concepts, or micro-interviews.
If you have ideas, send them to me in an email or on Twitter. So that's the housekeeping. In this conversation, Semon Rezchikov and I talk about what other disciplines can learn from mathematics, creating and cultivating collaborations, and working at different levels of abstraction. Semon is currently a postdoc in mathematics at Harvard, where he specializes in symplectic geometry. He has an amazing ability to go up and down the ladder of [00:02:35] abstraction, doing extremely hardcore math while at the same time paying attention to how he's doing the work and the broader institutional structures that it fits into. He's worth listening to both because he has great ideas, and also because in many ways academic mathematics feels like it stands apart from other disciplines, not just because of the subject matter, but because it has managed to buck many of the trends that other fields experienced over the course of the 20th century. So it's worth poking at why that happened, and perhaps how other fields might be able to replicate some of the healthier parts of mathematics. So without further ado, here's our conversation. [00:03:16] Ben: I want to start with the notion that I think most people have of how mathematicians go about (a) working on things and (b) thinking about what to work on: you go into a room, you maybe read some papers and think really hard, and then [00:03:35] you find some problem. Then you spend some number of years at a blackboard, and then you come up with a solution. But apparently that's not how it actually works. [00:03:49] Semon: Okay. I don't think that's a complete description. People definitely spend time in front of blackboards. The typical length of a project can definitely vary between disciplines, and within mathematics. On the other hand, it's also hard to define what a single project is.
There might be a single intellectual arc through which several papers are produced, where you don't even quite know the end of the project when you start. So two years on a single project is probably a significant project for many people, because that's just a lot of time. But it's true that even a graduate student might spend several years working on a single larger set of ideas, because the community has enough [00:04:35] stability to allow for that. It's not entirely true that people work alone, though. These days mathematics is pretty collaborative. If you work alone, in the end you're probably making a lot of stuff up and doing self-consistency checks through the formal algebra, the technique of proof, which helps you stay sane. But when other people can think about the same objects from a different perspective, usually things go faster, and at the very least it helps you decide which parts of the mathematical ideas are real. So often people work with collaborators, or there might be a community of people who are talking about some set of ideas, maybe misunderstanding one another a little bit, and biting off pieces of a collectively imagined [00:05:35] mathematical construct to make real on their own or with smaller groups of people. So all of those happen. [00:05:40] Ben: And how do these collaborations come about, and how do you structure them? [00:05:44] Semon: That's a great question. There are probably several different models; I can tell you some that I've run across. Sometimes there are conferences, and then people might start talking.
Recently I was at a conference, and I went out to dinner with a few people. After dinner we were talking about some of our recent work and trying to understand where it might go, and somebody said, oh, you know, I didn't get to ask you any questions; here's something I've always wanted to know from you. And they were like, oh yes, this is how this should work, but here's something I don't know. And then somehow we realized that there was some very reasonable guess as to what the answer to something that needed to be known would be. So I guess now we're writing a paper together, [00:06:35] hopefully that guess works. So that's one way to start a collaboration: you go out to a fancy dinner, and afterwards you're like, hey, I guess we maybe solved a problem. There are other ways. Sometimes two people might just realize they're confused about the same thing. I have a collaboration like that: coming from somewhat different technical backgrounds, we both realized we were confused about a related set of ideas, and we were like, okay, well, I guess maybe we can try to get unconfused together. [00:07:00] Ben: Can I interject? I think that realizing you are confused about the same problem as someone who's coming at it from a different direction is actually hard in and of itself. What is actually the process of realizing that the problem both of you have is in fact the same problem? [00:07:28] Semon: Well, you probably have to understand a little bit about the other person's work, and you probably have to in some [00:07:35] way have some basal amount of rapport with the other person first, because you're not going to get yourself to engage with this foreign language unless you like them to some degree. So that's actually a crucial thing, the personal aspect of it.
Then, because maybe you like this person's work and the way they go about it is interesting to you, you can try to talk about what you've recently been thinking about. And then the same mathematical object might pop up. Truly any mathematical object worth studying usually has incarnations in different formal languages, which are related to one another through highly non-obvious transformations. For example, everyone knows about a circle, but you could think of a circle as the set of points of distance one, or you could think of it as some sort of closed knot. There are many different concrete [00:08:35] intuitions through which you can grapple with this sort of object. And usually if that's true, it tells you that it's an interesting object; if a mathematical object only exists because of a technicality, it maybe isn't so interesting. So that's why it's possible to notice that the same object occurs in two different people's misunderstandings. [00:08:53] Ben: Yeah. But I think the cruxy thing for me is that at the end of the day it's a really human process. There's not a way of colliding what both of you know without hanging out. [00:09:11] Semon: People can try to communicate what they know through text. People write reviews. I gave a few talks recently, and a number of people have asked me to write a review of this subject. There's no subject, just to be clear; I kind of gave the talk with the impression that there is a subject to be worked on, but nobody's really done any work on it. You're [00:09:35] memeing this subject into existence; that's definitely part of your job as an academic.
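Semon's circle example can be written out concretely. These are standard incarnations of the same object (my notation, not from the conversation); the point is that the identifications between them are far from obvious from the definitions alone:

```latex
% Three standard incarnations of the circle S^1:
S^1 = \{ (x, y) \in \mathbb{R}^2 : x^2 + y^2 = 1 \}   % the set of points at distance one
S^1 \cong \{ z \in \mathbb{C} : |z| = 1 \}             % the unit complex numbers, a group
S^1 \cong \mathbb{R} / \mathbb{Z}                      % the real line modulo integer shifts
```

The knot-theoretic picture he mentions, the circle as a closed knot (the unknot) sitting in 3-space, is a fourth.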
But that's one way of explaining things that can be less one-on-one, less personal. A different version of that is that people write problem statements: I think these are interesting problems. There are all these famous lists of conjectures in any given discipline. Usually, when people decide, oh, there's an interesting mathematical area to be developed, at some point they have a conference and somebody writes down a list of problems. The conditions for these problems are that they should matter, that they should help you understand the larger structure of the area, and that the problems to solve should be precise enough that you don't need some very complex motivation to be able to engage with them. That's part of the trick in mathematics: different people have very different internal understandings of something, but you reduce the statements or [00:10:35] the problems or the theorems, ideally, down to something that doesn't require a huge superstructure to engage with, because then people with different techniques or perspectives can engage with the same thing. That depersonalizes it, and that's a deliberate tactic, I think. [00:10:51] Ben: And do you think mathematics is unique in its ability to have those clean problem statements? I get the sense that it's almost higher status in mathematics to just declare problems, whereas it feels like in other disciplines, one, the problems are much more implicit: anybody in some specialization has an idea of what they are, but they're very rarely made explicit.
And, two, pointing out [00:11:35] problems is fairly low status unless you simultaneously point out the problem and then solve it. Do you think there's a cultural difference? [00:11:45] Semon: Potentially. Anyone can make conjectures, but usually if you make a conjecture it's either wrong or uninteresting, meaning it's true but the resulting proof is boring. So to get anyone to listen when you state problems, you need a certain amount of credibility. By comparison, maybe if you have a cell, it's clear: okay, you don't understand the cell, you don't understand what's in it, it's a blob that does magic, and the problem is to understand the magic. In math, you can't see the thing. So in some sense, defining problems is part of that; it's very similar to somebody showing somebody, look, here's a protein. Oh, interesting. That's a very [00:12:35] similar process. And I do think that pointing out, look, here's a protein that we don't understand, and you didn't know about the existence of this protein, can be fairly high-status work in, say, biology. So that might be a better analogy. [00:12:46] Ben: Yeah, I like that a lot. You could almost say that math does not have the substrate, the context, of reality. [00:12:56] Semon: I mean, it's there, right? It's just that you have to know what to look for in order to see it. Number theorists love examples like this: oh, everybody knows about the natural numbers, but they love pointing out, here's this crazy pattern you would never think of, because you don't have this overarching perspective on it that they have developed over a few thousand years. [00:13:22] Ben: Has number theory really been around for a few thousand years? [00:13:25] Semon: It's pretty old, yeah.
[00:13:27] Ben: What would you, [00:13:30] and this is just curiosity, what would you call the first [00:13:35] instance of number theory in history? [00:13:38] Semon: I'm not really sure; I'm not a historian in that sense. Certainly Pell's equation is related to all kinds of ancient problems, in Greece, I think. I don't exactly know when the Chinese remainder theorem is from; I'm just not a historian, unfortunately. But I do think the basics are very old. The square root of two is a very old thing; the irrationality of the square root of two is really ancient, and number theory must predate that by quite a bit, because that's a very sophisticated question. [00:14:13] Ben: Okay. So then, going back to collaborations: a surprising thing you've told me about in the past is that in mathematical collaborations people have different specializations, in the sense that the collaborations are not just completely flat, with everybody sort of [00:14:35] stabbing at the problem together, and that you've actually had pretty interesting collaboration structures. [00:14:43] Semon: Yeah. Different people are naturally drawn to different kinds of thinking, and so they naturally develop different thinking styles. There are different parts of mathematics, like analysis or algebra or technical questions in topology or whatnot, and some people just happen to know certain techniques better than others. That's one axis along which you could classify people. A different axis is taste, what people think is important. Some people want a very rich formal structure.
Other people want a very concrete, intuitive structure, and those lead to very different questions. That's something I've had to navigate recently: there's a group of people who are mathematical physicists and like a very rich formal structure, and there are other [00:15:35] people who do geometric analysis, geometric objects defined by partial differential equations, and want something very concrete. There are relations between the questions in those areas, so I've spent some time trying to think about how one can profitably move from one to the other. That forces you to navigate a certain kind of tension. So you have different axes along which people vary. Here's one: the frogs-and-birds dichotomy. This is a very strong phenomenon in mathematics. [00:16:09] Ben: That was originally Dyson? [00:16:11] Semon: Maybe, I'm not sure, but it's certainly a very helpful framework. Some people really want to take a single problem and stab at it; other people want to see the big picture and how everything fits together. And both of these types of work can be useful or useless depending on the way the person approaches them. So often collaborations have one person who's more [00:16:35] birdlike and one who's more froglike, and that can be very productive. [00:16:40] Ben: Let's dig into that a little bit. What are the success and failure modes of birds, and the success and failure modes of frogs? [00:16:54] Semon: Great question. I feel like this is somehow very clearly known.
What frogs fail at is that they can get stuck on a technical problem which does not matter to the larger universe. In the long run they can spend a lot of work resolving technical issues which then don't really get looked at, because in the end they didn't matter for progress. What they can do is discover something that is not obvious from any larger superstructure, by directly [00:17:35] engaging with the lower-level details of mathematical reality. So they can show the birds something the birds could never see. And simultaneously they often have a lot of technical capacity, so there might be some hard problem which no large perspective can help you solve, you just have to actually understand that problem, and they can remove the problem. That can open up a new world. That's the frog. The birds have the opposite success and failure modes. The success mode is that they point out, oh, here's something you could have done that was easier, here's a missing piece in the puzzle, and then it turns out that's the easy way to go. Mathematical physicists have a history of being birds in this way: they point out, well, you were studying this equation to study the topology of four-manifolds; instead you should study a different equation, which is much easier and will tell you all the same things. And the reason for this is sort of incomprehensible to mathematicians, but it made it much easier to solve a lot of problems. That's the [00:18:35] ultimate bird success. The failure mode is that you spend a lot of time piecing things together, but then you only work on problems which make sense from this huge perspective.
Those problems end up being uninteresting to everyone else, and you end up trapped by the elaborate complexity of your own perspective. You start computing some quantity which is interesting only if you understand this vast picture, and it doesn't really shed light on anything that's simple for people to understand. That's usually not good. If you develop a new formal world, maybe it's fine to work on it on its own terms, but it is in the end partially validated by solving problems that other people could ask without any of this larger understanding. [00:19:26] Ben: Yeah. So you can actually be too, [00:19:31] Semon: too general, almost. That's very often a [00:19:35] problem. One bit of mathematics that is popular among non-mathematicians, for interesting reasons, is category theory. A lot of computer scientists are familiar with category theory because it's been applied to programming languages fairly successfully. Now, category theory is extremely general. The mathematical joke description of it is that it's "abstract nonsense," and that's a technical term: there is "proof by abstract nonsense." There are a number of interesting technical terms, like "morally true" and "proof by abstract nonsense," which have interesting connotations. A proof by abstract nonsense is when you have some concrete question you want to answer, and you realize that its answer follows from the categorical structure of the question: if you fit the question into the [00:20:35] framework of categories, there's a very general theorem in category theory which implies what you wanted. What that tells you, in some sense, is that
your question was not interesting, because it really wasn't a question about the concrete objects you were looking at at all; it was a question about relations between relations. There's this other phrase, that the purpose of category theory is to make the trivial trivially trivial. And this is very useful, because it lets you skip over the boring stuff, and the boring stuff could otherwise get you stuck for a very long time and can have a lot of content. So category theory in mathematics is on one hand extremely useful, and on the other hand can be viewed with a certain amount of suspicion, because people can start working on these very abstruse categorical constructions, some more complicated than the ones that appear in programming languages, which most mathematicians can't make heads or tails of, and some of those [00:21:35] are not necessarily developed in a way that makes them relevant to the rest of mathematics. So there is a natural tension that anyone interested in category theory has to navigate: how far do you go into the land of abstract nonsense? Even as mathematicians are viewed as the abstract-nonsense people by most people, within mathematics the hierarchy continues; it's fractal. The hierarchy is preserved for the same reasons. [00:22:02] Ben: That actually goes back to, I think, what you mentioned about the failure mode of frogs, that they can end up working on things that ultimately don't matter. I want to poke at how you think about what things matter and don't matter in mathematics, because I think about this a lot in the context of technologies. People always think technology needs to be useful to, like, some [00:22:35] end consumer. But then
you often need to do some useless things in order to eventually build a useful thing. And in mathematics, usefulness in the sense of "I'm going to use this for a thing in the world" is not the metric, but there are still things that matter and don't matter. So how do you think about that? [00:23:01] Semon: It's definitely not true that people decide which mathematics matters based on its applicability to real-world concerns. That might be true in applied mathematics, actually; inasmuch as there's a distinction, it's sort of a distinction of values and judgment. But in mathematics: I said that a mathematical object is more real, in some sense, when it can be viewed from many perspectives. So there are certain objects which many different kinds of mathematicians can grapple with, and there are certain questions which any mathematician can [00:23:35] understand, and that is one of the ways people decide that mathematics is important. For example, here's a question which I would think is important. I'm going to say something technical, but I can explain what it means: understanding statements about the representation theory of the fundamental group of a surface. What that means is, if you have any loop in a surface, you can assign to that loop a matrix. The condition on this assignment is that if you compose two loops, by going around one after the other, then you assign to that composed loop the product of the two matrices. And if you deform a loop, then the matrix you assign is preserved under the deformation. So the question is: can you classify these things? Can you understand them?
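What Semon describes is, in standard language, a representation of the fundamental group. A sketch of his two conditions in notation (the target group of n-by-n matrices is my assumption, not something he specifies):

```latex
% Assign a matrix to each loop in the surface S:
\rho : \pi_1(S) \longrightarrow \mathrm{GL}_n(\mathbb{C})
% "Compose the loops, assign the product of the matrices":
\rho(\alpha \cdot \beta) = \rho(\alpha)\,\rho(\beta)
% "Deforming the loop preserves the matrix": \rho(\alpha) depends only on the
% homotopy class of \alpha, which is why \rho is well defined on \pi_1(S).
```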
They turn out to be relevant to partial differential equations of all different kinds, to physics, to topology. So progress on that is obviously important, because it turns out to be connected to other questions all over mathematics. That's one perspective: the simplest questions, the ones any mathematician would find interesting, because they can understand them and go, oh yeah, that's nice. That's one way of measuring importance. A different one is about the narrative. Mathematicians spend a lot of time making sure that all of mathematics is, in practice, connected with the rest of it, and there are all these big narratives which tie it together. Those narratives often tell us things that go far beyond what we can prove. We know a lot more about numbers than we can prove; in some sense we have much more evidence. So, maybe one example: the Riemann hypothesis is important, and we have more evidence for the Riemann hypothesis, in some sense, than we have for [00:25:35] any physical belief about our world. And it's not just important because it's some basic question; it's important because it's a keystone in some much larger narrative about the statistics of many kinds of number-theoretic questions. So there are other questions which might sound abstruse and are not so simple to state, but which would clarify a piece of this larger conceptual understanding, with all its conjectures and heuristics and so forth. Making a heuristic rigorous can be very valuable; the statement itself might be extremely complex, but it tells you whether this larger understanding of how you generate all the heuristics is correct or not. And that is important.
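Looping back to the category-theory aside from a moment ago: the programming-language connection Semon mentions can be made concrete with the functor laws. This is my own toy example, not one from the conversation. For the list functor, a "proof by abstract nonsense" shape of statement is that mapping a composite function agrees with composing two maps, a law category theory guarantees for every functor, with nothing list-specific left to check:

```python
# Toy illustration of the functor composition law for the list functor.

def compose(g, f):
    """Ordinary function composition: compose(g, f)(x) == g(f(x))."""
    return lambda x: g(f(x))

def fmap(func, xs):
    """Lift a function on elements to a function on lists (the list functor)."""
    return [func(x) for x in xs]

# The law: fmap(g . f) == fmap(g) . fmap(f), checked on a concrete instance.
f = lambda x: x + 1
g = lambda x: x * 2
xs = [1, 2, 3]

lhs = fmap(compose(g, f), xs)   # map the composite function in one pass
rhs = fmap(g, fmap(f, xs))      # map f, then map g
assert lhs == rhs == [4, 6, 8]
```

The point of the "abstract nonsense" framing is that the equality holds by the general theory; the concrete check is redundant, which is exactly what makes the question about lists "not interesting" in Semon's sense.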
There's also surprise. People might have questions where they expect the answer to be something, and then you show it's not that. That's important, if there are strong expectations, although it's not that easy to form expectations in mathematics. [00:26:30] Ben: But, as you were saying, there are these narrative arcs, [00:26:35] and you do something that is both correct and defies the narrative. [00:26:39] Semon: That's interesting; that means there must be something there. Or maybe not; maybe it's only because there was some technicality, and the technicality doesn't enlighten the rest of the narrative. So that's a balance which people argue about, and it's determined in the end, I guess, socially, but also through the production of results and theorems and mathematical experiments and so forth. [00:27:04] Ben: I'm going to yank us back to collaborations. In the past we've talked about how you actually do program management around these collaborations, and I got the impression that mathematics actually has pretty good standards for how this is done. [00:27:29] Semon: What do you mean by program management? [00:27:31] Ben: Meaning [00:27:35] how you basically manage your collaborators. You were talking about how you need to wrangle people, just how to manage your collaborators. [00:27:51] Semon: So I guess... [00:27:54] Ben: We were developing a theory on that. [00:27:56] Semon: Yeah, a little bit. So, on one hand: in the sciences there's usually somebody with money, and then they kind of determine what happens.
[00:28:08] Ben: Is this a funder, or is this like a... [00:28:10] Semon: I'd say the PI, usually. In the sciences, maybe the model is funding agencies, PIs, and lab members, and often the PIs are setting the direction. The grant people are essentially putting constraints on what's possible, so they steer the direction in some much larger way, but they can't really see the ground at all. And [00:28:35] then a bunch of creative work happens at the lowest level, but you're very constrained by what's possible in your lab. In mathematics there aren't really labs. There are certainly places where people know more than other places about certain parts of mathematics, and it's hard to do certain kinds of mathematics without people around you who know something, because most of the mathematics isn't written down. [00:28:58] Ben: That statement is shocking in and of itself. [00:29:01] Semon: It's actually similar in the sciences, right? Most things people know about the natural world aren't really that well documented. That's why it sometimes pays to be lower down the chain: you might find something that isn't known. But because of that, people can work very independently and even misunderstand one another, which is good, because the misunderstanding can lead to creative developments, where people with different tastes find different aspects of the same problem interesting, and the whole thing is better that way. [00:29:34] Ben: And then [00:29:35] resolving the confusion in a legible way... [00:29:40] Semon: ...sort of pushes the field forward. But also, because everyone can work on their own, coordination involves a certain amount of narrative alignment.
And so you have to understand, oh, this person is naturally suited to this kind of question, this person is naturally suited to that kind of question. So what are questions where, first of all, you would need both people to make progress? That gives you competitive advantage, which is extremely important in kind of any scientific landscape. And secondly, if you can find a question of overlap, then there's some natural division of labor, or some natural way in which both people can enlighten the other in surprising ways. If you can do everything yourself and just have some other person write it up, that's sort of not that much fun. So yeah, and then, on a [00:30:35] larger... but that's a single-project collaboration. To do larger collaboration, you essentially have to assign social value to questions, right? Math is small enough that it can just barely survive with its credit-assignment system running almost entirely on the basis of the social network of mathematicians. Oh, interesting. Okay. It is certainly important to have papers refereed, because it's important for somebody to read a paper and check the details, so the journals do matter, but a lot happens socially. So it doesn't have the same scaling that biology or machine learning has, in part because it's small. [00:31:20] Ben: Do you know roughly how many mathematicians there are? I can look this [00:31:25] Semon: up. I mean, it depends on who you count as a mathematician. So that's the thing. The reason I'm asking [00:31:35] that is because of course there's the American Mathematical Society, and they publish, like, this is the number of mathematicians. And the thing is, they count quite a lot of people. So that decision actually dramatically changes your answer.
I would say there are on the order of tens of thousands of mathematicians, if you think about the number of attendees of the ICM, the International Congress of Mathematicians. And then, you know, it depends on pure mathematicians, how pure; that number is going to go up and down. But that's the right order of magnitude. Okay. Which is very small given that [00:32:12] Ben: compared to most other disciplines, then, especially compared to science as a whole, like research [00:32:20] Semon: as a whole. Yeah. So I think if you look at, say, Harvard Business School, they have an MBA program, which my impression is is serious, [00:32:35] and then you also look at all the math PhD graduates from the top 15 or so US schools, I think the MBAs are several times larger. Yes. So that's... maybe I was surprised to learn that. [00:32:50] Ben: That's also good. Instead of [00:32:51] Semon: like, you can look at the output rate, the flow rate. That's a very easy way to decide. Yeah. But so, depending on... there are certain things you have to do. If you want to work with people, you have to find them. You can't really be a PI in mathematics, but if you are good at talking to people, you can encourage people to work on certain questions, so that over time a larger set of questions gets answered. And you can also make public statements, which are in some ways invitations: like, if you guys do these [00:33:35] things, then it'll be better for you, because they fit into a larger context, so therefore your work is more significant. You're actually doing them a service by explaining some larger context.
And simultaneously, by pointing out that maybe some problem is easy, or comparatively easy, to some people, a problem that you yourself might not do. That helps you if they then solve the problem, because you made a correct prediction that there is good mathematics there. Yeah. So this is some complicated social game. Mathematicians are kind of strange socially, but they do play this game, and the way in which they play it depends on their personal preferences and how social they are. [00:34:13] Ben: And actually, speaking of the social nature of mathematics, I get the impression that mathematics as a discipline feels much closer to what one might think of as old academia than many other disciplines, in the sense that my impression is [00:34:35] that your tenure isn't as much based on how much grant money you're getting in, and it's not quite as much a paper-mill, up-or-out [00:34:46] Semon: game. Yeah. There's definitely pressure to publish. The expected publishing rate definitely depends on the area. So, you know, probability publishes more; in some ways it's a little bit more like applied mathematics, which has more of a paper-mill quality to it. I don't want to overstate that. But there is space for people to write just a few papers, if they're good, and still get a job. Yeah. And it's definitely true, as I think in the rest of the sciences, that high quality trumps quantity, right? Modulo the fact that you do have to produce a certain amount of work in order to stay in academia. And in the end, where you end up is very much determined by the significance of your work, right? And being very productive consistently certainly helps, because people are kind of not as [00:35:35] worried.
But yeah, it's definitely not determined based on grant money, because essentially there's not that much grant money to go around. So that makes it have more of this old-school flavor. And it's also true that it's genuinely not strange for people to graduate from a PhD program with just their thesis. And they can do very well, so long as during grad school they learn something that other people don't know and that matters. That seems to be what's helpful. So that allows for... yeah, there's this weird trick that mathematicians play, where proofs are supposedly a universal language that everyone can read. That's not quite true, but it tries to approximate that ideal. Everyone is sort of allowed to go on their own little journey, and the community does spend a lot of work trying to defend that. [00:36:25] Ben: What does that work [00:36:27] Semon: actually look like? Well, it is actually true that grad students are not required to publish a paper a year. Yeah, [00:36:35] that's true. And people, I think, do defend that kind of position, and they are willing to put their reputation on the line in the larger hiring process to defend that. Separately, it's true that work that is not coming out of one of the top three people or something can still be considered legitimate, because a proof is a proof; no one can disagree with it. So if some random person makes some progress, it's actually very quickly accepted, if people can understand it. And this allows communities to work without quite understanding one another for a while, and maybe make progress that way, which can be [00:37:18] helpful. Ben: Yeah. And most of the funding for math departments actually comes from teaching.
Is that [00:37:26] Semon: Yeah, I think a lot of it comes from teaching. A certain chunk of it comes from grants; basically, people use grants in order to teach less. Yeah, that's more or [00:37:35] less how it works. Of course, mathematics has this current phenomenon where rich individuals fund a department or a prize. But by and large, it seems to be less dependent on these gigantic institutional handouts from, say, the NSF or the NIH, because the expenses aren't at that scale. But it does also mean that it is sort of constrained. You know, big biology has kind of so much money, maybe not enough, not as much as it needs; I mean, the grant acceptance rates are extremely low. [00:38:13] Ben: If, for some reason, every mathematician magically had, say, an order of magnitude more funding, [00:38:21] Semon: would it matter? Yeah. So it's not clear that they would know what to do with that. I have thought a lot about the question of to what degree mathematics is some kind of social enterprise, and that's maybe true of every research [00:38:35] program, but it's particularly true in mathematics, because it's so dependent on individual creativity. So I've thought a lot about to what degree you could scale the social enterprise, and in what directions it could scale, because it's true that producing mathematicians is essentially an expensive and ad hoc process. But at the same time, it's plausibly true that people might be able to do research of a somewhat different kind, just in terms of collaborations, or in terms of what they felt free to do research on, if they had access to a different kind of funding. Like, math itself is cheap, but the kind of freedom to say, okay, well, these next two years I'm going to do this kind of crazy different thing...
And that does not have to fit with my existing research program; that you have to sort of fight for. And that's a more basic structural thing about the structure of math academia. [00:39:27] Ben: I feel like that's structurally baked into almost the entire world, where it's [00:39:35] very hard to do something completely different from the things that you have done, right? People are much more inclined to help you do things like what you've done in the past, and they are inclined to push against you doing different things. Yeah, [00:39:50] Semon: that's true. [00:39:50] Ben: And, speaking of money, in the past you've also pointed out that math is terrible at capturing the value that it creates. [00:40:02] Semon: Well, yeah. I mean, it may be hard to estimate the human capital value. Maybe all mathematicians should be doing something else; I don't really know how to reason about that. But it's definitely objectively very cheap, just in the sense that all the funding that goes into mathematics is very little. And arguably the [00:40:21] Ben: sort of downstream... basically every technical anything we have is to some extent downstream of mathematics. [00:40:32] Semon: There is an argument to be made of that kind. You know, [00:40:35] I don't think one should over... I think there are extreme versions of this argument which are maybe not helpful for thinking about the world. Like, you shouldn't think, ah yes, computer science is downstream of, like, this Turing thing. I don't really know that it's fair to say that. But it is true that whenever mathematicians produce something that's more pragmatically useful for other people, it tends to be easy to replicate, and it tends to be very robust.
So there are lots of other ideas of this kind. And separately, even a bunch of the value of mathematics to the larger world seems to me to not even be about specific mathematical discoveries, but to be about the existence of this larger language and culture. So, you know, neural network people now have all of these equivariant neural networks. Yeah. That's all very old mathematics. But it's very helpful to have that stuff... you need to have those kinds of ideas be completely explored [00:41:35] before a totally different community can really engage with them. And that sort of underlying cultural substrate actually does allow for different kinds of things, because doing that exploration takes a few people a lot of time. So in that sense, it's very hard to... yeah. Most mathematicians do things which will have no relevance to the larger world, although they may be necessary for the progress of the more useful basal things. Like, the idea of a manifold came out of studying elliptic functions, historically, and manifolds are a very useful idea. And elliptic functions are... I mean, they're also useful, but maybe less well known; certainly, I think a typical scientist does not know about them. Yeah. But it did come out of studying transformation laws for elliptic functions, which is a pretty abstruse-sounding thing. So because of that, it's very hard to find a way for mathematicians to kind of dip into the future. Because, like, you can't have a startup. You know, it's not going to be industrially useful, but it is [00:42:35] clearly on this sort of path, in a way that it's very hard to imagine removing completely. Yeah.
[00:42:42] Ben: So, no, I like it also because it's again sort of this extreme example of some kind of continuum, where everybody knows that math is really important, but then everybody also knows that it's not immediately [00:43:02] Semon: applicable. Yeah. And there's this question of how do you make navigating that continuum smoother, and that's a cultural issue and an institutional issue to some degree. You know, it's probably true that mathematicians do know lots of stuff; empirically, they get hired, and their lives are fine. So it seems that people recognize that. But, you know, that's also in part because mathematicians try to preserve this sort of space for [00:43:35] people to explore: there is a lot of resistance in the pure mathematics community to people trying random stuff and collaborating with people. And there is probably some niche for interactions between mathematically minded people and things which are more relevant to the contemporary world, or near-contemporary world. And that niche is one whose navigation is a little bit obscure. There are some institutions around it, but it doesn't seem to me to be completely systematized. And that's in part because of the resistance of the pure mathematics community. Historically, I mean, it's true that statistics departments kind of used to be part of pure mathematics departments, and then they got kicked out, or probably they left and were like, we can make more money than you. No, seriously, I don't know. I mean, the Berkeley stats department is famously one of the first ones to do this. I don't know the detailed history, but there was definitely some kind of conflict, and it was a cultural conflict. Yeah.
So these sorts of cultural [00:44:35] issues are things that I guess everyone has a say in, and I'm kind of very curious how they will evolve in the coming 50 years. Yeah. [00:44:42] Ben: To change the subject just a bit again: can you dig into how... do you call them retreats? The thing where you get a bunch of mathematicians and you get them to all live in a place [00:44:56] Semon: for a while. So there are a couple of things there. There are research programs. That's where some institute flies together postdocs, maybe some grad students, maybe some senior faculty, and they all spend time in one area for a couple of months in order to maybe make progress on some kind of idea or question. So yeah, there are dedicated institutes for doing that. In some sense, this is one of the places where external [00:45:35] funding has changed the structure of mathematics, because the Institute for Advanced Study is basically one of these things. Yes, this institute at Princeton where basically a few old people... I mean, I'm kind of joking, but there are a few kind of totemic people, people who have gone there because they did something famous, and they sit there. And what the Institute for Advanced Study actually does in mathematics is it has these semester-long or year-long programs, which are just housing and funding for a bunch of people to spend a year there, or half a year there, or to fly in for a few weeks a few times in the year.
And that gets everyone together in one area, and maybe by interacting they can figure out what's going on in some theoretical question. A different thing that people have done, in a much more short-term way, is a kind of interesting conference format, which reminds me a little bit of unconferences or whatnot, but is actually kind of very serious, where people choose a hot topic [00:46:35] in contemporary research, and then they rent out a giant house, and they have, I don't know, 20 people live in this house and maybe cook together and stuff. And the whole thing is like a week-long learning seminar, where there are some people who are real experts in the area, and a bunch of people who don't know that much but would like to learn, and everyone has to give a talk on subjects that they don't know. And the more senior people can go and point out if there is a confusion. And yeah, there are talks from nine to five, and it's pretty exhausting, and then afterwards everyone goes on a hike or sits in the hot tub and talks about life and mathematics. And that can be extremely productive and very fun. And it's also extremely cheap, because it's much cheaper to rent out a giant house than it is to rent out a bunch of hotel rooms, if you're willing to do that, which most mathematicians are. [00:47:25] Ben: And a story... I don't know if I'm misremembering this, but I remember you telling me a story where there were two people who needed [00:47:35] to figure something out together, and they never would have done it except for the fact that they were just sitting at dinner together every night for some number of nights. [00:47:45] Semon: I mean, there are definitely apocryphal stories of that kind, where eventually people realize that they're talking about the same thing.
I can't think of an example, right? I think I told you... you asked me, is there an example of a research program where it's clear that some major advance happened because two people were in the same area? And I gave an example, which is a very contemporary example, far outside my area of expertise, which is this Peter Scholze and Laurent Fargues kind of local geometric Langlands stuff, where basically, at this institute in Berkeley, they had a program and these two people were there. And Scholze is a really technically visionary guy, and Fargues had thought very deeply about certain ideas, and then they realized that basically Fargues' dream could actually be made real. And I think before that [00:48:35] people didn't quite realize how far this would go. So I just gave you that as an example, and that happens on a regular basis. That's maybe the reason why people have these programs and conferences, but it's hard to predict. So, you know, I wish I could measure a rate. Yes. [00:48:50] Ben: You just need that marination. Okay, a weird thought that just occurred to me: this sort of just getting people to hang out and talk is unique in mathematics, because you can actually do real work by talking and writing on a whiteboard. And if you wanted to replicate this in some other field, you would actually need that house to be stocked with laboratory equipment or something, so that instead of just talking, people could actually poke at whatever the [00:49:33] Semon: subject is. That would [00:49:35] be ideal, but that would be hard, because experiments are slow.
The thing that you could imagine doing, or I could imagine doing, is, if people were willing to share very preliminary data, then they could both look at something and figure out, oh, I have something to say about your findings. And I don't know to what degree that really happens at, say, biology conferences, because there is a lot of competitive pressure to be very deliberate in the disclosure of data, since it's sort of your biggest asset. Yeah. [00:50:05] Ben: And how does mathematics not fall into that trap? [00:50:11] Semon: That is a great question. In part, there are somewhat strong norms against that. Because the community is small enough, everyone finds out, like, oh, this person just scooped someone. Yeah, there's a very strong norm against scooping. That's lovely. It's okay in certain contexts: if it's clear to everyone that somebody [00:50:35] could do this, and somebody does the thing, then it's sort of not really scooping. Sure. But if there really is... you know, word gets around about who had which ideas and when, and people who behave in a way that seems particularly adversarial face consequences for it. So that's one way in which mathematics avoids that. Another way is that it's actually true that different people have different skills. It is a little bit less competitive structurally, because it isn't like everyone is working on the same three problems, and everyone has all the money to go and just do the thing. And [00:51:16] Ben: it's small enough that everybody can have a specialization, such that you can always do something that someone else can't. [00:51:24] Semon: Often, I mean, that might depend on who you are. But yeah, it's more like it's large enough for that to be the case, right?
Like, you [00:51:35] can develop some intuition about some area where, yeah, other people might be able to prove what you're proving, but you might be much better at it than them. So people will be like, yeah, why don't you do it? That's helpful. Yeah, that's useful. I mean, it certainly can happen that in the end there's some area where everyone has the same tools, and then it does get competitive. Sorry, I think in some ways it has to do with a diversity of tools: if every different lab has a tool which the other labs don't have, then there's less reason to compete; you might as well cooperate. But also that has to do with the norms, right? The pressure of being the person on the ground, that's a very harsh constraint. I mean, my understanding, I guess, is that it is largely imposed by the norms of the community itself, in the sense that a lot of NIH grants are actually determined by committees of scientists. [00:52:35] So, [00:52:38] Ben: I mean, you could argue about that, right? Because [00:52:41] Semon: don't, [00:52:42] Ben: I mean, yes, but then those committees are sort of mandated by the structure of the funding agencies, right? And there's of course a feedback loop, and they've been so intertwined for decades that I'm not clear which way the causality runs. [00:53:02] Semon: Yeah. So, I mean, those are my two guesses for how it works: one, there's just a very strong norm against this, and two, if you're the person with the idea and you put the other person on the paper because they were helpful, you don't lose that much. So you're just not that disincentivized from doing it. In the end, people will kind of find out who did what work, to some degree, even though officially credit is shared.
And that means that, you know, everyone can kind of get credit. [00:53:35] Ben: It seems like a lot of this does depend on [00:53:38] Semon: scale. Yeah, it's very scale-dependent, because you can actually find out, right? And that's a trade-off, obviously. But maybe not as bad a trade-off in mathematics, because it's not really clear what you would do with a lot more scale. On the other hand, you don't know. If you look at, say, machine learning, this is a subject that's grown tremendously, and in part they have all these crazy research directions, which, I think, in the end can only happen because they've had so many different kinds of people look at the same set of ideas. So when you have a lot of people looking at something and they're empowered to try things, it is often true that progress goes faster. I don't really know why that would be false in mathematics. [00:54:23] Ben: Do you want to say anything about choosing the right level of meta-ness? Hmm. [00:54:28] Semon: Yeah. I guess this is like a personal question [00:54:35] for everyone, almost; I mean, everyone who has some freedom over what they work on, which is actually not that many people. In any problem domain, whether that's science research, or career, or even in a company, this kind of bird-frog dichotomy is replicated at every altitude. So for example, in mathematics, you could either be someone who puts together lots of pieces and spends lots of time understanding how things fit together, or you can be someone who looks at a single problem and makes hard progress on it. Similarly, maybe in biology... I have a friend who was trying to decide whether she should be an individual contributor at a machine learning research company, or faculty.
And that for her is in part a meta/non-meta choice. So she [00:55:35] really likes doing explicit work on something, being down on the ground; as faculty, she would have to do more coordination-based work, but, you see, she would have more scope. And also, in many cases, in many areas but not all, doing the meta thing is a higher-status thing, or maybe it's not higher status, but it's better compensated. So on a larger scale, obviously, we have people who work in finance, who in some ways do the most meta work, and they're compensated extremely well by society. But you need very talented people to work on problems down on the ground, because otherwise nothing will happen; you can't actually make progress by just rearranging incentive flows. And having both sides of this, having the incentives be appropriately structured, is a very, very challenging balancing act, because you need both kinds of people. But you need a larger system in which they work, and there's no reason for that [00:56:35] system... there's just no structural reason why the system would be compensating people appropriately, unless there are specific people who are really trying to arrange for that to be the case. And that's very hard. Yeah. So everyone kind of struggles with this, and I think it in part gets resolved based on personal preference. Yeah. [00:56:54] Ben: I think that's... yeah, I like that idea that, sort of by default, both status and compensation will flow to the more meta people, but that ultimately will be disastrous if taken to its logical conclusion. And so it's like, we need to sort of stand up for the... [00:57:35]…
Idea Machines


Scientific Irrationality with Michael Strevens [Idea Machines #43] (1:03:07)
Professor Michael Strevens discusses the line between scientific knowledge and everything else, the contrast between what scientists as people do and the formalized process of science, why Kuhn and Popper are both right and both wrong, and more. Michael is a professor of Philosophy at New York University where he studies the philosophy of science and the philosophical implications of cognitive science. He's the author of the outstanding book "The Knowledge Machine", which is the focus of most of our conversation. Two ideas from the book that we touch on:
1. "The iron rule of science." The iron rule "directs scientists to resolve their differences of opinion by conducting empirical tests rather than by shouting or fighting or philosophizing or moralizing or marrying or calling on a higher power." In the book Michael makes a strong argument that scientists following the iron rule is what makes science work.
2. "The Tychonic principle." Named after the astronomer Tycho Brahe, who was one of the first to realize that very sensitive measurements can unlock new knowledge about the world, this is the idea that the secrets of the universe lie in minute details that can discriminate between two competing theories. The classic example here is how the amount of change in star positions during an eclipse dictated whether Einstein or Newton was more correct about the nature of gravity.
Links:
Michael's Website
The Knowledge Machine on BetterWorldBooks
Michael Strevens talks about The Knowledge Machine on The Night Science Podcast
Michael Strevens talks about The Knowledge Machine on The Jim Rutt Show
Automated Transcript
[00:00:35] Ben: In this conversation, Professor Michael Strevens and I talk about the line between scientific knowledge and everything else, the contrast between what scientists as people do and the formalized process of science, why Kuhn and Popper are both right and both wrong, and more.
Michael is a professor of philosophy at New York University, where he studies the philosophy of science and the philosophical implications [00:01:35] of cognitive science. He's the author of the outstanding book, The Knowledge Machine, which is the focus of most of our conversation. A quick warning: this is a very Tyler Cowen-esque episode. In other words, it's the conversation I wanted to have with Michael, not necessarily the one that you want to hear. That being said, I want to briefly introduce two ideas from the book, which we focus on pretty heavily. First is what Michael calls the iron rule of science. A direct quote from the book: "The iron rule directs scientists to resolve their differences of opinion by conducting empirical tests, rather than by shouting or fighting or philosophizing or moralizing or marrying or calling on a higher power." In the book, Michael makes a strong argument that scientists following the iron rule is what makes science work. The other idea from the book is what Michael calls the Tychonic principle, named after the astronomer Tycho Brahe, who was one of the first to realize that very sensitive measurements can unlock new [00:02:35] knowledge about the world. This is the idea that the secrets of the universe lie in minute details that can discriminate between two competing theories. The classic example here is the amount of change in a star's position during an eclipse dictating whether Einstein or Newton was more correct about the nature of gravity. So with that background, here's my conversation with Professor Michael Strevens. [00:02:58] Ben: Where did this idea, this sort of conceptual framework that you came up with, come from? Like, what's almost the story behind the story here? [00:03:10] Michael: Well, there is an interesting origin story, or at least it's interesting in a nerdy kind of way.
So I was interested in teaching what philosophers call the logic of confirmation: how evidence supports or undermines theories. And I was interested in getting across some ideas from the 1940s and fifties that philosophers of science these days look back on and think of as being a little bit naive and clueless. At some point, in trying to make this stuff appealing in the right sort of way to my students, so that they would see it's really worth paying attention to and not just completely superseded, I had a bit of a gear shift looking at it, and I realized that in some sense, what this old theory was a theory of wasn't the thing that we talk about now, but a different thing. So it wasn't so much about how to assess how much a piece of evidence supports or undermines a theory; it was more a theory of just what counts as evidence in the first place. And that got me thinking that this question alone could be an important one to think about. Now, I ended up, as you know, in my book The Knowledge Machine, putting my finger on that as the most important thing in all of science. I can't say that at that point I had yet had that idea, but it was kind of puzzling me why there would be this very objective standard for something counting as evidence that nevertheless offered you more or less no help in deciding what the evidence was actually telling you. Why would this be so important? At first I thought maybe it was just the sheer objectivity of it that's important, and I still think there's something to that, but the objectivity alone didn't seem to be doing enough. And then I connected it with this idea in Thomas Kuhn's book, The Structure of Scientific Revolutions, that science is a really difficult pursuit. Of course it's wonderful some of the time, but a lot of it
requires a kind of perseverance in the face of sometimes very discouraging results. So I got the idea that this very objective standard for evidence could be playing the same role that Kuhn thought was played by what he called the paradigm: providing a very objective framework, which is also a kind of safe framework, like a game where everyone agrees on the rules, and where people could feel more comfortable about the validity and importance of what they were doing. Not necessarily because they would be convinced it would lead to the truth, but just because they felt secure in playing a certain kind of game. So it was a long process that began with the sense that something didn't seem right about these ideas. It didn't seem right that these ideas from the 1940s and fifties could be so wrong as answers to the question philosophers in my generation were answering. [00:06:11] Ben: I love that. I feel like in a way what you did is, step one, sort of synthesized Kuhn and Popper, and then went one step beyond them. I'm sure you'd agree with this concept: whenever you have two theories that seem equally right but are contradictory, that is a place where you need more theory, right? Because you look at Popper and it's like, oh yeah, that seems right. But then you look at Kuhn and you're like, oh, that seems right. And then you're like, wait a minute, because they sort of can't both live in the same room without [00:06:56] Michael: adding something. Although there is actually something, I think, Popperian about Kuhn's ideas.
And there are lots of things that are very un-Popperian, but Popper's basic idea is that science proceeds through refutation, and Kuhn's picture of science is a little bit like a very large-scale version of that. Scientists, unlike in Popper's story, aren't all desperately trying to undermine theories, you know, in that great critical, negative spirit. They just assume that the prevailing way of doing things, the paradigm, is going to work out okay. But in presuming that, they push it to its breaking point. And that process, if you take a few steps back, has the look of Popperian science, in the sense that scientists, but now unwittingly rather than with their critical faculties fully engaged, are taking the theory to a point where it just cannot be sustained anymore in the face of the evidence. And progress is made because the theory just becomes untenable and some other theory needs to be constructed. So at the largest scale there's this process of successive refutation of theories. Now, for Kuhn, refutation is not quite the right word, that sounds too orderly and logical to capture what is going on, but theories are nevertheless being annihilated by facts, in a way that's actually quite Popperian. I think that's interesting. [00:08:20] Ben: So you could almost phrase Kuhn as, like, systemic Popperianism, right? No individual scientist is trying to do refutation, but then you have the system that eventually refutes. And that is what the paradigm shift [00:08:37] Michael: is. That's exactly right. Oh, [00:08:39] Ben: that's fascinating. Another thing that I wanted to ask before we dig into the actual meat of the book, and this is almost a very selfish question, is: why should people care about this? Like, I really care about it.
And by this I mean sort of theories of how science works, right? But I know many scientists who don't care. I've talked to them about it, and they're just like, I just do the science. [00:09:12] Michael: You know, in a way that's completely fine. People drive a car without knowing how the engine works, and in fact the best drivers may not have very much mechanical understanding at all. And it's fine for scientists to be a part of the system and do what the system requires of them without really grasping how it works, most of the time. One way it becomes important is when people start wondering whether science might not be improved in some ways. There's always a little bit of that going on at the margin. So some string theorists now want to relax the standards for what counts as an acceptable scientific argument, so that the elegance or economy of an explanation can officially count in favor of a theory, as well as the empirical evidence in the old-fashioned sense. Or there's quite a bit of momentum for reform of the publishing system in science coming out of things like the replicability crisis. We were talking about science as a game, but science has been gamified to the point where it's being gamed. Yes. And so, you know, a certain kind of ambitious individual goes into science, not necessarily one who has no interest in knowledge, but once they see what the rules are, they cannot resist playing those rules to the limit. And what you get is what scientists sometimes call the least publishable unit: tiny little results that are designed more to be published and cited to advance a scientist's career than to be the most useful summary of research.
And then, even worse, you get scientists choosing their research directions less out of curiosity, or the sense that they can really do something valuable for the world at large, than because they see a narrower and shorter-term opportunity to make their own name. Now, that's not always a bad thing, but no system of rules is perfect, and as people exploit the rules more and more, the direction of science as a whole can start to veer a little bit away. Now it's a complicated issue, because if you change the rules, you may lose a lot of what's good about the system. It may all look like it's very noble and so on, but you can still lose some of what's good about the system as well as fixing what's bad. So I think it's really important to understand how the whole thing works before just charging in and making a whole series of reforms. [00:11:34] Ben: Yeah, okay, that makes a lot of sense. It's like, what are the actual core pieces that drive the engine? [00:11:42] Michael: So that's the practical side of the answer to your question of why people should care. I also think it's a fascinating story. I mean, I love these kinds of stories, like the Kuhn story, where everything turns out to be working in a completely different way from the way it seems to be working, where the ideology turns out to be not such a great guide to the actual mechanics of the thing. [00:12:03] Ben: Yeah, I like that there are some people who just think it's fascinating. My bias is also the way it weaves between history, right? You have to really look at all of these fascinating case studies and ask, oh, what's actually going on there?
So actually, to build on two things you just said: could you make the argument that with the replicability crisis and this idea of p-hacking, you're actually seeing the mechanisms that you described in the book in play? It used to be that having a good P value was considered sufficient evidence, but we now see that having that P value isn't actually predictive. And so now everybody is starting to say, well, maybe using P values as evidence is no longer sufficient. And because the observations didn't match what is considered evidence, what is considered evidence is evolving. Is that basically a case of this? [00:13:29] Michael: Exactly. That's exactly right. So significance testing is a particular kind of instantiation of this whole rule-based approach to science, where you set things up so that it's very clear what counts as publishable evidence: you have to have a statistically significant result, and P value testing is the most widespread way of thinking about statistical significance. So it's all very straightforward; you know exactly what you have to do. I think a lot of great scientific research has been done under that banner. Having the rules be so clear and straightforward, rather than just a matter of the referees who referee for journals making their own minds up about whether a result looks like a good one or not, has really helped science move forward, and given scientists the security they need to set up the research programs that they've set up.
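The gaming of significance testing that comes up here is easy to make concrete. As an editor's illustration (this simulation is not from the conversation, and all the numbers in it are invented), here is a minimal sketch of why "measure twenty outcomes and publish whichever one clears p < 0.05" manufactures false positives even when every effect is pure noise:

```python
import math
import random

def p_value(sample, sigma=1.0):
    """Two-sided z-test p-value for 'the true mean is 0', known sigma."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

random.seed(0)
TEAMS, OUTCOMES, N = 2000, 20, 30  # invented sizes for the illustration

false_positives = 0
for _ in range(TEAMS):
    # Every outcome is pure noise: the null hypothesis is true by construction.
    pvals = [p_value([random.gauss(0.0, 1.0) for _ in range(N)])
             for _ in range(OUTCOMES)]
    if min(pvals) < 0.05:  # "publish the significant result"
        false_positives += 1

# Far above the nominal 5%: roughly 1 - 0.95**20, about 0.64.
print(false_positives / TEAMS)
```

Each individual test really does have a 5% error rate; it is the freedom to test many things and report the best one that breaks the rule's alignment, which is why reforms such as preregistration target the selection step rather than the significance threshold itself.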
It's all been good, but because it sets up this very specific rule, it's possible for the right kind of Machiavellian mind to look at those rules and say, well, let me see. I see some ways, at least in some domains of research where data is plentiful or fairly easy to generate, that I can officially follow the rules, and technically speaking what I'm doing is publishing something that's statistically significant, and yet, if you take a step back, you may end up with a result that's false. John Ioannidis, one of the big commentators on this stuff, has "Why Most Published Research Findings Are False" as the title of one of his most famous papers. So you need to step back and say, okay, well, the game was working for a while. We had the game aligning people's behavior with what was good for all of us. But once certain people started taking advantage of it, in certain fields at least, it started not working so well. We want to hang on to the value we get out of having very clear objective rules, objective in the sense that anyone can make a fair judgment about whether the rules are being followed or not, but somehow get the alignment back. [00:15:46] Ben: Yeah. So that game went out of whack, but then there's the broader metagame, and keeping that consistent is the point. And then also, you mentioned string theory earlier, and as I was reading the book, I don't think you call this out explicitly, but I feel like there are a number of domains that people would think of as science now but that, by your iron rule, would not count. String theory being one of them, where we've sort of reached the limit of observation, at least until we have better equipment.
Another one that came to mind was a lot of evolutionary arguments, because they're based on something that is in the past, and there's sort of no way to gather additional evidence. Would you say that you actually have a fairly strict bound on what counts as science? [00:16:59] Michael: It is strict, but it's not in any way my formulation; this is the way science really is. The point of science is to develop theories and models and so on, and then to empirically test them. And part of that activity is just developing the theories and models. So it's completely fine for scientists to develop models in string theory and so on, and to develop evolutionary models, that run way ahead of the evidence. Where it's practically very difficult to come up with evidence to test them, I don't think that in itself is unscientific. But then the question of course immediately comes up: okay, so now what do we do with these models? And the iron rule says there's only one way to assess them, which is to look for evidence. So what happens when you're in a position, with string theory or with some models in evolutionary psychology in particular, where there just is no evidence right now? There's a temptation to find other ways to advance those theories. And so the string theorists would like to argue for string theory on the grounds of its unifying power, for example, and the evolutionary psychologists, I think, rely on a kind of intuitive appeal, or just a sense that there's something about the model that sort of feels right, that it really captures the experience of being a human being who is, I don't know, sexually jealous or something like that. And that's just not science. And that is not the sort of thing that is,
in general, published in scientific journals. But the question has come up: maybe we are being too strict. Maybe we would encourage the creation of more useful, interesting, illuminating, explanatorily powerful models and theories if we allowed them to get some prestige and scientific momentum in ways other than the very evidence-focused way. Or maybe it would just open the gates to a bunch of idle speculation that would weigh science down and distract scientists from doing the stuff that has actually resulted in 300 years or so of scientific progress. [00:19:12] Ben: And your argument would be that, for the latter, well, don't [00:19:21] Michael: rush in, I would say. You know, think carefully before you do it. [00:19:25] Ben: Another place where I felt like there was some friction with your framework, especially with the Tychonic principle of needing to find very minute differences between what the theory would predict and the reality, is areas you might call complex systems or emergent behavior, where just because you can explain how the building blocks of a system work does not actually help you make predictions about that system. Do you have a sense of how you expect that to work out with the iron rule? Because when there are just so many parameters, you could sort of argue either that we predicted it or we didn't predict it. [00:20:34] Michael: Yeah, right.
So sometimes the predictions are so important that people will do the work necessary to really crank through the model. Weather forecasting is the best example of that: to get a forecast for five days' time, you just spend a lot of money gathering data and running simulations on extremely expensive computers. But in almost all of science there just isn't the funding for that, and so it's never going to be practically possible to make those kinds of predictions. But I think these models are capable of making other kinds of predictions. Even in the case of the weather models, without being able to predict ten days in advance, as long as you relax your demands and just want a general sense of, say, whether the climate is going to get warmer, you can make do with many fewer parameters. In a way that's not the greatest example, because the climate is so complicated that even to make these much less specific predictions you still need a lot of information and computing power, but I think most science of complex systems hinges on relaxing the demands for specificity of the prediction while still demanding some kind of prediction or explanation. And sometimes what you do is say, well, never mind prediction, just give me a retrodiction, and see if we can explain what actually happened. But the explanation has to be anchored in observable values of things. Economic or evolutionary models are a good example of this.
Once we've built the model after the fact, we can dig up lots of bits and pieces that will show us the course of things. Say we never could have predicted that evolutionary change would move in a certain direction, but by getting the right fossil evidence and so on, we can see it actually did move in that direction and conforms to the model. What we're often doing is actually getting the parameters in the model from the observation of what actually happened. So these are all ways that complex systems science can be tested empirically, one way or [00:22:52] Ben: another. Yeah. The thing that I'm sort of hung up on is, if you relax the specificity of the predictions that you demand, it makes it harder to compare theories, right? Newton and Einstein were drastically different models of the world, but you need very, very specific predictions to compare between them. And so if the whole thing is that in order to get evidence you need to relax specificity, it then makes it harder to compare [00:23:41] Michael: theories. No, that's very true. If all you demand is that theories explain why things fall to the floor when dropped, then there's no way to distinguish Einstein from Aristotle. And one reason physics has been able to make so much progress is that the models are simple enough that we can make these very precise predictions that distinguish among theories. The thing is that in complex systems sciences, there's often a fair amount of agreement on the underlying processes. With Newton versus Einstein, what you have is a difference in the fundamental picture of space and time and force and so on.
But if you're doing something like economics or population ecology, looking at ecosystems, animals eating one another and so on, the underlying processes are in some sense fairly uncontroversial. The hard part is finding the right kind of model to put them together, in a way that is much simpler than they're actually put together in reality, but that still captures enough of those underlying processes to make good predictions. Because that problem is a little bit different, the situation is less a matter of distinguishing between really different fundamental theories and more a case of refining models, to see what needs to be included or what can be left out to make the right kinds of predictions in particular situations. You still need a certain amount of specificity. Obviously, if you really just say, I'm not going to care about anything beyond the fact that things fall downwards rather than up, then you're not going to be able to refine your models very far before the evidence stops giving you any further guidance. That's very true. But typically the complex systems kinds of models are rather more specific than that. Usually they're too specific: they say something very precise that doesn't actually happen. And what you're doing is trying to bring that particular prediction closer to what really happens. So that gives you something to work towards, bringing the prediction towards the reality, while at the same time not demanding of the model that it already make a completely accurate prediction. [00:26:10] Ben: Yeah, that makes sense. So, on another sort of track: what do you think about theory-free predictions?
So the extreme exam question would be: could a very large neural net do science? If you had no theory at all but incredibly accurate predictions, how does that square with the iron rule [00:26:41] Michael: in your mind? That's a great question. So when I formulate the iron rule, I build the notion of explanation into it. And I think that's functioned in an important way in the history of science, especially in fields where explanation is actually much easier than prediction, like evolutionary modeling, as I was just saying. Now, if your model is in effect a neural net that just makes these predictions, it looks like it's not really providing you with an explanatory theory. The model is not in any way articulating, let's say, the causal principles according to which the things it's predicting actually happen. And you might think for that reason it's not science. I mean, of course this thing could always be an aid; almost anything can have a place in science as a tool, as a stepping stone. But could you say, okay, we have now finished doing the science of economics, because we've found out how to build these neural networks that predict the economy even though we have no idea how they work? I don't think so. I don't think that's really satisfying, because it's not providing us with the kind of knowledge that science is working towards. But I can imagine someone saying, well, maybe that's all we're ever going to get, and what we need is a broader conception of empirical inquiry that doesn't put so much emphasis on explanation. I mean, what do you want: to be blindsided by the economy every single time, because you insist on an explanatory theory?
Or do you want to actually have some ability to predict what's going to happen, to make the world a better place? Well, of course we want to make the world a better place. So I think we've focused on building these explanatory theories; we've put a lot of emphasis, I would say, on getting explanations right. But scientists have always played around with theories that seem to get the right answer for reasons that they don't fully comprehend. And one possible future for science, or empirical inquiry more broadly speaking, is that that kind of activity comes to predominate, rather than just being, as I said earlier, a stepping stone on the way to truly explanatory theories. [00:29:00] Ben: I sort of think of it in terms of almost like compression, where the thing that is great about explanatory theories is that they take all the evidence and reduce the dimension drastically. And so, just thinking through this, a world in which non-explanatory predictions are fully admissible just leads to some exponential explosion of whatever is doing the explaining, right? Because there's never a compression from the evidence down to a theory. [00:29:47] Michael: Although it may be, with these very complicated systems, that even an explanatory model is incredibly uncompressed. Yeah, exactly, inflated. So we may just have to live with that. I mean, I think it's kind of amazing, and this is one of my other interests, the degree to which it's possible to build simple models of complicated systems and still get something out of them. Not precise predictions about what's going to happen to particular components in the system, you know, whether this particular rabbit is going to get eaten
tomorrow or the next day, but more general models about how, say, increasing the number of predators will have certain effects on the dynamics of the system. The kinds of things that population ecologists do with these models is answer questions. So this is a bit of an example of what I was saying earlier about making predictions that are real predictions, but a bit more qualitative. One of the very first uses of these models was to answer the question of whether just generally killing a lot of the animals in an ecosystem will lead the prey populations to increase, relatively speaking, or decrease. It turns out that in general they increase. I think this was in the wake of World War One in Italy. During World War One there was less fishing, because the fishermen were off being sailors, and also because of naval warfare, I guess, maybe not so much in the Mediterranean, but in any case there was less fishing. So it was sort of the opposite of killing off a lot of animals in the ecosystem. And the idea was to explain why certain patterns of increase and decrease in the populations of predator and prey were observed. So some of the first population ecology models were developed to predict that. And these models are tiny: here you are modeling this ocean that's full of many, many different species of fish, and yet you just have a few differential equations. They look complicated, but the amount of compression is unbelievable, and the fact that you get anything sensible out of it at all is truly amazing. So we've kind of been lucky so far. Maybe we've just been picking the low-hanging fruit, but there's a lot of that fruit to be had. Eventually, though, maybe we're just going to have to do science that other way, and thankfully there are supercomputers. Yeah.
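The fishing episode described above is the origin of the Lotka-Volterra predator-prey equations (attaching that name, and the invented parameter values below, is my addition; the conversation just says "a few differential equations"). A minimal sketch of the effect: apply a uniform fishing death rate to both species, and the prey's long-run share of the population rises, so the wartime pause in fishing meant relatively more predators in the catch.

```python
def prey_share(fishing, t_end=100.0, dt=0.001):
    """Time-averaged prey fraction x/(x+y) in a Lotka-Volterra system
    with a uniform 'fishing' death rate applied to both species:
        dx/dt = a*x - b*x*y - fishing*x   (prey)
        dy/dt = -c*y + d*x*y - fishing*y  (predators)
    Parameter values are invented for illustration."""
    a, b, c, d = 1.0, 0.5, 0.5, 0.25
    x, y = 2.0, 1.0                      # arbitrary starting populations
    total, steps = 0.0, int(t_end / dt)
    for _ in range(steps):               # plain Euler integration
        dx = (a - b * y - fishing) * x
        dy = (-c + d * x - fishing) * y
        x, y = x + dx * dt, y + dy * dt
        total += x / (x + y)
    return total / steps

# Heavy fishing: prey make up a larger share of the mix.
print(prey_share(fishing=0.2))
# Fishing suspended (wartime): predators claim a relatively larger share.
print(prey_share(fishing=0.0))
```

The compression Michael marvels at is visible here: two coupled equations and four parameters stand in for an entire ecosystem, yet they reproduce the qualitative wartime pattern.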
[00:32:06] Ben: Or develop an entirely different way of attacking those kinds of systems. I feel like our science has been very good at going after compressible systems, or I'm not even sure how to describe it. I feel like we're starting to run into all of these different systems that aren't as amenable to the Tychonic approach of going down to more and more detail. And so I always speculate about whether we actually need new philosophical machinery to grapple with that. [00:32:51] Michael: Well, first of all, there might be new modeling machinery, new kinds of mathematics, that make it possible to compress things that were previously incompressible. But it may just be, I mean, you look at a complicated system, like an ecosystem or the weather or something like that, and you can see that small differences in the way things start out can have big effects down the line. What seems to happen in the cases where we can have a lot of compression is that the effects of those small variations in initial conditions kind of cancel out. So it may be that you change things around and it's different fish being eaten, but still the overall number of each species being eaten is about the same; it kind of all evens out in the end, and that's what makes the compression possible. But if that's not the case, if these small changes make differences to the kinds of things we're trying to predict, people of course often associate this with the metaphor of the butterfly effect, then I don't know if compression is even possible.
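Both halves of that thought, sensitivity to tiny differences and the washing-out of those differences at the statistical level, show up in even the simplest chaotic system. A toy illustration (the logistic map is my example, not one used in the conversation):

```python
def orbit(x0, steps=100_000, r=4.0):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    xs, x = [], x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        xs.append(x)
    return xs

a = orbit(0.2)
b = orbit(0.2 + 1e-10)  # a "butterfly flap" in the initial condition

# Point-by-point prediction is hopeless: within a few dozen steps the
# tiny perturbation has grown until the two trajectories are unrelated.
print(max(abs(p - q) for p, q in zip(a[:200], b[:200])))

# But the high-level qualitative pattern survives: the long-run
# averages of the two runs are nearly identical.
print(abs(sum(a) / len(a) - sum(b) / len(b)))
```

The first number says specific forecasts fail past a short horizon; the second says the statistical pattern is still compressible, which is exactly the distinction being drawn for inflation in the next exchange of the conversation.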
If you really want to predict whether there's going to be an increase or a decrease in inflation in a year's time, and that really does hinge on the buying decisions of some single parent somewhere in Ohio, then you just need to figure out what the buying decisions of every single person in the economy are and build them in. And yet, at the same time, everyone loves the butterfly effect, but the idea that the rate of inflation is going to depend on this one decision by somebody walking down the aisles of a supermarket in Ohio just doesn't seem right. It does seem that things kind of cancel out, that these small effects mostly just get drowned out, or they shift things around without changing the high-level qualitative patterns. [00:34:56] Ben: I mean, this is a digression, but I feel like that touches right on, do you believe in the forces theory of history or the great man theory of history? People make arguments both ways, and I think we just haven't figured that out. Actually, speaking of the great man theory of history, an amazing thing about your book is that it's very humanistic, in the sense of: scientists are people, they do lots of things, they're not just science machines. And you have this beautiful analogy of a coral reef, where scientists are like the living polyps: they contribute, they build up these artifacts of work, and then they go away, and the new scientists continue to build on that.
And I was wondering: do you see that being at odds with the fact that there's so much tacit knowledge in science? In the sense that, for most fields, you probably could not reconstruct them based only on the papers; you have to talk to the people who have done the experiments. Do you see any tension [00:36:23] there?

Michael: Well, it's true that the metaphor of the coral reef doesn't capture that aspect of science. I think what is captured by the metaphor is the idea that [00:36:35] what science leaves behind in terms of evidence is interpreted anew by every generation. Each new generation of scientists comes along and, this makes it sound a little bit fanciful, but in some sense that's what's going on, looks at the accumulated facts and says: well, what are these really telling me? And they bring their own human preconceptions and biases. But those preconceptions and biases are not necessarily bad things. They look at the evidence in the light of their own minds and they reinterpret things. And so the scientific literature is always just a kind of starting point for this thought, which really changes from generation to generation. On the other hand, at the same time, as you just pointed out, scientists are being handed certain kinds of knowledge [00:37:35] which are not for them to create anew, but rather just to learn: how to use various instruments, how to use various statistical techniques. And so there's this continuity to the knowledge that is, as I say, not captured at all by the reef metaphor. Both of those things are going on. There's the research culture, which, well, maybe one way to put it:
The culture both changes and stays the same. And it's important that it stays the same, in the sense that people retain their know-how for using these instruments, until eventually an instrument becomes obsolete and then that culture is completely lost, and most of the time it's okay if it's completely lost. But on the other hand, there is always this fresh reinterpretation of the evidence, simply because the interpretation of evidence is a rather subjective business. And what the preceding generations are handing on should be seen more as a kind of [00:38:35] data trove than as a body of established knowledge.

[00:38:43] Ben: But then I think the question is: if what counts as evidence changes, and all you are getting is this data trove of things that people previously thought counted as evidence, all the things that were thrown out and not included in the papers, doesn't that make it harder to reinterpret?

[00:39:12] Michael: Well, the standards for what counts as evidence I think of as being unchanging, and that's an important part of the story here. So what's being passed on is supposed to be evidence. Now, of course, some of it will turn out to be the result of faulty measurements, some of it is suspicious, some of it perhaps even outright fraud. [00:39:35] To some extent, that's why you wouldn't want to just take it for granted, and that side of things is not really captured by the reef metaphor either. But I think the important thing that is captured by the metaphor is this idea that what really is the heritage of science, in terms of theory and evidence, is the evidence itself.
It's not so much a body of knowledge. Not that everyone has to start from scratch every generation, but it's this incredibly valuable information, maybe a little bit complicated in some corners, it's true, that has been generated according to the same rules we're trying to satisfy today, [00:40:35] and so it is just as trustworthy, or untrustworthy, as the evidence we're getting today. And there it is, recorded in the annals of science.

[00:40:41] Ben: So the thing that's important is the process and the filtering mechanism, more than the specific artifacts that come out?

[00:40:55] Michael: Part of what I'm getting at with that metaphor is that scientists produce the evidence, and they have their interpretation of that evidence, but then they retire, they die, and that interpretation doesn't need to be, and isn't, important anymore. Of course, they may persuade some of their graduate students to go along with their interpretation; they may be very politically powerful, and their interpretation may last for a few generations. But typically, ultimately, that influence wanes, and what really matters is the data trove. As you said, it's not perfect. We have to regard it with a [00:41:35] somewhat skeptical eye, but not too skeptical. And that's the real treasure house of [00:41:43] science.

Ben: Something I was wondering: you have a sentence where you say that a non-event, such as science's non-arrival, happens, so to speak, almost everywhere. And I would add: it happens almost everywhere, all the time. And this is wildly speculative,
but do you think there would have been any way to predict that science would happen, or to know that something was missing? Could we, then or now, have said: we're missing something crucial? Could we look at the fact that [00:42:35] science consistently failed to arrive and ask: is there something else, some other kind of intellectual machinery, that also has not arrived? Is it possible to look for that?

[00:42:51] Michael: Oh, you mean [00:42:52] Ben: now? Or could someone have predicted science in the past?

[00:42:57] Michael: In the past? Okay. Clearly there were a lot of highly motivated, insightful, wise thinkers who, I assume, would have loved to settle the question of, say, the configuration of the solar system. You had these various models floating around for thousands of years. I'm not sure everyone knows this, but by the time of the Roman Empire the model with the sun at the center was well known. The model with the earth at the center was of course well known. And the model where the earth is at the center, but the [00:43:35] sun rotates around the earth and the inner planets rotate around the sun, was also well known, and in fact, this always surprises me, was if anything the predominant model in the early Middle Ages in Western Europe. It had been received from late antiquity, from the writers at the end of the Roman Empire, and was thought to be the going story. There are many historical complications, of course, but I take it that someone like Aristotle would have loved to have really settled that question and figured it out for good. He had his own ideas.
Of course, he thought the earth had to be at the center, because that fit with his theory of gravity, for example, and having the sun at the center just wouldn't have worked, and for various other reasons. So it would have been great to have invented this technique for generating evidence that in time would be seen by everyone as deciding decisively in favor of one of these theories over the others. They must have really wanted it. [00:44:35] Did they themselves think that something was missing, or did they think they had what they needed? I think maybe Aristotle thought he had what was needed. He had philosophical arguments based on establishing coherence among his many amazing theories of different phenomena: his theory of falling bodies, his story about the solar system, as of course he would not have called it, the planets and so on. It all fit together so well, and it was so much better than anything anyone else came up with, that he may have thought: this is how you establish the truth of the geocentric system, with the earth at the center. So I don't need anything like science, there doesn't need to be anything like science, and I'm not even thinking about the possibility of something like science. And to some extent that explains why someone like Aristotle, who seemed capable of having almost any idea that could be had, nevertheless did [00:45:35] not seem to see a gap, to see the need for, for example, precise quantitative experiments, or even the point of doing them. That's the most I can say: looking back in history, I don't see that people felt there was a gap.
And yet, at the same time, they were very much aware that these questions were not being settled.

[00:46:04] Ben: It just makes me wonder whether at some period in the future people will look back at us the way we look back at, I don't know, the Mayans, and say: how could you not have figured out that method? I just find it thought-provoking to think: how do you see your blind spots?

[00:46:32] Michael: Yeah. Well, I'm a philosopher, and in [00:46:35] philosophy it's still much like it was with Aristotle. We have all these conflicting theories of, say, justice: what really makes a society just, what makes an act just, or even what makes one thing the cause of another thing. And we don't really know how to resolve those disputes in a way that will establish any kind of consensus. We also feel very pleased with ourselves, as I take it Aristotle was: we have these really great arguments for the views we believe in. That's probably more optimism than we ought to have that we'll be able to convince everyone else we're right. In fact, what we really need, and philosophers do have this thought from time to time, is some new way of adjudicating between philosophical theories. This was one of the great movements of early twentieth-century philosophy: logical positivism can be looked at as an attempt to build a methodology where it would be possible [00:47:35] to use, in effect, scientific techniques to adjudicate among philosophical theories, mainly by throwing away most of the theories as meaningless and insufficiently connected to empirical facts. It was a brutal method, but it was an idea. The idea was that there was a new method to be had that would do for philosophy
what science did for natural philosophy, for physics and biology and so on. That's an intriguing thought. Maybe that's what I should be spending my time thinking about.

[00:48:12] Ben: I do want to be respectful of your time, so one last thing I'd love to ask about, and you talked about this a bit in the book: do you think the way we communicate science has become almost too sterile? One of my going concerns [00:48:35] is the way in which everybody has become super specialized. Once a debate is settled, creating these very sterile artifacts is useful and powerful. But as you pointed out, as a mechanism for actually communicating knowledge they're not necessarily the best. And because we've held up these sterile papers as the most important thing, it's made it hard for people in one specialization to understand what's going on in another. So do you think we've over-sterilized it? We talked earlier about people who want to change the rules, and I'm very much with you that we should be skeptical about that, but at the same time you see this going [00:49:35] on.

[00:49:35] Michael: Yeah. Well, I think there's a real problem here regardless, whatever the rules: the problem of communicating something as complicated as scientific knowledge, or really, I should say, the state of scientific play, because often what needs to be communicated is not something that's now been established beyond any doubt, but rather: here's what people are doing right now.
Here's the kind of research they're doing; here are the kinds of obstacles they're running into. To put that in a form where somebody can just come along and digest it all easily is, I think, incredibly difficult, no matter what the rules are. And it's probably not the best use of most scientists' time to try to present their work in that way; it's better for them just to go to the rock face and start chipping away at their own little local area. So what you need is for scientists to take time out from time to time. And there do exist these review [00:50:35] publications, which try to do this job, so that people in related fields, and typically "related fields" means a PhD in the same subject, they're usually for the nearest neighbors, can see what's going on. But often they're written in ways that are pretty accessible, I find. So you create a publication that simply has a different set of rules: the point is not in any way to evaluate the evidence, but simply to give a sense of the state of play. To reach further afield, you have science journalists, though what's going on with newspapers and magazines right now is not very good for serious science journalism. And then you have scientists, and people like me, who for whatever reason take some time out from what they usually do to make a self-standing project of explaining what's going on. Those activities all, to some extent, take place outside the narrow purview of the [00:51:35] iron rule. And I think it's going okay, given the difficulty of the task. It seems to me that the knowledge, the information, is being communicated in a somewhat effective, accessible way. If anything, the real barriers to
some kinds of fruitful interdisciplinary thinking are not a matter of communication. It's just hard for one mind to take on all the stuff that needs to be taken on; no matter how effectively, even brilliantly, it's communicated, the world is just this very complicated place. One thing I'm interested in historically, something I find fascinating, is the fruitfulness of certain kinds of research programs that came out of fighting serious wars, in particular the Second World War. You threw a bunch of people together and they had to solve some problem, like [00:52:35] building a bomb, it's usually something horrendous, or a device for the guns on bombers. Rather than having to aim very skillfully, I forget the word for it, where you have to put your sights ahead of the enemy fighter so that by the time your bullets get there the plane arrives at the same moment, they built these really sophisticated analog computers that would basically do the job, so you could give it to some 19-year-old who just pointed at the plane. And there were a lot of problems to do with logistics and weather forecasting and so on. The need to get that done threw together people from very different areas in engineering and science, and it resulted in this amazing explosion, I think, of knowledge. [00:53:35] It's a very attractive period in the history of human thought. Go back and look at some of the things people were writing in the late forties and fifties about computers, how the mind works, and so on. I think some of that came out of this almost scrambling process that happened when these very specific military engineering problems were solved by throwing together people who never normally would have talked to one another. Maybe we need a little bit of that. Not the war.
[00:54:08] Ben: I have a friend who describes this as "a serious context of use." And, I mean, I'm incredibly biased towards looking at that period.

[00:54:20] Michael: I guess it's connected to what you're doing.

[00:54:23] Ben: Absolutely. Do you know who wrote about this? He actually wrote a series of memoirs, and they're just reprinting them; I wrote the foreword. [00:54:35] So I agree with you very strongly, and I always find it fascinating, because there's this paradigm that got implemented after World War II where you think: theory leads to applied science, which leads to technology. But you actually see all these places where trying to do a thing makes you realize a new theory. You see a similar thing with the steam engine: that's how we get thermodynamics.

Michael: That's right.

Ben: So that absolutely plays to my biases: not doing interdisciplinary things for their own sake, but having very serious contexts of use that can drive people, giving them a

[00:55:32] Michael: problem to solve. It's not just a case [00:55:35] of enjoying chatting about what you each do, and then going back to the thing you were doing before, feeling enriched but otherwise unchanged.

[00:55:46] Ben: It's interesting, though, because the incentives in that situation fall outside of the iron rule, right? Although I guess to some extent you could argue that the thing needs to work,
and so if it works, that is evidence that your theory is [00:56:09] Michael: correct. That's true. But I think, as you were about to say, engineering is not science, and the iron rule is not overseeing engineering. Engineering is about making things that work; producing evidence for or against various ideas is just a kind of side effect.

[00:56:27] Ben: But then it can spark those ideas that people then take up. In my head, [00:56:35] I think of what I'd call phenomena-based cycles: there's this big cyclical movement where you discover a phenomenon, then you theorize it, and you use that theory to, I don't know, build better microscopes, which then let you make new observations, which let you discover new phenomena.

[00:57:00] Michael: It's really difficult to tell where things are going. I think the discovery of plate tectonics is another good example of this. You see all of these scientists doing things that were certainly not investigations of possible mechanisms for continental drift, but instead, getting interested for their own personal reasons, doing things that don't sound very exciting, like measuring the ways the orientation of the magnetic field has changed over past history, by basically digging up bits of rock and looking at the orientations of the [00:57:35] iron molecules, or whatever, locked in them. I mean, it's not completely uninteresting, but in itself it sounds like a respectable but probably fairly dull sideline in geology. And then things like developing the ability to make very precise measurements of the gravitational field. Those things turned out to be
key to understanding this amazing fact about the way the whole planet works. But nobody could have understood in advance that they would play that role. What you needed was for a whole bunch of, it's not exactly chaos, but a kind of diversity that might look rather wasteful from a very practical perspective, to blossom.

[00:58:29] Ben: I truly do think that moving knowledge forward involves being almost [00:58:35] irresponsible. If you had to make a decision, should we fund these people who are going out and measuring magnetic fields just for funsies?, then from a purely rational standpoint the answer is no.

[00:58:51] Michael: The reason that sort of thing happens is that a bunch of people decide they're interested in it, and persuade their students to do it too, whether or not they can explain it to the rest of the world. Actually, there was also a military angle on that. I don't know if you know this, but some of the mapping of the ocean floors that was also crucial to the discovery of plate tectonics in the fifties and sixties was done during the war by people with the first sonar systems, who were supposed to be finding submarines or whatever, but decided: hey, it would be kind of interesting just to turn the thing on, leave it on, and see what's down there. And that's what they did, and that's how some of those first maps started being put together. [00:59:35]

[00:59:36] Ben: That's actually one of my concerns about trying to do science with neural networks: how many times do you see someone just go, "huh, that's funny"? So far, computers can't do that.
They can sort of find what they're setting out to find; they almost have a very narrow window of what counts as evidence. And perhaps, in your framework, the thought "huh, that's funny" is someone's brain all of a sudden taking something as evidence that wasn't normally supposed to be evidence. You're doing one set of experiments, and then you notice this completely different thing, and you think: oh, maybe that's actually a piece of evidence for something completely different. And then it opens up a rabbit hole.

[01:00:31] Michael: Yeah. This is another one of those cases, though, with [01:00:35] some kind of creative tension. Because I do think it's incredibly important that scientists not get distracted by things like this. On the other hand, it would be terrible if scientists never got distracted by things like this. And one way I see the iron rule is as a kind of social device for making scientists less distracted, without putting on the kind of mental fetters that would make it impossible for them ever to become distracted.

[01:01:05] Ben: And maybe the distraction, the saying "oh, that's funny," is the natural state of human affairs.

[01:01:12] Michael: Well, I think so. Otherwise we would all be like Aristotle, and it turns out it was better for science for us to be actually a little bit less curious and interesting and variable than we might have been.

[01:01:24] Ben: So one could almost say that the iron rule is absolutely essential,
But so [01:01:35] is breaking it, in the sense that if you could somehow enforce that every single person obeyed it all the time, we would lose the serendipitous discoveries. In order to make those, you need to break the rule, but you can't have everybody running around breaking the rule all the [01:01:57] time.

Michael: I'd put it a little bit differently, because I see the rule not so much as a rule for life and for thinking as a rule for publishing activity. So you're not technically breaking the rule when you think, "huh, that's funny," and go off and start thinking your thoughts. You may not be moving towards the kind of scientific publication that satisfies the rule, but nor are you breaking it. But if all scientists, as it were, lived the iron rule, not just when they took themselves to be playing the game, but in every way they thought about [01:02:35] the point of their lives as investigators of nature, well, people are just not like that; it's hard to imagine that would ever really happen. Although, to some extent, I think our science education system does encourage it. If that really happened, it would probably be disastrous. It's like the pinch of salt: you only want a pinch, but without it, it's not good.

[01:03:06] Ben: That seems like an excellent place to end. Thank you so much for being part of Idea Machines. [01:03:35]
Distributing Innovation with The VitaDAO Core Team [Idea Machines #42] (1:13:41)
A conversation with the VitaDAO core team. VitaDAO is a decentralized autonomous organization, or DAO, that focuses on enabling and funding longevity research. The sketch of how a DAO works is that people buy voting tokens that live on top of the Ethereum blockchain and then use those tokens to vote on various action proposals for VitaDAO to take. This voting-based system contrasts with the more traditional model of a company, which is a creation of law or contract, raises capital by selling equity or acquiring debt, and is run by an executive team responsible to a board of directors. Since technically nobody runs VitaDAO the way a CEO runs a company, I wanted to try to embrace the distributed nature and talk to many of the core team at once. This was definitely an experiment! The members of the core team in the conversation, in no particular order: Tyler Golato, Paul Kohlhaas, Vincent Weisser, Tim Peterson, Niklas Rindtorff, Laurence Ion. Links: VitaDAO Home Page; An explanation of what a DAO is; Molecule.

Automated Transcript

[00:00:35] Ben: In this conversation, I talked to a big chunk of the VitaDAO core team. VitaDAO is a decentralized autonomous organization, or DAO, that focuses on enabling and funding longevity research. We get into the details in the podcast, but a sketch of how a DAO works is that people buy voting tokens that live on top of the Ethereum blockchain, and then they use those tokens to vote on [00:01:35] various action proposals for VitaDAO to take. This voting-based system contrasts with the more traditional model of a company, which is a creation of law or contract, raises capital by selling equity or acquiring debt, and is run by an executive team responsible to a board of directors. Since technically nobody runs VitaDAO the way a CEO runs a company, I wanted to try to embrace the distributed nature and talk to many of the core team at once. This was definitely an experiment.
I realize it can be hard to tell voices apart on a podcast, so I'll put a link to a video version in the show notes. So, without further ado, here's my conversation with VitaDAO.

Ben: What I want to do, so that listeners can put a voice to a name, is to go around and have everybody say their name and then how they pronounce the word V-I-T-A-D-A-O. Tim, would you say your name and then pronounce the word? [00:02:35]

Tim: That's kind of how I've done it, yeah: VitaDAO. And I'm a longevity steward; I help figure out deal flow.

Ben: Awesome. All right, Tyler, you're next.

Tyler: It is definitively VitaDAO. I also help out with the longevity steward group, having helped start the longevity group, and I'm the chief scientific officer and co-founder at Molecule as well.

Ben: And then Niklas, you're next on my screen.

Niklas: It's definitely VitaDAO. I'm also a member of the longevity working group and the science communication group, and I'm currently initiating LabDAO.

Ben: Great. And then Vincent.

Vincent: Yeah, it's the same pronunciation, VitaDAO. I'm helping on the side and also on special projects, like the event we had recently.

Ben: And Laurence.

Laurence: Laurence Ion. VitaDAO. And I [00:03:35] also steward the deal-flow group within the longevity working group.

Ben: And I think we should all now say, as a hive mind: Paul.

Paul: Hi everyone, my name is Paul Kohlhaas. I would say VitaDAO. I actually wonder what the demographics are of who says "Vida" versus "Vita"; we should look into that, it's an interesting community metric. I'm the CEO and co-founder of Molecule and one of the co-authors of the VitaDAO whitepaper. I also work very deeply on the economic side and help finalize deal structures,
so essentially the funding deals that we carry through into Molecule. Yeah, very excited to be here today.

Ben: Maybe we can jump back in. [00:04:35] The thing that's confusing to me is that I always assumed that the "Vita" came from the word vitality, and that's where the idea of calling it VitaDAO came from. Because I don't say "vitality," I say "Vita"...

In German, it's actually "Vitalität." So that's just Anglocentrism; it's from the Latin, from the word for life.

Ben: Cool. So to really jump right in, and to be very direct: can we walk through the mechanics of how everything actually works? I think listeners are probably familiar with the high-level abstract concept: there's a bunch of people, they have tokens, they vote on deals, and you give researchers money to do work. But [00:05:35] very mechanically, how does the DAO work? Could you walk us through a core loop of what you do?

Yeah. So the core goal of the DAO is really to try to democratize access to decision-making, funding, and governance of longevity therapeutics. Mechanically, there are a few different things going on, and anyone should feel free to interrupt me or jump in. I would start from the base layer, which is this broad community of decentralized token holders, who ultimately provide governance functions for the community. And the community's goal is to deploy the funding it has raised into early-stage, clinical proof-of-concept-stage longevity therapeutics projects. These basically fall between two points where some tension exists when it comes to translating academic science.
So you have this robust early-stage basic research funding mechanism through things like NIH [00:06:35] grant funding, essentially, and that gets you to the point of being able to do, let's say, very early-stage drug discovery. And there's also a downstream ecosystem consisting of venture capital, company builders, and biotech companies that does late-stage funding and incubation of ideas that are more well-vetted. But in between, there's this problem where a lot of innovation gets lost; it's known as the translational valley of death.

Yeah.

What we try to do is identify, as a community, academics who are working on — let's say, have stumbled onto — a potentially promising drug, but aren't really at the point yet where they can create a startup company. What we want to do, by working together as a community, is provide them the funding, the resources, in some cases even the incubation functions, to be able to run a series of killer experiments, really de-risk the project, and then file intellectual property. In exchange for the funding, the DAO — and this is mechanically enabled by a legal primitive that we've been developing at Molecule called the IP-NFT [00:07:35] framework, which basically consists, on one side, of a legal contract, typically in the form of a sponsored research agreement between a funder and the party receiving the funding, the laboratory, and on the other side, of a federated data storage layer. The way this works is: VitaDAO would receive applications; some of these projects could, for example, be listed on Molecule's marketplace and have an IP-NFT created; VitaDAO would send funds via this system to the university; and in exchange, it would hold this license, in essence, to the IP that results from the project. And then within the community, we have domain experts.
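The IP-NFT structure described here — an off-chain legal agreement on one side, a data storage pointer on the other, tied together by an on-chain token — can be sketched as a simple data model. This is an illustrative sketch only, not Molecule's actual implementation; every class, field, and function name below is hypothetical.

```python
from dataclasses import dataclass, field
import hashlib


@dataclass
class IpNft:
    """Illustrative model of an IP-NFT: a token that references
    (1) an off-chain legal agreement and (2) a data storage location."""
    token_id: int
    agreement_hash: str  # hash of the signed sponsored research agreement
    data_uri: str        # pointer into the federated data storage layer
    licensee: str        # party holding the license (e.g. the DAO's agent)
    funders: dict = field(default_factory=dict)  # address -> amount funded


def mint_ip_nft(token_id, agreement_bytes, data_uri, licensee):
    # The legal contract itself stays off-chain; only its hash is recorded,
    # so anyone can later verify which agreement the token points to.
    digest = hashlib.sha256(agreement_bytes).hexdigest()
    return IpNft(token_id, digest, data_uri, licensee)


nft = mint_ip_nft(1, b"<signed sponsored research agreement>",
                  "ipfs://<dataset-location>", licensee="VitaDAO")
```

The design choice worth noting is that the token carries only references (a hash and a URI), not the agreement or data itself, which is what lets the legal and storage layers live off-chain.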
For example, we have a longevity working group which consists of MDs, postdocs, PhDs — basically anyone who has deep domain experience in the longevity space. They work to evaluate projects, do due diligence, and ultimately serve as a quality-control filter for the broader community, which includes non-experts as well — maybe just people who are enthusiastic about the mission. Beyond that, there's also additional domain expertise in the [00:08:35] form of people who have worked at biotech VCs, for example, and people with entrepreneurial experience. Through this community, you basically try to assemble a broad range of expertise that can then coach the researcher, work with them, and really help the academic move the IP and the research project towards the stage where it can be commercialized. With VitaDAO stewarding this process, it has ownership of the IP, and basically what would happen is: if that research is out-licensed, co-developed, or sold on to another party — just made productive, in essence — and the DAO is successful in commercializing those efforts and receives some funds from the commercialization of that asset, that money goes back into the treasury and is continuously deployed into longevity research. So the long-term goal is really to create a self-sustaining, circular funding mechanism that continues to fund longevity research over time. Now, within that, there are a bunch of specific mechanics I would love to rabbit-hole into. [00:09:35] I think Vincent, yes?

And on the very simple technical layer: very initially, we started off just having this idea and putting it out there, and then having a kind of Genesis auction where everyone could contribute funds — some people contributed 200 bucks and others contributed millions — and in exchange for that—
—just as an example, for every dollar they gave, they got one vote in the organization. So this initial group of people came together to pool their resources to fund longevity research, and got votes in exchange. With these votes, on the proposals Tyler described, which are vetted through the longevity working group, they can vote on whether a project should get funding. And that's of course the traditional model of a DAO and of token-based governance and voting, [00:10:35] which for us was a very easy mechanism to get started. But the token can of course also be useful for different purposes: it can also incentivize people working on specific projects — researchers also get tokens, and so get governance rights in the organization in exchange for good, contributing work.

Nicholas, did I see your hand?

Yes. Maybe one thing to add here that takes a bit of a step back — addressing the question: why does any of this matter? Why does the DAO framework matter at all? When you look at the way academic research currently works, the incentives for the scientists basically end the moment something is published in a peer-reviewed journal, so the system is optimized for peer-reviewed publication. On the other hand, on the translational side, when something is [00:11:35] turning into a medicine, investors look at return on investment, and they're basically calculating a risk-adjusted net present value of the project. Now, the problem with a lot of biomedical research is that the science is done, the paper is published, but the risk-adjusted net present value of the project is still approaching zero, because some key experiments are still missing, or the funding to get those experiments off the ground is.
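The risk-adjusted net present value point can be made concrete: each future cash flow is discounted both by time and by the cumulative probability of surviving every development stage up to that point, so a published result with key experiments still missing carries low survival probabilities and an rNPV near zero. A minimal sketch, with all project numbers hypothetical:

```python
def rnpv(cash_flows, stage_success_probs, discount_rate):
    """Risk-adjusted NPV: each year's cash flow is weighted by the
    cumulative probability the project survives to that year."""
    value, p_cumulative = 0.0, 1.0
    for t, (cf, p_stage) in enumerate(zip(cash_flows, stage_success_probs), start=1):
        p_cumulative *= p_stage              # chance of reaching this year at all
        value += p_cumulative * cf / (1 + discount_rate) ** t
    return value


# Hypothetical project: three years of costs, then a payoff if it survives.
flows = [-2.0, -5.0, -10.0, 80.0]   # $M per year
probs = [0.8, 0.5, 0.4, 0.6]        # per-stage survival probabilities
print(round(rnpv(flows, probs, 0.10), 2))
```

With these made-up numbers, the large year-four payoff is multiplied by a cumulative survival probability of under 10%, which is exactly why de-risking experiments (raising those per-stage probabilities) moves the valuation.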
And this is actually where the DAO can come in, using new technologies to basically financialize the IP and make it more liquid. And maybe more specifically: the asset isn't created yet. A lot of research — you know, the NIH is not focused on therapies, I mean, not on the creation of new therapies, where value is actually created. They'll do clinical trials on existing therapies, but the real value inflection points are not reached through basic research. So that's the gap we hope to fill.

Got it. So, [00:12:35] in my mind, the thing that's really interesting about VitaDAO, as opposed to other DAOs, is the interface with the world of atoms — that's a pretty unique and exciting thing. And there are a lot of mechanics there that I'm interested in digging into. So one thing: in order to give money to a researcher, at some point they need to turn it into dollars or euros in order to buy the equipment they need to do the research. So are they taking the VitaDAO token and then converting that into currency? How does that work?

Yeah, I can speak to this — or Paul, if you want to speak to it. So I can maybe kick it off. One of the things that's really important, and that we've been really focused on at Molecule, is ensuring that the process of working with researchers — which goes [00:13:35] well beyond just working with the researcher, right? You need to work with the university, with the tech transfer office, you need to negotiate a licensing agreement — can happen in a way that is somewhat seamless, and doesn't require them to do all of their interactions with, let's say, this sort of ephemeral entity that exists on the Ethereum blockchain.
So we've basically created rails via Molecule for things like fiat forwarding, for negotiations with the TTO, for a lot of the legal structures, to ensure that it's as smooth as possible. The VITA tokens themselves don't actually play into it. We can give those to researchers as an incentive, and to people who perform work for the community, but that is not what is given to researchers. When a proposal is passed within the community, we have a treasury — in ether, for example — that we've raised over a period of time; that is liquidated and sold for USDC, and then that USDC travels via off-ramps that Molecule has created, to ensure that the university [00:14:35] can just receive fiat currency. I mean, a big part of this: DeFi in a lot of ways has the advantage that it never really has to interact with real-world banking systems. That's the challenge in our space — we still have to interface with tech transfer offices, we still have to speak to general counsel at universities and make sure people are comfortable working this sort of way. I would say this is probably one of the most significant challenges, and the reason that a lot of legal engineering and a lot of thinking went into how to create the base-layer infrastructure that allows us to actually operate in this space. So yeah, it's a challenge, and something we're always iterating on. We imagine a future where universities do have wallets — where researchers do have wallets — but it's going to take some time for that future to be realized. And in the interim, I think it's really important to show the world that DAOs can work effectively — especially these types of DAOs that have a core mission and vision of funding research — and that they can do so productively, even given the constraints of the current system.
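The funding flow just described — proposal passes, part of the ETH treasury is liquidated to USDC, and the USDC travels off-ramp rails so the university simply receives fiat — can be summarized as a pipeline. This is a sketch under stated assumptions: every function and field here is a hypothetical stand-in, and the real rails involve exchanges, banking partners, and compliance steps not modeled.

```python
def fund_research(proposal, treasury_eth, eth_usd_price):
    """Illustrative pipeline: DAO vote -> liquidate ETH -> USDC -> fiat wire.
    All steps are hypothetical stand-ins for real off-ramp infrastructure."""
    if not proposal["vote_passed"]:
        return None, treasury_eth                # no vote, no disbursement
    amount_usd = proposal["amount_usd"]
    eth_needed = amount_usd / eth_usd_price      # ETH to liquidate on an exchange
    assert eth_needed <= treasury_eth, "insufficient treasury"
    usdc = amount_usd                            # 1 USDC assumed ~= 1 USD
    wire = {                                     # fiat-forwarding step: the
        "recipient": proposal["university"],     # university never touches crypto
        "currency": "USD",
        "amount": usdc,
    }
    return wire, treasury_eth - eth_needed


wire, remaining = fund_research(
    {"vote_passed": True, "amount_usd": 250_000, "university": "Example University"},
    treasury_eth=1_000.0, eth_usd_price=2_500.0)
```

The point of the shape is that the crypto-specific steps are confined to the middle of the pipeline; the endpoints (a community vote and a bank wire) look conventional from each side.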
And [00:15:35] so, negotiating with tech transfer offices — I assume they need to sign a sort of analog legal agreement with an analog legal entity. Is that correct? And is Molecule that legal entity, or how does that work?

Yeah. So maybe, to reiterate what Tyler said: there's actually nothing stopping, say, a university from directly engaging with a DAO. It's more that those systems don't exist yet, and there isn't enough precedent to enable them. There's also the much larger question of, for example, to what extent a DAO could litigate over a patent and actually enable that protection. So VitaDAO operates through a set of different agents — these are analog, real-world legal partners — and Molecule is one of those legal partners, in essence. So we can ensure that we are the licensing party, for example, with a tech transfer office, and then we enter into a sublicensing agreement, for [00:16:35] example, with VitaDAO. In the same sense as what Tyler just explained, we also ensure that all of the payment flows are compliant — something we've realized is really important for bridging this emerging web3 world with the real world, to really make it as seamless as possible, and not force, for example, a university to go through the process of opening a Coinbase account and figuring out what USDC actually is. But fundamentally, I like to use this analogy: if you can make an international transfer with an IBAN and a SWIFT number, crypto is actually much easier than that by now — it's just a much less adopted system. Even from an accounting perspective, accounting for funding flows in this decentralized system is very simple.
The proof of funds is very easy to provide, because you can visually see where every single transaction can be traced back to. So the way we've tried to design the flow of funding within [00:17:35] VitaDAO and within Molecule is to make it as seamless and interoperable with the real world of today as possible, and also to ensure that we have the highest degree of legal standards and legal integrity. So we work with specialized IP counsel and IP law firms across the world, in different jurisdictions, to really ensure that any IP the DAO funds, and that is encapsulated within these IP-NFT frameworks, is future-proof. That's something that became very apparent for us: when you work with IP, you can't really make mistakes in how you protect the intellectual property. And you also have a responsibility to the therapeutics being developed, because if anything were to invalidate the IP, that could fundamentally influence whether a potential therapeutic can ever actually reach patients.

Yeah. And so I think the one question is: there has to be a lot of trust between the DAO itself and the organization or [00:18:35] people doing the negotiation and holding and enforcing the IP. Because at that DAO–analog interface, my impression is that there's no enforceable legal contract, right? Is that correct? I'm just trying to wrap my head around the actual—

It is an enforceable legal contract, actually. So the initial agreement between, let's say, Molecule and the university is a typical, stock-standard sponsored research agreement that you would see between two parties — a pharmaceutical company and a university, for example. These are the same agreements that the universities already use.
In many cases, we plug into their pre-existing templates. Those typically contain an assignment agreement, or an ability to sublicense, where the company — or whomever is doing the initial licensing — then has [00:19:35] the right to license exclusively the resulting intellectual property, or in some cases even the full rights of the agreement. Molecule then engages in a fully contractual, fully enforceable sublicensing agreement with the DAO — typically under the laws of Switzerland, where the company is based — via the election of this agent process. Now, I would say the weakest part of that — if you want to think about where the core breaking points are in that process — would be the fact that a large amount of trust is required in the agents. But really, what the agent is doing is putting themselves at risk: they're taking on legal liability, in some cases, on behalf of the DAO. And if, let's say, that agent made off with something, or wasn't able to honor their agreement, there is full legal recourse that [00:20:35] could be taken. But again — when you look at patent enforceability and the intellectual property landscape, most of these things, you find out what works through litigation. These things have not been litigated yet; there's not really precedent for enforcement here. But this is also what it takes to innovate in the intellectual property landscape. So there is a tension between these things — but yeah, to your original question, there's certainly a lot of trust involved.

That's my thinking about web3 stuff generally: there are no first principles for it. You just sort of poke it and see what happens.
Yeah, and maybe — there will be interesting case studies before this becomes relevant to us, because in the space, some of the core protocols, like Uniswap or Curve, are actually governed by DAOs now, and they are now enforcing their IP in the courts. So even before it becomes necessary for us, there will be cases and case studies of very big organizations like [00:21:35] Uniswap or Curve enforcing and going through the courts — there are cases coming up even this year or next year. So it will be really interesting to see what the legal precedents are when a DAO enforces its IP through agents, basically. And I think there will be precedent before we have to enforce our IP.

Yeah. One thing to add there — to reiterate what Vincent said as well: DAOs can very quickly become powerful economic agents, and enforcing processes in our legal system is often a function of capital. So if VitaDAO, for example, were ever to get to a point where it had to enforce one of its IP cases, it would definitely have the financial backing to do so, and it can operate through agents to enforce the validity of its IP. And the remaining processes — the relationships between agents — are really [00:22:35] subject to the same legal processes we have today when two companies enter an agreement. If a biotech company enters a sponsored research agreement with a university, the trust arrangements set up there are not different, and the underlying legal contracts that we're using are also the same. And, back to Vincent's point, there are actually first cases where DAOs are enforcing their IP.
This is in the context of open-source software development, where a DAO, let's say, has developed a certain protocol; that protocol is open source, but it's probably released under a specific software license, and the DAO is now choosing to actively enforce its IP against someone who infringed that license.

One additional aspect here, when we think through where trust and power are concentrated in the DAO: although there are these agents that are available for the DAO to interact with the real world, the capital is [00:23:35] concentrated within the network of token holders. On a technical level, there's a multi-signature wallet that holds all the funds, and that's controlled by members of the community, basically in a token-gated way. And that network structure — that social network, which is basically the DAO — can, I think, be compared to a kind of association where you have people all across the world collaborating, all aligned by a token incentive to pursue one shared mission. The DAO — the network — then starts agreements with various agents, so it's not really relying on one particular agent to fulfill its mission. If there were a situation in which trust or an agreement with one individual real-world agent were broken, most of the capital would still lie with the DAO, and the DAO would have the ability to engage in an agreement with a different entity. It's not like there's one entity or one vulnerability.

When you think [00:24:35] through the contact zone between the digital DAO and the physical company — and speaking of agents — at what level does the entire membership of the DAO vote? Like, are they voting on every decision — "we want this person as our lawyer, we want this person…"?

Yeah. Yeah.
Now, basically, to make it concrete: there is of course a core team and stewards who are actively working — for example on the longevity side, helping to source deal flow, doing all of these activities. And then it's mostly the bigger funding decisions — for example, "should we fund this project with a million dollars" — that get voted on. It won't be "should we hire this designer"; that is decided autonomously, for example by the design team, within budgets that are [00:25:35] voted through. So it's not micromanaging in a deep sense; it's more the key, overall big decisions that the community votes on.

So, early in the community's formation — in the DAO's formation — there was a governance framework that laid out a series of decisions about how governance actually functions in the DAO. In VitaDAO, there's this sort of three-tier governance system: moving from conversation that is quite stream-of-consciousness in Discord, to semi-formalized proposals for community input on a governance forum called Discourse, and then ultimately, for things that make it past that stage, moving on to a software platform for a token-based vote. Part of that initial governance framework also vested a certain amount of decision-making power in working groups, and set thresholds on what those working groups were able to spend, what sort of budgets they had, and where they needed permission from the community to make decisions. So, for example, decisions greater than $2,500 might require a [00:26:35] soft vote, while things greater than $50,000 require a token-based vote.
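These tiered thresholds amount to a simple routing rule: each spending decision goes to the lightest governance process it qualifies for. A sketch using the two dollar figures from the conversation; the tier names are shorthand, not VitaDAO's official terminology.

```python
def required_process(amount_usd, soft_vote_threshold=2_500, token_vote_threshold=50_000):
    """Route a spending decision to the lightest governance tier it qualifies for."""
    if amount_usd > token_vote_threshold:
        return "token-based vote"         # full community vote, token-weighted
    if amount_usd > soft_vote_threshold:
        return "soft vote"                # semi-formal community signal, e.g. a forum poll
    return "working-group discretion"     # within a pre-approved budget


for amount in (800, 10_000, 1_000_000):
    print(amount, "->", required_process(amount))
```

The design point is the same one made in the conversation: small decisions stay fast and delegated, and only spends above the thresholds pay the coordination cost of a community-wide vote.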
And this is really important, because as you can imagine, early on an organization can be super chaotic, and it would be really unproductive if every single decision being made needed this sort of laborious, community-wide vote. But this is also a really interesting iterative experiment that I think many DAOs are participating in at the moment: trying to figure out to what extent you can involve the community, in a productive way, in the day-to-day operation. What differentiates a token holder from a contributor, from a core team member, from a working-group member? How do people move along that funnel and traverse those worlds in a way that gets you the most productive organization? This is something that is, I would say, being iterated on and improved constantly, based on the dialogue happening between the team and the community.

Actually, on that note, I have one vaguely silly question, which is: why are all DAOs run on Discord? [00:27:35] This is my biggest complaint — I cannot pay attention to streaming walls of text. So how did that emerge? Has anybody done a DAO just run on a forum, or by email, or something?

Yeah — Discord is actually the biggest bag holder in most DAOs. I'm just kidding. It's of course almost memetic: that's how a lot of crypto projects, even three, four years ago, began to organize. And I think it's ultimately just the tooling. There were Slack and Discord to coordinate with, and Discord was much better at enabling people to participate in a lot of different channels very easily. And I think it's also a lot about things like file sharing — all of these things you need, which go beyond chat.
But ultimately, there are leading DAOs that emerged just as a Telegram chat between [00:28:35] five friends. The leading art-collecting DAO, PleasrDAO, started as just five friends on Telegram or something. So of course you can envision every possible way and model. Ultimately, I think it just became a pattern that a lot of projects organize like this.

Yeah. And I think there's also this feedback loop that occurs: the more people organized via Discord in the early days, the more people started to create token integrations and token gating and things like Snapshot — all of these sorts of things. Because of that, a bunch of tooling has now been developed, from an integration perspective, that makes it easier to operate in a community like that than it would be to have a Slack channel, for example.

Yeah — and there is a serious lock-in effect. If you start your new DAO, the best choice is to go with Discord, because that's where all the other folks — folks that are already active — are, plus you can leverage a lot of bots that allow you to token-gate access or [00:29:35] send notifications and similar things.

And another question is: how did you all become the core team? Did you just show up? Tyler and Paul could probably start telling that.

I think one interesting thing is that ultimately every journey is kind of individual, but most people either saw it very early on or had a similar idea — it's almost, I think, like a Schelling point. Like, I literally tried to register the "longevity DAO" domain two years ago, before I even met anyone who was in a DAO.
And I think it's a similar story even for Tim. And then, ultimately, of course, there's some mechanism of discovering it, or hearing about the idea, or meeting — for me, it was meeting Tyler and Paul because of Molecule. And for a lot of people, they just saw an interview, they saw [00:30:35] an article about it, jumped into the Discord, introduced themselves, and said, "yeah, I would love to help on the website, I would love to help on the deal flow," and then started helping. Through that mechanism, people bubbled up, basically — just started writing an article, or doing a logo — and then became more and more integrated, kind of worked themselves into it. And of course, a lot of people have never met each other in person, but this trust, I think, emerges and builds up just by engaging and helping progress the DAO as a whole. It's actually really interesting and exciting to see this global coordination emerging out of a shared purpose or mission, with a lot of people just stepping up. Initially we didn't have a token, we had $0, and there were people who spent weeks building a website pro bono without [00:31:35] expecting anything — really good researchers joining in — before we even had $1 of funding to give towards research. So I think that's also the inspiring part about a lot of DAOs: it just naturally emerges, and everyone can contribute a bit — no boundaries, but self-selected, almost.

I saw Nicholas raising his hand, so I was going to give him a chance to say something.

Right. Yeah.
So there's this saying I read a couple of days ago, that some ideas occur in multiple different brains at the same time. And I think that's really what happened with VitaDAO. Vincent had been thinking about this for some time; Laurence had basically stopped developing mobile applications to really focus on aging research; Paul and Tyler had thought about this topic of a marketplace for ideas, for intellectual property; Tim had been, I think, thinking about this idea of basically crowdfunding academic, or just fundamental, research as a community for some [00:32:35] time. And I had been sufficiently frustrated with the way academia currently works, and had also been thinking about whether there can be some kind of mechanism where a community bootstraps itself into existence and funds scientists and entrepreneurs within its community — everybody pays a little, and then you can actually allocate a lot to the really good ideas. So in some ways, I think we all had some kind of predecessor to this idea, and when we each, at these individual time points, heard about it, it was a very intuitive decision to join.

I think it's a certain amount of serendipity, a certain amount of Twitter network effects — a weird variety of things. You know, we started out with just a white paper and an idea, and then, through that, got in touch with a couple of different people — and then people just started showing up. The most interesting thing for me about the DAO experiment is: early on, we had this [00:33:35] idea that, okay, people want to be working-group members. This is pre-DAO — as Vincent said, no token yet, nothing — trying to figure out how you organize this community, how you do something meaningful. We were trying to collect applications or something.
And then some people would apply, and we were like, how do we know who's going to be good, or whatever. One person, who's now the lead of the tech working group — this guy, Audi Sheridan — applied and was rejected, but then just made himself super valuable. He started doing things that no one else could do and became an invaluable member of the community. And then we sort of realized: why are we doing this application thing? People just show up; there are things that need to be done — sometimes we don't even see what those things are — people have good ideas, they make proposals. And all of a sudden — it's not like a company where there's a hiring process. Anyone can show up on the Discord tomorrow, identify some pain points, make a proposal, and just demonstrate to all these other people [00:34:35] that they have value to add to the community. And then there's a sort of process there, but that process is still very loose. So most people who are here, even on this call, showed up through something like that — as Nicholas and Vincent were saying, they had been thinking about this before, were attracted to this magnet that is now a Schelling point for crypto and longevity, and just had really great ideas about how to improve the community, and elevated it. And that's, for me, the magic. I mean, VitaDAO is six months old now, roughly — I guess it'll be about six months — and the community is around 3,500 people or so, with hundreds of researchers, dozens of people contributing pretty often — some people full-time at this point. And that growth cycle — to go from a white paper and nothing to a bunch of money to fund R&D, a bunch of intellectual capital, a pretty strong political force, in that amount of time — would be [00:35:35] unprecedented,
I think, for a company — especially something that's bootstrapping from a community, that didn't raise money from VCs or anything like that, that just had an auction for a token. To me, this is really interesting, and it sort of proves that, in terms of organizing intellectual capital and monetary capital, it's a really, really powerful mechanism.

And, sort of related to the company point: are you worried about the SEC?

I mean, a huge amount of thought has gone into the legal structuring and legal engineering in the DAO. The way it basically works is that the intellectual property the DAO holds, in the form of these IP-NFTs, is not owned by the token holders. The token holders can sort of govern it by proxy through this governance token, and dividends are not paid out either. So the idea is — you know, it's not a nonprofit organization; the DAO as an organization is trying to make profit to further fund longevity research — but those dividends don't flow to token holders. [00:36:35] So there are several prongs of the Howey test that are essentially not met — whether it's making profits from the efforts of others, or the fact that no one in the organization is directly profiting from the commercialization efforts the DAO is doing. But yeah — thinking about the interaction between the DAO and the SEC, about securities concerns, played a pretty big role in the design thinking around the entire organization and its structure. Because you can also go different routes — some security-token route, for example — but if you go those sorts of routes, you really end up just excluding huge numbers of people from participating.
So the goal here was: how do you maximize participation in a way that is still ultimately creating value — but not necessarily value for individual token holders; rather, value for the field of longevity as a whole, to move the needle on research?

Got it. [00:37:35]

Maybe to add a couple of points here. The VITA token is fundamentally designed as a governance and utility token, and at the highest level you can think of it as something that is actively used by all members to curate the IP and the projects they want to fund. As was touched on earlier, with typical security-like assets you have a direct flow of dividends and a very clear expectation of profits. In this case, first of all, you need to actively do something to be a member of VitaDAO and to then actively help curate the IP. And with the rights that come with the token, there's no way you could say, okay guys, I'm out, and I want to take my share of the IP that I helped create with me — which is a typical thing you could have as a shareholder, or in a more limited-liability-partnership-type setting. In this case the DAO owns the IP, and there are also no expectations of profit, because first of [00:38:35] all the goal here is to fund research and really open up that research, and then to try to make it accessible to the world — which could actually mean open-sourcing the research or open-sourcing the IP, thus killing its commercial value. So say VitaDAO discovered something and deemed that discovery so important that it had to be open-sourced and made accessible — and thus it could never become a patentable therapeutic down the line. Token holders have full rights to do that.
Whereas if you had a typical setting where a company was held by shareholders, and those shareholders had a very clear expectation of profits, that would never fly in most normal companies. So there is no direct expectation of any potential returns — there's not even the potential for return per se — and there's the full governance option to essentially not commercialize anything.

Yeah, that's really cool. And, sort of related: I would say that therapeutics are [00:39:35] a very special case in the sense that the field is very IP-based. There's very much a one-to-one correlation between IP and product, and those products can be very lucrative. That's sort of why therapeutics as an industry works. Do you think the VitaDAO approach could work for research and development outside of the therapeutic world?

Maybe rephrase your question, Ben?

Yeah, I guess the question is: the idea that you can create incredibly valuable IP is fairly unique to the world of therapeutics, and in many other technological domains the value really comes from building the company around some IP — the IP itself is [00:40:35] not that important. So yeah, whoever wants to go for it. Go for it, Tyler.

I was just going to say quickly: I think absolutely, because it doesn't need to be IP-centric. For example, VitaDAO could end up holding data that was being produced by something, and that data could have intrinsic value. Similarly, VitaDAO could try to get involved in manufacturing or create products. There are many different design flavors for these DAOs.
And I think the governance framework around this — let's say the organizational capacity and the coordination capacity — can be applied to many different problems in many different industries. I think even the intellectual-property piece does hold true well beyond therapeutics. With therapeutics, you're right: they're very, very expensive to develop, which is why you tend to get this enforceable monopoly to basically incentivize people to develop them. But in textiles or engineering or [00:41:35] any field where IP plays a role, you could apply almost a one-to-one sort of model here. Beyond that, there are many different flavors of assets a DAO could hold. The one I'm probably most excited by is data, which I think can be really, really powerful, or software, which could be similarly powerful — and which a lot of DAOs are already doing.

Maybe one point in addition: beyond activities like funding IP directly and having a sustainable funding cycle there, we also, for example, had efforts that are completely philanthropic — using our community to, for example, put together this donation round on longevity and explore quadratic donations. I'd had this idea even before [00:42:35] VitaDAO existed, and now there are enough people and enough attention to do it. The DAO itself donated $65,000.
In total, about $400,000 was donated, and we helped curate the projects, which are all purely philanthropic — open-source projects, even NGOs doing different projects — and basically helped get our community together to donate to them. For me it's one example of where this is really powerful, because you have this shining point of crypto people who are interested in funding longevity, and they're not just interested in funding IP-NFTs in a sustainable loop, but also in exploring other funding experiments. Another one we were discussing is a longevity prize, or grants and fellowships for young people entering the field. All of that is actually advancing the whole cause and the whole community [00:43:35] — and the core focus and activity of funding IP — because it grows our community and the whole field. So I think that's an interesting point: we're not limited to funding IP, but it's of course one of the core mechanisms we're engaging in.

I would add that there's also value in the community itself. Imagine Bitcoin, right? Anyone can fork it. Or Instagram — it's a simple app; anyone could have made a copy. But most of the value is in the network that gets built. So here we have a team, right — a stellar team — and the DAO itself is ultimately an autonomous organization. It got born in a genesis by itself; it's a smart contract. So it's sort of unique in that way. Of course, someone interacted with the smart contract — it can be someone anonymous — but it issued 10% of its tokens, which by the way total [00:44:35] 64 million — which is about the lifespan in minutes of the longest-lived person, Jeanne Calment. And that's sort of cheeky, right?
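As a rough sanity check on that figure (a back-of-the-envelope sketch only; the token's exact supply may use a slightly different day-count convention), Jeanne Calment's documented lifespan does come out to roughly 64 million minutes:

```python
# Back-of-the-envelope: Jeanne Calment's lifespan in minutes.
# Dates are her documented birth and death days.
from datetime import date

born = date(1875, 2, 21)
died = date(1997, 8, 4)

# Whole days between the two dates, times minutes per day.
minutes_lived = (died - born).days * 24 * 60
print(f"{minutes_lived:,} minutes")  # roughly 64.4 million
```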
We can only extend that if someone lives longer than that. But anyone could buy those tokens — it was a fair auction, including us, including random people. And then there was a vote to empower a core team like us. Yes, most of us here got involved before, but the cool part is anyone can start showing up and contributing a lot of value, and ultimately the community can decide to make them a core contributor, to make them a steward of even some other effort — even something we haven't thought about. There's always room; it's permissionless. That's something special — definitely a meta-experiment right here. And it's an experiment in organizing people towards a common goal in a different way, to run experiments — scientific experiments — and figure out how to advance the therapeutics we need to extend our healthy [00:45:35] lifespans.

Actually, I'd be curious to ask you a question, Ben: how does something like VitaDAO fit into your thinking on new institutions for funding science? You also mentioned it could be a model — we're potentially exploring it for different areas. Ultimately, for me, if there's a big enough community interested in funding something — one very public example could be climate change, or something exciting like space — there would probably at some point be a community that would pool resources to fund those research areas. I'd be curious to hear how you think VitaDAO plugs into the framework you're outlining — whether it fits well with the themes you're exploring.

Yeah, frankly, one of the reasons I wanted to have this conversation was to form those thoughts.
So I [00:46:35] will be able to answer that much better after going off and digesting this. But some of the tricky pieces outside the domain of longevity: longevity is very exciting to a lot of people with money, both in the crypto community and outside of it. There are lots of people who are excited about space, but from my experience, space geeks tend to not be that wealthy. So there's a question of — you can have a very excited community, but how much are those people really willing to put their money where their excitement is? That's a big question. Another question for me is coordination around research. Another great thing about therapeutics is this nice one-to-one-to-one mapping, where one [00:47:35] lab develops one therapeutic, which corresponds to one piece of IP, which corresponds to one product. Obviously it doesn't always work that way, but that's a pretty strong paradigm. Whereas with a lot of other technology, that attribution chain is very hard to establish: it involves lots of different groups contributing different things, and you need someone coordinating them. So this is a lot to say: I think there's very much something here — that's why I'm interested in it, why I want a lot more people to learn about it, and why we're talking about this. But it needs a lot of thought. I don't think you could literally take what you all have done and copy-paste it into other domains. But that isn't to say you couldn't modify it and do something — I think it's actually really, really cool. Yeah.
Maybe I can speak on that [00:48:35] quickly. I think DAOs will be highly use-case specific. It's actually been interesting — I started writing about DAOs in mid-2016. There was an article I wrote on what would happen if we combined, say, autonomous AI systems with a DAO — having a DAO operated by autonomous agents, in essence. And then what happened after The DAO launched, which was one of the first DAOs on Ethereum: it was a big, complex, autonomous setup where the DAO was almost entirely controlled by token holders. But that also enabled an attack vector that allowed someone to hack those core contracts. The DAO space then went into a long period of considering whether something like this should ever be attempted again, and people began to very cautiously build out these systems. There are a couple of projects that over five years have tried to build generalizable DAO frameworks, and many of those projects have [00:49:35] failed at providing frameworks that really got to mass adoption. When you start building a DAO, it's like saying, "I want to build a company": there are many ways to build companies, and the difficult part is not incorporating in Delaware or getting the bank account set up. That's what people sometimes think today when they set up a DAO — oh, okay, it's a multisig, it's a Discord. But you need that entire ecosystem you're building. You need to think about: what is the value-creation model for this DAO? What's its unique value proposition? And based on that value proposition, what type of community do I want to build?
What type of culture do I need to implement that value proposition, one that will attract that community to help me? We've been very conscious, for example, about the type of open community we wanted to build. And this goes into all sorts of follow-on questions, like: where do you actually get funding from to do what you do? Where that funding comes from will influence the culture of the community. For example, a DAO funded by [00:50:35] several groups of larger VCs will be very different, from a cultural perspective and from its goals, than a DAO funded by an open auction, where the individual members are much more engaged because they put some of their own funding in — so they want a say in how it's controlled and what it gets used for. It's going to be very interesting to see in the coming years whether generalizable frameworks emerge where you just press a button and spin up a DAO. You can already do that — there are many systems that do — but I keep being surprised that they're actually not being very actively used. What I think is really important, for example, is to build basic infrastructure that can serve industries. This is something we've been very focused on at Molecule: drug development isn't that different whether you're developing longevity therapeutics or, say, cancer therapeutics — the base infrastructure, and how you interact with the real world through IP, is the same. We realized that decentralized drug development through DAOs could only really work if there was a way to own IP. But then, I [00:51:35] think a community like VitaDAO will be very different from, say, a DAO focused on rare diseases, where you're working with several patient advocacy groups.
And there's not huge general excitement, unfortunately, about diseases that only affect small patient populations — whereas aging affects all of us. The DAO we're currently building out at Molecule is called PsyDAO, which will be focused on exploring and essentially democratizing access to psychedelics and mental health — again, because we feel this is a topic with very broad appeal, where you can very effectively scale culture and apply some of the same frameworks.

Yeah, maybe one other thing I think is important to highlight in terms of how we think about this. The reason DAOs are interesting — even the reason crypto is interesting for me — is that it's effectively a sandbox environment to try experiments that create behavioral outcomes. Token engineering and token economics are [00:52:35] simply a way to motivate certain outcomes and certain behaviors in real time — building in production, testing in production. In academia, if I said I want to change drug development, I want to change the way pharmaceutical companies behave, I could probably write a paper in Nature Reviews Drug Discovery and maybe kick off a policy discussion that ultimately isn't really going to move the needle, at least on a tangible timeline. But what's interesting about DAOs is that you can basically say: I have this idea, these are the stakeholders I want to incentivize to behave a certain way and achieve a certain outcome — and you can just deploy this with software and start doing it. It's really crazy. One of the most interesting comments Vitalik made when we hosted him on this topic — the comment that resonated — was that he felt the biggest gift to humanity that crypto provided was this sandbox environment for experiments.
And I think, [00:53:35] as a scientist, this is one of the things that really strongly resonates: move beyond the theoretical, go directly to the applied, and start testing things in production, seeing what works. I don't think we can say confidently that DAOs — biotech DAOs — are better than biotech companies at achieving goals in drug development. But I think in a couple of years we'll have a bunch of data points suggesting the things DAOs are really good at, at least with this design implementation, and we'll know what they aren't good at. And because these organizations are so flexible, and because they operate through this very iterative governance model, you have the ability to always be tweaking and always be improving. So this, for me, is what's really exciting: it's this crazy experiment, pulling in people from all over the world, independent of geography. If there was another toolkit to do it that wasn't crypto, we probably would have built it using that. But really, the point is: [00:54:35] I haven't seen a better way to scale incentives to a large group of people than web3 and crypto.

To me it comes down to the point raised before: ultimately it's about a community. Even with PsyDAO — there's no token, there's nothing; we literally just set up a Telegram chat, invited some interesting people, they self-selected in, and now it's like 500 people. We hosted meetups, and ideas are emerging out of all these people. Ultimately it doesn't really matter how it's implemented, or whether there's a token — what matters is whether the community shares the values, the culture, and a shared mission.
So I think that's really an interesting takeaway for me too. Looking at the most successful projects in crypto — probably Bitcoin and Ethereum — a big part of their success was their community and their culture persevering through thick and thin: building and improving the protocol [00:55:35] together, building on it, being incentivized to build on it. That's a major takeaway: it's all about communities and shared missions. Did you have another question?

Yeah, one thing I'm curious about is: how have tech transfer offices responded to this? I assume there have been many conversations with them. To put my cards on the table, I don't have the highest opinion of the innovativeness of tech transfer offices — they are surprisingly technophobic organizations for ones supposedly focused on innovation. So I'm wondering, how have those interactions gone?

They're supposed to be helping professors and researchers bring innovation into the real world. But on the whole — not necessarily through any fault of their [00:56:35] own, but rather because tech transfer is largely a failed business model — institutionally it's not operated well. It's a couple of general counsels sitting in an office who are not domain experts in any one field and typically have grossly inflated ideas of what innovation is worth. It's challenging. That said, we've been super lucky to engage some amazing people at tech transfer offices — and this is self-selecting, right? If you're interacting with us, you're probably amongst the most forward-thinking tech transfer people.

So keep a list of them, right? Then you can get some kind of feedback loop where you say: okay, these are the best tech transfer offices to work with.
And then people start working with them, and all the other tech transfer offices start seeing that.

Totally. This is what happens: [00:57:35] the first one does it, and then they've sort of de-risked it for the others. This is what we see happening with every subsequent one that goes for it — it's easier to have the next conversation, and we also learn more about how to work with them and how to structure these deals. The main thing here is that tech transfer is largely not profitable. There are very, very few tech transfer offices in the world that are cash-flow positive. Their business model is in danger, their existence is in danger, and they desperately need new ways of operating. Outside of Harvard, MIT, Stanford, Oxford, and Cambridge, there are not that many really doing big things. What we see is that there are people, even in smaller tech transfer offices around the world, who recognize this and are actually really, really hungry for a different way of doing things. Those are the people we hope to work with. But you're right, it's not the easiest stakeholder group to engage.

Yeah. Sorry, go ahead.

Having said that, though — [00:58:35] this is also, for example, a core role that we see at Molecule. Working with tech transfer can be standardized: it doesn't matter whether you're out-licensing a longevity asset or something else. What we've actually found is that developing systems as close as possible to what they're used to today makes life massively easier. The kind of thing to avoid is creating the wrong impression. Even within VitaDAO, in terms of negotiating contracts and next steps around the IP, it's important to realize that there are not a thousand people in a Discord who will then contact the university, try to get involved in the research, or make decisions.
It's also important to make clear that these funds are not coming from anonymous accounts in some weird ether that is the cryptocurrency space — to give those stakeholders the assurance that we're using the same processes they're used to, that we've developed sophisticated legal [00:59:35] standards, and that all of this can run through the existing banking system once it's bridged into it. Once you provide those assurances, it's surprisingly easy to work with them — in some cases, not in all of them. But as an organization, I think we can be much easier for them to work with than, say, a venture capital firm that wants to out-license the IP, is setting up a company, and then engages in three-to-six-month-long negotiations. The tech transfer offices we have engaged have been pleasantly surprised by how quick and easy it can actually be to work with a DAO, or a decent-sized community, if the right structures and processes are in place. And about one out of every twenty is just some person who's like, oh my God, this is so cool — I also play around in DeFi. It happens rarely, but when it happens, you're like, okay, this has got to work. [01:00:35]

Sorry, go ahead.

We also work with companies that have themselves negotiated with the TTOs, and they can sub-license a stake. They can also work with Molecule, and the TTO doesn't even necessarily need to know about VitaDAO initially. Molecule can have a sponsored research agreement with that startup or with the TTO — TTOs might prefer to work directly with a company. Or even a revenue share: we can have royalty agreements as a DAO as well, with a company, a startup.
And if the deals are too slow, we can work directly with startups initially. As things open up and this gets more popular, they'll see that there's a better place to go: you have VitaDAO as a bidder, and maybe other people in the crypto community can become bidders [01:01:35] for these IP-NFTs. It can be a much better way to decide, as a market, what the value of assets is. And if you have an asset that this more and more liquid market would value higher, why would you go with the traditional players when you can get much, much better terms? I think they will get convinced once they see that.

Yeah. One other thing: today we funded a new project, and the researcher said he was pleasantly surprised by how quickly it went from application to funding — I think it was within four weeks or something, which is not common. And a lot of researchers are also really excited to have a community behind them that is excited to follow the progress, to publicize the process, to do interviews and videos about their research, and to connect with the other research we're funding. So I think that's [01:02:35] also a huge value proposition to the researchers.

And speaking of applications — this is a question from Twitter — all of your proposals seem to have passed with resounding consensus?

Not necessarily, no. I think there were one or two that were almost 50-50. On some there was resounding, almost one-hundred-percent voting in favor; on two or three there was only 60% voting in favor.
And what I think is interesting — what I observed as a pattern — is that on the ones people voted against, it was mostly working group members voting against, while the community was oftentimes voting in favor. So my feeling was: the community wants to fund a lot of things, and thinks everything that gets listed for funding should be funded. But the people who [01:03:35] might have looked at it closely and helped diligence it — some might be really excited by it and some might vote against it. And I think that's really the thing, because you can see it in the voting: you can see the names of the people who voted, and you see, okay, this person leading the longevity working group voted against it — and that's of course a signal for someone else to also vote against it. There are also evaluation write-ups. So you can imagine four people looking at a proposal: some are really excited by it and some say we shouldn't fund it. That would be reflected in the evaluation and reflected in the voting, because the people excited by it vote yes and the others vote no, and that of course also feeds into the voting of the normal voter.

There might also be a selection bias, because we only put things that make sense up for a vote. [01:04:35] We didn't put up any crazy thing like head-transplant research — or, okay, maybe that might actually be exciting for them — I mean some crazy thing the community would look at and say, what? That's not research — like how to make Lamborghinis live longer. That would obviously be voted down by the community, right?

Yeah, so it's kind of a selection criterion: it has to fulfill certain quality criteria, and it needs to have, potentially, at some point, some value that could be captured.
And actually, pretty much everything that people in the community were excited by got put on chain. I think in the future there will actually be many more proposals that are very close calls.

So, just to clarify: [01:05:35] funding proposals go through a working group before going up to a community vote?

Yes — basically the main experts, as was touched on earlier: people who are researchers and investors in the field, really deep domain experts, who of course look at the criteria. On our main page we also have the requirements and an FAQ for applications. Some of the applications didn't fit the criteria, so we couldn't put them forward as proposals. But when the science makes sense and people are excited by it, it will be put up as a proposal. There are also a lot of proposals being worked on that are in the funnel. I think what will happen over time is that you'll see a lot more diversity of proposals. I personally think it would be cool to get a lot more crazy ideas on there, because something we've realized is that only once it goes on chain does it actually [01:06:35] spark the final debate about whether we're doing it or not. You also see lots of almost-housekeeping proposals, but it's important that they're actually put up to all token holders to sign off on, because at the end of the day token holders are really the executive of the organization. Even when no one would disagree — if we say, hey guys, we're going to change our governance process because we realized it's better to do 1, 2, 3, and everyone agrees with that —
there's no big disagreement on this kind of housekeeping. But the way the governance framework is designed, we have to put everything on chain and then it gets voted through. VitaDAO is not just this community or this funding vehicle; it's also a whole set of smart contracts that the DAO actually operates through. You need to put things to a vote and then formally execute them through those smart contracts, which I think is also really important in a trustless system. Something we've also realized, though — you learn as you go: [01:07:35] you launch a product and an architecture, and then you refine them as you go along — is that it can be cumbersome to constantly have people pay gas. Depending on the congestion of the Ethereum network at a given time, a vote can cost anywhere between 10 and 20 dollars, which can be a lot. Say you hold a thousand dollars' worth of tokens and you're a smaller but committed community member: that can be a high cost just to interact and participate in the system. For larger holders it's less of an issue, but that wouldn't serve the democratic vision of the organization. So something we're doing, for example, is moving to a gasless voting system, where essentially you snapshot the balance that everyone holds, and people vote with their balance, as opposed to actually moving tokens on chain.
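The gasless, balance-weighted voting described here can be sketched roughly as follows. This is a minimal illustration with hypothetical names and balances, not VitaDAO's actual implementation: a real Snapshot-style system records token balances at a fixed block and verifies each vote as a cryptographically signed message, whereas here the snapshot and the votes are just plain dictionaries.

```python
# Minimal sketch of gasless, balance-weighted voting.
# Balances are captured once at a "snapshot" block; each vote then
# carries the voter's snapshot balance as its weight, so no tokens
# move on chain and no per-vote gas is spent.
from collections import defaultdict

# Hypothetical snapshot of token balances.
balances = {"alice": 1200, "bob": 300, "carol": 50}

# Hypothetical off-chain votes (one choice per holder).
votes = {"alice": "yes", "bob": "no", "carol": "yes"}

def tally(balances, votes):
    totals = defaultdict(int)
    for voter, choice in votes.items():
        totals[choice] += balances.get(voter, 0)  # unknown voters carry no weight
    return dict(totals)

print(tally(balances, votes))  # {'yes': 1250, 'no': 300}
```

Note the design trade-off the speakers mention: because no transaction is sent, participation is free, but execution of the result still has to be carried out on chain by some authorized mechanism.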
So those are continuous improvements that we're making, and that would actually mean that even more proposals might go live, though it could also mean that there's a [01:08:35] smaller discussion around each of those proposals. Yeah. In theory you could do this with a spreadsheet; that's of course the meme with crypto, that a blockchain is just a spreadsheet, spreadsheets in their simplest form. But I think what's really the key thing: web2, and our old world of banking, is a spreadsheet that is controlled by the bank or the state, for example. And if you're a researcher and your state doesn't like your research, they will just block your research, or block the spreadsheet, if you like, with your money in it. And that's the power, definitely: that it's trustless and not owned by your state or university or bank, that it's permissionless and trustless. On the funding side you could also do it, but then you need to trust someone; maybe someone owns the spreadsheet, or [01:09:35] maybe you have to get access rights to the Google spreadsheet. And I think that's ultimately where it breaks down. Ultimately, you couldn't do DAOs in a web2 way. And I think that's interesting. Cool. Well, I think we all need to jump. Are there any last thoughts that you want to leave in people's heads that we didn't touch? Maybe one key one: everyone should take a look at the website, feel free to jump into the Discord and introduce yourself if you want to join, because we're really always looking for more researchers and more enthusiasts to join us.
And I think we were kind of the first ones to pull this off, with some funding and some first projects, and I think there will be more and more interesting research projects and research DAOs, a whole set of decentralized-science projects emerging. We can maybe put some interesting resources in the show notes, [01:10:35] beyond VitaDAO: lists of decentralized science efforts in general that we're excited by, like a decentralized publishing effort funded by the Coinbase founder, and a bunch of different projects that we can leave in the show notes for those who want to rabbit-hole into decentralized science, because I think it's a really interesting new field emerging. Maybe as a last comment from my side as well: I think we're beginning to see that all of this is possible, and if you dream big enough, we can actually build these things out and make them happen. If any of your listeners have a cool idea about trying this approach in another therapeutic area that they're passionate about, or even just have ideas about other systems that could be built to support this: we're already seeing lots of other builders come into this ecosystem, and we're really excited to build [01:11:35] together. The great thing about web3 is that it's highly composable and interoperable, in the way of open APIs, in a sense similar to how open source software is really open and interoperable. So we're keen, if any of your listeners want to get involved in VitaDAO, have ideas about building other DAOs, or maybe even just want to explore the IP-NFT framework.
Something else as well: if you have a cool research project that you want to get funded, you can already get that funded through an NFT, and that entire infrastructure is built and exists. Something that I'm looking forward to is really opening up scientific funding, making it much more democratic and accessible for anyone to come in and fund this. It doesn't have to be a DAO. If you want to finance a specific project, or maybe you and a couple of friends start up a small group that identifies early-stage assets in universities and essentially brings them on chain: now you can own them, you can transact in [01:12:35] them, and at a later stage you could decide to set up a DAO. Some of the founders approaching us say, oh, I want to start a DAO because I have this research. And I'm like, wait, do you really need a DAO for that? If you want to create an ecosystem, though, it's really good to center the DAO around the use case. But yeah, that's also something important to realize: not everything needs a DAO. One call-out to put out there: a bounty. We give out referral fees if you refer research projects to us, whether academic or non-academic researchers, a team, or even a startup that we could do a deal with. If we end up funding it, we give out a percentage for bringing it in. So we're excited to find all the unheard-of and undervalued research into aging, of [01:13:35] course, and longevity, from anywhere in the world. Excellent. Well, I really appreciate all of you taking the time, and more than the time. And yeah, keep up the good work.
Idea Machines


The Nature of Technology with Brian Arthur [Idea Machines #41] 1:54:11
Dr. Brian Arthur and I talk about how technology can be modeled as a modular and evolving system, combinatorial evolution more broadly, and dig into some fascinating technological case studies that informed his book The Nature of Technology. Brian is a researcher and author who is perhaps best known for his work on complexity economics, but I wanted to talk to him because of the fascinating work he's done building out theories of technology. As we discuss, there's been a lot of theorizing around science, with the works of Popper, Kuhn, and others. But there's been less rigorous work on how technology works despite its effects on our lives. Brian currently works at PARC (formerly Xerox PARC, the birthplace of personal computing), has also worked at the Santa Fe Institute, and was a professor at Stanford University before that. Links W. Brian Arthur's Wikipedia Page The Nature of Technology on Amazon W. Brian Arthur's homepage at the Santa Fe Institute Transcript Brian Arthur [00:00:00] In this conversation, Dr. Brian Arthur and I talk about how technology can be modeled as a modular and evolving system, and combinatorial evolution more broadly, and we dig into some fascinating technological case studies that informed his book The Nature of Technology. Brian is a researcher and author who is perhaps best known for his work on complexity economics, [00:01:00] but I wanted to talk to him because of the fascinating work he's done building out theories of technology. As we discuss in the podcast, there's been a lot of theorizing around science, you know, with the works of Popper and Kuhn and others, but there has been much less rigorous work on how technology works despite its effect on our lives. As some background, Brian currently works at PARC, formerly Xerox PARC, the birthplace of the personal computer, and has also worked at the Santa Fe Institute and was a professor at Stanford University before that.
Uh, so without further ado, here's my conversation with Brian Arthur. Normally I'm far less interested in technology these days, so if anybody asks me about technology... sure. But the background to this is that mostly I'm known for a new framework in economic theory, which is called complexity economics. I'm not the [00:02:00] only developer of that, but certainly one of the fathers. Well, grandfather. One of the fathers, definitely. I was thinking one of the co-conspirators; I think every new scientific theory starts off as a little bit of a conspiracy. Yes, yes, absolutely. Yeah, this is no exception. Anyway, so that's what I've been doing, and I think I've produced enough papers and books on that. I've been in South Africa for many months since last year, and got back about a month ago. As these things work in life, I think there are arcs, you know: you get interested in something, you work it out, whatever it would be, businesses you [00:03:00] start, children; there's a kind of arc, and you work all that out, and very often that reaches some completion. So most of the things I've been doing have reached a completion. I thought maybe it's because I'm getting ancient, but I don't think so. I think it was that I just kept working at these things. And for some reason technology is coming back up. Come to think of it, in 2009, when this book came out, I stopped thinking about technology. People normally think, oh yeah, you wrote this book, you must be incredibly interested. Yeah, but it doesn't mean I want to spend the rest of my life just thinking about the subject. It's like writing Harry Potter, you know; it doesn't mean you want to do that forever. Right, writing the book is like the whole [00:04:00] point of writing the book, so you can stop thinking about it. Like, you get it out of your head into the book. Yeah, you're done. So, okay.
So this is very much Silicon Valley, and I left academia in 1996. I left Stanford. I think I'm not really an academic; I'm a researcher. It's sad that those two things have diverged a little bit. Stanford treated me extraordinarily well, I've no objections, but anyway, I'd been to the Santa Fe Institute, and it was hard to come back to standard academia after that. So why should people care about not just the output of the technology-creation process, but the theory behind technology? Why does that matter? Well, [00:05:00] what I find in general, whether it's in Europe or China or America: people use a tremendous amount of technology. If you ask the average person what technology is, they tell you it's their smartphone, or the carburetor in their car or something. Most people are content to make heavy use of technology; I count everything from frying pans to cars. We make, directly or indirectly, enormously heavy use of technology, and we don't think about where it comes from. And so there are a few kinds of tendencies and biases, you know. We have incredibly good retinal displays these days on our computers. [00:06:00] We can do marvelous things with our smartphones. We switch on GPS in our cars, and very shortly, presumably in a few years, we won't have to drive at all. So all of this technology is doing marvelous things, but for some strange reason we take it for granted, in the sense that we're not that curious as to how it works. People trained in engineering are, as I am; I can actually tell you that throughout my entire life I've been interested in how things work, how technology works, even if it's just something like radios. I remember when I was 10, like many other kids, I constructed a radio from a kit and a few instructions, and I was very curious how all that worked. But people in general are not curious, so I [00:07:00] invite them quite often to do the following thought experiment.
I sometimes do this when giving talks. All right, technology. Is it important? Does it matter? Probably. And a lot of people manage to be mildly hostile to technology, but they're some of its heaviest users: they're blogging, they're on Facebook railing about technology, and then getting into their high-tech cars and so on. So the thought experiment I like to pose to people is: imagine you wake up one morning, and for some really weird or malign reason, all your technology has disappeared. You wake up in your PJs and you stagger off to the bathroom, but there's no toilet. [00:08:00] You try to wash your hands or brush your teeth, but there is no sink in the bathroom, no running water. You scratch your head, sort of shrug, and go off to make coffee, but there's no coffee maker, et cetera. In exasperation you leave your house and go to climb into your car to go to work, but there's no car. In fact, there are no gas stations. In fact, there are no cars on the roads. In fact, there are no roads, and there are no buildings downtown, and you're just standing there in naked fields wondering, where did this all go? And really, what's happened in this weird sci-fi setup is that, let's say, all technologies that were cooked up after, say, 1300, so the last 700 years or so, have disappeared, and you've [00:09:00] just been left there. People then said to me, well, wouldn't there have been technologies then? Sure. If you're a really good architect, you might know how to build cathedrals. You might know how to do some stone bridges. You might know how to produce linen, so that you're not walking around without any proper warm clothes, and so on. But my whole point is that if you took away everything invented
in the last few hundred years, our modern world would disappear. And you could say, well, we'd still have science. But without technology you wouldn't have any instruments to measure anything; there'd be no telescopes. Well, we'd still have our conceptual ideas. Well, we would still vote Republican or not, as the case may be. Yeah, and I'd still have my family. Yeah, but how long are your kids going to [00:10:00] live? Because there's no modern medicine. Yeah, et cetera. So my point is that not only does technology influence us, it creates our entire world. And yet we take this thing that creates our entire world totally for granted. I'd say by and large, there are plenty of people who are fascinated, like you or me, but we tend to take it for granted, and so there isn't much curiosity about technology. And when I started to look into this seriously, I found that there's no ology of technology. There are theories about where science comes from, and there's musicology, endless theories about architecture, and even theology. But there isn't a very [00:11:00] well-developed set of ideas or theories on what technology is and where it comes from. Now, if you know this area, you might say, well, Arthur, I could mention 20 books on it in the Stanford library. But when I went to look for them, I couldn't find very much compared with other fields: archaeology, petrology, you name it. So I went to talk to a wonderful engineer at Stanford. I'm sure he's no longer alive, because this was about 15 years ago and he was 95 or so then. I couldn't remember his name at first; it's an Italian name... Walter Vincenti. [00:12:00] So I went to see one of the really top-notch aerospace engineers of the 20th century and had lunch with him.
And I said, have engineers themselves worked out a theory of the foundations of their subject? He sort of looked slightly embarrassed and said no. I said, why not? And he paused. He was very honest. He just paused, and he said, engineers like problems they can solve. So compared with other fields, there isn't as much thinking about what technology is, how it evolves over time, where it comes from, how invention works. We've had a theory of how new species come into existence since 1859 and Darwin. [00:13:00] We don't have much theory at all, at least as of 10 or 15 years ago, about how new technologies come into being. So I started to think about this, and I reflected a lot, because I was writing this book and people said, what are you writing about? I said, technology. That was always followed by: why? You know, if I'd said I was writing the history of baseball, nobody would have said why. But why technology, what could be interesting about that? And I reflected further, and I argue in my book The Nature of Technology, that technology is not just the backdrop but the whole foundation of our lives. We depend on it. Two hundred years ago the average length of life might have been 55 in this country, or 45. [00:14:00] Now it's 80-something, and maybe last year was a bad year. And that's technology, medical technology: really good diagnostics, great instruments, very good methods, surgical procedures. Those are all technology. And by and large they assure you fairly well that if you're born, let's say, this decade, in normal circumstances, with reasonable luck you'll live to see your grandchildren, and you might live to see them get married. So life is a lot longer. So I began to wonder who did research on technology, and strangely enough, or maybe not that strangely, it turns out to be, if not engineers, a lot of sociologists and economists.
[00:15:00] And then I began to observe something further. A lot of people wondering about how things change and evolve had really interesting thoughts about what science is and how it evolves, like Thomas Kuhn's; many people speculated in that direction, whether they're correct or not, and that's very insightful. But with technology itself, I discovered that the people writing about it were historians, sociologists, and economists, and nearly always they talked about it in general: we have the age of the steam engine, or when railroads came along they allowed the expansion of the entire United States economy, connected the east coast and west coast, and [00:16:00] so on. So they're treating the technology as sort of an exogenous effect sitting there. I also discovered some brilliant books by economic historians and sociologists; Edward Constant is one, who wrote about the turbojet. There are super good studies about Silicon Valley, how the internet started, and so on. So I don't want to make too sweeping a statement here, but by and large I came to realize that nobody looked inside technologies. It's as if you were sitting in the 1750s, and biologists, though they wouldn't have been called that... Social scientists? Natural philosophers. That's right, thank you. They would have been called natural philosophers, and if they were interested [00:17:00] in different species, say giraffes and zebras and armadillos, it was as if they were trying to understand these just from looking outside. And it wasn't until a few decades later, the 1790s, the time of Georges Cuvier, that people started to open organisms up, and they found striking similarities. So something might be a Bengal tiger and something might be some form of cheetah, and you could see very similar structures, and postulate that, as Darwin's grandfather did.
There might be some relation as to how they evolved, some evolutionary tree. By the time Darwin was writing, he wasn't that interested in evolution; he was interested in how new species are formed. So I began to realize that in [00:18:00] technology, people were by and large just looking at the technology from the outside, and that didn't tell you much. I remember a seminar at Stanford on technology, held every week, where somebody decided that they would talk about modems, those items that used to connect your PC to the internet; they're now unheard of, actually built into your machine. And we talked for an hour and a half about modems with an expert from Silicon Valley who'd been behind inventing these, and never was the question asked: how does it work? Really? Did everybody assume that everybody else knew how it worked? No. Oh, they just didn't care? No, no, [00:19:00] not quite. It was more that you didn't open the box. You assumed there was a modem: who is adopting modems, how fast were modems, what was the efficiency of modems, how would they change the economy? What was in the box itself was by and large never asked about. Now there are exceptions; there are some economists who really do get inside. I remember one of my friends, the late Nate Rosenberg, a superb economist of technological history here at Stanford, wrote a book called Inside the Black Box. But even in that book he didn't really open up too many technologies. So then I began to realize: people really didn't understand much about biology or zoology or evolution, for that matter, until the field began to open up [00:20:00] organisms and see similarities between species of toads, and start to wonder how these different species had come about, by getting inside. So to set up my book, I decided that the key thing I was going to do, though I didn't mention it much in the book, was to get inside technologies.
So if I wanted to talk about jet engines, I wasn't just going to talk about thrust and about manufacturers and about the people who brought them into being. I was going to talk about, you know, heat pumps, anti-surge systems for compressors, different types of combustion systems and materials, whole trains of compressors, [00:21:00] assemblies of compressors, the details of the turbines that drove the compressors. And I found that in technology after technology, once you opened it up, you discovered many of the same components. So let me hold that thought for a moment. I thought it was amazing that when you look at animals from the outside, say kangaroos and giraffes, they don't look at all similar. But they all have the same basic construction: in their case they're mammals, they have skeletons, they're vertebrates, et cetera. And so with technologies, I decided quite early on with the book that I would understand maybe 25 or so technologies pretty well, and of those [00:22:00] I'd understand at least a dozen very well indeed, meaning spending maybe years trying to understand certain technologies. And then what I was going to do was see how they had come into being and what could be said about them, from particular sources. I remember calling up the chief engineer on the Boeing 747 and asking him questions personally. The cool thing about technology, unlike evolution, is that we can actually go and talk to the people who made it, right? If they're still alive. Yes. And so I decided that it would be important to get inside technologies. When I did that, I began to realize that I was seeing the same components [00:23:00] again and again. So in some industrial system, say for pumping fresh air into coal mines, you'd see compressors taking in air and piping it down.
And again and again, you see piston engines or steam engines, or sometimes turbines, powering something. On the outside they may look very different; on the inside you are seeing the same things again and again. And I reflected that in biology, say in mammals, we have roughly the same numbers of genes. Very roughly, we have a Lego kit of genes, maybe 23,000 in the case of humans, slightly different for other creatures. [00:24:00] And these genes are put together to express proteins, and to express different bone structures, skeletal structures, organs, in different ways. But they're all put together, or originated, from roughly the same set of pieces; put together differently, expressed differently, actuated differently, they result in different animals. And I started to see the same thing with technology. So again, you take, maybe in the 1880s, some kind of threshing machine or harvester that worked on steam. Inside there'd be a boiler, there'd be cranks, there'd be a steam engine. If you looked into a railway locomotive, you'd see much the [00:25:00] same thing: boilers and cranks and a steam engine, and a place to keep fuel and to feed it with coal or whatever it was operating on. So once I started to look inside technologies, I realized it was the same set of things over and over, and technology ceased to be a mystery. And so the whole theme of what I was looking at, let me see if I can get this into one sentence: technologies are means to human purposes, normally created from existing components at hand. So if I want to put up some structure in Kuala Lumpur, a high-rise building, I've got all the pieces I need: pre-stressed concrete, whatever posts are needed to create [00:26:00] foundations, the kinds of bolts and fasteners that fasten together concrete, high-rise cranes and equipment, et cetera.
Assemblies made of steel to reinforce the whole thing and to make sure the structure stands properly. It's not so much that these are all standardized, but every technology, I thought, is made with pieces and parts, and they tend to come from the same toolbox, used in different ways. They may be used in Kuala Lumpur in slightly different ways than in Seattle, but the whole idea is the same. So technology then ceased to be a mystery. It was a matter of combining or putting together things from Lego sets. In [00:27:00] the UK, where I grew up, we'd call them Meccano sets. What are they called here? Erector sets? Well, I mean, Legos, or... there's metal ones. I think the metal ones are Erector sets. There's also the wood ones, those are Tinkertoys. Anyway, I like Legos. Okay, Legos then. And you could get different sorts of Lego sets: if you were working in high pressure and high temperature, there'd be different types of pieces; if you were working in construction, there'd be a different set of Lego blocks for that. I don't want to say this is all trivial. It's not a matter of just throwing these things together; there's a very, very high art behind it. But it is not these things being born in somebody's attic. In fact, [00:28:00] we're sitting here in what used to be Xerox PARC, and xerography was invented, not by Mr. Xerox, but by someone who knew a lot about processes, a lot about paper, a lot about chemical processes, a lot about developing things and shining light on paper, and then using that, maybe chemically at first, and in modern xerography, electrostatically.
And so what was born was really reflecting light off a known component, marks on paper, thinking of a copier machine, focused with a lot of lenses, [00:29:00] all well known, onto something that was fairly new, which was called a xerographic drum, which was electrostatically charged. And you arranged that the light affected the electrostatic charges on the drum, and as the drum revolved it picked up particles of printing ink, like dust, where it was differentially charged, and then imprinted that on paper and then fused it. All of those pieces were known. I think the man's name was Carlson, by the way. And it's not a matter of somebody working in an attic; that guy actually was more like that, but usually it's a small team of [00:30:00] people who see a principle to do something. They say, okay, you know, we want to copy something. All right, it could be a cathode-ray tube, and maybe you could project the image onto that, and then there might be electron-sensitive or heat-sensitive paper, and it could make copies that way. But certainly here at Xerox itself, or Xerox PARC, the idea was to say, let's use an electrostatic method, combined with powder and a lot of optics, to write on a xerographic drum and then fuse that under high heat into something where the particles stuck to paper. So all of those things were known and given. So I guess, sorry, there are so many different directions that I want to go. One: [00:31:00] on the idea of modularity for technology, it feels like there are almost two kinds of modularity. One is the modularity where you take a slice in time and you break the technology down into its different components.
And then there's almost a modularity through time, that progresses over time, where you have to combine different ideas, but those ideas are not necessarily contained in the technology, or there's precursor technology. For example, you have the moving assembly line, which was a technology originally for butchering meat. And so you had car manufacturing, [00:32:00] and then you had the moving assembly line, and then Henry Ford came along and sort of fused those together. And that feels like a different kind of modularity from the modularity of looking at the components of a technology. Do you think that they're actually the same thing? How do you think about those two types of modularity? I'm not quite sure what the difference is. So the Ford factory doesn't contain a slaughterhouse, right? It contains some components from the slaughterhouse, I guess. Let's see, as I think [00:33:00] through this, it feels like, when you think of the intellectual lineages of technology, a technology does not always contain the thing that inspires it. And so there's this kind of evolution over time of almost the intellectual lineage of a technology, which is not necessarily the same as the direct evolution of the final components of that technology. Does that make sense? Or am I seeing a difference where there is no difference, which could be completely possible? Well, I'm not sure. I think maybe the latter. Let me see if I can explain the way I see it; please stop me again if it [00:34:00] doesn't fit with what you're talking about.
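Arthur's picture of technologies as combinations drawn from a shared toolbox, where each new combination then joins the toolbox itself, can be expressed as a toy model. This is my own illustrative sketch of the idea, not code from the book, and the component names are made up:

```python
# Toy model of combinatorial evolution (illustrative only):
# every new technology is a combination of components already in the
# toolbox, and once built, it becomes a component available for reuse.

toolbox: dict[str, tuple[str, ...]] = {
    "boiler": (), "piston": (), "crank": (), "wheel": (),  # primitives
}

def invent(name: str, parts: tuple[str, ...]) -> None:
    """Register a new technology built only from existing components."""
    assert all(p in toolbox for p in parts), "parts must already exist"
    toolbox[name] = parts  # the invention joins the toolbox

def primitives_of(name: str) -> set[str]:
    """Recursively unpack a technology down to its primitive components."""
    parts = toolbox[name]
    if not parts:
        return {name}
    return set().union(*(primitives_of(p) for p in parts))

invent("steam_engine", ("boiler", "piston", "crank"))
invent("locomotive", ("steam_engine", "wheel"))
invent("steam_harvester", ("steam_engine", "wheel"))  # same parts, new purpose

print(sorted(primitives_of("locomotive")))
# ['boiler', 'crank', 'piston', 'wheel']
```

The point the model captures is the one in the transcript: open up a locomotive and a steam harvester and you find the same boilers, cranks, and steam engines, because both are different arrangements from the same toolbox, and each new technology expands the set of building blocks for the next.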
I became fascinated by the whole subject of invention, you know, where radically new technologies come from, not just tweaks on a technology. So we might have a Pratt & Whitney jet engine in 1996, and then 10 years later have a different version of that with some different components. That's fine; that's innovation, but it's not really invention. Invention is something that's quite radical. You go from air piston engines, which are like standard car engines, driving propeller systems in the 1930s, and that gets replaced by a jet engine system working on a different principle. So the question really is... I've [00:35:00] begun to realize that what makes an invention is that it works on a different principle. So when clocks came along, the really primitive ones in the 1200s, or a bit later than that, were usually made as water clocks, relying on the idea that a drip of water is fairly regular if you set it up that way. And around the time of Galileo, in fact Galileo himself, people realized that the pendulum had a particular regular beat, and if you could harness that regularity, that might turn into something that can measure time: a clock. And that's a different principle. The principle is to use the idea that something on the end of a string, or on the end of a piece of wire, gives you a regular [00:36:00] frequency, a regular beat. So I came to realize that inventions themselves are something carrying out a necessary purpose using a different principle. Before the Second World War, in Britain, in the mid-1930s, people got worried about aircraft coming from the continent.
They thought it could well be Germany, and bombers coming over to bomb England. The standard method then to detect bombers over the horizon was to get people with incredibly good hearing — quite often blind people — and attach to their ears an enormous ear-trumpet affair that went from the ear to some big concrete collecting amplifier, a trumpet maybe fifty or a hundred [00:37:00] feet across, to listen to what was going on in the sky. A few years later, in the mid-thirties, they began to look for something better, and drew on a fact well known in physics by then: if you bounce a very high frequency electromagnetic beam off, say, a piece of metal, the metal distorts the beam — it echoes back, and you get distortions. Something made of wood three miles away wouldn't have that effect, but if it was metal, it would. So that's a different principle: you're not listening, you're actually sending out a beam of something and then trying to detect the echo. And from that you get radar. How do you create such a beam? How do [00:38:00] you switch it off very fast so you can listen for the echo electronically? How do you direct the beam, et cetera? How do you construct the whole thing? How can you get a very high energy beam — because it needed to be very high energy? These are all problems that had to be solved. So what I began to see was the same pattern: an invention usually begins with an outstanding problem — how do we detect enemy bombers that might come from the east, from the continent? How do we produce a lot of cars more efficiently? — and then finding some principle to do it, meaning the idea of using some phenomenon. In the case of ear trumpets it was acoustic phenomena, which could be greatly amplified for somebody's ear if you directed them into a big [00:39:00] concrete collector.
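The "switch it off very fast" sub-problem mentioned above is, at bottom, a speed-of-light timing constraint: the echo from a target returns after twice the range divided by c. A minimal sketch of that arithmetic (the ranges are chosen purely for illustration):

```python
# Back-of-envelope: how long does a radar pulse take to reach a target
# and echo back? The transmitter must stop emitting well before this,
# or the faint echo is drowned out by the outgoing signal.
C = 299_792_458.0  # speed of light, m/s

def echo_delay_seconds(range_m: float) -> float:
    """Round-trip time for a radar pulse to a target and back."""
    return 2.0 * range_m / C

for km in (1, 10, 100):
    t = echo_delay_seconds(km * 1000)
    print(f"target at {km:>3} km -> echo returns after {t * 1e6:8.1f} microseconds")
```

Even a target 100 km out answers in under a millisecond, which is why the switching had to be electronic rather than mechanical.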
In the case of radar: ways to put out high frequency radio beams and listen for an echo. Once you have the principle, it turns out there are sub-problems that go with it. For radar: how do you switch the beam off — things are traveling at the speed of light — fast enough that the echo isn't drowned out by the original signal? So then you're into another layer, solving another problem. And an invention, usually — well, I could talk about other ways to look at it, but my way of looking at an invention is that there is nearly always a strong social need. What do we do about COVID? The time is, [00:40:00] say, February or March 2020. Oh — we can do a vaccine. The vaccine might work on a different principle, maybe messenger RNA rather than the standard sort of vaccines. So you find a different principle, but even getting that to work brings its own sub-problems. And then, with a bit of luck and hard work, usually over several months or years, you solve the sub-problems, you manage to put all that in material terms — not just conceptual ones — and make it into some physical thing that works, and you have an invention.

And to double-click on that: couldn't you argue that the solutions to those sub-problems are also in themselves inventions? So it's just inventions all the way down. [00:41:00]

A great point — I hadn't thought of that. Possibly: if the sub-solutions need to use a new principle themselves, then you'd have to invent how they might work. But very often they're standing by. Let me give you an example — I hope this isn't too technical here. Please, go for it. Here we go, then. It's 1972, here at Xerox PARC where I'm sitting, and the engineer — Gary Starkweather is his name — is a brilliant engineer, trained in lasers and optics, PhD and master's degrees, really smart guy.
And he's trying to [00:42:00] figure out how to print. If you have an image in a computer, say a photograph, how do you print it? Now at that time — in fact, I can remember that time — there were things called line printers, like huge typewriter systems. There was one central computer; you put in your job, the output was figured out on the computer and sent to a central line printer, like a big industrial typewriter, which clanked away on paper, and somebody tore off the paper and handed it to you through a window. Starkweather wondered how you could print text, but more than that, images, without using a typewriter — it's very hard to do with typewriters, and very slow, if you want images. So he [00:43:00] cooked up a principle. He went through several principles, but the one he finished up using was the idea that you could take the information from the computer — say a photograph — and use computer processors to send it to a laser, whose beam would be incredibly highly focused. He realized that if he could use the laser beam to — the jargon is "paint" — paint the image onto the Xerox drum, so that it electrically charged the drum, then toner particles would stick to the drum in the charged places, and the rest would be xerography, like a copier machine. He was working at Xerox PARC, so [00:44:00] this was not a huge leap of the imagination. But there were two immense sub-problems as well, worth mentioning — two huge problems. You're trying to get these black dots written onto the Xerox drum, to paint them onto the drum. I hope this isn't too obscure. No, this is great — and I'll include some pictures. All right. So suppose I'm writing or painting a photograph from the computer, through a processor, sent to a laser. The laser has to be able to switch on and off fast.
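How fast is "fast"? The required rate falls out of dots per page times pages per second. A back-of-envelope sketch with assumed figures — 600 dpi and 1.5 pages per second are illustrative values, not the historical design numbers:

```python
# Rough arithmetic behind the modulation-rate problem: a laser printer
# must make one on/off decision per printed dot. The resolution and
# page rate below are assumed for illustration only.
def modulation_rate_hz(dpi: int, page_w_in: float, page_h_in: float,
                       pages_per_second: float) -> float:
    dots_per_page = (dpi * page_w_in) * (dpi * page_h_in)
    return dots_per_page * pages_per_second

# 600 dpi letter paper (8.5 x 11 inches) at 1.5 pages/second:
rate = modulation_rate_hz(600, 8.5, 11.0, 1.5)
print(f"{rate / 1e6:.0f} million on/off switches per second")
```

The point of the sketch is only that plausible resolutions and page rates land in the tens of megahertz, which was far beyond the switching speeds anyone had contemplated.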
If it's going to write this on a Xerox drum — and if you work out how fast it would have to operate commercially — Starkweather came to the conclusion he'd have to be able to switch his [00:45:00] laser on and off, black or white, fifty million times a second. Okay, so 50 megahertz. But nobody had thought of modulating, of doing that sort of switching, at that speed. So he had to solve that; that's a major problem. He solved it with circuitry: he got a sort of piezoelectric device — don't ask — an electronic device that could switch on and off, and he could send signals to that modulator to switch the laser on and off, making it black or white as needed. So that was number one. Now that, in your terms, required an invention — he had to think of a new principle to solve that problem. So: how do you take computer images and [00:46:00] print them onto paper? That required a new principle. Switching a laser on and off fifty million times a second required a new principle. So those are two inventions. There's a third one, another sub-problem. The device he got to do this with, by the way, was as big as one of these rooms in 1972. If I have the numbers right, a decent laser would cost you about $50,000, and you could have bought a house for that in 1978 here. And it would be the size not of a house, but of a pretty big lab — not something inside a tiny machine, but an enormous apparatus. So how do you take [00:47:00] a laser on the end of some huge apparatus that you're switching on and off fifty million times a second, and scan it back and forth? There's huge inertia — it's an enormous thing. And believe it or not, he solved that — not with smoke, but with mirrors.
So instead of moving the laser beam, he arranged a series of mirrors on a revolving piece of apparatus. All he had to do was point the beam at the mirror and switch it on and off very quickly for the image, and the mirror would direct it — kind of like a lighthouse beam — right across the page. Then the next [00:48:00] facet of the mirror would come along and do the next line. So how do you do that? Well, that part was easier. But then he discovered that the different facets on this mirror would have to line up to an extraordinarily high precision — higher than you could manufacture them to. So that's another sub-problem. To solve that, he used optics. Here's one facet of the mirror, here's the beam; it directs the beam right across the page, switching it off and on as need be; then the next facet of the mirror comes round and directs the same beam — and you need the facets to line up extraordinarily precisely. You couldn't do it [00:49:00] with manufacturing technology, but you could do it with optics, which just says: okay, if there's a slight discrepancy, we will correct it. He was very good at optics — he really knew what he was doing with optics in the lab. So using different lenses — condensing lenses, whatever lenses do — he solved that problem. It took two or three years, and it's interesting to look at the lab notebooks he made. But let me see if I can summarize this. There is no such thing as Gary Starkweather scratching his head saying, wouldn't it be lovely to be able to print images off the computer and not have to use a big typewriter — and then sitting in his attic by himself for three months and coming up with the solution. Not at all. What he did was envisage a [00:50:00] different principle: writing the image, using a highly focused laser beam, onto the Xerox drum.
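The rotating-mirror idea described above trades a fast mechanical sweep for a steady spin: each facet of the polygon paints one scan line across the page per pass. A tiny sketch of the scan-rate arithmetic — the facet count, spin rate, and line count are made-up illustrative values:

```python
# A rotating polygon mirror sweeps one scan line per facet per pass,
# so lines/second = facets * revolutions/second. Values are illustrative.
def scan_lines_per_second(num_facets: int, revs_per_second: float) -> float:
    return num_facets * revs_per_second

def page_time_seconds(lines_per_page: int, num_facets: int,
                      revs_per_second: float) -> float:
    """Seconds to scan a full page of `lines_per_page` lines."""
    return lines_per_page / scan_lines_per_second(num_facets, revs_per_second)

# An 8-facet mirror at 200 revolutions/second gives 1600 lines/second,
# so a 6600-line page takes 4.125 seconds:
print(scan_lines_per_second(8, 200.0))
print(page_time_seconds(6600, 8, 200.0))
```

The design choice is that only the compact mirror spins; the bulky laser and its modulator stay fixed.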
The rest then is just a copier machine. But to do that, you have to switch the laser beam on and off — a problem — so at a lower level he had to invent a way to do that. And he also had to invent a principle for scanning this beam across the Xerox drum, maybe fifty times a second, or maybe a hundred, without moving the entire apparatus — and the principle he came up with for that was mirrors. And then I could go down to another level: you have to align your mirrors. So what I discovered — and let me see if I can put this in a nutshell — is that [00:51:00] invention isn't doing something supremely creative in your mind. It may finish up that way, and it might be very creative, but all invention is basically problem-solving. To do something more mundane: imagine I live here in Palo Alto, I work in the financial district in San Francisco, and my car's in the shop getting repaired. How am I going to get to work, or how am I going to get my work done tomorrow? I have no car. The level of principle is to say: okay, I can see an overall concept to do it with. So I might say, all right, if I can get to Caltrain, if I can get to the station, I'll go in on the train. But hang on — how do I get to the station? That's a sub-problem. [00:52:00] Maybe I can get my daughter or my wife to drive me. At the other end I can get an Uber, or I could get a colleague to pick me up, but then I'd have to get up an hour earlier. Or maybe I'll just sit at home and work from home, which is more the solution we would use these days. But how will that work? Et cetera. Invention is not much different from that. In fact, that's the heart of invention.
If we worked out that problem of getting to work when your car is gone, nobody would stand up and say this was brilliant — yet you've gone through exactly the same process as the guy who invented the polymerase chain reaction. Again, I can't recall his name; I'm getting older, I can't [00:53:00] retrieve it. But anyway: what's really important in invention — and I think this goes to your mission, if I understand it rightly — is that the people who produce inventions are people who are enormously familiar with what I would call functionalities. How do you align beams using optical systems? How do you switch lasers on and off fast? The people who are fluent at invention are always people who know huge amounts about those functionalities. I'm trained as an electrical engineer. What are you? I'm trained as a mechanical engineer — robotics. Oh, brilliant. So what's really important [00:54:00] in engineering — at least what they teach you, apart from all that mathematics — is to know certain functionalities. You can use capacitors and inductors to create electronic oscillations, regular waves. You can straighten out a varying voltage by using induction in the system. You can store energy in capacitors. You can deflect a beam using magnets. There are hundreds of such things — you can amplify things, you can use feedback to stabilize things. So there are many functionalities, and learning engineering is a bit like becoming fluent in this set of functionalities, not learning anything that's semi-[00:55:00]creative. What might that be like? Like learning to do plumbing, learning to work as a plumber. It's a matter of becoming fluent: you want to connect pipes, you want to loosen pipes, you want to unclog things, you want to re-route the piping or pumping system, you want to add a pump. So there are many different things you're dealing with.
Flows of liquids, usually, and piping systems and pumping systems and filtration systems. So after maybe three to four years, or whatever it would be for a real apprenticeship in this, not only can you do it, but you can do it unthinkingly. You know the exact gauges, you know the pieces, you know the parts, you know where to get the parts, you know how to set them up, and you look at [00:56:00] some problem and say: oh, okay, the real problem here is that the piping diameter is wrong; I'm going to replace it with something a bit larger, and here's how I do that. Being good at invention is no different. People like Starkweather — Starkweather, I think, is still alive — know all about mirrors and optical systems; above all, he knew an awful lot about lasers, and he knew a lot about electronics. He was fluent in all those. If we're not fluent ourselves, we stand back and say, wow, how did he do that? But it's a bit like somebody writing a poem in French — let's say I don't speak French — and it works: how did he [00:57:00] do that? But if I spoke French, I might see.

Okay, yeah — so this actually touches on an extension of your framework that I wanted to run by you. What you were just describing, I would describe as the affordances and constraints of different pieces of technology — people who invent things being intimately familiar with the affordances and constraints of different technologies, different systems. And the question I have, which I think is an open question, is whether there is a way of describing or encoding these affordances and constraints [00:58:00] in a way that makes creating these inventions easier. Very often what you see is someone who knows a lot about
the affordances in one discipline, and they come over to some other discipline and say: wait a minute, there's an analogy here. You have this constraint over here — there's a sub-problem — and I know, from the affordances of the things I'm really familiar with, how to actually solve that sub-problem. So through this framework of modularity and constraints and affordances, is it possible to make the process easier, or less serendipitous?

Yeah, in a couple of ways. One is that I [00:59:00] think quite often you see a pattern where some principle is borrowed from a neighboring discipline. So — you were saying that Henry Ford took the idea of a conveyor belt from the meat industry, and by analogy used the same principle for manufacturing cars. But to get that to work in the car industry, the limitations are different: cars are a lot heavier. A whole side of beef is probably 300 pounds, or whatever it would be for a side of beef, but a car could be a ton and a half. So you have to think of different ways. In the meat industry there are two ways to do conveyor belts: you can have a standard belt — a rubber thing or whatever it would be — just moving along at a certain speed, or you [01:00:00] can have the carcass suspended from an overhead belt working with a chain system, the carcass cut in half or whatever and suspended, so you can work on it hanging pretty much vertically in front of you. It was that second system that tended to get used for cars. So things don't translate; principles translate from one area to another, and that's a very important mechanism. And so if you wanted to enhance innovation, I think the thing would be to set up some institution, or some way of looking at things, that asks:
there are well-known principles for doing this in industry X — how would I do something equivalent in a different industry? For [01:01:00] example, blockchain is basically — let's say it's a way of validating transactions that are made privately between two parties without using an intermediary, like a bank. And you could say, well, here's how this works with Bitcoin trading or something, and somebody could come along and say: okay, I want to validate art sales using maybe some similar principle. I don't want to have to go to some central authority and record it there, so maybe I can use blockchain to do fine art sales. In fact, that's happening. So basically you see an enormous amount of analogous transfer of principles from [01:02:00] one field to another. And we tend to talk about inventions being adopted — at least we do in economics. So you could say the art trading system adopts blockchain, but it's not quite that; it's something more subtle. A new principle, or a new fairly general technology, comes out — say blockchain — and then different industries, or different sets of activities, encounter it. They don't adopt it; they encounter it. Say the medical insurance business: I can record transactions this way, I don't have to involve an intermediary, and in particular I don't have to go through banking systems; I can do it this way and then [01:03:00] inform insurance companies. So they're encountering it and wondering how they can use this new principle — but when they do, they're not just taking it off the shelf. They're actually incorporating it into what they do. So here's an example: GPS comes along quite a while ago — the 1970s, I'm sure — in principle using atomic clocks on satellites.
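The positioning idea in GPS reduces, geometrically, to trilateration: known beacon positions plus measured distances pin down a location. A minimal 2-D sketch with made-up coordinates — real GPS works in 3-D with a receiver clock-bias term and relativistic corrections; this is only the geometric core:

```python
import math

# Minimal 2-D trilateration: given exact distances to three beacons at
# known positions, recover the receiver's position. Subtracting the
# circle equations pairwise yields two linear equations in (x, y).
def trilaterate(p1, p2, p3, d1, d2, d3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # nonzero if the beacons are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A receiver at (3, 4) measured against three beacons at known points:
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist((3.0, 4.0), b) for b in beacons]
print(trilaterate(*beacons, *dists))  # approximately (3.0, 4.0)
```

With noisy, clock-biased distances the real system solves an over-determined version of this by least squares, which is why four or more satellites are needed.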
Basically it's a way of recording time exactly, using multiple satellites whose positions are known exactly at the same time, and allowing for tiny effects even of relativity; you can triangulate and figure out where something is precisely. Now, that just exists. But by the [01:04:00] time different industries — say ocean freight shipping — encounter it, they're not just saying: I'm going to have a little GPS unit in the binnacle. It's actually built in, and it becomes part of a whole navigational system. So what happens in things like that is that some invention, some new possibility, becomes a component in what's already done. Just as in banking around the 1970s: being able to process customer names, client names, and monetary amounts — you could process that fast with electronic computers, and in those days they were [01:05:00] called data processing units; we don't think of it that way now. And that changed the banking industry significantly. So by 1973 there was a market in futures in Chicago, where you were dealing with, say, pork belly futures and things like that, because computation had come along. Interesting. So the pattern is: an industry exists using conventional ideas, a new set of technologies becomes available, but the industry doesn't quite adopt it — it encounters it and combines it with many of its own operations. Banking had been recording people in ledgers and with machinery; it had been facilitating transactions, [01:06:00] maybe on paper; now it encounters computation and can do that automatically. So some hybrid thing is born out of banking and computation, and that goes into the Lego set.

Actually, related to that, something I was wondering: do you think of social technology as technology? Do you think it follows the same patterns?
What do you mean, social technology?

I think a very obvious one would be, for example, mortgages. Mortgages had to be invented, and they allow people to do things they couldn't do before — but it's not technology in the sense of something built. Exactly — you can create a mortgage with just you and me and a piece of [01:07:00] paper, but it's something that exists between us. Or democracy. So I feel like on one end there are things like new legal structures or new financial instruments that feel very much like technology, and on the other end there are just vague new social norms.

Yeah — great question, and it's something I did have to think about. So, things like labor unions, nation states — Yes, exactly — democracy itself, and in fact communism: all kinds of things get created that don't look like technologies. They don't have the same feel as physical technologies; they're not humming away in some room or other, they're not under the hood of your [01:08:00] car. And things like insurance for widows, and pension systems — there are many of those social technologies, even things like Facebook, platforms for exchanging information. Sometimes, very occasionally, things like that are created by people sitting down and scratching their heads. That must have happened to some degree in the 1930s, when Roosevelt said there should be a social security system. But that wasn't invented from scratch either. So what tends to come about in this case — just to get at the nitty-gritty here — is that some arrangement happens. Somebody — it could have been a feudal lord — says: okay, you're my trusted gamekeeper, you can have a [01:09:00] rather nice house on my estate. You haven't got the money to purchase and build it.
I will lend you the money, and you can repay me as time goes by. And in fact, so many of those things have French names — mortgage, "mort-gage," which I think is literally the pledge of something dying, as far as my school French would go, I don't know. A lot of those things came about in the Middle Ages. There are other things, like what happens when somebody dies — probate. Again, these are all things that go back for centuries and centuries. I believe the way they come about is not by deliberate invention. They come about by being the natural thing to do in [01:10:00] some situation, and then that natural thing is used again and again, it gets a name, and then somebody comes along and says: let's institutionalize this. I remember reading somewhere about the Middle Ages: there was some guild of traders, and they didn't feel they were being treated fairly — I think this was in London — so they decided to withhold their services. I don't know what they were supplying; it could have been carriage transport along the streets or something, and some of these people were called valets — again, very French. So they withheld their services. Now, that wouldn't have been the first time — [01:11:00] it goes back to ancient Egypt, people withholding their services — but it gets into circulation as a meme, as some repeated thing. And then somebody says: okay, we're going to form an organization, and our guild is going to take this on board as a usable strategy, and we'll even give it a name. It came to be called going on strike, or striking. So social invention takes place just by something being the sensible thing to do. The grand lord gives you the money to build your own house, and then you pay that person back over many years [01:12:00] and put that loan to its death — mortgage it.
So I think what happens with these social inventions is that the sensible thing to do gets a name, gets instituted, and then something's built around it.

Well, one could also say that many inventions are also the sensible thing to do — someone realizes, oh, I can use this material instead of that material, or some small tweak that then enables a new set of capabilities.

In that case I wouldn't really call it an invention. The vast majority of innovations — 99-point-something-something-9 percent — are tweaks: you know, [01:13:00] we'll replace this material.

Well, why doesn't that count as an invention? If it's a different material — I guess, why doesn't that also count as a new principle? It's bringing a new principle to the thing.

The way I define a principle: a principle is the idea of using some phenomenon. And so you could say there's a sliding scale, if you insist. Up until about 1926 or 1930, aircraft were made of wooden lengths covered with doped canvas, the dope giving you waterproofing and so on. Then a different way of doing that came along, when they discovered that with better engines you could have heavier aircraft, so you could make the skeleton out of [01:14:00] metal, and then the cladding might be metal as well. And so you had modern metallic aircraft. There's no new principle there, but there is a new material. You could argue the new material is a different principle, but then you're just talking about linguistics.

So you would not consider the transition from cloth aircraft to metal aircraft to be an invention?

No. It might be a big deal, but I don't see it as a major invention. Going from air-piston engines to jet engines — that's a different principle entirely. So I have a fairly high bar for different principles.
You're not using a different phenomenon — that's my criterion. If you have a very primitive clock [01:15:00] in the 1620s or 1640s that uses a string and a bob on the end of the string, and then you replace the string with a wire or a rigid piece of metal, you're not really using a new phenomenon, but you are using different materials. And much of the story of technology isn't inventions — it's these small but very telling improvements in materials. In fact, jet engines weren't very useful until you got combustion systems where you were putting in aircraft fuel, atomizing it, and setting the whole thing on fire; the early systems burned out. When you got better materials, you could make it work. So there's a difference between a primitive technology and [01:16:00] one that's built out of better components. I would say something like this: if you take what the car looked like in 1905, is it a different thing from using horses? Yes, because it's automotive — there is an engine, and it's built in. So for my money, it's using a different principle.

What if you took the horse and put it inside the carriage — built the carriage around the horse? Would that be automotive? Or what if I had a horse on a treadmill, and the treadmill was driving the wheels of the vehicle with the horse on it?

Then I think it would be less of an invention. I don't know — I mean, I find it very useful to say that [01:17:00] radar uses a different principle from people listening. You could say, well, people listening are listening for vibrations, and so is radar — just electromagnetic vibrations — so what's different? For my money, it's not so much around the word "principle." All technologies are built around phenomena that they're harvesting, or harnessing, to make use of.
And if you use a different set of phenomena in a different way, I would call it an invention. So if you go from a water wheel, which is using water and gravity to turn something, and you say, I'm using a steam engine — you [01:18:00] could argue, well, aren't you using a phenomenon in both? In the first you're using the weight of water and gravity and the fact that you can turn something; in the second you're using the different principle of heating something and having it expand. So I would say those are different principles. And if you ask, well, what makes a different principle, I'd go back to: what phenomena are you using? I mean, if you wanted to be part of a philosophy department, you could probably question every damned thing.

Yeah — I'm actually not trying to challenge it from a semantic standpoint. It's really about understanding what's going on. I think there's actually a debate about whether [01:19:00] it's a fractal thing, or whether there are multiple different processes going on.

Maybe I'm just too simple, but when I started to look at invention, the state of the art was pathetic. It wasn't very good, because all the papers — all the accounts of invention I was reading — had a step where something massively creative happens, and that wasn't very satisfactory. And then there was another set of ideas that were Darwinian: if you have something new, like the railway locomotive, it must have come out of variations somehow happening spontaneously, variations that might have been sufficiently different to qualify as radically new inventions. That doesn't do it for me either, because, you know, in 1930 you could have varied [01:20:00] radio circuits until you were blue in the face — you'd never get radar.
So what a technology fundamentally is, is the use of some set of phenomena to carry out some purpose. There are multiple phenomena, but I would say — maybe speaking slightly too loosely — that the principal phenomenon you're using, the key phenomenon, constitutes the concept or principle behind that technology. If you have a sailing ship, you could argue: well, it displaces water, it's built not to take on water, it's got a cargo space. But actually, for sailing ships, the key principle is to use the motive power of wind in clever ways to propel a [01:21:00] ship. If you're using steam and take the sails down, you're using, in my opinion, a different principle, a different phenomenon: you're not using the motive power of wind, you're using the energy in some fuel — coal or oil — in clever ways to move the ship. So I would see those as two different principles. You could say, well, we also changed the steering system — does that make it an invention? It makes maybe that part of it an invention. But overall, the story I'm giving is that inventions come along when you see a different principle — a set of phenomena that you want to use for some given purpose — and you manage to solve the problems to put it into reality.

Yeah, I completely agree [01:22:00] with that. The thing I'm interested in — again, going back to that modular view — is that many layers down, the tinkering, the innovations, are also based on changing the phenomena that are being harnessed, but much farther down the hierarchy of modularity. Like in sailing ships: you introduce lateen sails, and you've invented a new sail system — you haven't invented a new kind of ship. Right?
So you've changed the phenomenon, but yeah, I think the distinction you're making is totally on target. When you introduced lateen sails, you invented a new sail system. Right. [01:23:00] But you haven't invented a new principle of a sailing ship. It's still a sailing ship. So I think you're getting into details that are worth getting into. At the time I was writing this, I was trying to distinguish... I'm not trying to be defensive here, I hope, and bear with me, I haven't thought about this for 10 years or more. I think what was important... let's frame the whole thing: people said innovation happens, but nobody's quite sure what innovation is. We have a vague idea: it's new stuff that works better. Yes. In the book I wrote, I make a distinction around radically new ways to do something. So it's radically new to propel the ship by a [01:24:00] steam engine, even if you're using paddles, versus by wind flow. Okay. However, not everything's radically new. If you look at any technology, be it computers or cars, the insides change. The actual carburetor system in the 1960s would have been like a perfume spray, spraying gasoline and atomizing it, and then setting that alight. Now we might have some sort of turbo-injection system that's working, maybe not with a very different principle, but working much more efficiently. So you might have a technology whose insides are changing enormously, but the overall idea of that [01:25:00] technology hasn't changed much. Radar would be a perfect example. So would the computer: the computer kept changing its inner circuitry, the materials it's using, and those inner circuits have gotten an awful lot faster, and so on. Now, you could take a circuit out and you could say, well, sometime around 1960, the circuits ceased to be what they were.
Certainly the circuitry ceased to be triode vacuum tubes and became transistors mounted on boards. But then sometime in that decade it became integrated circuits. Was the integrated circuit an invention? Yeah, at the circuit level. At the computer level, it's a better component. Yeah. So I hope that answers it. It absolutely does. I guess, as actually a sort of a closing question: is there work that you [01:26:00] hope people will do based on what you've written? Is there a line of work that you want people to be doing, to take the framework that you've laid out and run with it? Because I guess I feel like there's so much more to do. Yeah. Do you have a sense of what that program would look like? What questions are still unanswered in your mind that you think are really interesting? I think that's a wonderful question. Off the record, I'm really glad you're here, because it's like visiting where you grew up. I am. I'm the ghost of books past. Oh, I don't know. I mean, it's funny, I was interviewed a month or two ago on [01:27:00] this subject. I can send you a link if you want. Please, yeah. I listen to tons of podcasts. Anyway, I went back and read the book and thought, wow, I'm really smart. Well, it had that effect. And then I thought, well, God, you know, it could have been a lot better written. It had all sorts of different things. And the year this was produced, Free Press in New York, actually Simon & Schuster, put it up for a Pulitzer Prize. That really surprised me, because I didn't set out to write something well-written; I just kept clarifying the thing. But to come back to your question, my reflection is this: the purpose of my book was to actually look inside technologies.
So [01:28:00] when you open them up, meaning when you look at the inside components, how those work, ultimately the parts of a technology are always using some phenomenon. You know, we can ignite gasoline in a cylinder in a car, and that will expand rapidly and produce force. So there are all kinds of phenomena. These were things I wanted to look at. And yeah, that book has had a funny effect. It has a very large number of followers, meaning people have read it and gotten a feel for technology, and they're grateful that somebody came along and gave them a way to look at technology. Yeah. But, let me just say it carefully, I've done other things in research [01:29:00] that have had far more widespread notice than this. And I think it's that the study of technology, as I was saying earlier on, is a bit of a backwater in academic studies. Yeah. It's eclipsed, or, is that the word, dazzled, by science. So I think it's very hard to... if something wonderful happens, we put people on the moon, we come up with artificial intelligence, somehow, vaguely, that's supposed to be done by scientists. It's not; it's done by engineers, who are very often highly conversant both with science and mathematics. But as a matter of prestige, a [01:30:00] lot of what should have been theories of technologies, where they come from, has sort of gone into theories of science. And I would simply point out: no technology, no science. You can't do much science without telescopes, crystallography, X-ray systems, microscopes. So yeah, you need all of these technologies to give you modern science. Without those instruments, we'd still have technology, we'd still have science, but at the level of the Greeks, which would be a lot of conceptual ideas about how the world works. Anyway, to my surprise, this book came out, The Nature of Technology, 2009, I think. Yeah, August.
So it's 12 years old, [01:31:00] and there was a lot of fuss about it at the time. And then it was kind of like a submarine that appeared and then dove. Everything was quiet for a period. There has been a renewed interest in it this last year or so. I have no idea why. I suspect, and I'm trying to keep my own ego out of this, not very well, but I suspect a lot of it is... yeah. I think that, to start with and to finish with, it has not been fully accepted that technology is really a worthwhile entity to be thought about in its own right. It's more that, oh yeah, well, we have technology, what more do we want? Well, we can talk about trading [01:32:00] systems. Well, isn't that economics? Well, we could talk about things like financial derivatives, which I see as technologies. Well, isn't that part of finance? So we tend to subsume these into other fields. Or maybe we can talk about high-rise steel and concrete buildings a hundred or more years ago. Well, isn't that architecture? And so on. But actually there hasn't been sufficient attention paid to technology in its own right. And so there's been a lot of attention paid to this book, but not so much... I thought it might help give some impetus to getting a field of study for technology going, and it didn't. Not yet. And now I cherish a feeling that after I'm gone this thing will be discovered. [01:33:00] This is a very fancy comparison, but I'm thinking of, gosh, Mendel, Gregor Mendel. Yes. Sorry. Okay. Mendel had a theory of genetics, and by the time it could properly develop, you know, it was too late for him. So I don't know, it's a bit of a mystery to me. But I do want to stress one or two things that we didn't mention here. We are moving into, or leaving, a system in the economy where, say 50 years ago, most things in the economy were produced in factories, and I'm thinking of General Foods or even General Motors.
We had inputs to the factory system, we'd manufacture, [01:34:00] then we'd have outputs. And some of the outputs, say rolled steel, would be inputs to other factory systems. And then we got a service economy. But now we're moving into an economy that has an awful lot of autonomous functionalities. You used the word affordances, and I think that's right; I hadn't thought of that, it's a good word. A functionality is something that does something for you. So being able to navigate your car with a GPS, that's a functionality. And a lot of those, not everything, but we're seeing the economy become more autonomous. So everything from trading systems to, pretty soon, air traffic control systems: autonomous, no human beings involved, supposed to be a lot safer. Similarly, driverless convoys of trucks, [01:35:00] et cetera. I think if you want to understand those properly, you need a very good understanding of technology, of how autonomous systems can work, and of where technology has come from. I would simply say that technology is a major part of what we've achieved as humans, and we need to understand it. Just as 300 years ago we didn't really look very closely at the insides of creatures, animals, or species; by the 1700s we were well underway doing that, and we learned a huge amount. I think we need to do the same with technology. I think technology is very much part of what makes us human. I do think that many technologies, let's say the kind of social [01:36:00] platform technologies, think of Facebook, or Google for that matter, or another platform technology, Uber. These are technologies where you can dial into the technology and use its services, maybe as a passenger in Uber, or maybe as someone recording information on Facebook. Technology is resisted in many ways because it can produce really nasty things: war, the automation of war, et cetera.
But I would like to point out that many social technologies, platform technologies like Facebook, are neutral. It's what you put in there. They're like pipelines: what you send along the pipes differs. It can be benign, it can be [01:37:00] wonderful. You know, I'm a great consumer of late-night detective shows, so that's all coming along the platform of Netflix. But you can equally use those pipelines to send really negative stuff along. So I think we need to be careful. We can't just say technology is wonderful, or technology is bad. For the most part, technologies are in some intermediary position between us as humans and the earth, which produces phenomena. It produces metals, if we understand those phenomena; it produces optics; it produces electrical phenomena, magnetic phenomena; and we've learned to harness all those. So they're in the middle, and what they're used for is something [01:38:00] not determined very well. It could be nefarious, or it could be wonderfully beneficial. I used to teach classes here in economic development, and so I had to face the problem in the first lecture: is economic development, making an economy grow, say in Syria or Jordan, good or bad? And you could argue many ways. But one thing I think is unarguable is that technology has allowed us to live healthier and longer lives. And so I'd come back to the demographic element: morbidity is lower, meaning by and large we're much healthier at a given age, and we and our children are living longer. And I know that if I went back a century and looked at my [01:39:00] family, my grandmother died over a hundred years ago of something that would be perfectly treatable now. Yeah, pernicious anemia, et cetera.
So, at the least... I have mixed feelings about technology. The humanist part of me... but the more practical engineer would say, well, you know, maybe you can criticize technology, but you might be doing it with a swimming pool in your backyard, with a Volvo in the driveway, with your smartphone in your hip pocket, and with your children all alive, and a hundred years ago none of those would have been the case. Is that so [01:40:00] bad? Well, yeah, but we have to be careful. I want to mention one thing if I may, and yeah, this is your platform. The one thing I want to mention, coming back to the idea that technology evolves, is that it evolves in the following way: new technologies, by and large, are constructed from existing technologies. You can't really make a new technology unless you have the components to put it together. So the jet engine is made out of compressors and turbines and combustion systems; those all already existed. And then a new technology becomes available to be a component in some other systems. So jet engines are available; they power [01:41:00] jet aircraft, et cetera. And new amplifier circuits around 1912, using triode vacuum tubes, become available to power radio receivers and radio transmitters, and you get a broadcasting system. So, building blocks: each new technology is, in principle, a building block for use in further new technologies. It's as if your Lego set every so often gives you a new block that has its own interesting possibilities. Sometimes that's a one-off. The Solvay process, I think it produces, what is it, sodium carbonate? Or is this... I thought, the Solvay process, isn't that for aluminum? I can look it up. Not to worry. [01:42:00] Well, you could take the Haber process or any... yeah. So the Solvay process, oh, it is for obtaining soda ash. Right? Okay.
So, yeah, the Solvay process produces sodium carbonate. But the Solvay process isn't something that is central to a thousand other technologies; it's probably useful in a few hundred other ones. Whereas something like the transistor comes along, or even the laser, around 1960. I remember seeing a headline: this was a solution in search of a problem, the laser. Now it's used, I wouldn't say in everything, but in many, many, many uses. So what I want to point out is there is a mechanism of [01:43:00] evolution in technology: if you take the whole collection of technologies at any one time period, some of those existing technologies, in combination, are making novel technologies, and many of the novel technologies go on to become building blocks for yet further technologies. So if you look at the entire collection of technologies, it is throwing off new technologies, which may be components in yet further new technologies. The technical word for that is to say it's self-creating, or the fancy word is autopoietic, P-O-I-E-T-I-C. So it's autopoietic, that's the word. I think it came from Humberto Maturana [01:44:00] and... I'm not sure, anyway. Sorry. All right. Maturana and Francisco Varela. They're Chilean, and philosophers, actually, of technology as well as everything else; systems theorists first. But anyway, technology is self-producing, self-creating. But the mechanism isn't Darwinian. There's no flood of Darwinian improvement: the initial primitive 1825 railway locomotives are still not that different from ones a hundred years later [01:45:00] using steam. There's a certain amount of Darwinian variation and improvement, but mostly technologies evolve by novel technologies becoming components in yet other technologies. So the steam engine becomes a component in the railroad locomotive. Yeah.
The Stockton and Darlington express around 1820 was a train of cars, just a train, meaning something that flows out behind you, drawn by horses. And when the railway locomotive comes along, that's a new technology, and the new technology is adopted in other ways, so you get a whole railroad system and so on. So you go from steam engine to steam locomotives, to steam trains, to railroad systems. And in that sense, [01:46:00] at any level, the technology becomes a component in further technologies. I call that sort of evolution combinatorial evolution. If you combine things in your Lego set to make new things that are repeated often and encapsulated, then you have a new component, yeah, for further combination. Would that be comparable to Darwinian evolution if, instead of looking at things at the level of species, we look at things at the level of, you know, genes or body parts or proteins? Yeah, that sort of evolution does occur biologically in fairly primitive bacterial systems, or archaea. There's something called horizontal gene transfer: you're taking genes from one, whatever they are, bacterium, and those are getting transferred horizontally to others. The actual standard cell for [01:47:00] many creatures evolved out of other cells that became absorbed into one another. That's why we have, like, mitochondria. Yeah. So this does happen. But once we get up to, like, a giraffe, evolution doesn't combine things anew. It'd be great if you could take a giraffe's neck and put that onto a horse, yeah, whatever, but that's not the way it works. So yeah, when you think about it this way, evolution by combination is all over the place in biology, but it's quite specific. I think Darwin got it roughly right by just saying variation and selection; he didn't think in terms of combinations. So, to come back to your question, and I should let you go here, to come back to your question...
What theories could be built out of this, and what use could be made of this thinking? Let me [01:48:00] see if I can give you a decent, serious answer to that. I think that the book I wrote in 2009, The Nature of Technology, lays out a framework for asking: what is a technology? How does technology evolve? How are technologies put together? How does invention work? How does more standard engineering, just pure innovation, work? How do technologies create an economy? It looks at all those questions. So it's giving a framework for thinking about technology and how it operates in our world. Yes, I think all of those ideas could be refined, or could be challenged, could be improved upon. So this book, I think, is a first step in trying to set up a theory of [01:49:00] evolution for technology, and as such, I haven't seen that much academically coming up, building on this. Quite surprising. The other thing I would point out is that The Nature of Technology also posits a different mechanism of evolution than Darwin's. In Darwin's evolution, some snail species might evolve, say, on ridges of cliffs in Hawaii, by being in a slightly different, wetter environment, and small genetic variations, over many, many generations, are selected to fit that new environment a bit better, until that sort of snail has evolved to the degree it can't continue to breed with the old ones, and you get a new species. So [01:50:00] Darwin's theme is variation and accumulation of small differences. This book puts together a version of evolution that says there's a mechanism whereby novel things are created via combination of the old, and become available themselves as building blocks for further combination. Yeah. I am amazed and surprised that I haven't seen that idea taken up.
The theory that Darwin came up with in 1838, but finally published 20 years later, in 1859, got taken up almost immediately: argued about bitterly, met with resistance, celebrated, everything you can think of. But there's a different type of evolution, combinatorial. I called it combinatorial evolution. It hasn't been talked [01:51:00] about in any detail. I'm sure if you go back you'd find that some people have been vaguely aware of it, but nobody really has written about it in detail. So I think that would be worth taking up and looking at in some detail. I'd call it a second evolutionary mechanism. That's certainly not being modest, but I do think that it's a different form of evolution. Once you understand it, you see it all over the place. You see new combinations even in language. Certainly in my lifetime, the word Munich used to be a label for a city in Germany, and now it's kind of a label for a piece of unsavory appeasement [01:52:00] of unsavory authorities. So if you try to be accommodating to whoever runs Belarus, you could be accused of pulling a Munich. And similarly something-gate, which came from Watergate, you know, Travelgate. That's now a combination that really gets rid of an awful lot of components: it's usually official government malfeasance in some minute area of misdoing, but if we want to avoid lengthy explanations, we compress that into a module, something-gate. So concepts are often encapsulated and then used as components in language. It's certainly the case in mathematics, [01:53:00] et cetera. It's certainly the case in engineering; it's the case in science as well. And all those systems build up by having new ideas, concepts, or objects that are created in some combinatorial way, combining from previous ones, and then becoming things in their own right for further combination. That's worth looking at, yes. And hopefully this will spawn many arguments about it.
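The combinatorial mechanism described above, where existing technologies combine into novel ones that then join the pool of building blocks, can be sketched as a toy simulation. This is a hypothetical illustration, not a model from Arthur's book: the pairing rule (every pair of existing blocks can fuse once per generation) and the component names are invented.

```python
import itertools

def combinatorial_evolution(primitives, generations):
    """Toy model of combinatorial (not Darwinian) evolution:
    each generation, every pair of existing 'technologies' can
    combine into a new one, which then becomes a building block
    available for further combination."""
    pool = set(primitives)
    history = [set(pool)]
    for _ in range(generations):
        new_blocks = set()
        for a, b in itertools.combinations(sorted(pool), 2):
            combo = f"({a}+{b})"
            if combo not in pool:
                new_blocks.add(combo)
        pool |= new_blocks           # new technologies join the pool
        history.append(set(pool))
    return history

history = combinatorial_evolution(["steam-engine", "wheeled-cart"], 2)
sizes = [len(generation) for generation in history]
# Two primitives yield one combination (think: the locomotive),
# which then combines with each original block in the next
# generation, so the pool grows 2 -> 3 -> 5.
```

The point of the sketch is that the pool feeds on itself: each new block multiplies the number of possible future pairings, which is the "autopoietic" flavor of the mechanism, in contrast to Darwinian variation of a fixed design.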
Brian Arthur, thanks for being part of Idea Machines. [01:54:00]
Philosophy of Progress with Jason Crawford [Idea Machines #40] 46:56
In this conversation, Jason Crawford and I talk about starting a nonprofit organization, changing conceptions of progress, why what happened in 1971 may really be the world wars sinking in 26 years later, and more. Jason is the proprietor of Roots of Progress, a blog and educational hub that has recently become a full-fledged nonprofit devoted to the philosophy of progress. Jason's a returning guest to the podcast: we first spoke in 2019, relatively soon after he went full time on the project. I thought it would be interesting to do an update now that Roots of Progress is entering a new stage of its evolution. Links: Roots of Progress; Nonprofit announcement. Transcript: So what was the impetus to switch from being an independent researcher to actually starting a nonprofit? I'm really interested in that. Yeah. The basic thing was understanding, or getting a sense of, the level of support that was actually out there for what I was doing. In brief, people wanted to give me money, and the best way to receive and manage funds is to have an actual nonprofit organization. And I realized there was actually enough support to support more than just myself, which is what I had been doing, you know, as an independent researcher for a year or two. There was actually enough to have some help around me, to basically just make me more effective and further the mission. So I've already been able to hire research [00:02:00] assistants. Very soon I'm going to be putting out a wanted ad for a chief of staff, or, you know, sort of an everything assistant, to help with all sorts of operations and project management and things. And so having these folks around me is going to help me do a lot more, and it's going to let me delegate everything that I can possibly delegate and focus on the things that only I can do, which is mostly research and writing. Nice. And it seems like it would be possible to take money and hire people and do all that without forming a nonprofit.
So what's, in your mind, the thing that makes it worth it? Well, for one thing, it's a lot easier to receive money when you have an organization designated with 501(c)(3) tax status in the United States; that is a status that makes donations tax deductible, whereas donations to other types of nonprofits are not. I had had issues in the past: one organization wanted to [00:03:00] give me a grant as an independent researcher, but they didn't want to give it to an individual. They wanted it to go through a 501(c)(3). So then I had to get another organization to sort of receive the donation for me and then turn around and re-grant it to me. And that was just, you know, complicated overhead. Some organizations didn't want to do that at all. So it was just much simpler to keep doing this if I had my own organization. And do you have sort of a broad vision for the organization? Absolutely, yes. And it is essentially the same as the vision for my work, which I recently articulated in an essay on rootsofprogress.org. We need a new philosophy of progress for the 21st century, and establishing such a philosophy is my personal mission, and is the mission of the organization. To just very briefly frame this: the 19th century had a very strong and positive, you know, pro-progress vision of what progress was and what it could do for humanity, and in the [00:04:00] 20th century that optimism faded into skepticism and fear and distrust. And I think there are ways in which the 19th-century philosophy of progress was perhaps naively optimistic. I don't think we should go back to that at all, but I think we need to rescue the idea of progress itself.
The 20th century sort of fell out of love with it, and we need to find ways to acknowledge and address the very real problems and risks of progress while not losing our fundamental optimism and confidence and will to move forward. We need to recapture that idea of progress and that fundamental belief in our own agency, so that we can go forward in the 21st century with progress, you know, while doing so in a way that is fundamentally safe and benefits all of humanity. And since you mentioned philosophy, let me just ask you a very weird question that's related to something I've been thinking about. [00:05:00] In addition to the fact that I completely agree the philosophy of progress needs to be updated, recreated, it feels like the same thing needs to be done with the idea of classical liberalism. I think both of these philosophies (a) are related and (b) were created in a world that just has different assumptions than we have today. Have you thought about how those two philosophical updates relate? Yeah. So first off, just on that question of reinventing classical liberalism, I think you're right. Let me take this as an opportunity to plug a couple of publications that I think are exploring this concept. The first I'll mention is Palladium. I mention this because the founding essay of Palladium, which was written by Jonah Bennett, is I think a good statement of the problem of why classical liberalism is [00:06:00] in question, or I think he called it the liberal order, which is maybe a slightly different thing. But, you know, the basic idea of representative democracy, or constitutional republics with representative democracy, and basic ideas of freedom of speech and other human rights and individual rights.
You know, all of that as being sort of the basic world order. Jonah was saying that that is in question now. And, okay, I'm going to frame this my own way, I don't know if this is exactly how Jonah would put it, but there's basically now a fight between the abolitionists and the reformists, right? Those who think that the liberal order is sort of fundamentally corrupt and needs to be burned to the ground and replaced, versus those who think it's fundamentally sound but may have problems and therefore needs reform. And, you know, I think Jonah is on the reform side, and I'm on the reform side. I think, you know, Western institutions, the institutions of the Enlightenment, let's say, are [00:07:00] fundamentally sound and need reform, rather than just being razed to the ground. This was also a theme towards the end of Enlightenment Now by Steven Pinker: a lot of why he wrote that book was to counter the fundamental narrative of declinism. If you believe that the world is going to hell, then it makes sense to question the fundamental institutions that have brought us here, and it kind of makes sense to have a burn-it-all-to-the-ground mentality. Right. And so those things go together. Whereas if you believe that, you know, actually we've made a lot of progress over the last couple of hundred years, then you say, hey, these institutions are actually serving us very well, and again, if there are problems with them, let's address those problems in a reformist type of approach, not an abolitionist type of approach. So Jonah Bennett was one of the co-founders of Palladium, and that's an interesting magazine I recommend checking out. Another publication that's addressing some of these concepts is, I would say, Persuasion, by Yascha Mounk. Yascha was part of The Atlantic, as I recall.
[00:08:00] And he basically wanted to make a home for people who were maybe left-leaning, or, you know, would call themselves liberals, but did not like the new sort of woke ideology arising on the left, and wanted to carve out a space for free speech and for, I don't know, just a different, non-woke liberalism, let's say. And so Persuasion is a Substack and a community. That's an interesting one. And then the third one that I'll mention is called Symposium, and that is done by a friend of mine, Rob Tracinski, who himself would maybe consider himself more right-leaning, or maybe just call himself more of an individualist or an independent or, you know, something else. But I think he maybe appeals more to people who are a little more right-leaning. He also wanted something that I think a lot of people, maybe both on the right and the left, are wanting: to break away both from wokeism and from Trumpism and find something that's neither of those things. And so we're seeing this interesting phenomenon [00:09:00] where people on the right and left are actually maybe coming together to try to find a third alternative to where those two sides are going. So Symposium is another publication where, you know, people are coming together to discuss: what is this idea of liberalism? What does it mean? I think Tracinski said that he wanted Symposium to be the kind of place where Steven Pinker and George Will could come together to discuss what liberalism means, and then he literally had that as a podcast episode, those two people. So anyway, recommend checking it out, and Rob is a very good writer. So Palladium, Persuasion, and Symposium: those are the three that I recommend checking out to explore this kind of idea. Nice. Yeah. And I guess in my head it actually hooks in, it's sort of extremely coupled to progress.
Because I think in a lot of places there's almost this tension between ideas of classical liberalism, like property rights, and things that we would see as progress. Right? Because it's like, okay, you want to build your [00:10:00] Hyperloop, right? But then you need to build that Hyperloop through a lot of people's property, and there's this fundamental tension there. And look, I don't have a good answer for that, but I'm just sort of thinking about that vis-a-vis... It's true. At the same time, I think it's a very good and healthy and important tension. I agree. Because, you know, I tend to think that there were at least two big ideas in the Enlightenment, maybe more than two, but one of them was sort of reason, science, and the technological progress that hopefully that would lead to. And the other was individualism and liberty, you know, those concepts. And I think what we saw in the 20th century is that when you have one of those without the other, it leads to disaster. So in particular, the communists of, you know, the Soviet [00:11:00] Union were enamored of some concept of progress that they had. It was a concept of progress where they got the science and the industry part, but they didn't get the individualism and the liberty part. And when you do that, what you end up with is a concept of progress that's actually detached from what it ought to be founded on, which is, I mean, ultimately progress, to me, means progress for individual human lives and their happiness and thriving and flourishing. And when you detach those things, you end up with an abstract concept of progress, somehow progress for society, that ends up not being progress for any individual.
And that, as I think we saw in the Soviet Union and other places, is a nightmare. It leads to totalitarianism and, specifically in the case of the Soviet Union, mass famine, not to mention oppression. So one of the big lessons, going back to what I said towards the beginning: the 19th-century philosophy of progress had, I think, a bit of a naive optimism. Part of [00:12:00] the naivete of that optimism was the hope that all forms of progress would go together, hand in hand, that technological progress and moral and social progress would go together. In fact, towards the end of the 19th century, some people were hopeful that the expansion of industry and the growth of trade between nations would lead to a new era of world peace, the end of war. And the 20th century obviously proved this wrong, with a devastating, dramatic proof. My hypothesis right now is that it was the world wars that really shattered the optimism of the 19th century. They really proved that technological progress does not automatically lead to moral progress, and the dropping of the atomic bomb was a horrible exclamation point on the entire lesson. The nuclear bomb was obviously a product of modern science, modern technology, and modern industry, and it was the most horrific, destructive [00:13:00] weapon ever. So I think with that, people saw that these things don't automatically go together. And I think the big lesson from that era and from history is that technological progress and moral and social progress are independent things that we have to pursue, each in their own right. Technological progress does not create value for humanity unless it is embedded in the context of good moral and social systems. And I think that's
the lesson of, for instance, the cotton gin and American slavery. It's the lesson of the Soviet agricultural experiments that ended in famine. It's the lesson of the Chinese Great Leap Forward, and so forth. In all of those cases, what was missing was liberty and freedom and individual human rights. So those are things that we must absolutely protect, even as we move technological and industrial progress forward. Technological progress ultimately is [00:14:00] progress for people, and if it's not progress for people, progress for individuals and not just collectives, then it is not progress at all.

I agree with all of that. The one thing I would poke at is, I feel like the 1950s might be a counterpoint to the world wars destroying 20th-century optimism. Or do you think there's almost like a delayed effect?

I think the 1950s were a holdover. I think these things take a generation to really see. And so this is my fundamental answer at the moment to "what happened in 1971?" People ask this question, or 1970, or 1973, or whatever date around then. I think the right question to ask is: what happened in 1945 that took 25 years to sink in? And my answer is the world wars. And I think it is around this time that [00:15:00] you really start to see it. Even in the 1950s, if you read intellectuals and academics who were writing about this stuff, you start to read things like: well, we can't just unabashedly promote quote-unquote "progress" anymore; people are starting to question this idea of progress, and so forth. I haven't yet done enough of the intellectual history to be certain that that's really where it begins, but that's the impression I've gotten anecdotally.
And so this is the hypothesis that's forming in my mind: that that's about when there was a real turning point. Now, to be clear, there were always skeptics of progress. From the very beginning of the Enlightenment there was an anti-Enlightenment, reactionary, romantic backlash, and from the beginnings of the industrial revolution there were people who didn't like what was happening: Jean-Jacques Rousseau, Mary Shelley, Karl Marx, you name it. But I think what was going on was that essentially the progress movement, or whatever you want to call it, the people who were actually going forward and making scientific and technological progress, they [00:16:00] were doing that. They were winning, and they were winning because people could see the inventions coming. I mean, imagine somebody born around 1870 or so, and just think of the things they would have seen happen in their lifetime: the telephone; the automobile and the airplane; the electric light bulb and the electric motor; the first plastics; indoor plumbing, water sanitation, vaccines; if they lived long enough, antibiotics; oh, and the Haber-Bosch process and synthetic fertilizer. Just an enormous number of these amazing inventions that they would have seen happen. And so I basically just think that the reactionary voices against technology and against progress were drowned out by all of the cheering for the new inventions. And then my hypothesis is that what happened after World War II is, it wasn't so much that [00:17:00] the people who believed in progress suddenly stopped believing in it.
But I think what happens in these cases is that the people who believed in progress had their belief shaken. They lost some of their confidence, they became less vocal, and their arguments started feeling a little weaker and having less weight. Conversely, the reactionary, anti-progress folks were suddenly emboldened, and people were listening to them. So they could come to the fore and say: see, we told you so, we've been telling you this for generations, we always knew this was going to happen. So there was just a shift in who had the confidence, who was outspoken, and whose arguments people were listening to. And I think when you then have a whole generation of people who grew up in this new [00:18:00] milieu, you get essentially the counterculture of the 1960s, and you get Silent Spring, and you get protests against industry and technology and capitalism and civilization.

Do you think, this is just literally off the cuff, but there might also be some kind of hedonic treadmill effect? Where you see some rate of progress, and that starts to be normalized, and then...

It's true. It's true. And it's funny, because well before the world wars, even in the late 1800s and early 1900s, you can find people saying things like, essentially, kids these days don't realize how good they have it; people don't even know the history of progress. I wrote about this, actually; I had an essay called something like "19th-century progress studies," because there was this guy, even before the transcontinental railroad was built in the U.S. in the 1860s, who in the 1850s or so [00:19:00] was campaigning for it.
And he wrote this whole big, long pamphlet promoting the idea of a transcontinental railroad, and he was trying to raise private money for it. True to the 19th century, it was this long, wordy document, and in one part of it he goes into the whole history of transportation, back to like the 17th or 16th century and the post roads that were established in Britain: how those improved transportation, but also how, even in that era, people were speaking out against the post roads and opposing them.

Sidebar: have you seen that comic with the cavemen? "Caveman Science Fiction"?

Yes, I know exactly what you're talking about. That one's pretty good. So, I'm blanking on this guy's name now, but he wrote this whole thing, and he basically said that the [00:20:00] story of progress has not even been told, that people don't know how far we've come, and that somebody should really collect all of this history and tell it in an engaging way so that people know how far we've come. And this is in the 1850s. This is before the transcontinental railroad was built, before the light bulb, before the internal combustion engine, before vaccines, before everything. That was pretty remarkable. I also remember there was an 1895 or '96 anniversary issue of Scientific American where they went over fifty years of progress, and there was this bit in the beginning that was just like, yeah, people just take progress for granted these days. And there was another, similar thing I read from the early 1900s, where somebody went out to find one of the inventors who'd improved
the mechanical reaper. I think it was somebody who'd invented an automatic binder for the sheaves of grain, and the writer was saying something like: people don't even remember the inventors who made the modern world, so [00:21:00] we've got to go find this inventor and interview him and record this for posterity. So you're seeing this kind of "kids these days" attitude all throughout. I think that kind of thing is just natural; it's sort of always happening. At pretty much any time in history you can find people complaining about the decline of morality and how the youth are so different and...

The ankles! The exposed ankles, right?

Exactly. So I think you have to somewhat separate out that sort of thing, which is constant and always with us, from: what was the intellectual class, as Deirdre McCloskey likes to call it, the clerisy, saying about progress, and what was the general zeitgeist? And I think that even though there are some constants, like people always forgetting the past and taking whatever they have for granted, and even though every new invention is always opposed [00:22:00] and fought and feared, there is an overall zeitgeist that you can see changing from the late 19th century to the mid 20th century. And there are a couple of places where you can really see it. One is in the general attitude of people towards nature, and what mankind's relationship to nature is. In the 19th century, people talked unabashedly and unironically about the conquest of nature. They talked about nature almost as an enemy that we had to fight. And it sort of made sense: nature truly is red in tooth and claw.
It's not a loving mother that has us in her nurturing embrace. The reality is that nature is frankly indifferent to us, and we have to make our way in the world, let's say both because of and in spite of nature. Nature obviously gives us everything that we need for life. It also gives none of it in a [00:23:00] convenient form. Everything that nature gives us is in a highly inconvenient form that we have to put through layers and layers of industrial processing to make into the convenient forms that we consume. David Deutsch makes a similar point in The Beginning of Infinity, where he says that the idea of Earth as a biosphere or a life-support system is absurd, because a life-support system is deliberately designed for maximum safety and convenience, whereas nature is nothing of the sort. So there was some justification to this view. But the way that people just unironically talked about conquering nature, mastering nature, taming nature, improving nature, the idea that the man-made, the synthetic, the artificial was just expected to be better than nature, that is a little mind-blowing today. There was a quote I was just looking up. I think plastic is a great example, [00:24:00] because plastic was invented, or arose, in the era when people were more favorable to this, but then quickly transitioned into the era where it became one of the hated and demonized inventions.
In the early days, in the 1930s, I think it was 1936, Texas had some sort of state fair with a whole exhibition about plastics, and one woman who saw the exhibition was quoted as saying something like, "Oh, it's just wonderful how everything is synthetic these days." Nobody would say that now, right? Or there was a documentary about plastic called The Fourth Kingdom, and the framing was something like: in addition to the three kingdoms of, what is it, animal, vegetable, and mineral, man has now added a fourth kingdom whose boundaries are unlimited. Again, nobody would ever put it that way today. And sometimes, to come back to the theme of naive optimism, this actually led [00:25:00] to problems. For instance, and this still cracks me up, in the late 19th century there were people who believed that we could improve on nature's distribution of plant and animal species: that nature was deficient in which species existed where, and that we could improve on this by importing species into non-native habitats. You can imagine some of this being for industrial or agricultural purposes, but literally some of it was just for aesthetic purposes. If I'm recalling this correctly, someone wanted to import into America all of the species of birds mentioned in Shakespeare, and this was purely an aesthetic concern: hey, what if we had all these great songbirds from Britain here in America? Well, it turns out that importing species willy-nilly can create some real problems. By importing a bunch of foreign plants, we got a bunch of invasive pest species. And so this was a real [00:26:00] problem, and ultimately we had to clamp down.
Another example of this that is near to my heart currently, because I just became a dad a couple of months ago. Thanks. It turns out that a few decades ago, people thought that infant formula was superior to breast milk, and there was this whole generation of kids, apparently, that was just raised on formula. And today, it turns out, oops: we found out that mother's breast milk has antibodies in it that protect against infection, and maybe some growth hormones, and, like, we don't even know. It's a really complicated biological formula that's been honed through millions or hundreds of millions of years of evolution, however long mammals have been around. So yeah, again, some of that old philosophy of progress was a little naive. Now, I do think that someday we'll be able to make some synthetic infant sustenance that will [00:27:00] be better than what moms produce, and given the amount of trouble that some women have with breastfeeding, I think that will be a boon to them and will just be part of the further story of technology liberating women. But we're not there yet, so we have to be realistic about where technology is. So this relationship to nature is, I think, part of where you see the contrast between then and now. A related part is people's concept of growth and how they regarded it. Here's another one of these shocking stories that shows you that the past is a foreign country. In 1890 in the United States, the census, which is done every ten years, was done for the first time with machines. We didn't yet have computers, but it was done for the first time with tabulating machines made by the Hollerith tabulating company.
The census had grown large and complicated enough that if it hadn't been for these machines, they probably wouldn't have been able to get it done on time; it was becoming a huge clerical challenge. [00:28:00] Now, this is an era where up-to-the-minute population estimates just aren't available. You can't just Google "what's the population of the U.S." and get a current estimate. So the number people had for the U.S. population was about ten years old, and they were all curious, wondering: what's the new population, ten years later? And they were gunning for a figure of at least 75 million. The way one history of computing put it, there were many people who felt that the dignity of the Republic could not be sustained on a number of less than 75 million. And then the census comes in, and the real count is something in the 60 millions. It's not even 70 million. And people are not just disappointed, they're incensed, they're angry, and they blame the Hollerith tabulating company for bungling it. It must have been the machines, right? The machines screwed this up. [00:29:00]

Demand a recount!

Right. They're like, man, this Hollerith guy totally bungled the census; obviously the number has to be bigger than that. And it's funny because, so this is 1890. Fast-forward to 1968, and you have Paul Ehrlich writing The Population Bomb, where overpopulation is the absolute worst problem facing the entire world. And they even essentially embraced
coercive population control measures, up to and including forced sterilization, essentially, in order to control population, because they saw it as the worst risk facing the planet. I recommend, by the way, Charles Mann's book The Wizard and the Prophet, for this and many other related issues. One of the things that book opened my eyes to was how much the 1960s environmentalist movement was super focused on overpopulation as its biggest risk. Today it has shifted away [00:30:00] from that, in part because population growth is actually slowing. Ironically, population growth rates started to slow right around the late 1960s, when that hysteria was happening. Now population is actually projected to level off and maybe decline within the century, and so the environmentalist concern has shifted to resource consumption instead, because per capita resource consumption is growing. But yeah, just look at that flip in how we regard growth. Is growth a good thing, something to be proud of as a nation, that our population is growing so fast? Or is it something to be worried about, where we breathe a sigh of relief when population levels off?

Yeah, I'm getting a very strong thesis-antithesis-synthesis vibe: the sort of naive progress is the thesis, the backlash against that is the antithesis, [00:31:00] and now we need to come up with, what is the new synthesis?

Yeah, I mean, I'm not a Hegelian, but I agree there's something there.

So, to flip back to The Roots of Progress, the organization. Something I've been wondering about: I feel like a lot of the people
in the progress movement, in the Slack, are, I would say, people like us, people from tech. And I've talked to people who are either in academia or in government, and they're really interested. So I was wondering whether you have thoughts about, now that you're sort of onto the next phase of this, ways to broaden the scope, to bring more sorts of people [00:32:00] under the umbrella, under the tent, or whatever the right word is. How do you think about that? Because it seems really useful to have as many worlds involved as possible.

Yeah, absolutely. Well, let me talk about that both long term and short term. Fundamentally, I see this as a very long-term, generational effort. So in terms of direct results from my work, I'm looking on the scale of decades. I would refer you to an essay called "Culture Wars Are Long Wars" by Tanner Greer of the blog The Scholar's Stage, which really lays out why this is: ideas at this fundamental level take effect on a generational timescale. Just as the philosophy of progress took about a generation to flip, [00:33:00] from, I think, 1945 to 1970, it's going to take another generation to re-establish something deep and new as the new zeitgeist. So how does that happen? Well, I think it starts with a lot of deep and hard and difficult thinking, and writing. The most fundamental thing we need is books. We need a lot of books to be written. So I'm writing one now, tentatively titled The Story of Industrial Civilization, that I intend
to lay the foundation for the new philosophy of progress. But there are dozens more books that need to be written; I don't have time in my life to write them all. So I'm hoping that other people will join me in this, and one of the things I'd like to do with the new organization is to help make that possible. So if anybody wants to write a progress book and needs help or support doing it, please get in touch.

Do you have a list of titles that you'd love to see?

Yeah, sure. I think we actually need three categories of books, or more broadly of content. [00:34:00] One is more histories of progress, like the kind that I do: retellings of the story of progress, making it more accessible and more clear, because I just think the story has never adequately been told. In the book that I'm writing, virtually every chapter could be expanded into a book of its own. I've got a chapter on materials and manufacturing, a chapter on agriculture, a chapter on energy, one on health and medicine. All of these things deserve a book of their own. I also think we could use more analysis of some of the failed promises of progress. What went wrong with nuclear power, for instance? What happened to space travel and space exploration? Why did it take off so dramatically and then collapse into a period of stagnation? Similarly for air travel: why is it that we're only now getting back supersonic air travel? Perhaps even nanotechnology is [00:35:00] in this category, if you believe J. Storrs Hall's take on it in his book Where Is My Flying Car?, where he talks about nanotechnology as something we ought to be much farther along on. So, some of those kinds of analyses of what went wrong. Then a second category
of books that we really need takes the biggest problems in the world and addresses them head-on from the pro-progress standpoint. What would it mean to address some of the biggest problems in the world, like climate change, global poverty, the environment, war, existential risk from everything from bio-engineered pandemics to artificial intelligence? What would it mean to address these problems if you fundamentally believe in human agency, if you believe in science and technology, and you believe that we can overcome them? It will be difficult; it's not easy, and we shouldn't be naive about it, but we can find solutions. What [00:36:00] are the solutions that move humanity forward? How do we address climate change without destroying our standard of living or killing economic growth? So that's a whole category of books that need to be written. And then the third category, I would say, is visions of the future. What is the kind of future that we could create? What are the exciting things on the horizon that we should be motivated by and should be working for? Again, Hall's Where Is My Flying Car? is a great entry here, but we could use a lot more. Some of this probably already exists, I haven't totally surveyed the field, but we absolutely need a book on longevity: what would it mean for us all to conquer aging and disease? Maybe something on how we cure cancer, or how we cure all diseases, which is the mission, for instance, of the Chan Zuckerberg Initiative. We should totally have this for nanotechnology.
I guess some of this already exists, maybe, in Drexler's work, but I just think we need more positive visions [00:37:00] of the future, to inspire people, to inspire the world at large, but especially to inspire the young scientists and engineers and founders who are going to actually go create those things.

A plug here is Project Hieroglyph, if you've seen that.

I've heard of it, but I haven't read it yet. Why don't you say what it's about?

Oh, it's a collection of short, optimistic science fiction stories. It was a collaboration between, I believe, Arizona State University and Neal Stephenson. The opening story, which I love, is by Neal Stephenson, and it asks: what if we built a mile-high tower that we launch rockets from? Why not? You don't need a space elevator, you just need a really, really tall tower. And we wouldn't actually need to invent new technologies per se, we wouldn't need to discover new scientific principles to do it; it would just take a lot of [00:38:00] engineering and a lot of resources.

Yeah, and there's a similar concept in Hall's book called the space pier, which you can look up; it's also on his website.

The space pier does require discovering new things, though, right? Because it depends on being able to build things out of diamond. The space tower just involves a lot of steel. Like, a lot of steel.

So, you've touched a little bit on this already, and it's a good segue into what I was getting at. Beyond books, the same basic ideas need to get out in every medium and format. I also do a lot on Twitter, but we need people who are good at every social media channel.
You know, I'm much better at Twitter than I am at Instagram or TikTok, so we need people on those channels as well. We need video, we need podcasts; these ideas need to get out on every format and platform. And then ultimately they need to get out through all the institutions of society. We need more journalists who understand the history and the promise of [00:39:00] technology and use that as context for their work. We need more educators, both at the K-12 level and at university, who are going to incorporate this into the curriculum. I've already gotten started on that by creating a high-school-level course in the history of technology, which is currently being taught through a private high school, the Academy of Thought and Industry. It needs to get out there in documentaries, too. I'm really tempted, as a side project, to make a docudrama about the life of Norman Borlaug, which is just an amazing life and a story that everybody should know; he's just an underappreciated hero. I think a lot of these stories of great scientists and inventors could really be turned into excellent, compelling stories, whether as documentaries or as fictionalized dramas. The Wright brothers would be another great one, I decided after reading David McCullough's history of them and their invention. So there could just be a lot of these. And then I think ultimately it gets into the culture through fiction as well, in all of its [00:40:00] forms: optimistic sci-fi in novels and TV shows and movies and everything.

Yeah. And also, I think, not just science fiction, but fiction about what it's actually like to push things forward. Because, I don't know,
it's like most people don't actually know what researchers do. Along these lines, Anton Howes had a good post where he was talking about movies that dramatize invention. He was looking for recommendations, and reviewing movies by the criterion: which ones actually show what it's like to go through the process? The sad thing about a lot of the popular treatments of this stuff, like the ones Anton reviewed, I guess there was a recent movie about Marie Curie, and there's a similar thing about Edison, The Current War, starring Benedict Cumberbatch, [00:41:00] the problem with a lot of these is that they just focus on human drama: people getting mad at each other and yelling and fighting and so forth. They don't focus on the iterative discovery process and the joy of inventing and discovering. The one totally unexpected sleeper hit of Anton's review was this movie, I think it's actually in Hindi, called Pad Man, which is a drama based on the real story of a guy who invented a cheap menstrual pad that could be made with very little capital and thus be made affordable to women in India. He was really trespassing on social and cultural norms and boundaries to do this, and was ostracized by his own community, but he really pursued the process. I saw the movie, and I recommend it as well. It really does a good job of dramatizing [00:42:00] the process of iteration and invention and discovery, the trial and error, and the joy of finding something that actually works. So yeah, we need more stuff like that, that actually shows the process and the dedication. You know, it's funny,
One of my favorite writers in Silicon Valley is Eric Ries, who coined the term "the lean startup" and wrote a book of the same name. And he has this take that whenever you see these stories of business success, there's the opening scene, which is the spark of inspiration, the great idea; and then there's the closing scene, which is basking in the rewards of success; and in between is what he calls the montage. Because it's typically just a montage of people working on stuff — maybe there are some setbacks and some iteration, but it's glossed over in this two-minute montage of people iterating with some music playing over it. And Eric's point is that the montage is where all the [00:43:00] work happens. It's unglamorous, it's a grind — it's not necessarily fun in and of itself, but it is where the actual work is done. And so his point, in that context, was: we need to open up the covers of this a little bit. We need to teach people a little bit more about what it's like in the montage. And I think that's what we need, more broadly, for science. Okay, here's a pitch for a movie. You know the Pixar movie Inside Out, where they go inside the little girl's head? That, but for the montage. Because the problem with the montage is that a lot of it is sitting and thinking — it's not necessarily communicated well to other people just by talking — but you could have an entire internal drama of the [00:44:00] process as a way to show what's going on. Anyway — all of that is the long-term view, right?
That's how I think things happen: a bunch of people, including me but not only me, need to do a lot of hard thinking and research and writing and speaking, and then these ideas need to get out to the world in every format, medium, platform, channel, and institution. That's how ideas get into the zeitgeist. And then there's also the short term. In the short term, I'm going to work on doing this as much as possible — like I said, I'm writing a book, and I'm hoping that when I hire some more help, I'll be able to get my ideas out in more formats and mediums and channels. I would also like to support other people who want to do these things. So again: if there's any vision that you are inspired to pursue along the lines of anything I've been talking about for the last ten minutes, and there's some way you need help doing it — whether it's money or connections or advice or coaching or [00:45:00] whatever — please get in touch with me at The Roots of Progress; you can find my email on my website. I would love to support these projects. And then another thing I'm going to be doing with the new organization and these resources is continuing to build and strengthen the network, the progress community: finding people who are sympathetic to these ideas, meeting them, getting to know them, and introducing them to each other — getting everybody to look around at everybody else and say, ah, you exist, you're there, you're interested in this — great, let's form a connection. And I hope through that, people will understand: hey, this is more than just me, or more than just a small number of people. This is a growing thing.
And also that people can start making connections and have fruitful collaborations, whether that's supporting each other, working together, coaching and mentoring each other, investing in each other, and so forth. So I plan to hold a series of events — in the beginning, probably private events — for people in various niches or sub-communities of [00:46:00] the progress community to get together and talk and meet each other and start to make some plans for how we develop these ideas and get them out there. That seems like an excellent and optimistic place to close. I really appreciate you laying out the grand plan, and just all the work you're doing — it's super exciting. Thanks. Same to you — it was great to be here and chat again. Thanks for having me back.…
Idea Machines


Fusion, Planning, Programs, and Politics with Stephen Dean [Idea Machines #39] — 1:07:52
In this conversation, Dr. Stephen Dean talks about how he created the 1976 US fusion program plan, how it played out and the history of fusion power in the US, technology program planning and management more broadly, and more. Stephen has been working on making fusion energy a reality for more than five decades. He did research on controlled fusion reactions in the 60s, and in the 70s became a director at the Atomic Energy Commission, which then became the Energy Research and Development Administration, which *then* became the Department of Energy. In 1979 he left government to form the consultancy Fusion Power Associates, where he still works. In 1976, he led the preparation of a report called "Fusion Power by Magnetic Confinement" that laid out a roadmap of the work that would need to be done to turn fusion from a science experiment into a functional energy source. References Fusion Power by Magnetic Confinement Executive Summary Volume 1 Volume 2 Volume 3 Volume 4 Fusion Power Associates The notorious fusion never plot Adam Marblestone on technological roadmapping My hypotheses on program design (which were challenged by this conversation!) Fusion Energy Base (a good website on fusion broadly) ITER Transcript (Machine generated, so please excuse errors) [00:00:00] In this conversation, Dr. Stephen Dean and I talk about how he created the 1976 US fusion program plan, how it played out in the history of fusion power in the US, technology program planning and management more broadly, and even more things. Stephen has been working on making fusion energy a reality for more than five decades. He did research on controlled fusion reactions in the 1960s, and in the seventies he became a director [00:01:00] at the Atomic Energy Commission, which then became the Energy Research and Development Administration, which then became the Department of Energy. In 1979 he left government to form the consultancy Fusion Power Associates, where he still works.
In 1976, he led the preparation of a report called "Fusion Power by Magnetic Confinement" that laid out a roadmap of the work that would need to be done to turn fusion from a science experiment into a functional energy source. And if I can riff about this for a minute: unlike what I see as modern roadmaps, it lays out not just the plan of record for getting fusion to be a real energy source, but all the different possible scenarios — in terms of funding, in terms of new technology that we can't even think of yet being created — and lays everything out [00:02:00] in a way that you can actually make decisions off of it. And I think one of the most impressive things is that it has several different what it calls "logics" of funding, which are different funding levels and different funding curves. And it, unfortunately, accurately predicts that if you fund fusion below a certain level — even if you fund it continually — you'll never get to an actual useful fusion source, because you'll never have enough money to build the demonstration machines. So in a way, it predicts the future. This document is super impressive. If you haven't seen it, you should absolutely check it out — there are links in the show notes. And one of the reasons I wanted to talk to Dr. Dean is that this document is one of the pieces of evidence behind my hypothesis that, to some extent, program design and program management for advanced technologies is a bit of a lost art. So I wanted to learn more about how he thought about it and built [00:03:00] it. Without further ado, here's my conversation with Stephen Dean. To start off, what was the context of creating the fusion plan?
Well, I guess I would have to say that it started a few years earlier, in the sense that in 1972, I was in the fusion office in the Atomic Energy Commission, and the Office of Management and Budget at the White House put out instructions to, I guess, all the agencies that they should prepare an analysis of their programs under a system they called management by objectives. This was a formalism that had a certain amount of popularity at that time. And I was asked to prepare something on the fusion program as part of the agency doing this for all of its programs. [00:04:00] In doing that, I looked at our program and I laid out a map, basically, that showed the different parts of the program — like a roadmap — and what the timelines might be, what the functions of those facilities would be, when the decisions might be, and what decisions would feed into what. That was never published in a report, except internally, but the map itself was published and widely distributed. I have it on my wall, and it's in my book. So that was my first venture into [00:05:00] doing something that resembled a plan. It was not a detailed plan, but it was an outline of decision points and flow — sort of a flow diagram — but it did connect all the different parts of the program, and it identified sub-elements, you know, not in great detail, and budgets were not asked for at that time. So that's how I got this idea and a little experience in the planning area. And then a few years later, we had the gasoline crisis in the US, where there were long lines and we couldn't get gas and people were sitting in their cars overnight. And the White House at that time said that, you know, we had to become energy independent of, you know, OPEC oil.
And so Bob Hirsch, who was at that time about to transition from director of the fusion program to an assistant administrator of ERDA — in, I think it was late '74, '75, the government, Congress or the [00:06:00] administration, decided to abolish the Atomic Energy Commission and transition it into something called the Energy Research and Development Administration, or ERDA. And the reason for that was to create an agency whose function was clearly for all of energy, and not just for atomic energy, in order to respond to the energy crisis and to get us off of the dependence on foreign oil imports for vehicles and things. And when that happened, my boss, who was Bob Hirsch at the time — he was actually appointed an assistant administrator of ERDA for basically all the long-range energy programs, which included fusion. And as he was transitioning, he came up with the idea that we should create a detailed long-range plan for the [00:07:00] program. He was obviously becoming a senior manager of many things, and he certainly wasn't going to try to do this himself. He and I were very close — at that point he had three divisions in the fusion program, and I was the director of the largest division, which had all of the main experimental programs. So he asked me to prepare this plan. And if you look at the plan, at the very beginning there's a chart that shows Bob's guidance, basically, which was to note that there needed to be a multiplicity of pathways, because no one organization or group or division or program could be in full control.
And that in order to have a plan that might have some hope of [00:08:00] lasting, you had to take into account a number of policy variables, he said, and technical variables — meaning that, because the need for fusion and the intent of the government and the funding are all controlled by other people in the government, we had to have a number of plans by which the program could be conducted. So he came up with the idea that, well, let's have five plans, which he called logics. He basically created that framework and turned it over to me at the beginning, I guess, of 1975, to create this plan. So that's how it all got started. And I had been doing a number of things with the program in terms of the major [00:09:00] experiments that were under my control as director of the confinement systems division — magnetic confinement systems. I was forcing all the people whose budgets I controlled to tell me what they were doing and what they needed to do, and so on. So I had already been working on a lot of these things within my area, but at that point I took over the responsibility of creating the entire plan. I created a small working group within our office, and we added people that we thought were responsible, who could give us the details out in the various parts of the program — all elements of the program. We created a team and we launched this, and this was the result. We were determined to work through these five [00:10:00] logics. They ranged from basically a steady level of effort to a maximum level of effort. And we just started creating these things during the first six months of 1976. And this was the result. Nice. And so each of the logics is kind of a wiggly curve.
Did you go in knowing what the shape of the funding curve for each logic would be, or did you just go in with the framework that there would be five logics, and over the course of designing the program you figured out what the actual shape of those curves would be? Well, we created a rough definition of what each of the logics was supposed to look like — not in detail. For example, [00:11:00] Logic Two says moderately expanding, but the pace of progress would be limited by the availability of funds, and new projects would not be started unless we knew that funds would be available. So we knew that we could not address a lot of problems in parallel, and we had a general idea that this was a program that was not running at the maximum feasible pace. And then for Logic Three we said, well, let's look at one that's a little more aggressive. We would lay out, in that one, that as soon as these projects were scientifically justified, they would be in the plan — we would not wait until we knew that the funding was available. And we also said that in this scenario we would address a number of things concurrently rather than in [00:12:00] series. So we assumed that the funding was ample; we didn't have a number in mind. At that point we started laying these things out and asking people: if you had all the money you needed, what could you do? If you didn't have quite enough money, what would you do? And people working on all of these subtopics started responding to us. At the beginning we were mostly laying out what the topics were and what had to be worked on eventually to get to the end point, and how these topics could proceed at different rates and with different amounts of risk, depending upon the budget.
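The logics framework Dean describes — a level-of-effort curve at one end, a maximum-feasible pace at the other, with the "never get there" outcome below a threshold — can be sketched as a toy accumulation model. Everything below (the function, the dollar figures, the facility costs) is my own illustration, not numbers from the 1976 plan:

```python
def years_to_demo(annual_budget, sustaining_cost, facility_costs, max_years=100):
    """Toy model of a funding 'logic' (illustrative numbers only).

    Each year the base program consumes `sustaining_cost`; whatever is left
    accumulates toward the next major facility in sequence (proof-of-principle
    machine, engineering test reactor, demonstration plant). Returns the year
    the last facility is paid for, or None -- the 'never get there' case.
    """
    surplus = annual_budget - sustaining_cost
    if surplus <= 0:
        return None  # below the threshold: the base program eats the budget
    year, saved = 0, 0.0
    for cost in facility_costs:
        while saved < cost:
            saved += surplus
            year += 1
            if year > max_years:
                return None  # effectively never, within any plausible horizon
        saved -= cost
    return year

# Illustrative: $300M/yr against a $200M/yr base program eventually funds
# all three facilities; $180M/yr never funds any, no matter how long you wait.
print(years_to_demo(300, 200, [500, 2000, 5000]))  # 75
print(years_to_demo(180, 200, [500, 2000, 5000]))  # None
```

The point of the sketch is only the threshold behavior: a constant budget below the sustaining level produces research forever but a power plant never, which is the qualitative prediction the host credits the plan with getting right.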
So this was a sort of iterative thing that went back and forth between the community and our team, and we kept putting these together until they made some sense. Got it. And just to step back a second: before [00:13:00] you created this plan, all the activities were happening already — is that right? There were activities in all these areas that were ongoing? Yes, that's right, though at a relatively low level at that stage. In the early seventies, the total fusion budget was $30 million, and by the mid seventies, because of the energy crisis, we were told, you know, tell us what you want — and we had raised that budget from 30 million to 300 million. So the program had been undergoing, in the first five years between '72 and '75, a very rapid expansion, and we had started a lot of new programs. The program had been built up quite a bit, although all of these programs, because they were new, were still at a fairly early stage of their development. The other thing that drove the [00:14:00] curves was the recognition that getting to a fusion power plant required a couple of identifiable major facility steps. These actually came from that map I mentioned from '72, which said that the near-term experiments — like a physics proof-of-principle experiment — had to be followed by an engineering step, an engineering test reactor, and that had to be followed by a demonstration power plant. Those steps were big facilities, each one much more expensive than the previous one and making a much more definitive demonstration of fusion. And the wiggly curves that you see — not the smooth ones — have these bumps on them, [00:15:00] and those bumps reflect the fact that these major experiments were going to cost a lot of money.
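Those bumps can be sketched as a simple budget-profile model — a flat base program plus construction spending for each major facility spread over its build years. All names and numbers here are purely illustrative, not figures from the plan:

```python
def funding_profile(base, facilities, horizon):
    """Toy yearly-budget curve: a flat base program plus 'bumps' where major
    facilities are under construction. `facilities` is a list of
    (start_year, build_years, total_cost); each cost is spread evenly over
    its build years, producing a bump on top of the base budget."""
    profile = [float(base)] * horizon
    for start, build_years, cost in facilities:
        for year in range(start, min(start + build_years, horizon)):
            profile[year] += cost / build_years
    return profile

# A proof-of-principle machine, then an engineering test reactor, then a
# demo plant -- each bump bigger than the last, as in Dean's description.
curve = funding_profile(300, [(2, 4, 800), (8, 6, 3000), (16, 8, 8000)], 25)
```

Compressing `build_years` makes each bump taller but ends it sooner, which is the tradeoff Dean describes next: the faster you build the facilities, the sooner you reach the end point, at the cost of a higher peak budget.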
And depending on how fast you build them, you'd get a different pace to an end point — the faster you build them, the faster you get there, because these major steps really drove the progress and drove the budget. And do you think — I guess it's hard to think about, but — do you think that the plan helped anything, in the sense that if instead you had just continued with the program as it started, where I imagine it was much more bottom-up, how do you think the outcome would have been different? I think without the [00:16:00] plan, I don't know what would have happened. I don't think we would have gotten the support that we got in the next few years, during the seventies, because the outcome of this was that the plan was published with all of its detail and all of its budgets. It was published publicly. The Office of Management and Budget tried to stop us from publishing this plan, because they didn't want budgets out there that said, well, if the Congress would give you so much money, then you'd get the job done — because that would tie their hands. You know, they like to be in control of how much money they're going to give to every program, and so they don't want the agencies to put out plans with budgets. So we had to fight that. And luckily for us, the Energy Research and Development Administration — which was fairly new, and [00:17:00] actually only lasted a couple of years before it transitioned to the Department of Energy — had a head, Bob Seamans, who came from NASA. He overruled the Office of Management and Budget. He said, I'm in charge of this and I'm putting the whole plan out. So we published it. And it got picked up over in the Congress by Congressman Mike McCormack and his staff, and they became champions for this plan.
They created a legislative agenda, and they got the Senator from Massachusetts on board in the Senate. And by 1980 — I think it was in October 1980 — Congress had passed the Magnetic Fusion Energy Engineering Act of 1980, which basically adopted our plan for getting to the end point by the year 2000. [00:18:00] So the result of our plan was that Congress picked it up and passed legislation making it national policy, and it was signed by President Carter on October 7th, 1980. We thought at that point that we had a commitment of the United States government, at the presidential level, to implement the plan for getting there by the year 2000. The only problem was that President Carter signed it in October and lost the election for reelection in November. And as you probably know, whenever there's a change of administration — especially if it's a change of party — almost everything that the previous administration decided to do, the [00:19:00] new people want to either not do or completely reevaluate and start over. And that's what happened to this plan in 1981. Got it. And because, as far as I can tell, the way it's panned out is that we've followed below Logic One, right? Oh yeah. It was less — less than even Logic One. It's the "never get there" logic. But there's one caveat to that, which is that in the 1980s, Ronald Reagan was opposed to all of this energy stuff — until 1985, when he met with Gorbachev, and they decided to work together on fusion and build the first major step that was in our plan. We were going to build this engineering device in the 1980s, and he and Gorbachev decided: let's get together and build it together with Europe. [00:20:00] And this became the ITER project, which is under construction in France.
So what the program really did, to work around this problem of the budget being so low, was to say: okay, we're not on our own track, but we're on a world track, and we're all working together. So they're building this multi-tens-of-billions-of-dollars engineering test reactor, and it's taken them a long time to get it going, but it's hopefully going to be finished in a few years — hopefully turning on by 2025, first plasma. So we're way behind, but that was the response to being on this curve: to say, we're all in this together, and we don't have our own plan to get there, but the world has a plan and we'll get there together. That's how this all evolved. Got it. And so I guess, if I'm understanding this correctly, the [00:21:00] purpose and the value of this plan was less as a coordination mechanism for the people doing the work, and more as a communication mechanism with people outside the organization, in terms of what the work would entail — is that accurate? I can tell you that when I was doing this plan, I was in a senior management position; I had responsibility for the bulk of the program. I didn't have the basic plasma physics program in the universities, and I didn't have the technology part, but I had all the major experiments in my bailiwick. And I was still reporting to Bob Hirsch, who had all the energy programs in ERDA, and it was our intent to manage the program to implement this plan internally. It did turn out that part of our implementation required getting the money, and that all went through this energy bill in [00:22:00] Congress. We thought we had the whole thing put together: not only did we eventually have the Congress on board, but we also had a management structure, and we had 80 staff in the office then. And we were prepared to manage the program to implement this in detail, if we got the money.
So it was both a management plan for implementation within ERDA — but of course the other thing that happened in all of this was that ERDA was abolished and became the Department of Energy. I left in 1979, because I thought we were about to implement this plan, and I formed Fusion Power Associates. I got a dozen electric utilities, and a dozen major industries — companies like Westinghouse — to form this organization, to actually bring industry into the implementation phase of this program plan. We were all set to [00:23:00] go. Even in the early eighties, before the whole thing sort of fell apart, I had a dozen electric utilities in Fusion Power Associates. So we had both industry that wanted to do this and the electric utilities on board, and all we really needed was for the new Department of Energy to follow through with the management of this thing and try to get the money. But the money never came through. And the industries in Fusion Power Associates realized in the early eighties that there wasn't going to be any money for industry, because there wasn't any money coming through. And the electric utilities were deregulated under Ronald Reagan, and they abandoned their R&D departments — which were the parts of those companies in our organization that were interested in developing fusion. They were taken over by [00:24:00] business people in the utilities whose main purpose was to make money, and they were not interested in getting involved in brand-new technologies; they were only comfortable with the technologies they had. Yeah, that makes a lot of sense. And I guess, to go back — you mentioned earlier that this plan was part of a bigger trend of management by objectives. Do you think that management by objectives was effective?
And just because I feel like the modern idea is very much that plans like this — you know, multi-decade technical plans — are at best foolish and at worst detrimental. So what do you think about big plans for technology projects more generally? [00:25:00] Well, I'd just say that management by objectives was OMB guidance in the early seventies, and it soon disappeared from the framework, if you will. One of the things that happens in Washington every two years is that people change and administrations change, and whatever one group wants to do just goes by the wayside. So by the mid seventies, when ERDA came about, there was no management-by-objectives formalism still going on in the government; basically they start all over again with how they're going to try to do these things. And as this all evolved, up to the present: at OMB — I don't know, probably more than 10 years ago, 10 or 15 years ago — OMB said to fusion, you guys are not an energy [00:26:00] program anymore. You are a science program, and we are going to evaluate you and have you managed like a science program. And so they stopped even asking us for plans aimed toward an energy program. They said that we should go to the scientific community, take unsolicited proposals from the community to do good science, evaluate them under peer review by other scientists, and if it was good science, we should fund it. And we should not evaluate these proposals as to whether or not they are getting us to an energy source. So for over a decade now, the fusion program has not had an energy source as its goal, and it hasn't been funded or evaluated within the government as an energy program.
Now, this has all changed in the [00:27:00] last year — but only very recently. They're trying now to put the energy mission back into the mission, but it hasn't actually formally happened at OMB yet. Got it. And just to pull us back to management by objectives, and more broadly to having very concrete plans: do you think it was useful, or do you think it was just sort of a fad, almost? Well, it's been disappointing for me personally. It's been disappointing that we haven't actually done the plan, right? That's just the point: you spend so much effort laying out how you would do it and how you would make decisions, and you get everybody under your purview, out in the [00:28:00] community of people that you're funding, all set up to try to achieve these things, and you try to get them the money — and then it all falls apart. And then somebody tells you, well, we don't care, because we really don't care if you ever get there. That's been the attitude until very recently. So it's very demoralizing to everybody. Except that the scientific community itself is kind of immune from this, to some degree, as long as they get funded for research. As long as the universities are getting money for basic research in this area, and they're training students, and these students can get jobs — either in the private sector, or they start their own companies, or they go to work at government laboratories — as long as that is moving along with some reasonable degree of success, with people getting trained and doing work and publishing papers, there's a certain degree of apathy, if you will, [00:29:00] or even a certain degree of satisfaction, in the scientific community — since nobody seems to care if fusion ever goes on the grid. Yeah.
And so, counterfactually, if the money had been there — well, actually, one thing that I still find really impressive about the plan, although it is disappointing, is that you basically predicted that. You said: here's Logic One; if you're below this line, fusion won't happen. And indeed, you were right. That's one of the reasons I'm so impressed by it: it really did make a very precise prediction, and that prediction came true, although it is disappointing. If you could imagine that, say, the money came through — do you think this plan would have been useful, in the sense of: how much confidence do you have that you [00:30:00] accounted for all the things you would need to do, over the course of several decades, in order to get to fusion as an energy source? Well, as it says in the early part of the plan, these plans are not meant to be followed blindly in their detail. They are guidance to management, and management has to keep updating them — looking to see how they're doing, keeping an eye out for new discoveries, and revising the plans in detail to see if new things are emerging or some things are failing, or if the money is coming in in such a way that the plan's schedule has to be changed. That's why you need a management structure that's in place and following it — but not blindly following it. So I personally believe — if the management structure that we had in the [00:31:00] mid seventies had been maintained... You know, at that point I think we had 80 people in the office, and they were all management-oriented. Right now I think they probably have about, I don't know, maybe 15 people in the office, because they're running it like a research program — they're just taking proposals, getting them evaluated, and sending out money.
So they're not managing in the way that we would have managed, if we had had 80 people and the divisions that we had divided up — and we revised the management structure from time to time along the way. And I know what we had in mind: we were going to transition the money, starting out, into industry, to get these things built, and to bring engineering-oriented people more into the program. Because even in the mid seventies, the program was dominated by plasma physicists, and we were only in the process, at that point, of starting to bring in engineering [00:32:00] people. Still, the money went to the government's laboratories and their technology people — Oak Ridge, for example, has a big technology laboratory, so there were technology programs being developed in these laboratories, and a little bit of it was going out into industry on a job basis from the labs — but we didn't have a big industry program. And one of the things I did just before I left was to bring in McDonnell Douglas, a big aerospace company, to build an engineering center at Oak Ridge for fusion. That was sort of the last thing done. And when this whole thing folded in the early eighties, McDonnell Douglas was basically told to shut down, and they went away — they were eventually bought out by Boeing. So we had started a transition where part of the implementation of this plan was to bring industry in, to bring [00:33:00] that talent in. We had a bunch of people, for example, in Fusion Power Associates at the beginning who were the architect-engineers that were building nuclear power plants. So, you know, those were the people that we needed to implement the plan, but they were not quite in the program yet by 1980.
And when the money didn't come through, they just all disappeared from any plan the government had, because the government in the eighties was only interested in trying to make their scientists survive.

Yeah. And I guess you don't really see plans like this today, it feels like. I get the sense that creating plans like this — and more generally, competent technology management — is a bit of a lost art. Do you think that's true, or am I missing something? [00:34:00]

Well, I don't know if it's true across the board; they must be out there somewhere. When you look at big construction projects, the people that do those projects know how to manage. They know how to cost things out, they know the importance of keeping things on schedule, and they know how important it is to have pieces of the schedule coming in at the right time so that the whole project comes together. We tried to lay that out so it could be done for fusion, but I don't see it being done in the Department of Energy, and I don't know about other agencies. I have the feeling that maybe the Defense Department does it a little better on weapons systems, aircraft systems, and fighter systems with some of the big aerospace companies. My observation, from afar, of the Department of Defense is that [00:35:00] they do it the right way, but they're not on top of the cost and schedule, and they do get taken to the cleaners by these companies. But somehow or other they do get the job done, even if it's costing more than it should and taking longer.

Yeah. That's the thing — there's been this wider observation that since the 1970s, complex projects like this take longer and have dramatic cost and time overruns.
And there's this trend of it happening more and more. So I wonder what it is about the world that's changed. Do you have any hypotheses?

Well, you know, I'm not sure it was ever that good in the first place, because when I was there [00:36:00] in the seventies and we were laying out our plans, we thought we knew how to do it and do it right. But at the same time, within the Atomic Energy Commission, there was a nuclear fission program called the breeder reactor program, and it was a mess. And yet the industries out there, like Westinghouse and General Electric, were actually building nuclear power plants in those days, and they were building nuclear reactors for submarines in those days. So those programs were actually working, but at the department they were working on advanced reactors and they weren't getting them done. They eventually had to shut down the breeder reactor program because it just didn't seem to be working. So I'm not sure the government, at least the part that I knew, ever did that well. You know, when Admiral Rickover wanted to put a [00:37:00] nuclear reactor in a submarine, the Navy wanted to fire him, and the Department of Energy wanted him to put the program into their national laboratories. He had to fight them tooth and nail, through his friends in Congress, to get put in charge of the program and be allowed to put the work out to General Electric and Westinghouse. He had to fight them, and this was back in the sixties. So I'm not sure the government itself was ever very efficient at any of these things. Now, I have to say that NASA seems to have a good reputation, and if it's true, I attribute it to the fact that Kennedy went public and made it a national priority to get there by the end of the decade, and he demanded that they do it in a way to make it happen.
And he had the backing of Congress, and he set up a whole new agency focused on [00:38:00] just that — and they got there. So I have to say that was a success story, and it remains a success story today with the evolution of a commercial industry that's coming out of all of that. All this is quite a few decades later, but nevertheless they seem to have done a good job. I've never been in NASA, so I can only see it from afar — I'm sure there are some problems within it — but somehow or other it proved that we could get it done. And going back further, to the Manhattan Project for the atomic bomb: it was clear that when there was a commitment from President Truman — or maybe it was Roosevelt — to do it, and the Army was set up to take charge of it, they put a general in charge, they went to Los Alamos, and they forced the deployment of the atomic energy laboratories to work on the problem at hand, to get it done in a short amount of time. When you have that kind [00:39:00] of leadership and management, it seems like it can be done. But it all depends on management, and that's rare in government — and I would say it's rare even outside of government as well.

And so I guess the upshot of this for me — and correct me if this is wrong — is that you feel it's much more about the individuals in charge than it is about the process of planning and roadmapping techniques.

Yeah, absolutely. I can't tell you how many plans have been made since the one you were looking at that have gathered dust on shelves. Almost every other year the program launches a new plan; it finishes the plan, everybody says whether they like it or not, it's not implemented, and a couple of weeks [00:40:00] later they'll turn it over to the National Academies to evaluate, or propose a new plan.
I can't tell you — there are countless plans in fusion gathering dust on shelves over the past 40 years. I mean, it's the managers, the people that want to implement the plans, that supervise the plan; as long as they're there, they'll implement it. But as soon as they're gone, somebody else comes in and maybe makes a new plan, or makes no plan at all — just tries to keep things alive.

And what would you think about — I feel like the modern ethos is that planning isn't that useful, that you should just go and start doing stuff. So if we imagine a counterfactual world where you [00:41:00] have consistent management, but they don't have a plan — how do you think that would go?

I'm not quite sure what you said, but let me give you an example: this big international project, ITER, in France. It was started by Ronald Reagan in 1985, but it didn't really get launched as a serious construction project until 2006. And it very rapidly became something that was getting behind schedule and over budget, and it was completely out of control until about 10 years ago. They had a management review and said, we've got to get control of this project. They brought in the guy that's now the director, Bernard Bigot, and he took charge of it, and now he's got the thing reorganized. He's got countries [00:42:00] from all over the world on a schedule to deliver this piece of equipment or that piece of equipment at a certain time; he's got them all being delivered in a sequence and put together in a sequence. He's got a great management plan, and he's been keeping the thing on schedule for the last five years. I have great confidence he's going to get the job done. But it all started with putting somebody like him in charge who knew what he had to do.
He had to have a detailed plan for everybody working together, and he totally took charge. Before that, every country that had part of the job wasn't controlled, and there was no control if they got behind. Sometimes the director in France didn't even know until it was too late to get it back on schedule — and he didn't control the money anyway; each country controlled its own money. So, you know, I think it all comes down to management, and then the management [00:43:00] makes the plan.

Yeah. And — we'll see — I do think it's worth noting that there's also a philosophy of management that says management shouldn't actually be imposing a plan on people. They should let it be very bottom-up, right? Instead of planning — you don't know what's going to happen, so you should just let ideas bubble up from the bottom and let people work on what they think is the best thing to work on. Right?

Well, you know, managers are managers of people, and they oversee people. In a company there's somebody at the top and somebody under him, but underneath them, in big companies, there are thousands of people doing their bit. So a manager doesn't just say, hey, we're going to get this done by tomorrow or next week. He supervises all these [00:44:00] people, and these people feed him the information and help create the plan. They all have to be on board and supervised properly all the way down the line, through a management chain. So it's not like one person does the whole plan by himself, or with a couple of people in his office; he supervises the preparation of a plan with the community. So I had dozens of people around the country who helped prepare this plan. I helped them piece it together.
And, you know, I helped organize the structure of the whole thing, but it was an ongoing interaction that went from the bottom up, with guidance from the top down — back and forth through the whole process.

Got it. So you could almost think of the plan as a coordination mechanism, in a way.

Absolutely. Because the managers can't actually do the work. [00:45:00]

Yeah. And they probably can't know enough detail to be able to say accurately —

They don't know that level of detail. If there's a problem, for example, they can say, okay, let's fix that problem, and go back to the people that know about it and tell them: you guys go out and find out how you're going to fix this problem, and come back and tell me how you're going to do it. But then the manager has to approve it. If he thinks it hasn't been done right, he'll go back to them until they get it right.

So, I guess another interesting thing about the plan is that at some point someone was willing to make a prediction a decade or more out. And that's an attitude — I see people as being very hesitant to make predictions on that timescale now, or at least with that [00:46:00] amount of precision; people make very hand-wavy predictions now. Do you think there's been some kind of attitude shift around making predictions like that?

Well, it's been changing in the last year or so. There's been a lot of planning activity going on, and you'll see time schedules in all of it. Right now there's a whole bunch of companies all saying: by 2030, or 2040, or 2050, and so on and so forth. And there's a goal that's been proposed to have fusion on the grid by 2050, in order to participate in climate change solutions.
So there's a lot of thinking about this, and a lot of people putting out what they think is a reasonable, achievable timeframe. And it's interesting that these timetables are all [00:47:00] one, two, or three decades out — almost the same timescale that we had. So it's not uncommon to think that almost anything thought to be technically feasible can be done in 10, 20, or 30 years, depending on how difficult it is. It's pretty easy for people to think something can be done on those kinds of timescales and then start backfilling the details to see how it can be done and what it costs.

Yeah. I think the thing that strikes me as different between the predictions I see now and what you worked on is that the fusion plan's predictions were very precise. It wasn't, oh, we'll get this thing working by this time; it was, okay, we need to show this experiment, this experiment, and this experiment. And there were also very clear intermediate results and different pathways — all of which I don't [00:48:00] see in modern predictions, where it feels like it's: step one, start project; step two, question mark, question mark, question mark; step three, 30 years later, have this amazing result.

Well, you see, our timescale of looking to around the year 2000 didn't come out of whole cloth. It was set by the fact that we were in a physics phase, and we had just authorized the construction of a physics demonstration called the Tokamak Fusion Test Reactor at Princeton. In 1975 we had already launched construction of that, and we knew that to get to a power plant we had to make two major steps: one was an engineering facility, and the next was a demonstration power plant.
And the time to construct those things is kind of known — it takes [00:49:00] five years to build them and five years to run them. So each step was a 10-year step, and that gets you to a 20-year timetable. Really, the time to build those two facilities and operate them set the timescale of 20 years, more or less — give or take a few years, depending on how fast the money came in and so on. So we had a reason that the 20-year timeframe was sort of set: we couldn't get there any faster, because we couldn't go direct to a power plant.

Right. And I guess, two questions: one, how do you think about the difference between an engineering project and a physics project? And two, how did you know you couldn't go direct to a power plant?

Well, if you [00:50:00] look at all the pieces of a power plant, you'll see there's an awful lot of stuff in there that is not needed for a physics experiment. In a physics experiment, you know what makes up a fusion plasma; it has a whole bunch of diagnostics on it, and you're not sure what it's going to do, so you have to allow for surprises, and then you have to do theory and computation to see if you understand what's going on. All of that requires people who understand the physics. For a power plant, you have to actually have confidence that the plasma you're making is going to sustain fusion for a long period of time and produce heat that can then be converted into electricity. And that means the power plant doesn't have room for a lot of diagnostics, for doing experiments to figure out [00:51:00] what's happening. You have to have high confidence that when it turns on, it's going to run, and not have to be shut down every day or every week to be fixed. Right?
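The schedule logic Dean describes — two sequential facilities, each roughly five years to build and five to run — is simple enough to sketch as arithmetic. A minimal illustration (the step names, the 1980 start year, and the round five-year durations are just the figures quoted in the conversation, not taken from the plan document itself):

```python
# Sketch of the plan's schedule logic: two sequential major facilities,
# each ~5 years to build and ~5 years to operate, as quoted in the interview.
steps = [
    ("engineering facility", 5, 5),        # (name, build_years, run_years)
    ("demonstration power plant", 5, 5),
]

start_year = 1980  # assumed start year for illustration
year = start_year
for name, build, run in steps:
    year += build + run  # each step is a 10-year block
    print(f"{name} complete: {year}")

total = sum(build + run for _, build, run in steps)
print(f"total: {total} years -> power plant around {start_year + total}")
```

Two 10-year blocks in series give the 20-year horizon — which is why, as Dean notes, the timetable could only shift by "give or take a few years" with funding, not collapse.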
So all those things require technology and engineering development. There may be a thousand major components — or hundreds, if you combine them in the right way — going into a power plant that has certain functions. Each of these has to be developed by engineers and a company; it has to be run and tested for long periods of time to see where it breaks, to see how to fix it, how long that takes. All of these things have to be demonstrated before you put it all together. Otherwise, when you put it all together in a power plant, it's too late, because you can't just take the power plant apart again and start over. So the engineering and technology has a whole separate track of development that requires [00:52:00] testing, and development of codes for manufacture — materials have to have codes. How long will they last in this environment? When will they fail? There's a whole skill set around time-to-failure and time-to-repair that engineers work with and physicists don't. If it breaks, it breaks; they fix it, because it's a small piece, and they put pieces in — it takes them maybe a few weeks. But for a major piece of a power plant, it might take you a year to take that piece out, repair it, and put a new piece in. Meanwhile, you're not making any money selling electricity. An electric utility will not buy a power plant like that until someone has shown that every piece works, and works all together, and can be fixed fast if it breaks. [00:53:00]

Yeah, interesting. So in a sense, engineering work has a lot more to do with robustness than physics does.

Once you know the physics, it's an engineering problem — like powering commercial aviation.

Okay. Yeah. I guess in my mind there's still a lot of research work to be done in engineering problems, even if it is "just" an engineering problem.
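The time-to-failure / time-to-repair skill set mentioned above reduces, in its simplest steady-state form, to availability = MTTF / (MTTF + MTTR): the fraction of time a component is actually running. A minimal sketch — the component names and every hour figure below are invented for illustration, not from the conversation:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: fraction of time the component is up."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Invented illustrative numbers: a small lab component with a ~2-week fix,
# versus a major power-plant component with a ~1-year repair, echoing the
# contrast Dean draws between a physics experiment and a power plant.
lab_diagnostic = availability(mttf_hours=2_000, mttr_hours=24 * 14)
major_component = availability(mttf_hours=8_760 * 5, mttr_hours=8_760)

print(f"lab diagnostic availability:  {lab_diagnostic:.2%}")
print(f"major component availability: {major_component:.2%}")
```

The point the arithmetic makes: even a component that fails only once in five years drags availability down hard when the repair takes a year — which is why a utility cares about repair time as much as failure rate.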
There's a melding of physics in it — that's what they call applied physics, and there's basic physics, and there's technology, and there's engineering. All of these have slightly different slants and slightly different communities. And that's one of the functions of management: to work on a timeframe, and with money, to meld these things in the proper sequence to get where you need to go. That's why a program like fusion has to evolve from [00:54:00] totally physicists, to a mix of physicists and technology people, to a mixture of engineers, to commercial companies that do costs and schedules and all of this stuff. This all has to be supervised by management.

Got it. And a nitty-gritty thing I'm interested in is: how did you think about budgets, and how much things would cost? Because I feel like there are no good canonical resources on how to think about how much research programs cost.

Well, the way we did it was we divided it into systems and subsystems, and we went to the people working in each area and asked them to go into more depth — that's what's in our other volumes. So we had teams of people in all these areas, and [00:55:00] then we used people from industry and from utilities that had done similar things. We looked at the cost of nuclear power plants; that was a big part of our thinking, because we knew the fusion plant had to compete. So, you know, the skill set was all out there, technology-wise, for the power plants, because a fusion plant is almost like a nuclear power plant, except the fuel is different in the center. I mean, it doesn't look the same, but it has all the same pieces to get the power out. So there were a lot of skills out there that we were able to draw from, and we did the best we could. We can't claim the numbers were exact.
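The bottom-up costing approach described here — subsystem estimates from the working groups, rolled up into a total with a contingency on top — can be sketched in a few lines. Every subsystem name, dollar figure, and the 25% contingency rate below is invented for illustration; none of them come from the plan:

```python
# Sketch of bottom-up cost roll-up with contingency. All names and
# dollar figures (in $M) are hypothetical, for illustration only.
subsystem_estimates = {
    "magnets": 420.0,
    "blanket and shield": 310.0,
    "heating systems": 180.0,
    "balance of plant": 550.0,
}

CONTINGENCY = 0.25  # assumed 25% allowance on top of the point estimates

base = sum(subsystem_estimates.values())
total = base * (1 + CONTINGENCY)
print(f"base estimate: ${base:.0f}M, with contingency: ${total:.0f}M")
```

The contingency factor is the mechanism Dean alludes to next: it keeps individual teams from having to low-ball or high-ball their own numbers, since uncertainty is carried explicitly at the roll-up level.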
And we put some contingencies in there. We didn't let them low-ball or high-ball us, because the numbers had to fit into the different logics for how much money might be available, and so on. And we didn't say these numbers were set in stone — that they were absolute.

Yes. Yeah. And how did you think about places where there's deep uncertainty — a physics problem where you would actually need some kind of discovery to get the thing to work? Because it seems like there could be a situation where you make that discovery next year, or it could take you 10 years to figure out.

Well, if you look at, say, the Logic III reference option — page 12 of the blue-covered volume — you will see that there are a variety of paths. The tokamak was the lead path, and we laid out a reference line for it to get there by a certain date. But underneath that, there's a path for alternate concepts, and there were decision points which said, well, if these [00:57:00] things come along — and there's even one dot at the bottom labeled "other," for things in very early stages of proof of principle, but we knew those things might come to fruition. We laid out a timeframe, hoping we would fund those so they could be evaluated. And if those things came to fruition, they would transition to a next step. So that was all taken into account in the decision points, as to when some of these things might happen. And of course, if something really radical were to come along on one of these other paths that's listed — I'll say, you can see one, if you have it in front of you.
But under "other," you'll see a decision point in 1985 where we were going to try to bring some of those things to a decision. [00:58:00] If it looked like a positive one, we would proceed to what we called a prototype engineering power reactor, and it would take the place of the one up above, the Tokamak EPR, which would already have been under construction if we kept following the tokamak path. But if this other one came along, we would start it on its own track to compete, in 1985, and it would pick up on its own track and come in later. At that point it might become the favored path — or maybe there'd be three paths; we didn't say there could only be one winner. So you could eventually wind up with several: the earliest ones might come online around the year 2000, but some of the others maybe in 2005 or 2006 if they were better, and they'd be [00:59:00] options for the utilities if they were better.

Got it. Yeah, this is so cool. One of the really big takeaways that keeps coming through is the consistency of management — and not so much the plan itself, but having a plan. And I think that's what you see not happening. And I guess, pulling us to today: do you have a sense of which things happening in fusion now you think are most promising?

Well, you know, I don't want to get out on a limb to pick winners and losers, because Fusion Power Associates really is a home for all of these people, and I encourage them all. There are people we will not let into Fusion Power Associates — they're out there, but their claims are almost crazy, and I wouldn't want to be associated with them. They are few and far between. [01:00:00] Fortunately, most of the alternates out there — these little companies — have been formed by good fusion people.
People who have fallen on bad times because the government started putting all its money into tokamaks and stopped funding their offbeat ideas. So these people branched out and got support on their own. I know some of these people, and they're good people, and their ideas deserve to be pursued. But the truth is that most all of these are at what we used to call the proof-of-principle stage on their physics. They are not fully-thought-through power plants, and their physics is not fully developed — or at least not far enough along to know how probable their success is. They should be pursued; there has to be room in the program for them, because improvements come along in any technology — the first thing that comes out is not going to be the best thing 20 years after. So I encourage all these things if they're credible people. And right now there are a couple of [01:01:00] things in the tokamak area. The tokamak mainline is the conventional tokamak, represented by ITER, but there are variations. There's one, Commonwealth Fusion Systems, a spinout of MIT, that's almost the exact same concept as the mainline tokamak, except they're using new high-field superconductors, which make the machine smaller, and which also allow them to disassemble and repair it faster than the conventional tokamaks, because the magnets come apart in a different way. And the exhaust system they've designed is more efficient, so it may help with some of the materials problems of the conventional tokamak. So it does look like a much-improved tokamak, and they're getting money and they're trying. [01:02:00] They've got a facility they've committed to in Massachusetts, and they're trying to build, in one step, a physics demonstration followed by an electricity generator. So I have great hopes for them, if they can get money. They're privately funded.
They're hardly getting any government money at all — I think the government's helping them a little bit with some support work in the labs, but basically it's a private-sector venture, and I think it's one of the most promising. And there's another variation of the tokamak called the spherical tokamak. The British are going gangbusters on that: they've got one in operation, they've got a company that's also built one, and they've got a site for building a next-step machine, where they hope to build the actual electricity-generating power plant. So that variation of the tokamak is also looking very promising, and the British are way out in front on it, although [01:03:00] the idea first came out of Princeton, which actually built one of those and has another coming into operation in a couple of years that would support that line. So there are a couple of variations along the tokamak line that are looking very good. All the other things you hear about are at a somewhat earlier stage of development, but they're all doing good work. TAE — Tri Alpha Energy, in California — is probably the most radical of them all, but they are the farthest along of these alternates, and they've had success along the way. They've built two or three generations of machine, and they're trying to get money for a really major step that would demonstrate most everything they want to demonstrate before going on to a real power-producing machine. So I have hopes for them too. There's another company in Canada called General Fusion that is perhaps a little farther behind, but they're working with the British [01:04:00] too. So that's a promising area, and I have hope it will evolve.
This actually made me think of a question, which is: now, as you alluded to, all the fusion development is being done by these separate private companies, which stands in contrast to the fusion plan, where implicitly everything was at least managed by a central management team. What do you think about those two different approaches to getting to a technology — the "let a thousand flowers bloom" of private companies, versus a much broader program?

Well, I think in the last maybe five years or so, times have changed in that regard. In the seventies, and up until very recently, it was [01:05:00] only the governments that seemed able to afford to do this, given the timescale and the cost. So if fusion was to come to pass, the government had to step up, or the international governments had to step up and work together; it seemed like the only way to get there was for the government to do it, because of the cost. Now it seems things have come along far enough, especially in the tokamak area, that some private companies are coming up with what they think are ways to fund what they want to do — to demonstrate what they need to demonstrate — because their ideas rest, at the moment at least, on relatively inexpensive facilities. Now, they are going to run up against a funding problem if they're successful. In the near term they're getting hundreds of millions of [01:06:00] dollars, some of them, from private investors, and they're building some things, and hopefully they'll be successful. But these will not be power plants, and so they will have to be so successful that they can get much, much larger amounts of money. They may have to be bought out by a Westinghouse or something in order to become real power-plant manufacturers.
These are not industries yet, even though they have what they call an industry association. They are small companies — maybe big by some companies' standards, but they are not really money-making companies, and they don't have their own money. So they have to keep getting money from investors. And even though getting a hundred million or two hundred million dollars from some billionaire or venture capital firm is doable these days, getting a billion for the next step is a much different [01:07:00] problem, because there isn't going to be a real fusion demonstration plant built for less than a couple of billion dollars, and private money doesn't come that easily at that level unless the thing being built is going to make money back fast.

Stephen Dean, thanks for being part of Idea Machines.
Policy, TFP, and Airships with Eli Dourado [Idea Machines #38] 1:06:39
Eli Dourado on how the sausage of technology policy is made, the relationship between total factor productivity and technological progress, airships, and more. Eli is an economist, regulatory hacker, and senior research fellow at the Center for Growth and Opportunity at Utah State University. In the past, he was the head of global policy at Boom Supersonic, where he navigated the thicket of regulations on supersonic flight. Before that, he directed the technology policy program at the Mercatus Center at George Mason University. Eli's Website | Eli on Twitter | Transcript | audio_only

[00:00:00] In this conversation, Eli Dourado and I talk about how the sausage of technology policy is made, the relationship between total factor productivity and technological progress, airships, and more. Eli is an economist, regulatory hacker, and senior research fellow at the Center for Growth and Opportunity at Utah State University. In the past, he was the head of global policy at Boom Supersonic, [00:01:00] where he navigated the thicket of regulations on supersonic flight. Before that, he directed the technology policy program at the Mercatus Center at George Mason University. I wanted to talk to Eli because it feels like there's a gap between the people who understand how technology works and the people who understand how the government works, and Eli is one of those rare folks who understands both. So, without further ado, my conversation with Eli Dourado.

To jump directly into it: when you're on a policy team, what do you actually do?

Well, that depends on which policy team you're on. So, in my career — do you mean in the public policy, research center, think-tank kind of space, or in a company? Because I've done both.

Yeah, exactly. Oh, I didn't even realize those are different things. So, I guess, let's start with [00:02:00] Boom.
You're on a policy team at a technology company, and —

Yeah. So when I started at Boom, we had a problem, which was: we needed to know what landing and takeoff noise standard we could design to. We needed to know how loud the airplane could be and how quiet it had to be, and there's a big trade-off on aircraft performance depending on that. So when I joined up with Boom, the FAA had what's called a policy statement — which is some degree of binding, but not really — that they had published back in 2008. It said, we don't have standards for supersonic airplanes, but when we do create them, during the subsonic portion of flight we anticipate the subsonic standards will apply. [00:03:00] And for landing and takeoff, which is the big thing we were concerned about, that's all subsonic. So the FAA's going-in position was that the subsonic standards applied to Boom. I joined up in early 2017, and my job was basically: let's figure out a way for that not to be the case. It was: look at the whole space of actors and try to figure out a way for that not to be true. And so that's what I did. I started talking with Congress, with the FAA; I started figuring out what levers we could push, what angles we could work, to make sure we got to a different place — a different answer — in the end.

And so basically it's this completely bespoke process of [00:04:00] even trying to figure out what the constraints you're under are?

Exactly. Right. So there were a bunch of different aspects of that question.
There's statute: laws passed by Congress that had a bearing on the answer, going back to the 1970s and before. There was the FAA policy statement. There was the FAA team itself, which you had to develop relationships with and work with. You had the industry association we were a member of, which included a bunch of other companies in addition to Boom: Aerion, which is no longer operating; Gulfstream, which no longer has a supersonic program (or rather, never officially admitted to having one, so it was never really announced dead); GE; Rolls-Royce. All these companies coming together, [00:05:00] under the watchful eye of Boeing, of course, and the industry association had to have a position on things. Then you had the international aspect. There's a UN agency called ICAO that coordinates aviation standards among all the different countries, and you had the European regulators, who did not like the idea of American startups doing supersonics, because the European companies weren't going to do it. So they wanted to squash everything, and their position was: no, the subsonic standards totally apply. That's the environment I came into. I had to build a team, figure out an approach, and try to make it not be the case that the subsonic standards applied. So we tried a bunch of things at first.
We tried to get our industry association geared up to fight this, and they didn't want to do that; [00:06:00] the other members didn't want to do that. We tried a bunch of different angles. What we ended up doing was getting Congress excited about it. There was a draft bill with some very forward-leaning supersonic language that we worked with Congress on. It never passed in exactly that form, but it passed later in the 2018 FAA reauthorization. And then the thing that actually ended up working was an idea I had in late 2017: the subsonic standard changes at the end of this year, the end of 2017. So I said, let's apply for type certification this year. We were nowhere close to an airplane. Nowhere close. But I said, screw it, we're going to apply in 2017. I had to get the execs to sign off on that, and we did. [00:07:00] By the end of December 2017 we applied. Of course, I talked to my FAA colleagues and told them: hey, we're going to apply, just so you know. They said, well, that raises a whole bunch of questions. And that got them working down this path. They said: under Part 36 of the FAA rules, you only have five years to keep that noise standard if you apply today, and you're probably not going to be done in five years. And I said: that's true, we're probably not going to be done in five years, but we think Part 36 doesn't apply to us at all, the way it's written.
And they went back and looked at it, and they said: oh, Part 36 doesn't apply to them; they're right. Eli is the first person in the history of supersonics to read Part 36 very closely. So they went back and talked to their lawyers and, I think, came up with a new position, a new legal interpretation, [00:08:00] basically a memo that was published saying: okay, the subsonic standards don't apply, and we don't have supersonic standards, so we can start making some; and if we don't have one at any time for any particular applicant, we can make one for that applicant. It's called a rule of particular applicability. Once we got that, in early 2018, that kind of solved the problem. At least the domestic part; it didn't solve the international part, from Europe and so on.
So if you think about what you do on a policy team: you figure out how to solve the problem you were hired to fix, and you just try things until something works?
That's part of the answer, yeah.
I really appreciate you going into that level of detail, because the affordances of these things seem incredibly opaque. [00:09:00] And just for context: the subsonic standards set a very stringent noise bar?
It's very stringent. The modern standards are pretty stringent. It used to be that you basically couldn't stand on a runway and have a conversation while a plane was taking off.
These days it's gotten very impressive. But the modern planes have gotten that way because they have high bypass ratios: engines with big fans that move a lot of air around the engine core, not through it. And that's just not workable when you're trying to push that big fan through the air at Mach 2.2, which is what we were doing (it's now Mach 1.7 at Boom). So that just doesn't work as a solution, which is why it had to be different.
Right. And did you say it was [00:10:00] Part 36?
Title 14 of the Code of Federal Regulations, Part 36. That's the part that specifies all the takeoff and landing noise certification rules for all kinds of aircraft.
Got it. And there's particular wording in that part that didn't apply, as it was written in 2018?
I think they've now changed some of the definitions. They went through a rulemaking to cover some supersonic planes, although, interestingly, still not Boom's plane. It covers planes basically between Mach 1.4 and Mach 1.8 and below a certain weight limit. So business jets, small, low-Mach business jets, would be covered under the new rule. As part of that, they might have changed the definitions, [00:11:00] I forget the details, so that the five-year time limit and things like that might now apply to Boom.
Got it. Okay. So at a company, the policy team is really going after a specific problem the company has, and figuring out a way to address it?
I mean, that's how that one was.
There are different companies, though, and some companies are playing more defense than offense. Think of a company like Facebook, where the First Amendment applies: they have all the legal permission to operate as much as they need to, and they're mostly putting out fires, people wanting to regulate them as a utility and things like that. So it's more of a defensive mode at those companies. [00:12:00] It's going to vary from company to company depending on what you need to do. You just have to be aware of all the different tools: you can go to Congress and get them to do something, you might be able to get the executive branch to issue an executive order, you might be able to get a new rulemaking or new guidance. There's a whole host of tools in the toolkit, and you've got to be able to think about the different ways you can use them to solve your problems.
This is perhaps getting ahead of ourselves, but speaking of those tools: what, in your mind, is the theory of change behind writing policy papers? You see policy papers being written, and then policy happens, but there's this big question-mark black box between those two things.
There are definitely different theories. Before I started at Boom, when I was at the Mercatus Center, Sam Hammond and I [00:13:00] wrote a paper on supersonics, and that one, I think, actually was really influential.
We published it a month before the 2016 election, when we thought Donald Trump was going to lose, and we titled it, sort of as a joke, Make America Boom Again. So the slogan was perfect. And then, lo and behold, Trump gets elected, and when his administration got constituted in January 2017, that paper circulated, and people said: okay, this makes sense, we need to be very forward-leaning on supersonics. Now, we still haven't changed the law we said was most important in that paper. What we said was that we need to repeal the overland ban and replace it with some kind of permissive noise standard that lets the industry get going on overland flight. But I think it was influential in the sense that it was reference material [00:14:00] that a lot of different policymakers could look at quickly and say: there are some good ideas behind this, we should support this broadly, it came from a reputable outlet, and it's got all the information we need to be able to move this idea forward independently.
Got it. So really a lot of it is just tossing things out there and hoping they get to the person who can make a decision?
Well, ideally you're not just hoping. Ideally you're reaching out to those people, establishing relationships with the right people, and getting your ideas taken seriously by everybody who matters in your field.
And, again coming from [00:15:00] someone who's completely naive to this world: how do you figure out who the right person is?
Well, I think it depends on what you need to do.
If you need to repeal an act of Congress, you've got to go to Congress. That's one example. But a lot of times the right person is not just one person. There's also a move where you're really just trying to go after elites in society, however you define that; I don't know what the right definition of the term is. If you can get a consensus among elites that supersonic flight should be allowed over land, or that the government should invest deeply in geothermal energy, or that we need a program for ornithopters, whatever it is, [00:16:00] it's pretty likely to happen. Elites still control the stuff that nobody else cares about; if elites care about something, they'll get their way.
One pushback to that, which I actually wanted to ask you about: there's this view that in a lot of cases regulation encodes a trade-off into a calcified bureaucracy and then seals it off. A specific example: you could make the argument that nuclear regulation, as opposed to being about health and well-being or the environment, is actually encoding the trade-off that, in order to absolutely prevent any nuclear proliferation at all, [00:17:00] we basically just make it so you can't build new nuclear things. What do you think about that view of technology regulation?
I think nuclear would be one of the hardest regulations to change. You're taking an entire agency, the Nuclear Regulatory Commission, and saying we have to completely change the way it operates. If I were at one of these fission startups, my job as the policy lead would be to completely change the way this entire agency operates. That seems really hard; that's really challenging, and frankly I'm not optimistic about their success. So on the more research-y, nonprofit side of policy that I do now, a lot of what I'm looking for is areas where it isn't hopeless: where you can work, and where you only need a small change that makes a big difference. [00:18:00] You're trying to find those leveraged policy issues. That's how I think about it, and it's issue selection. When you're in the nonprofit world you have that luxury, which you don't necessarily have in the for-profit world. I think that's really important, and it's what separates good policy entrepreneurs from bad policy entrepreneurs: that awareness of issue selection, of small changes that make a big difference.
So let's dig into that. How do you look for that leverage? What signals to you that you could actually make a big difference by changing a small thing?
Supersonics is a great example; that's one I chose to work on for several years. If you could get rid of the overland ban,
one line in the Code of Federal Regulations that bans supersonic flight over land, you [00:19:00] would unlock massive amounts of aerospace engineering development in a completely new regime of flight that no one else is doing. You'd get rapid learning down that curve: engines developed specifically for that use case, variable geometry, everything being developed for airliners and so on. You'd make a big difference in the future of the industry and in the state of the art for flight. And even if you couldn't change it internationally, if you could change it just in the US, the US is big enough: LA to New York and so on, plus all the transoceanic markets that Boom is going for now. Combining those two markets, you're maybe doubling the market size for those planes, and you'd get a lot more investment. So it would be [00:20:00] a huge improvement. That's a highly leveraged one.
One I'm working on a lot more lately, as I'm sure you've seen, is geothermal. I think there's no real policy blocker there, but the thing I've been focused on is permitting. There's a huge overlap between the prime geothermal locations and federal lands, so a lot of it is on federal land: you need the federal government to give you a lease, and you need their approval to drill the well.
That approval brings in environmental review and so on. Conveniently, the oil and gas industry has gotten itself exempted from a lot of those environmental review requirements, and my argument is that geothermal wells are basically the same as oil and gas wells, so if they're exempted, geothermal should be too. That would speed up the approval time from something like two years to something like two weeks. [00:21:00] You massively speed it up. And that acceleration on federal lands alone, without changing anything on private or state lands, could bring the timetable for the geothermal industry as a whole forward by a few years. So: one small change. If you think about what that is worth socially, it's many billions of dollars. So if I spend a year of my time working on that and get it changed, my ROI for society for that one year is many billions of dollars, which is a pretty good way to spend my time.
Yeah.
There are other things, too. I'm really interested in enhanced weathering: using olivine to capture CO2. It's a neglected thing; I think policymakers just don't know about it. If I could [00:22:00] educate them and get buy-in for some sort of pilot program, or whatever the right answer is for that, and I'm not sure what it is exactly, then potentially you capture many gigatons of CO2 for ten to twenty dollars a ton.
That's pretty cheap, and we'd solve a lot of other climate problems. The cost of dealing with climate change would maybe go down by something like an order of magnitude. Again, pretty highly leveraged. So those are some examples of why I've chosen to work on certain areas, and I'm not saying they're the only ones by any means. What makes a good policy entrepreneur is figuring out what those are.
To push a little bit more: is there something people could do to [00:23:00] find more of those leverage points? I guess there are maybe two approaches. One would be to take an area of interest and comb through the laws, basically looking for point changes that would unlock things. Or is there a way to look for potential point changes agnostic of the area?
It's a great question. I've been trying to talk to people about how to systematize this, which I think is what you're asking, and I've been thinking about what my own system is, such as it exists. I think the right answer is to come at it from the perspective of the entrepreneur. Think about it from the perspective of a company that is trying to do a thing, or a company you wish existed that was trying to do a thing: what [00:24:00] would they run into? What is the actual policy obstacle they face? I think that is the most constructive way to do it.
To give you an example of a different approach: a bunch of our friends are working on the Endless Frontier Act, which is a complete rethinking of the entire science funding and technology funding system. That is a different approach, and we probably need some people working in that modality as well. But for me, at least, it's more effective to work bottom-up: here's this thing I want to exist in the world, here's the specific narrow problem it would face if someone tried to do it, let me work on that as much as possible.
I think another thing that's really important is that the policy analyst should try to learn as much [00:25:00] as possible, on a technical level, about the technology and how it works: the physics of it, the chemistry of it, whatever it is. A lot of policy folks don't. They think: I'm going to deal with the legal stuff, and I'll go to the engineers if I have a question, but I don't really want to learn it. I think that's not helpful; you want to get into the weeds as much as possible. At Boom, I sat people down all the time and said: I need you to explain this to me, because I don't understand it. I had tons and tons of conversations with the engineering team, and with people who weren't on the engineering team but understood things better than me, and over time it got to the point where I understood these airplane design trade-offs pretty well.
And then, when I'm talking to a congressional staffer or [00:26:00] someone at a federal agency, I can explain it to them in a way they can understand. So: thinking from the bottom up, putting yourself in the position of the entrepreneur working on the problem, and not being afraid to dig into the technical weeds. Those are the things I would encourage other people working in policy to experiment with, and I think they would make them more successful.
On that note, another thing I wanted to ask: do you have any opinions about how to get more technical people into government and policy, and, vice versa, how to help more government policy people actually understand technical constraints? I find that very often, and I had this instinct too, people think: I don't understand policy, so I'm just going to avoid [00:27:00] anything that touches government. And that seems suboptimal.
Yeah, it's something I think about a lot, and we're thinking about it a lot at the CGO, actually. When we train people up, young policy analysts, how do we get them to engage with the technical side? We're exploring ideas for how we would do this. Could we bring in young policy analysts and mentor them, or teach them how to self-teach some of the technical stuff and work through it? Or, conversely, as you say, we could take some technical people and teach them the ropes of policy, if that's what they want to do.
And give them that toolkit as well. Because I think the overlap is really effective: if you can get someone who's interested in playing in both spaces, that's really powerful. [00:28:00] The question is, who are these people who want to do it? It's not really a career track, exactly. If we found a bunch of people who wanted to be in that Venn diagram overlap, we would definitely be interested in training them up.
One thought there is actually what we're doing right now, which is making the policy process more legible. Silicon Valley has done a very good job of making people see that this is how you change the world, by starting a tech company, whether that's true or not. But it's very unclear and fuzzy how one changes the world by helping with policy. So just making that legible seems very important.
I think the other thing is that in Silicon Valley, investors and entrepreneurs are too afraid of what they would call [00:29:00] policy risk. It varies case by case how much of a risk it actually is, but my view when I was at Boom was: look, there's no way the FAA is not going to let us certify a plane. They will run us through the wringer, it'll be expensive, we'll have to do all kinds of new tests and things like that, but we're not going to get to a point where we have a plane ready to fly and it's not certifiable because of something like noise.
So there was not that much policy risk there, and that's true of a lot of things. I wouldn't feel the same way about, say, a nuclear startup, a fission startup. But I wish investors were a little bit more savvy about what is a smart policy risk to take, [00:30:00] and what can be worked and what can't in terms of policy risk.
Yeah. Again, I think it's one of those things where we need more ways for people to actually grok that. And I guess the last thing on the regulation front: are there historical examples of very broad deregulation that enabled technology? It feels like regulation is very much a ratchet, where we keep regulating more and more things, and every once in a while it gets a little bit better, like in the FAA case. But is there ever a situation where there's a really big opening up?
Yeah, there are a few cases. Aviation is a perfect example, actually. I don't know if you've read the book Hard Landing, but it's excellent; I recommend it if you're interested in this at [00:31:00] all. It's basically a history of the aviation industry up through what they call deregulation, which happened in the late 1970s. Up until that point, starting I don't remember exactly when, there was this thing called the Civil Aeronautics Board that basically regulated routes and fares. If you were an airline, you got to fly the routes the government told you you could fly, and you got to charge the fares they told you you could charge. You couldn't give discounts or anything like that.
You had to charge that fare. So what did you have to compete on? Not very much. You actually competed on in-flight service and things like that. Before the deregulatory era, you had very lavish in-flight meals, super expensive tickets, and not a lot of [00:32:00] convenient route choice. Then, in the late 1970s under Jimmy Carter, and I think Ted Kennedy was one of the big proponents, they got rid of the Civil Aeronautics Board. They got rid of an agency. That deregulated the routes, the city pairs, the schedules, and the fares airlines could charge. So now you can buy a ticket to Orlando or Charlotte or wherever for 200 bucks or less, and that's all thanks to deregulation. Now, that's not exactly an enabling technology, which was your initial question, but it allowed the industry to move forward and become a whole lot more efficient.
And one could imagine something similar for technology regulations?
Yeah. Getting rid of an entire agency is pretty rare. [00:33:00] But a lot of people think regulation is a one-way ratchet, and that's not totally true. There have been times in the past where we got rid of a whole lot of regulation.
Related to that: do you have any good arguments against the position that we need regulation to keep us safe, besides, well, there is such a thing as too much safety?
I wish there were a more satisfying answer than, well, sometimes we'll have to take risks.
Right. So, from an economics perspective, there's not really a good argument for regulating safety at all, because you would think the customer could make their own choice about how much risk they want to live with. So it is a little awkward from that point of view. But we're never going to get a situation where the government [00:34:00] doesn't regulate safety in a lot of areas. The reality is that the public wants the government to regulate safety, and so it will. But there is still a difference in the kinds of safety regulation we could have. One example I think about a lot is the way planes are regulated versus the way cars are regulated. With planes, the FAA type-certifies every registered model of plane that is produced, and each aircraft has to get an airworthiness certificate when you register it. That's an example of what's called pre-market approval: before you go on the market, you have to be certified. Drugs work the same way. With cars it's a little different. You have car safety standards that NHTSA promulgates and enforces, but the way that's [00:35:00] enforced is that the car companies know they have to design to these standards, and NHTSA monitors the marketplace: they sample cars, test them, and so on, and if they observe a lot of accidents or whatever, they can go back and tell the car company: okay, you have to do a recall on this car,
You have to do a recall on this car and fix all the things we found that aren't up to snuff. That's an example of post-market surveillance. So those are both safety regulations, but they have huge structural differences in how they operate, in terms of how much of a barrier there is to getting to market. The pre-market approval case means you're front-loading all of the costs: you're delaying, you're making it hard for your investors to recoup any returns before you even see if the whole thing is going to work, and so on. There are all kinds of effects of that. Whereas in the post-market surveillance model, you're incentivizing good behavior, but [00:36:00] not necessarily verifying it upfront, which is costly. You let it play out in the marketplace for a while, and if you detect a certain degree of unsafeness, you make the company fix it. I think that structural difference is really important, and I would like to see more of the post-market surveillance model. You could think about it even for drugs. Instead of upfront clinical trials, we could say: okay, we see that this compound makes sense as a potential treatment for this condition. You have to test it on people one way or the other, whether they're clinical trial subjects or patients who have the condition. So we'll allow you to use it, but we're going to monitor carefully what the side effects are in those early applications of the drug, and if it turns out to be unsafe, we're going to pull it. That would be a different way of doing it. You can imagine we could do that.
But that's [00:37:00] just not where we are. And so I think it's hard for people who have bought into the current system to think about how we would get there, or why we would ever do that. It does seem much more tractable to say: okay, we're still going to regulate, but we're going to do it in a different way.

I really like that, and I hadn't thought about it very much. I'm going to completely change gears here and talk about GDP and total factor productivity. Your stated goal is for GDP per capita to reach $200,000 by 2050. For the listener's context, I looked up some numbers: current global GDP per capita is about $11,000, so we're talking about more than an order-of-magnitude increase. The highest right now is Monaco at around $190,000, so even they aren't there.

So I'm thinking specifically: I want to get to $200,000, and I want to get everybody there [00:38:00] eventually, but by 2050 I think we could get the U.S. there. The U.S. is at about $63,000 right now, so we've got to triple it.

And the interesting thing is that the U.S. looks like it's below places like Ireland and Switzerland. So the thing I'd like you to justify is why high GDP is what we should be shooting for, because I would argue that, based on what's actually going on there, I would rather be in the U.S. than in Ireland or Switzerland, even though they have higher GDP per capita.

Yeah. So Ireland is a special case: they have a bunch of favorable tax laws, so a lot of profits and such get booked there. I think that's what's going on there. So I would say GDP is not a perfect metric.
[00:39:00] But I think the degree to which it's imperfect is often overstated. It's pretty good. Even so, I like TFP better. I use GDP per capita because people are more familiar with it, but what I actually think in terms of is TFP. Total factor productivity is just: how much output can you get from a given amount of inputs? If my society has a certain number of plumbers, a certain amount of lumber, all the inputs you have, what can I make out of them? What's the total value of all the goods I can produce out of all the resources going in? And you want that number to be as high as possible: you want to produce as much as possible given your inputs. That's the idea of TFP.

[00:40:00] And just to dig into that, how do you measure inputs? Outputs are basically everybody's receipts, right?

So there's a very simple model that people use, called the Solow model. The idea is you have GDP, which is just a number, a dollar value (real GDP is what you're concerned about). Then you have how much labor you have and how much capital you have. You take logs of those, you run a linear regression, and the residual term in that regression is your number for total factor productivity, or log total factor productivity. That's how you do it, in very rough terms. Sometimes people add in things like human capital levels.
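The Solow-residual procedure just described can be sketched in a few lines. Everything below is illustrative: the data are invented for the example, and a real estimate would use long national time series and more careful specification.

```python
import numpy as np

# Hypothetical annual series: real output (Y), capital stock (K), labor (L).
Y = np.array([100.0, 105.0, 112.0, 120.0, 131.0])
K = np.array([300.0, 310.0, 322.0, 335.0, 350.0])
L = np.array([50.0, 51.0, 52.0, 53.0, 54.0])

# Regress log Y on log K and log L (with an intercept).
X = np.column_stack([np.ones(len(Y)), np.log(K), np.log(L)])
coefs, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)

# The regression residual is the (demeaned) log-TFP series:
# the output you can't explain with measured capital and labor.
log_tfp = np.log(Y) - X @ coefs
```

Adding a human capital term, as mentioned above, just means appending another column (say, log average schooling) to `X` before the regression.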
So if we brought in a bunch of uneducated [00:41:00] immigrants, labor productivity would go down if it's measured naively. But if you include a human capital term in the regression to reflect education levels, then ideally it wouldn't. So anyway, that's how you do it: you take labor, capital, and output, you figure out the relationship between them, and you see whether you're getting more output than you used to from a given amount of labor and capital. That's not true in every country. There are actually countries where output per input has gone down over time. Brazil peaked in total factor productivity in 1980, the year of my birth, so it takes about 50% more resources today to produce the same amount of output, in real terms. And Venezuela is a basket case; they produce way less. So I think it's a [00:42:00] good concept for thinking about two things bound up together: one is technology, and the other is the quality of institutions. Those are the two things that, if you improve them, make your output from a given basket of inputs higher.

Yeah, that's compelling. I buy into the school of thought that institutions are a kind of social technology. And to prime my intuition and other people's intuition about TFP: are there examples in history of technologies that very clearly increased TFP? Like, you can see: thing invented, TFP shoots up?

Yeah. So the guy who's written the most about this is Robert Gordon.
What he actually argues is: thing invented, then a few decades pass [00:43:00] while people integrate it and figure it out, then a big increase in TFP and GDP. He had a paper, and eventually a book, on the five great inventions: things like the internal combustion engine, sanitation and plumbing, chemistry and pharmaceuticals, electricity. That's four; the fifth escapes me right now. He basically argued that we had these five great inventions in the late 1800s, it took a few decades for them to get rolling, and then from 1920 to 1970 you had this big spasm of growth, with TFP growing 2% a year. And he would argue that today that's unrepeatable, because we don't have great inventions like those anymore. All we really have, according to him, is progress in IT. So we have one great invention, [00:44:00] and it really still hasn't shown up in the productivity statistics. It may still be coming, but he would argue we've eaten all the low-hanging fruit, there are no more great inventions to be had, and we've just got to settle for half a percent a year of TFP growth from here on out.

But as I understand it, you disagree, and I certainly share your biases. Recently you posted a great article about possible technologies that could come down the pike. Through the framing of TFP, of all the things you're excited about, which ones do you think would have the biggest impact on TFP, and what's the mechanism by which that would happen?
I think the thing that's closest to where we are now is probably big energy [00:45:00] price reductions. I'm really bullish on geothermal. I think it's totally possible that ten years from now we'll have a geothermal boom the way we had a shale boom in energy over the last ten years, and then we'll be talking about how energy is getting so cheap. Energy is something that infuses every production process in the entire country, so it's difficult to explain exactly how it moves TFP: it just moves everything. If we get energy costs down by half or something like that, it makes a lot of things twice as productive, or maybe not exactly twice, but a lot more productive. That's one example. But then there are things like longevity. Say we figure out how to extend lifespan and compress morbidity, so that people [00:46:00] don't get sick as much. That manifests as lower real demand for healthcare services. You don't even go see a doctor until you're 90, and you don't need to, because you're still healthy.

Does that show up in GDP?

It does, but here's where you have to distinguish between real and nominal GDP. In real GDP, with proper accounting, we would get the same or better levels of health with fewer dollars spent on it. So we'd be more productive in that sense. We might spend less on health services, but we would also employ fewer people in those sectors.
The smart people who work in the healthcare sector now would all get to do other things. They'd become researchers or [00:47:00] other kinds of technicians, or whatever, and they would produce things in their new roles. So if all of a sudden we didn't need as many X-ray techs, and all those X-ray techs were out doing new things, that's like getting the X-ray techs for free: the same output we used to get, we now get for free, and on top of it we're taking those same people and getting them to produce even more. So when you think about real GDP, jobs are costs. You don't want jobs; you actually want to reduce, as much as possible, the need to spend money on things. That's how you actually increase productivity and, ultimately, real living standards and real GDP.

And do we actually measure real GDP? Is that possible, or is it more of a theoretical concept?

[00:48:00] Again, it's kind of like TFP: we infer it. We estimate nominal GDP based on how people are spending their money and how quickly they're spending it, and so on. Even that isn't counting every receipt in the economy and tabulating them; it's still an estimate. So we're estimating nominal GDP, and we're also estimating the price level changes. Then you deflate the nominal GDP estimate by the price level change, and that's your real GDP number.

Got it. Okay, cool. I really appreciate this, because I see all these terms being thrown around and I wonder what the actual difference is.
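The nominal-to-real adjustment described above is just division by a price index. A minimal sketch, with invented figures (the real-world estimate uses chained price indexes, not a single deflator):

```python
# Deflate nominal GDP by a price index to get real GDP.
# Base year = 100; all numbers are made up for illustration.
nominal_gdp = [20_000.0, 21_000.0, 22_500.0]  # billions of dollars
price_index = [100.0, 102.0, 105.0]           # GDP deflator

real_gdp = [n / (p / 100.0) for n, p in zip(nominal_gdp, price_index)]
# In the base year the deflator is 100, so nominal and real GDP coincide.
```

Everything after the base year is expressed in base-year dollars, which is what lets you compare output across years despite inflation.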
And a last question on TFP: can you imagine something that would be really amazing for the world but would not show up in TFP? Just as a thought experiment.

I think stuff that improves the quality of your leisure [00:49:00] time is unpaid, or you almost get it for free. Say somebody designs an open-source video game, and everybody loves it and gets super-high-quality leisure time out of it. There's no money changing hands, but utility is going up. You would think that would improve living standards without showing up in measured GDP at all. So that's the kind of thing you've got to have in the back of your mind, the kind of thing that could throw off your analysis. This is actually what some people claim about the value of the internet: that the internet has increased welfare to some extent. And okay, yes, to some extent, but it's not a whole percent of growth a year. It doesn't account for the reduction in TFP growth that we've seen.

Yeah, [00:50:00] that makes a lot of sense. Changing gears again: make the case for airships.

Airships, yeah. So there are basically two modes you can use for cargo today. You can put it on a 747 freighter and get it to the destination the next day, and it costs a lot of money. Or you can put it on a container ship, and it's basically free, but it takes a few weeks or even months to get to your destination. What if there was something in between?
What if there was something that would take, say, four or five days anywhere in the world, but at a fifth of the cost of an airplane? That's a sweet spot for cargo anywhere in the world. And there's an interesting thing about airships: they actually get more efficient the bigger they get. [00:51:00] I think the mistake everybody has made when designing airships is to say, okay, we're going to design this cargo airship to take 10 tons to remote places. No, you should be designing it to carry 500 tons, because of the square-cube law. If you increase the length by a certain factor, the volume increases by that factor cubed, to the third power, while the cross-sectional area increases by that factor squared. So your lift-to-drag ratio gets better: your lift is associated with the volume, and your drag is associated with the cross-sectional area. You get more efficient the bigger you get. So I think if you designed an airship to carry about 500 tons at a time, which is like four 747 loads [00:52:00] at a time, and you targeted goods with a value-to-weight ratio in the middle of the spectrum (so not computers or really high-value items or even electronics, but things like machinery or cars or parts for factories), that could be a nice little business, and you could provide a completely new mode of cargo transport. I think that would also be revolutionary for people in landlocked countries.
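The square-cube argument above can be checked numerically: scale a geometrically similar airship's length by a factor s, and buoyant volume (hence lift) grows by s³ while cross-sectional area (hence drag) grows by s², so the lift-to-drag proxy improves by s. A toy check (a deliberately crude proxy; real drag also depends on speed, shape, and skin friction):

```python
def scale_airship(length_factor: float):
    """Return (lift_factor, drag_factor, lift_over_drag_factor)
    for a geometrically similar airship scaled by length_factor."""
    lift = length_factor ** 3   # lift ~ buoyant volume ~ L^3
    drag = length_factor ** 2   # drag ~ cross-sectional area ~ L^2
    return lift, drag, lift / drag

lift, drag, ratio = scale_airship(2.0)
# Doubling the length: 8x the lift, 4x the drag, 2x the lift-to-drag.
```

This is why the argument favors a 500-ton design over a 10-ton one: efficiency per ton carried keeps improving as the hull grows.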
I spent about a week in Rwanda roughly ten years ago, just studying the country. One of the things we noticed was that to access a port in Tanzania, 700 miles away or something like that, you have to put the goods on rail, and the rail [00:53:00] gauge changes several times between there and the port. Every time the rail gauge changes, you have to pay a bribe to somebody just to get them to do their job and move it. That adds up to a lot of inefficiency. So it's really cheap to get your container to the port on the coastline, but really expensive to move it the last 700 miles. What if you could just get around that by putting the cargo in an airship? If you designed the airship for the transcontinental or intercontinental ocean-shipping market, it would also work pretty well for that landlocked market, and you could bring more than just machinery to a country like Rwanda. Then I think there's also a high-value remote-services market. That's the one people are going after in a standalone sense to some degree, with smaller ships that carry 10 or 20 or maybe even 60 tons. You could serve that market, [00:54:00] but even better if you design for a 500-ton model. So anyway, my view is that this is a missing product that we should have. It's over-a-hundred-year-old technology, and we have way better materials today than we had in the era of the last airships.
Think about the rigid airships of the past: they used aluminum for their internal trusses. Carbon fiber trusses would have something like a six-fold strength-to-weight improvement, and even if you double the safety factors, your weight still goes down by a factor of three for the whole structure. You could also operate it autonomously today. You don't have to have lavs and heads and galleys and all that, and you don't have to have bunks. On a manned airship you'd have to have multiple crews, because it's a five-day journey, or at least some trips would be. So do it completely autonomously. [00:55:00] Then another question is: could you use hydrogen as a lifting gas? There are a bunch of different arguments for why maybe you could, and even the safety regulator would have to say, well, okay, this might burn up, but there's nobody on board, so maybe it's okay. So anyway, I think there's definitely something really interesting there in terms of new vehicles that would enable a new mode of transportation, at least for cargo.

And you've also written that it's less a technology question and more a question of whether a company is willing to go all-in on logistics. The way I see it, the problem is that there's not a super-lucrative niche market to go after.

I think it could be super lucrative. The big market is super lucrative. Let's say [00:56:00] you can get 5% of the cargo of the container market. Not the bulk cargo; forget the bulk cargo, don't do that.
And don't go for the stuff that's already on air freight. You might get some of that anyway, but just take the stuff that's containerized today. If you could get 5% of that, I think that would be 4,000 airships. And if you're the first to market, you have a monopoly on that segment of the market, at least, and you could charge a decent markup. I think you could make something like 150 to 200 billion dollars a year in revenue, and say you get half of that in operating profit. It's not a small market.

So the structural problem that's worth [00:57:00] calling out is that you need to come out of the gates at a certain scale, which makes it very hard to ramp smoothly. It doesn't work with a small airship: you can't do a half-size airship and expect to be competitive, or even a small company.

Right, you have to come out of the gates with a big fleet. You could maybe say your first five airships target the remote market, where there might be a higher willingness to pay. That could be a thing you do. But you want to ramp production and churn out hundreds of airships a year. That's what you want to do.

It's worth calling out that there's this gap: there could be this amazing new thing, but it just doesn't fit the way companies get started now.

Yep, it does exist. Cool. For this last part, I want to do some rapid-fire questions; take as [00:58:00] long or as little time as you want to answer them. Why is your love of vertical farming
irrational?

I am by no means a farming expert. I see this technology and I think: this is awesome. But I know next to nothing about it, so it's not an informed, well-considered love. It's just: I think it would be super cool if we moved to vertical farms. That's about the extent of it. I'd say it's potentially rational, but it's not well grounded.

Okay. Why are there so few attempts at world dominance?

Oh, man. I wrote a blog post on this a long time ago, and I don't remember the answer. I think it's a puzzle. You see these people who become globally famous and super influential, and they just sort of peter out and become self-satisfied with whatever they've [00:59:00] accomplished. There are some really talented people out there, and you would expect some of them to apply themselves to this problem.

I feel like the power and influence of extremely wealthy, powerful people is shockingly small compared to what I would expect. I feel like Jeff Bezos actually has a lot of trouble making the things he wants to happen in the world happen.

That's certainly true with Blue Origin.

Or anything, really. You see all these people who we think of as rich and powerful, and they want things to happen in the world, and those things don't seem to happen very often. That puzzles me.

It does raise the question of whether there are people who actually are having a massive influence and we don't know who they are. The [01:00:00] gray eminence.
The person behind the scenes who's really, really influential. Yeah.

Within your field, defined broadly, or however you want to define it: who do you pay attention to that many people may not be aware of?

In all seriousness, I'm blessed to have people who just contact me out of the blue and tell me things. I have a couple of friends, one of whom I worked with for many years, who still texts me interesting things all the time. These are private conversations that could be public conversations if these were more public people, but they choose to be totally behind the scenes, to be gray eminences, let's say. [01:01:00] That's who I pay attention to a lot of the time.

Yeah, that's fair. And finally: what are some unintuitive blockers for your favorite technologies? We've talked about some of them already.

Unintuitive blockers. So, I've written a lot about NEPA, the National Environmental Policy Act; you may have heard or seen me say a lot about this. The theory behind it is: before we decide we're going to build this highway or whatever, we're going to study it and make sure we understand what the environmental impacts are, and if there are negative environmental impacts, we're going to study alternatives as well. What got me worked up about it was that I was in a very high-level meeting with the FAA, with some very senior people.
[01:02:00] The conversation turned to: why can't we just change the overland ban on supersonic flight? Why can't we do it? And one of the answers, and it's not the complete answer, was: well, we would have to do an environmental review if we were to change the overland ban rule, and we don't have the data to even say what the impacts are. What are the environmental impacts of sonic booms on people? This is why NASA is spending many hundreds of millions of dollars developing an airplane to be a low-boom demonstrator. They're going to fly it over U.S. cities and figure out what the human response is, so that we can have that data, so that we can do an environmental impact study. [01:03:00] Now, last year there was a rule change in NEPA's implementing regulations that said if you don't have the data, that's okay: you just have to say you don't have the data in the environmental impact statement, and that's supposed to be adequate. NEPA is not a requirement to go do science projects. So I wonder if that conversation would go differently if we were having it today. But that was the answer at the time: we don't have the data to do this environmental impact study if we were to try to change the rule today.
That radicalized me on NEPA. It is really, really wrong that this is stalling progress in supersonics, even though that's not really what NEPA was intended to do, and that you need to go do a [01:04:00] $500 million experiment in order to get the data to change one line in the regulatory code. And then the question is: how many other things has NEPA stalled?

Yeah. When I started looking into it, it's everything. I think it's not the only reason for the great stagnation, but it is a major factor.

Let's see. The one thing I don't want to do is close on a pessimistic note. So what is something people should be optimistic about that they're probably not optimistic about right now?

I would say air pollution. The bad news is that air pollution is way worse than we thought it was, in terms of the health effects, and there are all kinds of negative effects on crop yields; I just saw a paper about that. The more we look into it, the more terrible it is. But I think we're going to make a [01:05:00] transition pretty quickly here to electric vehicles, and that is going to significantly improve air pollution. We're going to get all these unmeasured benefits from cutting out emissions, especially diesel emissions, which are the worst ones: kids getting asthma, low-birth-weight babies, and a lot of other things. A lot of those problems are going to be suddenly, and seemingly out of nowhere, reduced, because we're going to have a lot lower air pollution.
And then think about everywhere else in the world too; the U.S. isn't so bad relative to a lot of other countries. So yeah, I think the switch to electric is going to be a big deal.

Yeah, especially if they're all powered by geothermal plants.

Oh, yes. That's even better.

Amazing. Thank you so much for doing this. [01:06:00] You've really taught me a lot, and hopefully this will push more of these things forward.

It's been super fun, Ben. Thanks for having me.
Idea Machines
In the Realm of the Barely Feasible with Arati Prabhakar [Idea Machines #37] 53:36
In this conversation I talk to the amazing Arati Prabhakar about using Solutions R&D to tackle big societal problems, gaps in the innovation ecosystem, DARPA, and more. Arati's career has covered almost every corner of the innovation ecosystem: at DARPA she was a program manager, started its Microelectronics Technology Office, and several years later returned to serve as its Director. She was also the director of the National Institute of Standards and Technology and was a venture capitalist at U.S. Venture Partners. Now she's launching Actuate, a non-profit leveraging the ARPA model to go after some of the biggest problems in American society. Links: Actuate Website; In the Realm of the Barely Feasible - Arati's Article about Actuate and Solutions R&D; Arati on Wikipedia. Transcript [00:00:00] Welcome to Idea Machines. I'm your host, Ben Reinhardt, and this podcast is a deep dive into the systems and people that bring innovations from glimmers in someone's eye all the way to tools, processes, and ideas that can shift paradigms. We see these systems' outputs everywhere, but what's inside the black boxes? With guests, I dig below the surface into crucial but often unspoken questions, to explore themes of how we enable innovations today and how we could do it better tomorrow. In this conversation, I talked to the amazing Arati Prabhakar about using Solutions R&D to tackle big societal problems, gaps in the innovation ecosystem, DARPA, and more. Arati's career has covered almost every corner of the innovation ecosystem. She's done almost every job at DARPA, where she was a program manager, started their Microelectronics Technology Office, and several years later returned to serve as their [00:01:00] director. She was also the director of the National Institute of Standards and Technology and a venture capitalist at U.S. Venture Partners.
Now she's launching Actuate, a nonprofit leveraging the ARPA model to go after some of the biggest problems in American society. I hope you enjoy my conversation with Arati Prabhakar. I'd love to start off, and sort of frame this for everybody, with a quote from your article, which everybody should read and which I will link to in the show notes. You say: "Yet we lack a systemic understanding of how to nurture the sort of rich ecosystem we need to confront the societal challenges facing us now. Over 75 years, the federal government has dramatically increased support of research, and universities and national labs have built layers of incentives and deep culture for the research role. Companies have honed their ability to develop products in markets, shifting away from doing their own fundamental research in established industries. American venture capital and entrepreneurship have supercharged the startup pathway for commercialization in some [00:02:00] sectors. But we haven't yet put enough energy into understanding the bigger space where policy, finance, and the market meet to scale component ideas into the kind of deep and wide innovations that can solve big, previously intractable problems in society. These sorts of problems aren't aligned to tangible market opportunities or to the missions of established government R&D organizations. Today the philanthropic sector can play a pivotal role by taking the early risk of trying new methods for R&D and developing initial examples that governments and markets can adopt and ramp up. The hypothesis behind Actuate is that solutions R&D can be a starting place for catalyzing the necessary change in the nation's innovation ecosystem." I think that states it in a nutshell. So can we start with: how do you see solutions R&D as being different from other R&D? And, coupled with that, how is Actuate different from other non-profits?
Yeah, I think [00:03:00] that's one of the important threads in this tapestry that we want to develop. So, solutions R&D. Let's see. I think those of us who live in the world of R&D and innovation are very familiar with basic research: that is about new knowledge, new exploration, but all the incentives, the funding, and the structures are designed to have it end with publishing papers. And then on the other hand, there's the whole machinery that takes a technological advance or a research advance and turns it into the changes that we want in society. That could be new products and services, it could be new policies, it could be new practices. And that implementation machinery is the market, companies, policymaking, what individuals choose to do, pilot practices. I think we understand that. And there are places where things just move from basic research over into actual [00:04:00] implementation. But in fact, there are a lot of places where that doesn't happen seamlessly, and solutions R&D is this weird thing in the middle. It builds on top of a rich foundation of basic research. Its objective is to demonstrate and to prove out completely, radically better ways to solve problems or to pursue different opportunities, so that they can be implemented at scale. And so it has this hybrid character. On the one hand, it's very directed to specific goals, and in that sense it looks more like product development: marching forward, making things happen, executing, driving to an integrated goal. On the other hand, it requires a lot of creativity, experimentation, and risk-taking, and so it has some of those elements from the research side. So it's this middle [00:05:00] kingdom that I love, because I think it just has enormous leverage.
And a couple of points. Number one, to do it well requires its own types of expertise and practices and culture that are different from either the research side or the implementation side. And secondly, I would say that in the current US innovation system, it's something of a gap: there are many, many areas where we're not doing it as well as we need to. And then for some of the new problems, which I hope we'll talk about as well, I think it's actually a very interesting lever to boot up the whole system that we're going to need going forward. Yeah. And so, piggybacking right off of that, you've outlined three major problems that you're tackling initially: climate change, general American [00:06:00] health, and data privacy. I'm actually really interested in what the process was of deciding that these are the things you're going to work on. Yeah. Well, this whole Actuate emerged from a thought process, from a lot of BBs rattling around in the boxcar in my head, in the period as I was wrapping up at DARPA at the end of 2016 and going into 2017 when I left. And what I was thinking about was how phenomenally good our innovation machinery is for the problems that we set out to tackle at the end of the Second World War. That agenda was national security and technology for economic growth; a lot of that was information technology. We set out to tackle health, but instead we did biomedicine. We went long on biomedicine and left a lot of our serious health problems sitting on the shelf. And a big agenda was funding basic research, and we've executed on that agenda. That's what we are [00:07:00] very, very, very good at. What I couldn't stop thinking about
as I was wrapping up at DARPA was the problems that many of us feel will determine whether we succeed or fail as a society going forward. So it's not that those challenges, national security or health, have gone away and we should stop; it's just that we have some things that will break us. Arguably, they are in the process of breaking us if we don't deal with them right now. One is access to opportunity for every person in our society. A second is population health at a cost that doesn't break the economy. Another is being able to trust data and information in the information age in which we now live. And the fourth, obviously, is mitigating climate change. And if you think about it, these weren't the top-of-mind issues at the end of the Second World War, right? I mean, we had other problems. [00:08:00] Some of these are old problems that we didn't really know what to do about; some of these are new problems. And so now here we are in 2021, and if you ask what really matters, those were the four areas that we identified. Number one, they are critical to the success of our society. Number two, we aren't succeeding. And that means we need innovation of all different types. And number three, we're not innovating: we're either innovating at the zero-billion-dollars-a-year level, or we are spending money on R&D but it's not yet turning the tide of the problem. And so that's how we ended up focusing on those areas. Got it. I love digging into the nitty-gritty: what was the process of designing these programs? Right. So just to scope this a little bit, these broad areas that I'm talking about, I think of as
the major societal challenges that we face today. Actuate, which is a tiny, early-stage, seed-stage [00:09:00] nonprofit organization, has the aspiration over time to build portfolios of solutions R&D programs in each of these areas. And you made reference to a couple of the specific programs. One is about being able to access many more data sets, to mine their insights by cross-linking across them while rigorously preserving privacy. That's one very specific program, but think of it as just one program in what will eventually be a much broader portfolio in this area of trusting data and information. So part of what we've been doing since we started Actuate in late 2019 has been big thinking about our strategy, about the four broad societal challenges that we wanted to work in. And then we've also been doing a lot of work on our process and methodology: we've defined a couple of specific programs, but perhaps more importantly for scaling the organization, we've been working through [00:10:00] how we do this. The core idea here, of course, is that our founding team has a lot of different experiences, but we met at DARPA, and our inspiration is really to take what we know from that particular model for solutions R&D, mine the essential insights, and translate them to these very different societal challenges: not national security, but the ones that Actuate is going to focus on. So we've been formulating the four areas, but also thinking through how you get from the question of changing population health outcomes to the programs that could be high-leverage opportunities to do solutions R&D for that objective. Yeah. And so there are sort of two steps: one is going from the broad area to a specific program.
And then there's another, which is designing the [00:11:00] program itself. And I'm interested in what you actually do to design the program. What does that look like? Yeah. The first two programs that we have built out and defined were invented and designed by my co-founder, Wade Shen. He was a DARPA program manager for about five years; that's where we met. His areas are artificial intelligence and data science, and if you work in that area, you can work on any of the world's problems. He worked on an amazing array of different problem areas, as well as programs at DARPA that drove the AI and data science technology itself forward. So, you know, DARPA at any moment in time is a building full of a hundred amazing program managers, and Wade was one of the really exceptional people even in that very elite crowd. And this is how he [00:12:00] thinks about the world. We came together because we share these concerns about these major societal challenges and a passion for bringing this kind of solutions R&D to these problems. And Wade is the kind of guy who can invent these programs; he can just go do it. He knows how to think about it, he knows how to go do the research and talk to people and line up a program that could really be very impactful. So Wade built these two programs, partly because we wanted to understand what that looked like in these areas. But as we go forward, we're going to need a process that engages a community of different people, because over time we're going to want to build our cadre of program leaders who will define, and then execute, the solutions R&D programs. And by definition, they can't all be Wade, right? We need to be able to draw from the talents and insights and the passions
of people who have all kinds of backgrounds: technology backgrounds, deep research backgrounds, lived experiences [00:13:00] with these problems; people who really, deeply understand how the systems work that create opportunity or population health, or that take away from those objectives. And so a lot of what we've been doing is figuring that out. So here's the question. If you want to change the future of health in the US, so that instead of spending twice as much as other developed nations per capita on healthcare, and yet having dozens of other countries with longer lifespans and lower infant mortality rates, which is just criminal for the world's richest economy; if we want a future where that is radically different, where we don't have a hundred million people who either have diabetes or are at risk of diabetes, where we don't have a public health system that's thoroughly incapable of containing a disease like COVID-19, unlike many other countries around the world; if we want a different future, then [00:14:00] that's the landscape. And how do you get from that broad statement of what we want to what you do about it? I think that process has a top-down part and then a bottom-up part. The top-down part is understanding that landscape: understanding how big the problem is, what the nature of the problem is, and who's doing what. These are big, complex systems, right? There are many, many different kinds of actors, practices, and cultures that you have to understand. You have to have some notion of how all of those complex-system components are operating and interacting. And then you can start thinking about where there are gaps or opportunities, but still at a very strategic, broad level.
And that's about it for top-down, because then, of course, the model, emulating a lot of the power we found in the way DARPA works, is to flip it to bottom-up. And so then we go find people who are experts in some aspect of this. Again, they might have deep research expertise, or deep knowledge of the specific problems or the way the system works. What you want is people who either [00:15:00] know, or are willing to go learn, enough about what the box is, and then are willing to live outside of it and figure out how to recast it in a different way. And then, similar to DARPA, there's a process of nurturing and coaching, but allowing these smart individuals to bubble and brew program concepts from, you know, a couple of bullets on a chart eventually to a full executable program; a process that, even for someone who's super good at this, takes six months or a year. So that's what we're just starting to embark on. Got it. So that's the beginning of programs. I'm also interested in what you hope to happen at the end of them. You're in a slightly different position than DARPA, which hopefully has a waiting customer in the DOD. That's one of the funniest ideas on the planet. I just love it when people say, oh, [00:16:00] well, it's easy because DARPA has DOD waiting for it. All right, please. Yeah, okay, so let's talk about that. Yes, let's talk about that, and then what we do. So at DARPA, first of all, think about six decades of history, across generations of that agency, in two halves. About half of what it has done is prototype military systems: things that were just crazy, that the services would never have tried by themselves, but that were very directed at a specific military platform or capability. The other half has been sparking core enabling technologies.
And that was out of a recognition that if you build your new military capabilities out of just the same old ingredients, you're only going to get so far, and you need some very disruptive core technologies. So what came out of military systems? Iconically, of course, stealth aircraft. There's a much, much longer list, but that's the [00:17:00] easy one that everyone knows; a lot of people know that story in the national security world. And what came out of core enabling technologies? Well, arguably the entire field of advanced materials science, but also ARPANET and the internet, the seeds of artificial intelligence, advanced microelectronics, microsystems: huge numbers of technological revolutions. So if that's what's going on at DARPA, the first thing to point out is that half of it, including some of the most transformative core technologies that have come out of DARPA, did not transition to the world because DOD went and bought a bunch of it. The transition for most of the core enabling technologies is out to industry, to turn into products and services. And we've seen many, many stories of how that works. Often what it looks like is a project that DARPA funds at a university or a company, and then those individuals, beyond DARPA funding, go forward, identify markets, raise capital, build businesses, [00:18:00] build product lines, build industries, change the world. Right? So that's not trivial in itself. But I also want to be clear that even for the half of DARPA that's been about building prototype military systems, by and large DOD is not excited about something when DARPA starts it. I'll tell you just one story. When I came to DARPA, just before I arrived, we had started a program. A great program manager, who had been a Navy officer,
was serving at DARPA, and he said: you know, wouldn't it be great if the Navy had an autonomous vessel, a ship that could leave the pier and navigate across open oceans for months at a time without a single sailor on board? Not a remote-control vehicle, but one that just had sparse supervisory control. Radically different tools for the Navy, if something like that existed. And maybe we can actually do that. And the Navy got wind that DARPA was trying to do this, and the Navy thought (I observed this) that it was a [00:19:00] really bad idea, and they tried to shut it down. An important element of DARPA is that the Navy doesn't actually get to tell its people what to do, and my predecessor appropriately said, I don't know if she said thank you, but she definitely said, we're just doing this. By the time I got to DARPA, the Navy had gone from outright hostile to merely deeply skeptical, which is pretty important, because that's the stage where people will tell you all the reasons they don't believe it. They'd say: well, how is it going to meet COLREGS, which are the rules of the road for navigating in dense areas? And how is it going to last that long at sea, in that harsh marine environment? They had the entire long, difficult list of challenges. So then we knew what we had to do, right? So fast forward: before I left DARPA, I got to christen the first-ever self-driving ship, Sea Hunter, that we put in the water. By the time of that christening ceremony, we were paired up with the Navy; the Navy was a partner with us for a while, and I think is now taking the effort [00:20:00] forward. And now we have a working prototype. Now the Navy can say: let me figure out, do I want to use it to hunt sea mines? Is it a cheaper, safer way to trail quiet diesel submarines? There's a lot more that has to happen to really figure out how you take this and move it forward.
So that's a success story, but I think stealth is another great example. These things were not only not embraced, or asked for, or welcomed when they were delivered from DARPA; they were often spat upon. But it doesn't matter, because if it's radically better enough, and the stars align (a lot of things you can't control), that is how big changes happen. And you have to be able to do those things even when there isn't a customer standing there waiting for it. I appreciate that. So how does that translate for you at Actuate? Yeah, so I think the way to think [00:21:00] about it is this: anytime you're setting out to spark a radical transformation, it's not going to happen unless you really think about the entire system of what it's going to take to create the change that you want to see in the world. So let me take one really specific example: one of our programs at Actuate, data safes. This is one of Wade's programs that he's built. The objective there is to use privacy technologies that are emerging, that are currently being used ad hoc, to build a new architecture and infrastructure that would allow multiple data sets to be provided on an encrypted basis, and would then allow researchers or policymakers, anyone who wants to analyze the data and cross-link among those data sets for the insights they hold, to do that entire process while rigorously preserving privacy. And that includes the cleaning and the [00:22:00] linking, all the sort of ugly data science stuff that has to happen before you can actually start seeing the insights. So it's a soup-to-nuts full system. The ambition of that program is to demonstrate something that's
robust enough and flexible enough to handle many different kinds of data and data problems. So the future that we want to see: today, research and policy suffer from a sort of lamppost problem, right? You do a lot of interesting research with the data you happen to be able to get hold of, or that you happen to have permission to link to other data. But all the really interesting problems cut across. What happens in K through 12 that leads to different kinds of life outcomes? How does that tie to other environmental factors in a kid's neighborhood, or to the way that child is going to end up interacting with the criminal justice system? How do all of those things tie to the progress of the [00:23:00] economy and jobs, and the things that lift people up and allow them to pursue opportunity? To answer those kinds of questions, you need 53 different agencies at state, local, and federal levels, and you need private company data. It all exists, but that doesn't mean you can actually get at it and start using it. So we want to see a future where you could answer those kinds of questions. Well, so what's it going to take? The piece that the program will do, when we're able to get it going, is to demonstrate a prototype system that allows radically different kinds of data owners to put their data together, run some real examples, and do applications that are demonstrations of what this new data capability would look like. But that's probably not going to be enough, right? And so there are other things that need to happen. My dream is that there's a future where there's a NIST or other standard for the kinds of
[00:24:00] procedures and processes that would allow the legal counsel of the firm or the organization that owns the data to say: okay, if we comply with this regulation, if we meet this certification, I can now sign off and know that I'm protecting the data properly. And I can make that decision tomorrow, not in six months or a year, like it usually takes today. And then over time, with a lot of different players and an infrastructure for regulation and certification, you can start to see how you could have the kind of rich data future that we all talk about these days but that actually isn't quite happening yet. So I don't know if that's a useful example, but the general picture is: think about all the entities, all the actors, that are going to have to do something, change their minds, take an action. We're not going to go fund all of that; we're going to fund a piece that would allow them to change their minds. That's really our [00:25:00] objective: a prototype and demonstrations that cause them to say, okay, we can now do something in a different way. Do you see encouraging them to change their minds as part of the program? There's a spectrum from just demonstrating the prototype and then washing your hands of it, to knocking on their doors for years, and I assume it's somewhere in the middle. Yeah. There's a lot of leading horses to water, recognizing that you can't make them drink. What I think is really clear from many, many years of experience at DARPA and other places is that if you're not deliberate and thoughtful about who those players are and what would cause them to change their minds, and if you don't do the active work to engage them all along the process, the chances are pretty, pretty slim.
If you do them, you might have a shot. [00:26:00] And so as we're designing programs at Actuate, we're being very explicit about that engagement process. It starts with a lot of conversations with people who, most often, say: yeah, sure, you're in fantasy land. If that stuff existed it'd be awesome, but that's not the reality, and let me tell you what I really need. So that's at the beginning. And then during the execution of a program, that's really when it starts going from something that only the program leader believes in to something that is starting to be palpably real, potentially. And so you want to bring in those decision-makers whose minds need to be changed. They could be investors, they could be entrepreneurs, they could be policymakers; there's a whole different set of who those adopters need to be, the ones that are going to take it to scale. The places where we can bring them to the table: you continue to call them up and tell them what's going on, but [00:27:00] you also create demonstrations and updates where you bring them to the technology, or you bring the technology to them, and you say: look, did you know this was possible? Look what we can now do. And ideally they get dazzled, and then they say, oh yeah, but here are the next three things that would be a problem. And that tells you what you need for the next phase. So that's a parallel track to the three to five years of technical work that's going on in the program. That makes a lot of sense. And in terms of the technical work, do you plan on having it be mostly externalized to the organization, the same way that DARPA does?
I would say there's a very important piece of intellectual work and management and leadership that happens with the program leader and that individual's tiny little team within Actuate, very much like at DARPA. But the vast majority, the overwhelming amount, of the funding goes out to the companies, the [00:28:00] universities, the nonprofits who are doing the different components of R&D and testing and demonstrations, all the people who are doing all of that work. And that's for a couple of reasons. Number one, these are three-to-five-year programs, and we don't want to hire everyone and put them under our roof for that period of time, just as a practical matter. But the other really important thing is what you want when the program is over. A program starts with a program leader who has this vision, and they're calling people to try to do this really difficult new thing. At the end of a program, what you want is that the entire community you've been funding and working with gets the vision. Not only that: they delivered it, right? They've actually built this thing, and they become the most important [00:29:00] vectors for moving it out into the world and getting it actually implemented, so the world starts changing. And so for both of those reasons, up front and at the back end, I think that's one of the powers of the DARPA model: tapping these amazing talents wherever they are. Yeah.
So something that I've actually wondered about with the DARPA model, that I've never been able to find any good information on, is what you do when you run into a situation where multiple groups have been working on different pieces. Is there ever contention over who's going to take it forward? How do you coordinate it so that the outcome is the best for the world, which might involve squashing someone's ego or something like that? Shocked, I'm shocked. I would say there are somewhat different answers depending on whether those junctures happen [00:30:00] during a program or after a program. So let's say you have a program that had different university groups working on, I don't know, some advanced chip for doing machine learning or whatever. And, I mean, this just happened: there were multiple very good research results, which were then commercialized in different ways by the performers. So at that point, it's like: great, let them drive it out. They may compete with each other, they might go after different market segments, but there are multiple shots on goal to commercialize something coming out of a program. And I would characterize that as something that DARPA would not, and I certainly wouldn't, control; it probably doesn't even have much influence. Conversely, at the early stages of a program, a lot of the core management work for the program manager at DARPA, or the program leader as we're calling them at Actuate, [00:31:00] is exactly this. So let's back up. Number one, you're trying to do something that achieves huge impact. Sad but true, that involves taking risks, because all the low-risk things have already been done. And so the whole art of this
business is: how do you intelligently take, and then manage and drive down and eliminate, risks? And one of the really effective tools in the toolkit for managing risk is to plant a number of different seeds, and to deliberately have competitive efforts. One of our programs at Actuate, for example, is built on the idea that we have all kinds of research that could lead to better real-time incentives to help people develop healthier habits. So when we get that program going, we're going to deliberately have multiple teams working on different kinds of incentive schemes. And then a core [00:32:00] management challenge in a program like that is going to be: you may choose to start with four teams, but at some point you're going to want to down-select and go to two. And what is the right point? When is the point where you want to say, I'm going to put more of my eggs in these baskets? So I think that's integral to the design, and then to the day-to-day or week-to-week management of the program. And I imagine there might be one more situation, where you're actually building a system and you have different groups working on different components of the system. How do you manage that at the end, where it's like: okay, at the end of the day, we want the system? Yeah, that's exactly right. And let me make maybe just one small point about DARPA. DARPA is running 250 or 300 programs at any moment in time, right? So it's a full-blown, huge agency [00:33:00] relative to the scale we're starting at, which is zero right now at Actuate. But in the DARPA portfolio you will find a range of programs. The self-driving ship program was a systems development program: Gantt charts, milestones, boom, boom, boom.
On the very other end of the spectrum might be a much more research-oriented program that's highly exploratory. There's a new physical phenomenon that looks like it could be interesting down the road, but right now you just want to have vibrant research, with people pursuing the question in lots of different ways. So there are many, many models. Somewhere in the middle is probably where Actuate will start. And what we're finding in the kinds of programs that we're exploring is, over and over again, this pattern. Number one, there's a problem for which we think a radically better solution is possible. The reason we think it's possible is not because of one new research result, but because there are a handful of different research areas that are advancing in interesting ways. But [00:34:00] those advances have not yet really been applied to the right problem, or, critically to your point, integrated together into a system that can actually solve the problem. They're just threads and hopes, right?

Yeah.

And so that becomes, I think, a classic template for a solutions R&D program at DARPA or at Actuate. A great way to manage those kinds of programs is to think in terms of different tracks of effort. The first track is to advance the research itself. So it's applied research where you're building on these threads and nuggets, but you're really aiming at the specific new capability that the program's goal is to demonstrate, right? So track one is applied research. The second track is building prototypes, and that's often a different kind of performer: someone who can integrate the different pieces. And you can imagine a process where every three or six months, there's a drop from applied research into building prototypes. Right.
And so, [00:35:00] especially for software tools, this is the classic way you would do it. Every three to six months, you see what's coming out of applied research that's baked enough to put into the prototype. That becomes a very good way to flow things. That's tracks one and two. Track three is, now you've got to figure out if this stuff is doing anything. So it's testing and evaluation: trying to show that it works for the application or applications that you're going after. And while there are different tracks, they interact, right? Because as you're learning what works, you take the integrated prototype. So an integrated prototype for a tool to help individuals choose healthier habits throughout their days and their weeks is going to integrate a whole host of these different advances coming from different areas, including incentives, as I mentioned before. And ideally, every six months or so, as the prototype drops to testing, you start getting real feedback about this combination of [00:36:00] sensing and coaching and personalized incentives. Is it working or is it not working? And then you go through these iteration loops.

So what the program looks like when it's underway is: you'll see some researchers, universities or companies; you'll see prototype developers, typically more companies there; and you'll see people who do the tests or the demonstrations. It could be a clinical trial if it's health-related; it could be whatever the form of the prototype or the application is. And throughout the whole thing, the management challenge is: you have a plan, and then reality is going to happen, and it's going to be something different. So how do you keep that whole engine moving forward?

That is an amazing description.
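The three-track cadence described above can be sketched as a toy simulation. This is a hedged illustration only: the drop interval, the "three months to bake" maturity rule, and the result names are all assumptions made up for the sketch, not anything prescribed in the interview.

```python
# Hypothetical sketch of the three-track structure: track 1 (applied
# research) feeds track 2 (prototype integration) on a periodic "drop"
# cadence, and track 3 (testing) logs feedback after each drop.
# Cadence, maturity threshold, and result names are invented.

def run_program(months, drop_every=6, bake_time=3):
    research_pipeline = []   # track 1: results maturing over time
    prototype = []           # track 2: integrated, "baked" results
    feedback_log = []        # track 3: (month, prototype size) after testing

    for month in range(1, months + 1):
        # Track 1: applied research yields a new result each month.
        research_pipeline.append({"name": f"result-{month}", "born": month})

        if month % drop_every == 0:
            # Drop: move anything baked long enough into the prototype.
            baked = [r for r in research_pipeline
                     if month - r["born"] >= bake_time]
            research_pipeline = [r for r in research_pipeline
                                 if r not in baked]
            prototype.extend(baked)

            # Track 3: the refreshed prototype goes to testing, and the
            # outcome is recorded to steer the next research cycle.
            feedback_log.append((month, len(prototype)))

    return prototype, feedback_log


prototype, feedback = run_program(months=12, drop_every=6)
print(feedback)  # [(6, 3), (12, 9)]
```

The point of the sketch is the interaction the speaker emphasizes: the tracks run in parallel, but value only flows when mature research results cross into the prototype, and real feedback only arrives after each testing drop.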
I really appreciate you going into those details, because I think that's something people don't think about enough: how to manage those tracks. I want to go back to something you said earlier, which is that the people you want as [00:37:00] performers in the program are the people who can see where the boxes are and then think outside of them. Do you have any strategies for finding those people and teasing that out of them?

Yeah, I think I said it more in the context of program leaders. And by the way, at DARPA, one of the best ways to find great new program managers, or potentially great new program managers, because you don't really know until you give them a shot, is to go through the performer base, right? At DARPA, I found there were always performers who were very, very good at their piece of it, and they loved their piece of it, and you have to have those people. But once in a while, you'd see a performer who started seeing the whole picture, and they would start being creative about where we could go. When you start seeing that, those are the signs.

So I have a set of criteria that I thought about in terms of [00:38:00] DARPA program managers, and it's very similar for Actuate's future program leaders. Number one, it's people who are driven to make a change in the world. Which, I mean, this is where I live and breathe, but over time it has finally dawned on me that not everyone gets out of bed in the morning to make the future a better place. And that's just what the culture and the whole point of the exercise is, so you have to find people who are driven to do that.
I'm always looking for domain expertise, because you need to be deeply rooted and deeply smart about something that's relevant to the problem you're going to work on. Almost by definition, you won't be a domain expert in everything it takes, because these are big, complex systems. So the next thing I'm always looking for is the ability to understand the whole, the big picture of the system, and then to navigate seamlessly from forest to trees to bark to cells, right, and then back up. You have to be able to do that whole thing. And that means you may know a lot [00:39:00] about how some aspect of behavioral science works in a very specific context, but I'm also looking for people who can then extrapolate up to how that and other advances might be harnessed to move the world forward. I would tell you that's one of the hardest characteristics to find, because of course there are lots of people who have domain expertise, but that ability to navigate from systems to details is actually a very precious commodity that I always love when I find it.

The overall thing I'm looking for is people who have their head in the clouds and their feet on the ground, because you need to be able to dream, but you actually have to be able to go execute. And in this case, execute by managing other people on projects; it's not an individual contributor role. And then the final thing that matters deeply is an ethical core, because that's important for how you treat people on [00:40:00] a day-to-day basis, but it's also important because we're talking about really powerful technologies, and we need people who are willing to be explicit and thoughtful about the ethical considerations that they'll be weighing.

Yeah, that's great.
I want to change gears just a little bit and talk about money for a bit. You spent many years in venture capital, so I assume you know both the upsides and the downsides of startups and for-profit organizations, and you decided to start Actuate as a nonprofit. I'd love to understand the thought process behind that, because there's a line of thinking that if something can be done, it should be done as a company, as a startup. So I'm interested in why you didn't.

I would say that's [00:41:00] simple-minded, and to the extent that's your worldview, I would say the things I think need to be done, that I can make a contribution to, aren't companies. There's not a visible market, so it's not a company today. For some of the things we want to work on, part of getting them out to the world will involve markets and therefore companies, including startups. But coming back to these major societal challenges that we have: none of them are simply going to be solved by companies building new products, services, and profits. I do think that some of the solutions will ultimately include companies having really interesting new market opportunities. But this is the stuff that the market doesn't do. So if you think about US R&D: we spend about half a trillion dollars a year in the US economy on research and development. [00:42:00] The majority of that, of course, is companies doing product development. But about, I think it's about 140 billion a year, is federally funded R&D.
And the areas in which Actuate is focusing are places where there are not market-driven opportunities, and where, I think, we do not yet have the federal R&D machinery. But those things need to happen for our ultimate dreams to come true, right, to make the difference that we want.

And ideally, it seems like you'd almost pull both of those levers in a certain direction, right? That seems like a place where you could sit, creating opportunities for them.

Right. I think that's the biggest pull: you show them something that changes their minds.

Yeah. And are you funding the organization as a whole, or are you funding [00:43:00] each program? Like, are you funding it on a program-by-program basis?

We're still at a seed stage, just to be really clear, but we spent a lot of time on this strategic question. First of all, let's be really clear that we think philanthropy has an important role to play, because market and government are not, for various reasons, stepping up to the plate on these topics. That said, what we're trying to do in the social sector, there isn't a template for it. It's not what philanthropy has done, at least in the last six or eight decades. There are very interesting stories about the Rockefeller Foundation and the Green Revolution, and how they funded the research. If you go back and read how they thought about it, and the methodologies that they developed, it looks a lot like solutions R&D. And then those exact human beings went into Vannevar Bush's organizations during the Second World War. [00:44:00] I mean, that's the template for solutions R&D, right? We have an existential crisis and we have things we can do about it.
It's all hands on deck, integrating everything, and building radar and the bomb, right? So anyway, it's been decades since a part of philanthropy, I would say, was really seriously focused on this kind of solutions R&D. With that significant caveat, everything we're doing is going to be a big experiment in the social sector.

Now, to get to your question: we spent a lot of time thinking about whether we should build one program and go raise money for it, or whether we should try to do something that's even harder, which is to raise a fund to do multiple programs and build a portfolio. We've settled on the latter. And the reason for that is simply that, first of all, sometimes when you're doing an impossible thing, it's better to do the more impossible thing that actually [00:45:00] can make an impact. I think this comes back to risk management. We talked about risk management within a program, but a lot of how DARPA has one or two things every single decade that literally change the world... well, it certainly isn't because all the programs succeed. It's because you have a portfolio, and because it's a very deliberately managed, diversified portfolio. It's diverse in the aspects of national security it's targeting, it's diverse in the technological levers it's pursuing, and it's diverse in timeframes to impact. So at the end of the day, we concluded that for Actuate to make a dent in any of these massive societal challenges, we needed to be able to build a portfolio.

Yeah, no, that makes a lot of sense. And so, switching tracks again, I want to talk to you a little bit about your career, which has included some amazing things. Like, when you became the DARPA director, [00:46:00] how did you know what to do?
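The diversification argument above, that portfolio risk is managed along several independent axes, can be sketched as a simple check. This is a hypothetical illustration: the example programs, the axis names, and the values are all invented, and the three axes are just the ones the speaker names (problem area, technological lever, timeframe to impact).

```python
# Hypothetical sketch: measuring how diversified a program portfolio is
# along the three axes mentioned above. All entries are invented.

portfolio = [
    {"domain": "health",    "lever": "behavioral science", "horizon": "3y"},
    {"domain": "climate",   "lever": "materials",          "horizon": "10y"},
    {"domain": "health",    "lever": "sensing",            "horizon": "5y"},
    {"domain": "education", "lever": "machine learning",   "horizon": "5y"},
]


def diversity(programs, axis):
    """Count distinct values the portfolio covers along one axis."""
    return len({p[axis] for p in programs})


for axis in ("domain", "lever", "horizon"):
    print(axis, diversity(portfolio, axis))
```

A portfolio that scores 1 on any axis has concentrated its risk there, which is the failure mode the deliberate diversification is meant to avoid.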
I'm sorry, this is a silly question, but it seems like such a big role.

Yeah. I've been super lucky in the things that I got to do. But the luckiest day, I would say, in my professional life was the day that Dick Reynolds, who ran the Defense Sciences Office at DARPA in the 1980s, asked me, at a workshop that I happened to be attending, if I wanted to come to DARPA as a program manager. I was 27... oh, maybe I was 26 at the time. Anyway, I had only been out of graduate school for about a year, and I was in Washington on a congressional fellowship at that time, because I had decided I wanted to do something other than research on the academic track, but I didn't know what that was. So on a [00:47:00] lark, I went to Washington for a year, which was critical, because when you leave the path you are supposed to be on, that's when you don't know what's going to happen, and one of the things that can happen is that amazing new possibilities occur. And that's what happened when Dick asked if I wanted to come to DARPA.

So at a very early stage of my career, I landed at DARPA. I mean, I had worked two summers at Bell Labs, who put me through graduate school. I'd worked at Lawrence Livermore one summer as a summer student. I'd worked at Texas Tech in the laser lab as an undergraduate. I'd done my graduate work at Caltech, and then I'd been at the Office of Technology Assessment on the congressional fellowship. I got to DARPA and, all of a sudden, it just made sense to me, right? Everything that I thought and believed, the way I was culturally oriented, which was: you go find really hard problems, and then the contribution we get to make as technologists is we get to come up with a better way to solve a really hard problem.
And we get to [00:48:00] blow open these doors to new opportunities. It just resonated so deeply. So I spent seven years at DARPA, the last couple of which I spent starting MTO, at that time the Microelectronics Technology Office, which we spun out of the Defense Sciences Office. And I loved it. It was a crazy ride, right? I got to do all kinds of things that were very, very meaningful then. And for the 30 years since, it's been just such a delight to see so many things come into the world that trace back to some of the early investments we got to make. And I would tell you that while I loved everything else I got to do after DARPA, and I treasure it and I needed those experiences, I never really got over being at DARPA. It was my home. It was my place. It was what made sense to me. So when I got the call in 2012 to go back and lead it, [00:49:00] it was just a dream come true. And when I got there... you know, being a program manager, then being an office director at DARPA, which I had done in the eighties and nineties, and then going back as director, those are three very different jobs, so there was a huge amount of learning and growth at every stage. But they all lined up to the mission and vision of an organization that's wired the way I'm wired. So I have to say it was the most satisfying job I've had so far; I'm trying to make Actuate even more so. It was very hard, it was very meaningful, but it just felt natural. It felt instinctive and natural in a way that none of my other jobs really did. I think there were other jobs I was good at, and other jobs I was horrible at, for that matter, but DARPA was the place where it just felt natural to me.

Yeah.
And so, to build on that, and in closing: do you [00:50:00] think there are any ways to improve on the DARPA model that you're trying to implement going forward?

We talk about this all the time. First of all, if the work we're starting at Actuate can have anything like the kind of impact that DARPA has had in any subset of its programs, then I can die happy, right? If we can really make a contribution to these big societal problems, that's just going to be deeply meaningful to me. We've talked about some of the things that I think are difficult in the DARPA model. One of them is that the more radical the innovation and the advance, the harder it typically is to get anyone to change the way they work in order to adopt it and get the benefits of it. So at Actuate, we're trying to be even more deliberate in the design of our programs about how you would get decision makers to change their minds and implement. [00:51:00] I mean, I think DARPA does that, but it's something we're trying to put special focus on.

I think DARPA's done a huge amount of work to make hiring easier; it has legislative authorities and good practices for being able to hire people, many of whom normally wouldn't consider public service for many reasons, but especially, of course, low compensation levels. And while DARPA was not fully market competitive, we were able to move very quickly and had a little bit of salary cap relief. So, you know, the nonprofit sector is not going to be the place where you make your billions, obviously, but I think being outside of government has that advantage, and it's something that we'll definitely take advantage of. And then there are things that are simply not appropriate for the government in a market economy to do.
So there are things you can do for national security, but unless we have a radical change in our thinking about industrial policy, which by the way might be happening, I can't quite tell, there are ways in which government has not [00:52:00] chosen in the past to work with industry or with finance. Those are not as significant a limitation for the work we're doing in the social sector.

Nice. Excellent. Well, I want to be really respectful of your time. How can people find out more about what you're doing? And if they think this is interesting, what should they do to help out?

Well, thanks so much for talking about this. I love the fact that you care about these issues, and you've done more than anyone I've seen from outside DARPA to really understand the agency. It's been so much fun talking with you, Ben. I think you're going to provide the link to the Issues in Science and Technology piece, and our website, of course; it's all brand new. So take a look. We're so early right now, but I'm always looking for people who have a deep passion for these societal challenges, who see new opportunities to do things in a radically better way. [00:53:00] Please reach out to us from our website if it resonates; we'd love to hear from you.

Thanks for listening. We're always looking to improve, so we'd love feedback and suggestions. You can get in touch on Twitter at Ben underscore Reinhardt. If you found this podcast intriguing, don't forget to share and discuss it with your friends. Thank you.