Last week, we sat down with Dr. Balkan Devlen, author, professor, and Superforecaster for Good Judgment, Inc., to record the first episode of our new podcast: the Global Guessing Weekly Podcast.
In the episode, we discussed Dr. Devlen's introduction to quantified forecasting and his path to becoming a Superforecaster, before turning to how theories from the social sciences can be incorporated into predictions, why forecasting has so little presence in academia, and reflections on past predictions–among other topics.
Below is a transcript of our conversation, which has been lightly edited for clarity and to remove repetition.
Clay Graubard: Hello everyone. Welcome to the first ever episode of the Global Guessing Weekly Podcast. My name is Clay Graubard and I am joined by my co-host Andrew Eaddy. Every week, Andrew and I plan to hold conversations about all things forecasting, ranging from interviews with forecasting experts to discussions about some of our latest predictions on our website.
In today's inaugural episode, we are pleased to be sitting down with author, professor, and renowned geopolitical forecaster Dr. Balkan Devlen. Dr. Devlen first earned his BA in international relations and affairs from Middle East Technical University in Turkey before earning his PhD in political science and government from the University of Missouri–Columbia.
Andrew Eaddy: Dr. Devlen has taught at the Izmir University of Economics in Turkey and the University of Copenhagen in Denmark. He is currently a senior fellow at the Macdonald-Laurier Institute, a Canadian public policy think tank, and the director of the Centre in Modern Turkish Studies at Carleton University in Ontario, Canada. Devlen is also a Superforecaster for Good Judgment, Inc., and the author of Hindsight 20/20, a geopolitical forecasting Substack, which you should all subscribe to after listening to this episode.
We can say a lot more about him, but we have to get to our questions. So without further ado, welcome Dr. Devlen.
Balkan Devlen: Thanks for having me. It's a pleasure to be here.
Clay: We'd like to start off with your background and how you first got started in forecasting, so could you talk to us about how you first got introduced to the world of quantified forecasting and what really drew you into the practice?
Balkan: One of the reasons why I went on to study International Relations back in the mid-1990s was to have a better sense of how rapid changes in the world are going to affect us all. It was about thinking about the future.
This was a time when massive changes were happening in Europe and elsewhere. I was growing up and going through college while the Yugoslav Wars of Succession were going on in Bosnia and elsewhere, and the genocide in Rwanda, and all the other places. The idea of trying to understand those rapid changes, and the uncertainties of the future, was the one thing that drew me to international relations. Like a lot of people in a similar position, I was also very much interested in science fiction and thinking about the future more broadly. But one thing I noticed throughout university and graduate school was the lack of rigor in thinking about the future, and the misaligned incentive structures around accurately predicting the future versus what Professor Tetlock calls "vague verbiage." So I was always thinking about how we can reason more rigorously about the future, and use that kind of thinking to structure our actions today.
When the first IARPA ACE tournament for geopolitical forecasting was announced, back in 2010 or 2011, I saw it through an email from Philip Tetlock to a mailing list. Part of my academic interest was political psychology, especially the role of individual policymakers, and I was always interested in bringing game theory together with political psychology to understand how people behave and what they could do. So I was very much aware of Philip Tetlock's work on cognitive complexity and other things. When I saw that email I wanted to apply right away, but for the first season IARPA required that participants be American citizens. They opened it up in the second season, so I got in then. I became a Superforecaster in the third season, I believe, and I maintained that status through the last season of Good Judgment.
The primary reason I wanted to do it was that the ACE tournament, in which Good Judgment came out on top, provided me with an opportunity to test my models of the world in real time. It provides quick, harsh feedback on how I think the world works, and for me that was the biggest attraction. Here is a tournament, a system, into which I can throw how I think about the world and how things will evolve, and get feedback almost immediately. In academic work, as you guys know, you write about things and the verdict might come 10, 15, 20 years later. There is no immediate feedback loop, so the correction cycle is much slower, and the incentive structure doesn't really push for it at any rate.
That was my primary motivation. Quantified forecasting enables me to test whether the way I think about the world is actually accurate, which helps me make decisions later on. And when I'm wrong, I can go back and ask: Why was I wrong? Was I wrong on the outcome for the right reasons? Or did I happen to get lucky and get the prediction right for the wrong reasons? That level of keeping yourself honest, and testing your models in real time over a very broad set of questions–I think we answered 400 or 500 questions overall in the tournament–gives you a good sense of where your weaknesses are, where your blind spots are. That was my primary motivation for getting involved with Good Judgment, and my involvement continued afterwards when the commercial spin-off came out of that tournament.
Andrew: You mentioned your academic work just now. Clay and I actually had a chance to take a brief look at your thesis–we were able to find it online–which is interesting. You're talking about game theory within the context of renegade regimes, which is very interesting for us as people who have studied a good amount of theory. We were wondering: In what way does game theory impact the work that you do today? How does it interact with forecasting? You talked about this live feedback–is forecasting a way to operationalize that game theory, or what's the relationship there?
Balkan: Very good question. In practical, day-to-day terms, when I now answer questions for Good Judgment and similar platforms, I very rarely do any sort of explicit game-theoretic modeling. But it informs the way I like to think about actions and decisions. In essence, game theory is a very powerful tool because it forces you to focus on two things: the incentive structure in a given domain, and the predictions you can derive from that structure, from the environment, which you can then check. So in a way it's more an intuitive or implicit use of game theory in most of my work. When I do use game theory, I tend to think: If I am in that person's shoes, and I think about my preference ordering over a set of outcomes in this particular domain, how would I order them? And what would affect my ordering? It would be the incentive structures and where I maximize my utility–and that utility doesn't have to be material; it could be ideological. If I think that is the model, it gives me the opportunity to ask whether it is a correct and accurate representation of that actor's preference ordering, and if it is, how they would act.
Most of this is not done in a pen-and-paper format. I don't draw the game trees and whatnot, but it gives me a way of reasoning from the outcomes I want to achieve: Moving backwards, what would I do if that were my end goal? Then I structure things that way, in a subgame-perfect-equilibrium fashion–not necessarily formally. So working through game theory in my dissertation, and later on, provided me with that mindset and way of approaching things.
Clay: Building on top of that, is that the same way you approach incorporating IR [International Relations] theory into forecasts relating to geopolitics? Earlier you mentioned how IR lacks these sorts of rigorous checks, and I'm wondering if part of that has to do with how theory is formalized. [Kenneth] Waltz talks about how we're not worried about specific outcomes [in IR], but instead about general changes in behavior based on third-level conditions, etc. So are there limitations to how IR can be brought into [quantified forecasting]? And if you do bring in theory, could you maybe give us an example of a prediction and how theory helped you formulate the outcome?
Balkan: I'm a self-declared realist, or at least a classical realist–increasingly more on the classical side than the neo side the more I think and work through things–but I tend to bring IR theory in as a broad background understanding of the general dynamics of the world. In other words, when I rely on IR theory, be it realist thinking or others, I take it as providing the constraining elements of a particular domain in geopolitics, rather than using it to derive specific predictions about an issue. Now, we can do that with more middle-range theorizing–a lot of neoclassical realist work tries to, as do some of the more specific rationalist theories of bargaining and so on, which you can then use. But to me, IR theory is more of a worldview component in what I think matters in international affairs: That is the theoretical component of my broad view.
The way I understand and look at things is generally shaped by realism, in the sense that power plays a central role. When push comes to shove, it eventually comes down to conflict groups and one group's ability to harm others in a very physical sense. In that understanding of the world, civilization is quite a thin veneer, and it doesn't take some intentionally evil design for people to go to war or into conflict. The incentives, the presence of the security dilemma, the reality of group identities–how we identify ourselves and our interests–provide a way to understand the fundamental, even immutable, dynamics of international politics. From there, I make certain predictions and forecasts. IR theory, in essence, plays the role of a background, constraining framework for looking at what matters when an actor makes a decision, and what will eventually matter for the outcomes.
For my students, I generally give the example of Liechtenstein. I don't want to pick on Liechtenstein, but its leaders–the Duke, the Prince, whoever–could have world-domination fantasies and will never act on them, because they don't have the capabilities to achieve such an outcome. But if you're talking about Germany, or Russia, or China, or the United States, that's a different story, right? They have the capabilities. Looking at whether an actor actually has the capabilities to do what it wants to do tells you what is plausible and what is possible, and therefore how you should constrain the probable outcomes.
Clay: So then, does that change based on the actor you're looking at? If the primary actor is the European Union, do you shift the foundational theories you're using? When you're looking at the strategic partnership between Russia and China, does the underlying theory, and the range of outcomes, change? I had a professor who always made sure to say there's no such thing as realists, only practitioners of realism: Theory doesn't exist in a vacuum; it is people implementing theory that actually matters. So does the actor you're looking at affect which theories you consider?
Balkan: Exactly. I totally agree with that. Again, I tend to see realism as a disposition rather than a coherent, Kuhnian or Lakatosian set of connected propositions, hypotheses, and axioms. I think Richard Lebow puts it quite nicely: It's a tragic disposition, in the sense that, despite your best efforts, bad things happen, without rhyme or reason, and it's more of a cyclical understanding. So as a disposition: Things can get bad because there are power struggles, and there will always be power struggles among groups, and so on and so forth. When you come to specific issues, you start switching between whatever works as a tool for understanding how to solve that problem.
For me, it's a lot more useful to look at the various governance debates, from intergovernmentalism to supranationalism, to understand the specific dynamics within the EU than to impose some external theory that wouldn't make sense there. Those are tools you reach for to make sense of what is going on, but you always come with a particular disposition. People who tend to have a more liberal disposition, in the IR-theory sense, see more opportunities for cooperation, while people like me tend always to be looking in the shadows to see what kinds of threats are lurking there. I think that has a lot to do with personality, with upbringing, with the intellectual journey you've come through, etc. But those are background conditions, and the way I approach theory is not really different from the different methods I reach for to understand a problem–it could be quantitative work, game theory, or in-depth ethnographic work, depending on what works in the specific circumstance.
Andrew: As you mentioned before, Tetlock's Superforecasting was our introduction to the forecasting space, and most of our exposure to forecasting in general has been somewhat US-centric. Something we found really interesting about your background is that you've operated in Canada, Denmark, Turkey, and the United States. Has that geographic context altered the work that you do? Do you see different approaches to forecasting, or different levels of acceptance of these novel forecasting methods, in different geographic contexts?
Balkan: Yes. Well, maybe two things there. The US is definitely on the leading edge of this, and there's a reason why a lot of these efforts have been funded by various US government agencies. Let me put it this way: In most places, thinking about the future–if it happens at all–is generally a bunch of guys sitting around a table and talking. There's no institutionalized way of doing it; it's generally winging it, because the payoffs are not there for most of the people who need to think about this. The US, in that sense, is a different ballgame, and that has a lot to do, I believe, with the specific people in organizations like IARPA and DARPA who actually pushed for this.
Still, even in the US, when you look at the government or bureaucratic level, accurate forecasts are not high on the list of priorities. Things are changing, though. In Europe, for example, a recent Commission-wide report made foresight a central component of European Union decision-making going forward, and there is now a Vice-President of the European Commission with responsibility for foresight. The way they understand foresight is a bit different–I find it a bit wishy-washy, in the sense that it's all about plausible scenarios, etc.–but it is still a step. It's a step toward thinking in a more disciplined way about the different alternatives out there and how we can anticipate change. For practitioners, you want to anticipate so you can take actions to alter the trajectory to your advantage. So foresight is coming back in a big way in Europe as well. In Turkey, by contrast, it is almost non-existent. I was one of the handful of non-Americans who ended up qualifying as a Superforecaster through the IARPA competition–I think 85% of participants were American. But now, for example, we have a broader, more diverse group within Good Judgment, with people from everywhere, increasingly. I think that brings a lot of useful perspectives, and having a different perspective has served me well on several questions.
Clay: Do you think part of changing how much attention is given to foresight comes down to how academia is structured–particularly what is considered an acceptable thesis? You're always told: Work on the past, look backwards; it has to be 5, 10, 20 years in the past. During undergrad I always wanted to focus on automation and artificial intelligence and how they would impact the international space, but that's forward-looking, and I was told academia was not the right place for that kind of research. Given that most people don't get PhDs, or have the time to write papers on the side, do you think part of it is simply the bounds of acceptable academic research for everyone who doesn't have a tenured position and can't write whatever journal articles they want–assuming they have an endowed fellowship, and all of those additional things as well? I don't know if you've given thought to that as a competition problem.
Balkan: I think that's definitely one of the things. When you look at it, academia is one of the most conservative institutions in the world in terms of structure–"small-c" conservative. We're essentially still running a modified version of the medieval guild, filtered through the late-19th-century German understanding of what a university is. It is still the same way of thinking and organizing, so the structures and incentives are very, very slow to change.
If there's one thing I took from my education and background in economics, it is that incentives are what matter, fundamentally. Much of contemporary economics is about incentives, incentive design, mechanisms, and that kind of thing. In academia–especially in the social sciences–the incentives are not about predicting accurately, because that's not who you are talking to. You're talking to other academics doing similar work, so it's more about the minutiae of a particular case. The emphasis is on looking back: on explanation, on understanding, and–to be honest–on an aversion, within the social sciences broadly speaking, to making a claim that can be falsified later on.
If you speak in very broad terms, you can say: Balances [of power] will form. When? How? Where? Who knows! They will, eventually, you know. So you cannot be falsified, and you're not rewarded for being accurate. John Mearsheimer's 1990s articles are a great example. He could be, you know, again and again–
Clay: –or [Samuel] Huntington.
Balkan: Exactly, or Huntington. You can be wrong again and again, and it never impacts your credibility in the field. So people don't go there. There's no incentive, no upside for risking it. If you don't get much when you're right, what's the point? Looking backwards is how the field is generally structured, and it's very, very hard to break that structure.
Certain fields are different–and AI, in essence, puts you in a perfect spot, maybe a sweet spot, because it is one of those areas in which people think very carefully about the future. The UK has two great centres on existential risk in which AI plays a huge role: the Future of Humanity Institute at Oxford and CSER (the Centre for the Study of Existential Risk) at Cambridge. In that field, anticipating the future is the primary purpose, so much more rigorous and careful work is being done there. Interestingly enough, a lot of it comes from philosophy and from engineering and mathematical backgrounds, and unfortunately very little from the social sciences, though things have been changing slowly over the past five years. So there is very, very slow change, but quantified forecasting is still a minuscule part of what academia does and what it rewards. And because people in academia are not rewarded for accurate prediction or anticipation, they don't do it, and that's very hard to change.
Andrew: You've talked a lot about approaches to predictions. We were wondering–just for the value of readers and watchers–is there a prediction from your past work that you can talk us through, without giving away information about the client it was for, and how you actually approached it?
Balkan: I'll give you two examples: one where I completely bombed it and totally missed, because I think those are the more interesting ones, and one where I differed from the crowd and ended up being right, and I'll explain why I think that happened.
Let's start with the second one. This was a question on the Turkish presidential election back in 2018 and whether it would be resolved [would a winner be declared] in the first round or not. There were a lot of comments in the forum while I was forecasting the question. I was at around 90%, saying that it would resolve in the first round and Erdoğan would be president. And a lot of pushback came from people saying, "This is what the polls suggest," and so on–a lot of rational arguments. But my approach was different. The question I asked was not so much, what are the objective conditions under which someone votes one way or another, or what do the polls say, but rather, what are the incentives for Erdoğan to make sure things didn't go to a second round? Given the whole history, he cannot afford to lose; he cannot even afford for this to go to a second round. Given the corruption and political-oppression debates and everything else going on, it's not your grandmother's election. This is a guy who cannot afford to lose. Therefore, he would do everything in his power, legal or illegal, to make sure of it. So the results–whether or not they accurately represented the people's will–would be such that he won in the first round, which turned out to be the case. Rather than leaning on the outside view, which is typically what Tetlock argues we should rely on–the base rate of how these things happen in general–this was a very inside-view move.
The useful lesson there is: Base rates are guidelines. It's about keeping the balance between the inside and outside perspectives, and knowing when to shift toward the inside. If everything followed the mean, if everything reverted to the mean all the time, there would be no surprises. The point is to identify the conditions under which you need to focus on the inside view more than the outside view. In that particular instance, it was beneficial for me to focus on the inside perspective.
As for questions I completely missed, the one that primarily comes to mind is the Trump-Kim meeting: Would it happen or not? It ended up happening in Singapore. There, I was swayed by the crowd. My initial thought was: Trump is a showman, he likes to do this sort of thing, so he probably will. So my initial forecast was right. But then I engaged with people and their arguments–it's not going to happen because of this and that; given the conditions over there, the plans are all flaky and it will fall through. So I steadily decreased my forecast, because I was convinced they were right that Trump wouldn't be able to pull it off. Up until the last minute, my forecast said no, the meeting is not going to happen. And then, whoops! It flipped, and we all got a horrible Brier score (a measure of forecast accuracy) because we had given the event a 5% chance.
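[For readers unfamiliar with the metric: a Brier score sums the squared differences between the probabilities you gave and what actually happened, so lower is better. The sketch below uses the two-outcome convention for binary questions, under which scores range from 0 (perfect) to 2 (worst); conventions vary between platforms, and the function name here is ours.]

```python
def brier_score(p_yes: float, happened: bool) -> float:
    """Two-outcome Brier score for a binary question (0 = perfect, 2 = worst)."""
    outcome = 1.0 if happened else 0.0
    # Squared error on the "yes" outcome plus squared error on the "no" outcome.
    return (p_yes - outcome) ** 2 + ((1.0 - p_yes) - (1.0 - outcome)) ** 2

# A 5% "yes" forecast on an event that then happens scores near the maximum:
print(round(brier_score(0.05, True), 4))   # 1.805
```

This is why the last-minute surprise was so costly: a confident miss sits close to the worst possible score, while a confident hit (e.g. 95% on the same event) would have scored 0.005.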
Again, this is a good example of balancing the inside versus the outside view, and your own thinking against the crowd's. I should have stuck with my own forecast, because my primary framework for approaching high-level political decisions tends to be very individual-centric, given my background. I should have stuck with: This is what we know about his personality, his modus operandi; therefore the chances that he will try to pull something off are higher than for your average American president. Instead I suppressed that explanation and went along, because I was convinced their arguments were right.
Clay: Have you learned ways to catch yourself? Because there are cases where you come up with your own idea, then read someone else's and realize the crowd is definitely right. I would imagine that, on average, over a long period of time, going with the crowd is better. So have you developed tricks, or a checklist, for deciding whether to move toward the crowd? Do you have criteria for that?
Andrew: Like a heuristic.
Clay: Yeah.
Balkan: What I tend to do, generally, if I'm faced with a crowd opinion that substantially differs from my own, is flip the question: Would I be convinced if I held the crowd's opinion and the crowd held mine? I flip it and see which seems more convincing. Maybe my arguments are not as strong as I think they are. Once I treat my opinion as someone else's, I start thinking about its weaknesses and how to poke holes in it. So I try to defend the crowd's opinion as if it were my own, and to defeat my former opinion as if it were the crowd's. That is generally a good check, at least in my case, because it brings your cognitive biases along rather than fighting against them: You are predisposed to prefer your own explanation to others', and it's easier to criticize others than yourself. If you project your thinking onto someone else and then criticize it, you will be more harsh–and that is actually better for you. So instead of fighting that urge, you bring it along. I find it quite useful: flipping the tables, and going at my own prediction as if it were the crowd's.
Andrew: That's fascinating. We have just one more question, and then we'll get into a couple of rapid-fire ones. As we mentioned in the introduction, you have a Substack newsletter, Hindsight 20/20. We were wondering what the impetus was for starting it, how it's been going, and where you'd like the project to go moving forward.
Balkan: Excellent. Well, the impetus was not much different from yours for starting Global Guessing. Hindsight 20/20 is a public ledger where I write some of my predictions down in longhand format–a way to push myself to organize my thinking about uncertainty and the future in geopolitics, publicly, so I can hold myself accountable. That was the primary thing. Putting a prediction out there obliges me to write it down properly, and seeing it there forces me to think more clearly about how my predictions go. So rather than keeping them all in a private format with Good Judgment and others, I wanted to put some of them out there in that way.
Clay: That's awesome. I'm personally subscribed to your Substack, and there will be a link down below. I really liked your most recent post–that's the one that got me to subscribe. So, at the end of all of these interviews, we like to do a few rapid-fire questions. Two of them are ones we asked Regina Joseph. But the other one comes from a word I've heard over and over again in this interview: incentives. I was wondering if you had looked at websites like Metaculus, and whether you think their approach of having community points is a way to create proper incentives for better forecasting. Or do you think a better future lies in platforms such as Kalshi, which allow trading money on event-based predictions?
Balkan: I'm agnostic. One of the reasons I'm skeptical about prediction markets with a money component is liquidity–the availability, or depth, of the market. Even the top political prediction markets, like PredictIt, are very shallow, so you can move prices a lot. The game then becomes not accurately predicting the outcome but arbitraging between different views on the market to make money, so the incentives diverge from accuracy. I am skeptical of making money through accurate predictions in a market format. I think prediction markets have a role, but I'm skeptical about whether money is a useful incentive there.
When I look at people who get into this way of thinking–on Metaculus, on Good Judgment Open, and on other platforms–I tend to see two incentives driving them. One is intrinsic: People who do it not for the money but to see whether the world works the way they think it does. You derive some satisfaction from that.
The others tend to be driven by competitive instinct. For them, forecasting is more about external validation, in the sense that it's a sport: You beat the competition! You're number one! You're in the top three! Your drive is to compete with others. Now, do you really need money for that? It helps, but there isn't much money to be made anyway–putting the same level of effort into some other endeavor would bring you more cash than the few hundred dollars you might win doing this. It is more the social-hierarchy component: I do this, I compete, I win; it's my competitive instinct that drives me. And there, I think community points can be a good incentive, the way Metaculus does it, because it gamifies forecasting and provides a very public way of saying: I am the top dog here.
We are, as a species, quite competitive. We like to do things as teams–there's a reason almost every culture has team sports in one form or another–and we like to create those sorts of hierarchies. So in essence it could help, but how much it would attract people who aren't driven by either of those two motivations, I don't know. And whether that would be a loss for accuracy–in other words, does it matter if people who are neither intrinsically nor extrinsically geared toward making accurate forecasts aren't involved–I don't know either. That's a testable proposition. So I'm agnostic about the long-term aspects, but I'm more partial to the gamification component than to prediction markets and making money.
Andrew: And just really quickly: For somebody at the top of the charts on Metaculus, do you feel that achievement would translate into doing the kind of forecasting work you do as a job? Could they use it almost as a certification of their forecasting abilities? Or do you think there's still a chasm between the two skill sets?
Balkan: The primary chasm is convincing people that this actually matters for their livelihoods, for the future of their business, for their well-being. What I generally see at the policy level, both in business and in government, is that people don't wrap their minds around why making accurate predictions earlier is going to be good for their business. It depends a lot, of course, on who you talk to. If your income, your salary, doesn't depend on being 5% more accurate than the other guy, you won't be interested in spending money on that. But say you're talking with an entrepreneur, or a founder, who would earn a lot more by being slightly more accurate than the others: There's a lot of incentive to put money into it, because it gives you the edge. If you're talking with an HR person whose job is to not screw up, rather than to increase the company's bottom line by 5%, why would you risk being singled out for being precisely wrong? A lot of people prefer to be vaguely right than precisely wrong. And that's the incentive structure broadly across academia, government, and business. Explaining why this is a good thing–why you, your organization, your company, your department benefit from being more accurate about the future–is always a bit of an uphill battle, I would say.
Clay: Alright, and to close it off, you will now be asked to make two rapid-fire predictions, both of which we asked Regina as well. Number one, and given your name, I think you have a little extra expertise on this one: what is the likelihood that Putin annexes more territory in Eastern Europe, including the Balkans, in the next five years?
Balkan: Less than 5%.
Clay: And then what is the likelihood that we credibly detect alien life, defined as cellular life / proto-organisms, in the next 10 years?
Balkan: Hmmmm. Would that include things on Mars, on the various moons of Jupiter, and everywhere else?
Clay: Yes, but it has to be current not past.
Balkan: Oh, okay. So existing, continuing life, rather than something that existed a million years ago.
Clay: Let's do both.
Balkan: If it's the broader prediction, including both classes, in the next 10 years, I'd say 10 to 15% at least; if I had to give you a range, I'd give 10 to 25%. I'll tell you, very briefly, my reasoning behind it. More and more countries and private corporations are sending probes and space missions, so we are actually increasing the number of potential discovery sites. And Elon Musk's desire to go to Mars, along with Bezos and others trying to join the space race, will, I think, kick things off further. So there will be an explosion of space exploration in the next 10 years, which increases our potential for discovery. And there are already signs, particularly with regard to Mars as well as some of the moons of Jupiter, suggesting there was life at some point because of water and so on. So I think there's a 10-25% chance that we will find something.
Now, continuing, existing life with cellular structure, I would put somewhere in the range of 2% to 4%. Partly because it's harder to detect: you've got to be in exactly the right spot. It might exist in only one small area, and Mars is huge, so you might not land in the correct place, and things like that. And by conjunction it is naturally a lot less likely than the other outcome. So I would say it's less than 5%, closer to the low end of that range.
Clay: All right, well, we'll make sure to get back to you in five to ten years and let you know how those predictions did. For all of our watchers out there, you can find Balkan over at BalkanDevlen.com, @BalkanDevlen on Twitter, and his Substack is h2020.substack.com. Anything else you'd like to plug, Balkan, or other places people can find you?
Balkan: No, thank you very much! Much appreciated.
Clay: Thank you so much for all your time and your answers. It was a wonderful conversation. We definitely learned a lot.
Balkan: Oh, excellent. Excellent. I'm very, very happy to be here.
Clay: Thank you.
Balkan: Awesome.
Andrew: Thank you so much.
Clay: And make sure to tune in next week for the second episode of the Global Guessing Weekly Podcast.
Balkan: Who are you chatting to next week?
Clay: We don't think next week will be an interview; it'll probably be a discussion, most likely about our next episode of Metaculus Mondays. But we're also trying to line up a series of interviews for the future, so there's much more to come.
Thank you everybody!
Find Dr. Devlen
Follow Balkan Devlen on Twitter @BalkanDevlen
