Juan Cambeiro, the Top Pandemic Forecaster on Good Judgment Open

Since beginning Global Guessing, we have noted that interdisciplinary approaches to forecasting have often borne positive results. Whether guests have come from the world of finance, academia, or biology, their ‘outside views’ have frequently contributed to their forecasting accuracy. And our guest this week, Juan Cambeiro, is no different.

In episode 6 of The Right Side of Maybe, Clay and Andrew sat down with superforecaster, biostatistics student, and Metaculus summer analyst Juan Cambeiro to discuss his background in forecasting and his work with Metaculus Pandemic.

After discussing Juan's background in forecasting, we dove deep into three recent forecasts he made covering Mortgage Interest Rates, US COVID Deaths, and the Tokyo Summer Olympics.

  • Juan got to the right side of maybe on two of the questions, while on the third he ended up on the wrong side. We not only discussed what went well with his correct forecasts, but also explored potential sources of error with the third.

We also spoke with Juan about his information diet, how he goes about finding new sources, and the importance of forecasting for the world! Be sure to tune in to this episode, as you will learn a lot about good forecasting habits and how to perform well in tournaments.

YouTube

Apple Podcasts

Google Podcasts

Listen On Other Platforms

Just look up Global Guessing Podcasts to find the episode! New episodes might take 6-24 hours to show up, so make sure to subscribe to the podcast so you'll know when the new episode goes live.

  • Podbean
  • Amazon Music
  • Audible
  • Stitcher
  • iHeartRadio
  • TuneIn
  • Pocket Casts
  • Blubrry
  • PodcastIndex
  • Podchaser
  • Gaana
  • JioSaavn
  • Deezer
  • Podcast Addict
  • PlayerFM
  • Bullhorn

Transcript

Andrew Eaddy
Welcome to the sixth episode of The Right Side of Maybe: the podcast where we talk to forecasters about their forecasts, the thinking behind those forecasts, and how we can all learn from their process moving forward.

Today, we're joined by Juan Cambeiro. Juan is a recent graduate of Macaulay Honors College at Hunter College, where he majored in biology with honors. He's done benchwork in RNA labs, and also has experience in epidemiology and biostatistics. He's a Superforecaster and placed first in Good Judgment's FOCUS 2.0 tournament on COVID-19, and he is also first or second in the three challenges on Good Judgment Open that he's working on. He's most active on Metaculus, where he recently started working as a Summer Analyst and has also done part-time work as a moderator for the Pandemic domain for the past year. We're going to learn a lot more about Juan and his background and some of the forecasts he's worked on. But to start, we're just going to get some background information about Juan. Before we even get to that, though: welcome, Juan, to the podcast, and thank you for being here.

Juan Cambeiro
Thanks for having me.

Andrew Eaddy
So to get started, we were wondering how and when you first were introduced to quantified forecasting, you know, if you were sold on it immediately, or if it took some time to sort of get acclimated to the practice, and what interested you about forecasting in general?

Juan Cambeiro
Yeah, so I first heard about it a few years back, but I didn't really get interested or think much about participating in quantified forecasting until the fall of 2019, when I listened to Rob Wiblin interviewing Philip Tetlock on the 80,000 Hours Podcast. They had mentioned Good Judgment Open, so I was curious, went on, and started forecasting on a couple of questions.

And then, in early 2020, I learned about Metaculus from Rob Wiblin sharing Metaculus questions about COVID on Twitter, and got really excited about Metaculus. So that's how I got introduced to it.

Clay Graubard
And when you first got introduced to quantified forecasting and actually started doing it, did you initially just do it as sort of a hobby, like "I'll spend 10 minutes forecasting here or there," and then slowly over time, as you kept forecasting, you got much more serious about it and started working on your skills? Or, once you started forecasting, was this something that you wanted to get good at? And what were those first steps that you took while forecasting to get your accuracy and calibration better and, you know, more well aligned?

Juan Cambeiro
I think the path you just outlined is exactly the case for me, in that I started out pretty slow. I just started forecasting because it seemed interesting and fun, not so much because I thought it was a very important thing to get better at, just something that seemed cool to do. But yeah, I didn't spend all that much time on the questions I forecasted on. I just picked out a few questions that seemed interesting, and I'd spend a couple hours on them a week, usually no more than five or six hours a week.

But then it was COVID that really got me into, you know, even at one point obsessing about, forecasting, especially in the spring and summer of 2020. So I spent a lot of time on forecasting, like instead of five hours a week, maybe four hours a day, if you count all the reading. But I don't think I could have gotten to a place where I could do accurate forecasting on COVID if I hadn't had at least that introduction to forecasting and playing around in the fall of 2019.

Andrew Eaddy
So you just talked about some of the reading that you were doing. What sort of information was part of your research as you were going through some of those early questions, you know, like the COVID forecasts you mentioned? Was there a process that you had for finding the relevant signals for your forecasts or for the questions you were looking for answers to? And do you spend a lot of time following the news in general?

Juan Cambeiro
So for COVID forecasting in particular, something that I found to be very useful, probably more useful than reading any other news source or even all of the news sources combined, is to assemble a list of, in my case, about 80 people on Twitter who provide really good analyses and insight into important subtopics on COVID. This both helps me get a handle on important information much sooner than most news outlets get to it and lets me get it closer to the source, so I found that to be super valuable.

Clay Graubard
And how did you sort of find those users on Twitter? You know, that's something that we've done for some of our forecasts. For instance, on the Donbass region, we created a Twitter list of people that seemed to, you know, do open source intelligence for the region. And many times for us, it's like we'll stumble upon one good retweet, and then we'll find an account, and then, you know, find other accounts that those people have found. Do you similarly just stumble upon these accounts? Or have you honed a sort of practice for finding these few signal identifiers and good analysts on Twitter throughout the sea of noise that exists?

Juan Cambeiro
Yeah, so that's one way of doing it that I have employed, looking at the likes and retweets of analysts that I already trust. Otherwise, if you curate your feed to be almost entirely about COVID, you'll just get content that aligns with your interests. And then I dig into the past tweets that these individuals I've come across have made to get a sense of how accurate they've been in the past, what their track record might be, even if it's not a quantified track record; like, for instance, whether they wrote about masks sooner than most other experts. That's usually the way I go about doing it.

Clay Graubard
As you were taking in this very nuanced and analytical information on COVID, did you find that it changed the way you viewed the situation? Did you feel that you had a much more nuanced view of the pandemic? Did you feel like you were much more well informed and knew where the pandemic was generally heading before others, and that that gave you a different sense of mind? And basically, did you notice beneficial side effects from, you know, this information diet that was critical to your forecasts?

Juan Cambeiro
Yeah, in addition to these people usually being the best source of information, they were usually well ahead of the curve in sharing the most important and relevant information, and sometimes even in parsing through the implications of it. And it doesn't even have to be people who have any background knowledge on the topic. So for instance, the first person that comes to mind, who I had mentioned earlier as the person who introduced me to Metaculus, is Rob Wiblin, sharing COVID forecasts in January and February of 2020. Just the fact that he was sharing all of these COVID forecasts really alarmed me, in the sense that it was a signal to me that this was an important issue. And then I myself arrived at the conclusion, in mid-February, that we were going to get hit hard by COVID in the US.

Andrew Eaddy
Just a quick question, I want to go back a bit. So we talked about your experience in biology and also biostats. For these COVID forecasts, and forecasting in general, do you feel like your background helped to prepare you or gave you sort of a leg up over some other people that might be entering the space? Like, did your background in biostats at all provide you with some, you know, information that you found relevant to your forecasting?

Juan Cambeiro
Less than you would think, especially since I didn't work on viruses in particular. But I think the one area of COVID forecasting that it might have helped with in particular is around the variants and the mutation rate of the virus. Very few people, I mean, even I, didn't really foresee the extent to which SARS-CoV-2 would evolve to both evade immunity and, especially, to become more transmissible. But in terms of understanding the mechanics of how that's possible, the biology undergirding it, and the space of evolution that's left, I think that's the one area I might have a better handle on as a result of my background.

Clay Graubard
Before we talk more specifically about your approach to forecasting: since you've been forecasting, have you noticed other benefits that doing this process has given you? Has it increased your ability to pick up and learn new information faster? Has it moderated your political views a little bit and helped you have more nuance? Because there's research coming out of Penn with Barbara Mellers that has found that the process of forecasting helps moderate our views. And as I'm sure you're aware, every single time you forecast a question, most of the time it's a lesson in learning a lot very quickly and being able to process that information and project it into the future. So have you, you know, noticed other benefits to your analytical and reasoning skills as you've forecasted more?

Juan Cambeiro
So, in terms of actually adjusting my views, or my beliefs, as a result, I actually don't think forecasting has really influenced that very much. I think what it has influenced is how I search for new information and how I pick out what I think is most relevant, because that's exactly what you have to do when you're forecasting, especially on a question you don't know much about. You're doing some background reading and trying to figure out what the important factors at play are. So I think that's a skill that I picked up along the way as a result of forecasting that I can apply to situations in which I'm not forecasting.

Andrew Eaddy
Wonderful. And then also, just really quickly, I'm curious: you mentioned that one of the first steps you took for these forecasts was to go on Twitter and find other people who had smart analysis. Did you end up teaming up with anybody on these forecasts? And have you explored working with other forecasters in the community, especially now that you're a Superforecaster? Has that come up at all?

Juan Cambeiro
I do now to an extent, but I didn't at the time, just because I think I was relatively new to forecasting. I mean, retrospectively, now I can see the value of teaming up with others, but at the time I didn't really see the need to do it. So I didn't, but I should have. And I am at the moment, and I expect to continue to.

Clay Graubard
So now I think it'd be great if we moved specifically into your approach to forecasting, and you provided us with three great forecasts that we want to walk through. But before we get to those specific forecasts, we want to just talk about your larger approach. So you placed first in the FOCUS 2.0 COVID forecasting challenge and have scored really well on Good Judgment Open's COVID-19 forecasting. What do you primarily attribute your success in COVID forecasting to, specifically? Has it been your ability to find good information on Twitter? Has it been your ability to determine signal versus noise? What would you say has been your key ingredient in terms of being so good at COVID forecasting?

Juan Cambeiro
I certainly don't think there was a key ingredient. It's more of a process, right? Like the way one approaches forecasting and actually does it. So for questions where I already know a lot about the topic, for instance when approaching a new COVID question where I already know a lot about the particular situation, I can spend as little as a few minutes on my first forecast, even if I don't consult the background information, or base rates, or whatever it is, just because I'm already so involved in forecasting similar questions. But in terms of my general approach to a question:

First, if there's background information provided, I read that as well as any links that are provided. If after that I can't come up with a good base rate, and I feel like I still might not have a good enough grasp on the topic to take at least a first pass at the question, which I usually don't at that point, then I use some combination of scanning an introductory article, honestly usually the relevant Wikipedia page, as well as reading some recent news articles on the topic and the comments section for that question.

After that, I almost always have what I need to arrive at at least a somewhat correct base rate and enough understanding to make a first forecast. And then, this might be particular to me, but I don't immediately input that forecast. I just kind of arrive at it in my head, then I consult the community median, and I adjust my forecast before inputting it if it's way off compared to the median. This adjustment method, though, really only works well for binary questions, less so for date-range and numeric ones where I can't really avoid seeing the community's distribution.
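One way to picture that last step, forming an independent estimate first and only then checking it against the crowd, is the small Python sketch below. This is purely our illustration: Juan describes the check as an informal judgment call, so the 25-point flag threshold and the halfway adjustment here are invented, not his actual rule.

```python
def check_against_median(my_prob: float, community_median: float,
                         flag_gap: float = 0.25) -> float:
    """Sanity-check an independent binary forecast against the community median.

    Illustrative only: the 0.25 flag threshold and the halfway shading are
    made-up stand-ins for what Juan describes as an informal gut check.
    """
    gap = abs(my_prob - community_median)
    if gap <= flag_gap:
        return my_prob  # close enough to the crowd: keep the independent estimate
    # Way off the crowd: unless you have a clear, confident story for why the
    # community is wrong, shade partway back toward the median before submitting.
    return (my_prob + community_median) / 2


print(check_against_median(0.62, 0.70))  # small gap, keep 0.62
print(check_against_median(0.10, 0.60))  # large gap, shade back to 0.35
```

The point of the ordering is to avoid anchoring: the median is consulted only after an independent number exists, and, as the mortgage example below shows, a forecaster with a strong reason can still choose not to shade back at all.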

Clay Graubard
So you normally avoid looking at the community median until you've reached your initial base rate assumption, correct?

Juan Cambeiro
That's correct. Yeah.

Clay Graubard
How long do you think it takes for you to get to your initial point? Obviously, it'll differ forecast to forecast; some questions are obviously much easier than others, especially if they're similar to a question you've done before. But, you know, if it's a relatively new question where you're having to do background research to get to that initial forecast, do you have a sense of roughly how long that normally takes you?

Juan Cambeiro
Really depends on how much I know about the topic, but maybe, on average, 45 minutes to an hour. But that can vary a lot, from like half an hour to two or three hours.

Clay Graubard
And then it seemed like, from the forecasts you shared, that you rely on a lot of small and frequent updates. Is that true across the board? It wouldn't surprise us, given your skill, that you update all your questions frequently in small increments. You know, that's what Pavel Atanasov's research found about the best forecasters: that's generally what they do. Is that something that you tend to do on most of your questions, sort of small updates over time?

Juan Cambeiro
It is, yeah. Both as a way of getting me to seek out new information that might be relevant to that forecast, especially new news updates, but also, especially if it's a question I started forecasting on recently, as a way to just take a fresh look at the question, which I think is probably useful in reducing my bias and helping me catch things I've missed.

Andrew Eaddy
Awesome. So now, Clay, unless you have other questions, I think we can get into the first of the three forecasts that we want to discuss today. And I thought we could start with the question, all three of them are on Good Judgment Open, about mortgages. I'll just read the question out loud so that, you know, listeners know what we're referring to. The question, which started on March 13, 2020 and closed in June of 2020, read: Before the 27th of June 2020, will the weekly average contract interest rate for 30-year fixed-rate mortgages with conforming loan balances in the US fall below 3%? So before we get into some of the questions, I was wondering if you could maybe explain what that question is asking, you know, for listeners who may not be familiar with fixed-rate mortgages.

Before 27 June 2020, will the weekly average contract interest rate for 30-year fixed-rate mortgages with conforming loan balances in the U.S. fall below 3.00%?
Outcome will be determined by data from the Mortgage Bankers Association Weekly Applications Survey press release, generally published five days after the week’s end at Mortgage Bankers Association. For the week ending 6 March 2020, the rate was 3.47% (Mortgage Bankers Association). Confused? Ch…

Juan Cambeiro
Okay, so this is a good question to discuss, because it's an example of how I forecast on questions where I don't really have relevant background knowledge and where I also have difficulty understanding what the question means. At the time, I didn't really understand what a 30-year fixed-rate mortgage or a conforming loan balance really meant.

Essentially, it's something like a mortgage on a single-family home that meets the requirements of Freddie Mac or Fannie Mae, with the requirement that the loan amount is under about $500,000 in most areas of the country except some urban areas, and where the interest rate is fixed over time.

It's something like that. But anyway, did I really have to know what it is? Not really. And I think the fact that I didn't really know what it was, and didn't really have the relevant economics training to properly evaluate the question, actually helped me in the end, because just looking at the base rate, which is pretty much all I did, and looking at the changes week over week is how I arrived at my forecasts.

Clay Graubard
So on this question, the initial community forecast, once it stabilized after the first week, was about a 75% chance of "Yes," that the weekly interest rate would fall below 3%. For context, what was your initial forecast on this question?

Juan Cambeiro
Yeah, so my initial forecast for the question, just pulling it up, was 15%.

Clay Graubard
And then it reached a low of, I believe, 0% at some point on that forecast, correct?

Juan Cambeiro
Yeah, toward the end, I reached zero. But even somewhere in the middle of forecasting on this, I reached a low of like, 2%.

Clay Graubard
So clearly, right off the bat, you were on the right side of maybe. You know, many times in these questions it's people getting there much quicker than the community. And if you're watching this video on YouTube, you can see it took about two months for the community average to go below 50% and consistently stay there. So you got there about 60 days beforehand. What do you think you got initially correct in this forecast? And then conversely, what do you think the community on Good Judgment, you know, missed off the bat that took them two months to realize?

Juan Cambeiro
So I'll start off with the caveat that I think consulting the community median to make sure you're not way off it, especially for questions where you don't have a very clear and confident explanation in your head as to why you're probably right and the community is wrong, is something that you have to look at very closely. And it probably would be worth sharing your analysis in a comment to get feedback on it from other forecasters. If you get a bunch of upvotes and no one is challenging you, then you probably have more reason to think you're right. But if you get people challenging your reasoning, then that's a way for you to course correct.

But those aren't really things I did in this case, which is usually not my approach. Here, I just very heavily relied on the base rate. And the base rate was just overwhelmingly clear to me, in that the interest rate had stayed within a narrow range of about 0.55%, between roughly 3.4% and 3.95%. We also started out much closer to 4% than to 3%. And even in this especially volatile time, the interest rate never increased or decreased more than 10 basis points week over week. So it would take at least a couple of weeks of sustained decline for it to fall below 3%, and we were just never really close to that within the timeframe of this question.
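To make the arithmetic behind that base-rate argument concrete, here is a minimal Python sketch using the figures from the question card and Juan's description (a 3.47% starting rate and week-over-week moves of no more than about 10 basis points). Treating 10 basis points as a hard weekly cap, and the variable names themselves, are our simplification, not his actual worksheet.

```python
import math

# Back-of-the-envelope version of the base-rate logic described above.
# Inputs come from the question text and Juan's description; a hard 10 bps
# weekly cap is an illustrative simplification.

start_rate = 3.47        # % for the week ending 6 March 2020
threshold = 3.00         # % level the question asks about
max_weekly_move = 0.10   # largest week-over-week change observed (10 bps)

gap = start_rate - threshold                  # percentage points to cover
min_weeks = math.ceil(gap / max_weekly_move)  # every week must be a max-size drop

print(f"Gap to cover: {gap:.2f} percentage points")
print(f"Minimum consecutive max-size declines needed: {min_weeks}")
# Roughly five straight weeks of record-sized drops, with no up-weeks, would
# have been needed before 27 June, which is why the historical range alone
# made "No" look very likely.
```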

Andrew Eaddy
I see sort of an analogous situation here between how you did this forecast and the way that some analysts look at stock markets, where they have, you know, technical analysis and fundamental analysis: the technical being sort of just looking at the movement of the stock and thinking about its future based on prior movements, and the other one being having a more macro perspective on the market and, you know, deciding your financial positions that way. It seems like you were leaning more on the technical side of things: you had a base rate and you were looking at the weekly movements to determine your forecast. Did you think at all about the macroeconomic picture of the United States and how that might affect this forecast? Or was it really more just the weekly movement that you were paying attention to?

Juan Cambeiro
I did to an extent, yeah. There were some really good comments in the comment section that were considering that inside view, which was basically something like: this huge negative demand shock would drastically lower interest rates, probably in this case to below 3%. But then something else you have to consider, if you start considering the inside view, and that a lot of people seem to have missed, was that the huge negative demand shock and its effects on interest rates seem to have been largely canceled out by the huge purchases of mortgage-backed securities by the Fed, right? So if you were considering the inside view, you would have to take both of those into account to reach the conclusion, which ended up being true, that the interest rate would stay roughly stable and within roughly the historical range. But if you had not considered the inside view at all and just considered the base rate, you also could have gotten there in this case, which just speaks to how valuable looking at the base rate is.

Andrew Eaddy
And would you say, just in general, this is sort of a broad forecasting question: do you think, at least for you, it's harder to do these updates or to find the base rates? Like, for instance, for this mortgage question, I'm sure US mortgage data goes back a long way and there are a lot of different reference classes you could use to think about this question. Did you find updating harder, or was finding that initial base rate more difficult?

Juan Cambeiro
In terms of time commitment, finding the base rate, and the right base rate, is definitely the more time-intensive task. The weekly updates are usually pretty easy, in that you just look at how the interest rate has changed, in this case week on week. And it's just a good way of making sure your forecasts don't get stale, right? It's really not that time-intensive or that difficult to update them if there are no huge changes going on in the situation. And in this case, for me, there weren't, since I wasn't really looking at the inside view all that much. But for others who were almost exclusively considering the inside view, there was a lot going on.

Clay Graubard
When it came to those updates and looking at how the mortgage rates were changing week over week, were you looking at absolute change, so how close it was getting to three, or the rate at which it was moving down? I'm guessing you looked at both, but did you have a sense of which you were weighing more at the time mentally?

Juan Cambeiro
I was definitely looking at both. I think a third factor was trying to determine when it would hit a low, and that was probably the main thing I was looking out for. And in fact, I had misjudged when I thought it had reached a low, but it in fact had not; it just went back up for a week or two and then came back down to a new low, but never really approached 3% or fell below 3%. But yeah, I was basically looking for the record low that it would reach.

Clay Graubard
I want to focus on the other question that you got to the right side of maybe right off the bat, before the rest of the community. This question, asked on April 13, 2020, was: How many states will have reported 1,000 or more COVID deaths as of the first of May 2020? And the correct answer ended up being more than 13 states. But if you guys are watching the YouTube video, initially "more than 13" had around a zero to 5% likelihood and only became the dominant forecast a day before the question closed. It was a massive sort of uptick. And yet your initial forecast gave "more than 13" about an 85% likelihood. Could you explain to us your approach to this question, how you arrived at your initial forecast, and, you know, what you think you got right on this one?

How many states will have reported 1,000 or more COVID-19 deaths as of 1 May 2020?
As of 11 April 2020, three states have reported 1,000 or more COVID-19 deaths: Michigan, New Jersey, and New York. The outcome will be determined using data as provided by The COVID Tracking Project for 1 May 2020 as provided on 4 May 2020 after 5:00PM ET (The COVID Tracking Project). For the p…

Andrew Eaddy
And I'd say, especially given the fact that, as Clay said, the question started on April 13th, that was really close to the start of the pandemic. How did you sort of curate your information? I'm sure there was a lot of information out there at that time.

Juan Cambeiro
Yeah, so the broad contours of my thinking at the time, really since mid-February, were that the response would be inept in the US and we wouldn't really put the proper interventions in place in time to prevent COVID from killing a lot of people in the US. So my thinking on this question, for instance, started out with: based on the current trajectories of cases, which were being undercounted by some significant factor, we will almost certainly reach more than 13 states with over 1,000 deaths, based on both current death trajectories but especially case trajectories. So that's how I started out.

Clay Graubard
Do you think it was making that initial assumption of, you know, how this was going to unfold that the community sort of missed? That they were taking the current data more at face value and not anticipating the undercounting, or that there wasn't going to be a marshaling of a large, coordinated federal response right away, which you sort of spotted well before everyone else? Or was that just one of many factors that you felt you got right?

Juan Cambeiro
I think that's probably the main factor. But also, there was just, at the time, a huge difference between the COVID situation in different states, right? So, for instance, at the time, New York and some surrounding states were especially hard hit, but it didn't necessarily seem like there would be enough states outside of the Northeast that would be hit hard enough in time to reach over 1,000 deaths in this time frame. So it was really a matter of assessing the extent to which COVID would spread throughout the different regions of the US.

Clay Graubard
Re-looking at this graph, I'm going to guess the situation, given your forecasts and the community's, is that at some point it became: is it going to be 13 states or is it going to be 14 states? And that was sort of the big disagreement, because you were saying more than 13 states for a while, and then, closer to resolution, you switched to it being between 11 and 13, which to me indicates that there was a state on the border. And then just before resolution, it became clear that that state was going to tip over. Do you think that's correct? Because to me, at first glance, looking at this community graph, it looks like they kind of just got it wrong forever. But another interpretation is that, if there was a state on the border, the initial interpretation of the community was wrong, but then throughout the majority of the question they were just sort of straddling this one state on the border. Do you think that's a fair assessment? Or do you have another interpretation of what was going on here?

Juan Cambeiro
I think that's right, yeah. And if I recall correctly, this question ended up resolving as more than 13 because it reached 14. And it reached 14 because of Ohio, which up until the end was like a toss-up. So essentially, my initial assessment was that, based on the current trajectories of the different states, which I had all on a big spreadsheet, we'd probably end up reaching 13, maybe 14, 15, 16. That assessment really did change as the trajectory of deaths in Ohio in particular ended up not seeming like it was on track to hit over 1,000; Ohio only ended up hitting that threshold because of updates to its backlog. They were filling in gaps that had occurred on previous days, and that's how they ended up reaching over 1,000. But there was a point toward the end where I readjusted back toward the community median, partly because it didn't seem like Ohio was on track to exceed 1,000 deaths.

And also because I really started basing my forecast on these analyses and forecasts by this team at Los Alamos, which put out a COVID model for all the states of the US. At the time, I was looking through the histories of these models, and Los Alamos had the most accurate model for forecasting seasonal flu in the 2018-2019 flu season. So I adjusted heavily toward what their forecasts were saying, which was that it would probably fall somewhere in the 11 to 13 range. Those two things combined caused me to adjust back toward the community median. And then, at the very end of the timeframe of this question, Ohio did fill in those backlogs, and it seemed like they were going to exceed 1,000 deaths. And they did.
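As a rough illustration of the spreadsheet-style trajectory tracking Juan describes, here is a hedged Python sketch that extrapolates each state's recent daily deaths forward to the resolution date and counts how many states would cross 1,000. The state figures below are invented placeholders, not the real April 2020 data he was tracking, and a straight-line extrapolation is a deliberate oversimplification.

```python
from datetime import date

# Hypothetical illustration of the trajectory-counting approach: take each
# state's cumulative COVID-19 deaths and a recent average daily increase,
# extrapolate linearly to the resolution date, and count states over 1,000.

as_of = date(2020, 4, 13)
resolution = date(2020, 5, 1)
days_left = (resolution - as_of).days

# state: (cumulative deaths as of 13 April, average daily deaths), both invented
states = {
    "New York": (10000, 700),
    "New Jersey": (2400, 230),
    "Michigan": (1600, 110),
    "Ohio": (250, 30),      # the borderline case discussed in the episode
    "Georgia": (450, 35),
    # ...the remaining states would be listed the same way
}

projected_over_1000 = [
    name
    for name, (deaths, daily) in states.items()
    if deaths + daily * days_left >= 1000
]

print(f"Days remaining: {days_left}")
print(f"States projected to exceed 1,000 deaths: {len(projected_over_1000)}")
print(projected_over_1000)
```

A real version would also need to account for reporting artifacts like Ohio's backlog fill-in, which is exactly the kind of thing that pushed this question from 13 to 14 states at the last moment.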

Clay Graubard
You mentioned COVID modeling right there. I'm curious, given your experience forecasting COVID and looking at these models: what value do you think these models provide? You know, there have been a lot of reports in the media about what all these models show; very rarely are there updates saying how these models have actually fared. And there's been research coming from people that we've had on the show, you know, Regina Joseph, Pavel Atanasov, showing that machine learning models and the like generally underperform human forecasters, and you're an extremely successful human COVID forecaster. What sort of role in understanding the pandemic do you think machine models have? And what role do you think human forecasters like yourself and community forecasting platforms like Good Judgment Open and Metaculus have in terms of understanding the trajectory of the pandemic? You know, you have the Virginia Department of Health: they've opened up a series of questions on Metaculus, trying to understand through human forecasting how the pandemic is going to shift. So clearly there's a sense within the public health community that human forecasters have a seat at the table. How do you view those two components when it comes to understanding the pandemic? And given the history of the pandemic and pandemic forecasting, do you have a sense of which one is better overall?

Juan Cambeiro
So at the beginning of a pandemic, I think that's where human judgmental forecasting is most useful, in that there are too many uncertainties to plug in for computational modeling to be all that helpful, although it can still be broadly helpful in laying out the space of what's possible. Human judgmental forecasting can try to quantify all of these uncertainties and try to be a little more granular than the computational forecasts.

Then the computational forecasts become more valuable once they get a handle on the extent to which cases are being undercounted and on the delay between a case being detected and the death occurring, what that time lag is. And then some models, I'm thinking of Youyang Gu's model in particular here, ended up doing extremely well. That's where computational forecasting can really come into play, I think, once a pandemic really gets going.

And in that case, human judgmental forecasting becomes more valuable on the margin for questions that computational modeling would, again, have a hard time with. So, for instance, trying to get a grapple on the future evolutionary space, in terms of mutation rates and variants, which computational modeling will probably be able to do at some point in the future but at the moment cannot do all that well.

Clay Graubard
But even where computational modeling does well, does it still not make sense for human forecasters to sit on top of those outputs as part of the process? You know, models are still in many ways humans automating their forecasting process. So it would seem like, if you have good, well-calibrated, unbiased, and un-noisy forecasters, having them sit on top of model outputs, almost just, you know, making sure the outputs make sense in terms of a larger understanding, means there still is a place for humans, even once we're in a world where the analytical and automated models can put out good outputs. Would you agree, or do you see it differently?

Juan Cambeiro
I do agree, conditional on the resources being made available, like having the manpower to do that, right? That's, for instance, what they do in weather modeling. Yeah, I think ultimately that's going to be the best approach for infectious disease forecasting, but it would necessitate the manpower to look over all those forecasts and adjust them appropriately. For just a couple of forecasters doing it as a hobby, it's too much to look at forecasts for all 50 states, for instance, that are updated daily.

Andrew Eaddy
I want to look at this forecast quickly from a personal perspective. Earlier, Clay mentioned that forecasting, along the lines of Barb Mellers's research, has the ability to moderate views, political or otherwise. Do you feel like working on this forecast so early in COVID helped you to sort of compartmentalize your own anxiety about the disease as you did the forecast? Like, the act of objectively looking at the signals and understanding the severity of it versus, you know, potentially just listening to what the news was saying?

Juan Cambeiro
So I'm actually not sure. If anything, forecasting on COVID made me more alarmed early on about COVID and more anxious about the prospect of COVID hitting humanity hard in the very near future. But what it did help with was guiding my personal decision making. So, in February, I stopped attending large lecture classes. I tried avoiding mass transit as much as possible. But the one mistake I made was not to embrace the use of masks at that point; I didn't until sometime in early March. But yeah, in terms of guiding my own personal decision making, it helped.

Clay Graubard
And I'm guessing also, during the initial "Oh, the lockdowns will just be a very short thing and may only be two weeks," that you had a different sense of how that was going to unfold and were a little bit less naive as the lockdown measures came into place as well. In some ways it's bad, you know, being more anxious about the pandemic, but in some ways it makes you less anxious long term, because you have a clearer understanding of where things are probably going to be headed.

Juan Cambeiro
So I mean, I think, for instance, to that point: when most of Europe and North America was under stay-at-home orders in mid to late March, it was already clear at that point that COVID would become endemic, right, that it was never going to go away. We were not going to stop it with contact tracing. I think the key uncertainty at that point was simply: would we have vaccines in time to inoculate as many people as possible to protect them against it before it reached them?

Clay Graubard
So we've talked now about two great forecasts where you got to the right side of maybe before everyone else. Now we want to shift to your third and final forecast, which we're really glad that you shared, because this is the forecast where you were on, as we now like to say, the wrong side of maybe. I thought it'd be great if you could walk us through, you know, what you got wrong. For context for our listeners, this was a question that was asked in June 2020 and closed May 1 of this year, 2021, which was asking: Before May 1, 2021, will it be officially announced that the Tokyo 2020 Olympics and/or Paralympics will be canceled? The correct answer was no, and that's what the community got, but you viewed this question differently. Could you walk us through how you initially approached this question, the fact that you ended up getting this question wrong, and why you felt that, given your strengths in COVID forecasting, this question was an outlier?

Before 1 May 2021, will it be officially announced that the Tokyo 2020 Summer Olympics and/or Paralympics will be canceled?
Coronavirus concerns already forced the postponement of the 2020 Olympics and Paralympics to summer 2021 (ESPN, Olympic Games, Paralympic Games). Various concerns such as cost, vaccine availability, and international travel safety have some concerned about the rescheduled ga…

Juan Cambeiro
So there were two key failings on my part here, both on the extent to which COVID would spread in Japan, and also on the political response and political reality, and how that would guide decision making by Japan's government and the IOC.

So on the COVID front, I really didn't understand at the time, and still really don't understand, how Japan has been able to avoid the worst of COVID, in that they never implemented stay-at-home orders. They have a really elderly population. They had a lot of contact with China at the beginning of the pandemic. And yet, even with new variants arising, they still haven't been hit all that hard compared to most of the rest of the developed world. So that was a key failing of mine. I really thought, especially once B.1.1.7 emerged and its transmissibility was characterized, right now that variant is called Alpha, that it would for sure hit Japan hard, since they did not have the mechanisms in place to be able to implement stay-at-home orders. And then in turn, I thought that being hit really, really hard by COVID at that point would mean that going ahead with the Olympics wouldn't be at all politically feasible.

And yeah, in terms of political feasibility, poll after poll showed that going ahead with the Olympics was very unpopular among the Japanese public, at one point reaching something like 80% of the Japanese public not wanting the games to go ahead. And I just thought that would obligate the Japanese government, not literally, but in terms of wanting to maintain their popularity, especially given that there was a new prime minister in place, who I expected would want to solidify his standing by being popular. But that didn't end up being the case, and I think the political motivation in their minds was that it would be bad for Japan's image to the world to cancel the games. That's what ended up seeming like the decisive factor, more so than opinion polling.

Clay Graubard
So it sounds like, you know, you were considering the domestic political constraints and the public health constraints, but you're saying that you missed the geopolitical, you know, reputational constraint. I'm also interested in something you didn't mention in how you approached this question, which was examining the economic constraints of canceling the Olympics, and I was wondering if that's something that you considered. For listeners who are unaware, and something that I wasn't aware of until I read the media reports: it's up to the Olympic Committee to cancel the Olympics, and if the host country, Japan in this case, were to cancel the Olympics unilaterally, they would be on the hook for, I think, tens of billions of dollars that they would owe the IOC. So not only was there the reputational aspect, but there was also an economic consideration at play here. Was that something that you considered in your forecast? Or do you think that's one signal, and I would argue an important signal, that you missed in your forecast?

Juan Cambeiro
So I had considered it to a degree, but I underestimated how important it ended up being. I knew, for instance, that it was the IOC that ultimately had to make the decision to cancel the games or to postpone them for a second time. And I kind of figured there would be some financial ramifications for the Japanese government, but I didn't really know what those were until later on. And as you said, it's actually in the billions of dollars. So that's an additional factor I ought to have considered more early on.

Andrew Eaddy
Were you doing any COVID forecasts at the same time that you were doing this Olympics forecast? And did any of the learnings from those, in terms of just how COVID was progressing, affect your Olympics forecast at all?

Juan Cambeiro
Yeah, so the single worst question I forecasted on in the Good Judgment FOCUS 2.0 tournament was also on Japan, on case numbers by the first of July 2020. I thought it would definitely reach the highest range by then, but it didn't. And, again, this just ties back into my lack of understanding, and continued lack of understanding, as to how Japan has managed to avoid the worst of COVID.

Clay Graubard
One question I had about these forecasts, and particularly this Japan forecast, is that it looks like you did better on the forecasts that were harder. So for the mortgage forecast, the crowd Brier score, which measures how accurate they were, was 0.55. Good Judgment has its own Brier score system, but I think a 0.55 is roughly the equivalent of giving a 50-50 likelihood, so they didn't really know what they were saying. When it came to the COVID deaths forecast, the community Brier score was 0.442. Again, not much insight coming from the crowd. But when it came to the Olympics forecast, it was 0.2, which means there was some, you know, foresight coming from the community. Do you have a sense that you do better on harder questions? Do you think maybe, because you're very good at COVID forecasting, which means you're more likely to differ from the community because you are better than them and can't just be matching their forecasts all the time, that played into anything going on here, where you were less receptive to the difference from the community because you're so good at COVID forecasting?
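For listeners unfamiliar with the Brier scores Clay cites here, below is a small sketch of the standard two-outcome Brier score for a binary question: squared error summed over the Yes and No outcomes, which is, as far as we understand, the flavor Good Judgment Open reports. The example probabilities are illustrative, not the actual numbers from these questions.

```python
def brier_binary(p_yes: float, outcome_yes: bool) -> float:
    """Two-outcome Brier score: squared error summed over Yes and No.

    0.0 is a perfect forecast, 0.5 is what a 50/50 forecast always scores,
    and 2.0 is the worst possible (100% on the outcome that didn't happen).
    """
    o_yes = 1.0 if outcome_yes else 0.0
    return (p_yes - o_yes) ** 2 + ((1 - p_yes) - (1 - o_yes)) ** 2


# Illustrative forecasts on a question that resolves "No":
print(brier_binary(0.50, outcome_yes=False))  # 0.5, the "coin flip" score Clay refers to
print(brier_binary(0.15, outcome_yes=False))  # 0.045, a confident and correct "No"
print(brier_binary(0.80, outcome_yes=False))  # 1.28, a confident forecast on the wrong side of maybe
```

Lower is better, so crowd scores near 0.5 on a binary question do, as Clay says, indicate roughly coin-flip insight.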

Juan Cambeiro
I think this ties back into the discussion about the extent to which you should assess the question based on the base rates and the extent to which you should adjust that based on the inside view, right? So for the mortgage interest rate question, I did pretty well by pretty much only looking at the base rate. And I think if I had looked at the inside view, I would have ended up doing worse, because I would have adopted the view held by many others that this huge negative demand shock would cause it to fall below 3%. So in that case, a question in which I didn't have a lot of background knowledge and in which I didn't look that much into the economics at play, considering the base rate worked really well.

The second question, on how many COVID deaths across different US states, on the other hand, is a question in which I was following the subject matter really closely, playing around with spreadsheets, where there isn't really a good base rate to look at, and in which I felt comfortable looking at the inside view almost exclusively and being pretty confident in it. For this question on the Olympics, I think I would have done well not to try to arrive at too much of my own inside view that went against the grain of the inside views of others forecasting on the same question, when I didn't have a comprehensive enough understanding of the political considerations at play.

Andrew Eaddy
And then also, looking at the graph from this forecast, it looks like the only time that the Yeses outweighed the Nos was last summer, July to September almost. Do you think that was, you know, in large part due to the fact that people were being swayed by their own personal beliefs, that maybe they thought COVID would be done by the summer, and then when it wasn't gone by the summer they felt like this would be going on forever, and so they had a pessimistic view on the question? Why do you think that last summer was, you know, the only real period where those two positions switched?

Juan Cambeiro
So last summer was when the community thought that there was a better chance that the Olympics would actually be canceled, right? I think they expected COVID to continue to be a big consideration, and the most relevant consideration, to the point that it would actually cause Japan and the IOC to cancel the Olympics. That was throughout the summer of 2020. And then sometime in the early fall of 2020, the community switched over to thinking it was more likely that the Olympics would actually go ahead. And I didn't update nearly as fast in that direction as the community did.

Clay Graubard
Great. So now I think we want to move along to our last sort of concluding questions. I think that was a great examination of those three forecasts, the two that you got to the right side of maybe and the one that you got to the wrong side of maybe, and it was just really interesting to listen to your thought process on those. So to start off, I was wondering, you know, what is the most challenging part of the forecasting process for you? And conversely, what is the most enjoyable? Conceivably those two could be the same, given that the forecasting process can be quite difficult and some people obviously enjoy going through those challenges. So yeah, for you, what do you find the hardest and most rewarding parts of forecasting?

Juan Cambeiro
I think they're both the same thing, which is learning about an area in which I don't already have at least some background knowledge. I like learning about new topics that I have no background in, but it's quite a challenge, in that oftentimes, if I'm reading some article on it, I don't know what the terms mean. So I look up those terms, and then I just fall into a rabbit hole of reading more and more about the topic. But that's something I genuinely enjoy. It's just difficult in that it's both time intensive and, oftentimes, the subject matter might be quite difficult itself.

Andrew Eaddy
I was curious if you use any of these forecasting skills in your personal life, if it has any impact on, you know, decisions that you make on a day-to-day basis. You said that sometimes you can make forecasts in just a few minutes if you have a lot of knowledge about the subject; do you make any of those quick ones?

Juan Cambeiro
So, the clearest example, which I think I had already mentioned, of using forecasts for my own personal decision making was with respect to COVID, especially in February and March 2020. Apart from that, I think forecasting has influenced my career decision making to an extent, in that I've shifted from one area to another within the biosciences based on the broad contours of predictions as to what will end up being more important in shaping the future.

Then, in terms of my day-to-day life, I don't really employ forecasting all that much, just because it's something I have a lot of control over. So instead of saying that I think there's a 20% chance I'll actually get this project done by the end of the week, I'll just find the motivation to go ahead and do it, right? So in terms of day-to-day decision making it doesn't really come into play, but for those other two areas it does.

Clay Graubard
And then, finally, this is a two-part question. One is: what sort of tools do you use when forecasting? Do you use Guesstimate? Have you looked at the things being developed at Ought? Have you used Metaforecast, which lets you search for forecasts across multiple different platforms? So do you use any sort of tools when you forecast? And then more broadly, what are your recommendations for both new and experienced forecasters who want to get to the right side of maybe more often, or earlier, and with the right amount of confidence?

Juan Cambeiro
Yeah, so in terms of tools, for COVID, for instance, I found myself playing around with spreadsheets a lot. And I did use Guesstimate sometimes; I did find it valuable for some of the questions on COVID. I have used Metaforecast before; other than that, I just Google questions I'm forecasting on to see if they're on any other platforms. So I have consulted similar iterations of the same question on other platforms.

Yeah, and then in terms of improving forecasting skills, I think the key by far is to practice, right, and then learning from your mistakes and reviewing your mistakes. I remember very distinctly one of the worst questions I've ever had, one I got completely wrong. It was one of the first questions I forecasted on when I first joined Good Judgment Open, and it was something relating, I think, to the makeup of Taiwan's parliament. I consulted some of the background reading, and there was this article by The Economist, which is probably my number one news source, that said that the KMT, which is one of the two main political parties, would probably end up gaining control of the legislature. And I just deferred to them and forecasted that they would probably gain control of the legislature for a long time, even against the community median. And I ended up being very wrong on that. So I think one lesson is that the background reading usually gives you useful information that you should use to inform your forecasts, but it's not necessarily the source of the forecasts themselves.

Clay Graubard
Awesome. Andrew, any more questions for Juan?

Andrew Eaddy
No, I think that's great. Yeah. The only other question would be: where can people find you online? You know, any exciting projects that you're working on that you want our listeners to know about? Feel free to shamelessly plug away.

Juan Cambeiro
Yeah, so I'm pretty active on Twitter; you can follow me at Juan_Cambeiro. I have recently come on full time for the summer as an analyst at Metaculus, so if you have any interesting ideas for new questions to put on Metaculus, if you want me to take a look at any of the questions you want to put on Metaculus, or if you have any cool project ideas, feel free to reach out to me. And I'm working on some projects related to biosecurity in particular, so stay tuned on that front. Very exciting.

And yeah, something I just wanted to emphasize: the view that forecasting is just something that's fun and interesting, and maybe somewhat valuable for the intelligence community specifically, is basically what I thought when I first started forecasting.

I just really want to make the point that it's actually something that's really important, neglected, and tractable for pretty much all fields but especially for those in which actively shaping the future is a key objective.

I mean, for anyone like me who subscribes to long-termism, accurate forecasting, or at least knowing where to find good forecasts, is central to guiding decision making at all levels. And the logical next step after forecasting is to use forecasting to guide decision making, to shape the future in the best possible way. So that's something I'm really excited about continuing to work on, and it's something that Metaculus has actually started addressing in a holistic way by partnering with nonprofits as part of its Causes framework. It's an exciting new direction in forecasting generally that I'm really enthusiastic about.

Clay Graubard
So spot on, and very well said, and something that Andrew and I couldn't agree with more. If you guys are listening to the show and you want to follow Juan, you'll find a link to his Twitter down in the description below. Definitely go follow him. He makes some great content out there; if you just want to get some good COVID information and forecasts, I couldn't recommend following him more, so make sure to check that out in the description below. I think that this was a phenomenal episode, and I certainly learned a lot, and I know our listeners will too. So thank you so much for giving us all this time and for such a wonderful podcast appearance.

Juan Cambeiro
Thanks. This was a lot of fun.

Clay Graubard
All right, and this was the sixth episode of The Right Side of Maybe. Thanks everyone. Bye bye.