In this week's special episode of the Global Guessing Weekly Podcast, Andrew and Clay are joined by Regina Joseph and Pavel Atanasov of Pytho, a boutique R&D shop that uses decision science to improve predictions and decision-making.
In the seventh episode of GGWP, Pavel and Regina discuss the method and design of their patent-pending Human Forest system, which combines data-driven base-rate automation and collective human insight to deliver on key objectives on which machine algorithms and human forecasters can fall short. Pavel and Regina also talk to us about their recent forecasting competition, where they put their Human Forest system to the test by pitting human forecasters against machine models to predict COVID-19 clinical trials.
Watch on YouTube
Listen on Podbean
Listen on Apple Podcasts
Find Our Guests
Pytho

Twitter: https://twitter.com/pytho_io
LinkedIn: https://www.linkedin.com/company/pytho/

Regina Joseph
Twitter: https://twitter.com/superforecastr
Pavel Atanasov
Twitter: https://twitter.com/PavelDAtanasov
The First Ten Minutes
Clay Graubard: Welcome, everyone, to the seventh episode of the Global Guessing Weekly Podcast, the podcast on all things forecasting and geopolitics. This week, Andrew and I are joined by two major people in the forecasting space: Pavel Atanasov and Regina Joseph of Pytho.io.
Regina has been on Global Guessing in the past, when she joined us for an interview back in January, but we are especially excited to have both of them on today to talk about the results from one of their recent projects: Human Forest.
For a little bit of background, Regina and Pavel make up Pytho, a two-person boutique R&D shop for forecasting, where they use decision science to improve predictions and decision-making. They've spent a decade working on this research and have won three IARPA forecasting tournaments and two NSF awards. They are co-inventors on a patent and a patent-pending project, and co-authors on numerous publications. Today, they are here to talk to us about their most recent project, which is the subject of their pending patent: Human Forest.
Human Forest is a new system that they've been developing, which combines data-driven base-rate automation and collective human insight to deliver on key objectives on which machine algorithms and human forecasters can fall short. Human Forest is designed to provide more accurate valuations, forward-looking risk management, better resource allocation, and multipliers to progress and innovation.
Welcome to the show, Regina and Pavel. It would be great if you could give us a little more insight into the background and concept of Human Forest, as well as Pytho itself.
Regina Joseph: Thanks so much. It's great to be back. It's nice to see you both, and thanks so much for having us both on so that we can talk to you about what we've been working on for the last couple of years.
Human Forest is a really good example of how Pavel and I work at a 50/50 level, because it really combines ideas that I have with ideas that Pavel has. The origins of Human Forest began when Pavel and I were members of IARPA's HFC, or Hybrid Forecasting Competition, research program. We served on a team based at the University of Southern California's Information Sciences Institute. That research program was focused on hybridizing the best components of human predictive accuracy with machine models of prediction, and on figuring out the optimal way to combine those two things into a whole that would be more predictively accurate than each individual component. One of the things I had been thinking about and developing for that program was the idea of the elicitation platform, or forecasting platform. There are many different ways you can build those things, and as I think Pavel and I have shown over the last 10 years, you can organize them so that they do different things. What I was interested in, in a hybridized forecasting environment, was being able to reduce the cognitive burden on the individual forecaster.
Forecasting is not an easy thing. It requires a lot of effort. I think the goal in building an optimal forecasting platform has a lot to do with reducing the cognitive burden enough that the user is getting useful information. One of the most important bits of useful information, in a predictive sense, is the base rate. Especially if you're in a timed tournament, where questions have a very short period of time to elapse, being able to get to that initial forecast as quickly as possible is the coin of the realm. What I was interested in was developing a user interface and user experience, or UI/UX, in which that element is actually baked into the user experience. That became a big part of what we were doing on the HFC team we were on, the SAGE team, and that obviously became part of the basis of what became Human Forest.
I'll let Pavel take over from where we started to layer in the ideas we were having about how to elicit better forecasts.
Pavel Atanasov: At HFC, the idea that Regina developed was that it may be useful to just show people some historical information without showing them a model, because people wouldn't be able to access it otherwise, or it would take them more time. By shortening that process you can add a lot of value. That relates to something Kahneman and Tversky spoke about for a long time: the idea that there's an inside view and an outside view, where the outside view is thinking about base rates.
But thinking through base rates is non-trivial, because you first have to come up with a reference class, and there are many reference classes you can come up with. If you think of an election, what is the right reference class? The last 10 presidential elections in the US? All the elections in the world? There are many reasonable reference classes you can come up with. What we became interested in is how good people are at picking reference classes that give them good base rates. We know that, to some extent, algorithms do that. The random forest algorithm looks at different random subsets of the data and tries to come up with a reference class (they call it a classification tree). Any one tree may carry only partial information, and it's noisy and not very accurate, but when you combine all these trees into a forest you get a good prediction.
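To make that intuition concrete, here is a minimal sketch using scikit-learn. The trial features and labels are invented stand-ins, and this is not Pytho's or their collaborators' actual model; it only shows how a forest averages many noisy, tree-level "reference classes" into a single probability.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 500

# Hypothetical historical trials: [phase (1-3), enrollment size, industry-sponsored?]
X = np.column_stack([
    rng.integers(1, 4, n),
    rng.integers(20, 2000, n),
    rng.integers(0, 2, n),
])
y = rng.integers(0, 2, n)  # 1 if the trial "succeeded" (synthetic labels, illustration only)

# Each tree fits on a bootstrap sample, considering random feature subsets at each
# split; a single tree is a noisy reference class, and the forest averages them all.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_trial = [[3, 450, 1]]  # a hypothetical Phase 3, 450-patient, industry-sponsored trial
print(forest.predict_proba(new_trial)[0, 1])  # the forest's averaged probability of success
```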
Our thought was: can humans do that? What if a bunch of humans each build their own classification trees, their reference classes, and then we combine those into a forest? That would be the human forest. We had just met with a friend of mine from college who had developed a random forest model for clinical trial development, and I had been working on a similar project with Jonathan Kimmelman, getting experts to predict trials in oncology and urology. We thought: what if we combine all these ideas together, all the work they had done on the modeling and all the work we were doing on hybridizing forecasting, and explore how we get better at forecasting clinical trials and how we teach people to get better at picking reference classes that are predictively useful? There's the applied part, and there's the basic psychology of forecasting and how people relate to the outside view. That was very attractive to us, and it was attractive to the National Science Foundation as well, and we were able to get two grants just to study this.
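By analogy, the human-side aggregation might look something like the toy sketch below: each forecaster picks a reference class (here, a filter over a handful of invented historical trials), reads off its base rate, and the individual base rates are averaged. The records, the predicates, and the simple averaging are all assumptions made for illustration, not the Human Forest system itself.

```python
# Invented historical cases; each record is one past clinical trial.
trials = [
    {"phase": 3, "industry": True,  "succeeded": True},
    {"phase": 3, "industry": False, "succeeded": False},
    {"phase": 2, "industry": True,  "succeeded": False},
    {"phase": 3, "industry": True,  "succeeded": True},
    {"phase": 2, "industry": False, "succeeded": False},
]

# Each human forecaster's chosen reference class, expressed as a predicate.
reference_classes = [
    lambda t: t["phase"] == 3,                    # "Phase 3 trials"
    lambda t: t["industry"],                      # "industry-sponsored trials"
    lambda t: t["phase"] == 3 and t["industry"],  # "industry Phase 3 trials"
]

def base_rate(predicate):
    """Share of successes among the historical cases the predicate selects."""
    matches = [t for t in trials if predicate(t)]
    return sum(t["succeeded"] for t in matches) / len(matches)

rates = [base_rate(p) for p in reference_classes]
print(rates)                    # each forecaster's base rate
print(sum(rates) / len(rates))  # a simple "forest" average across forecasters
```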
Regina Joseph: What was also really interesting to both of us was that when HFC began, it was right at the start of the upward trajectory of this kind of monolithic idea that machine learning and machine models would always be better, especially in terms of predictive analytics. People were watching the IBM Watson story, and IBM itself was claiming that Watson was a predictive system.
But in our own experience, in the research we were directly involved in, we were seeing in action just how poorly these models were performing in low-data environments, in other words, situations where there just weren't very many cases on which you could train a model effectively. In those kinds of environments, humans were doing much better. They were outperforming the machine models, in many cases by a lot, and these were models developed by people considered some of the best modelers out there.
We were in conversation with Sauleh Siddiqui at the time. He was saying, "We've developed this random forest model at Johns Hopkins" (the university he was with at the time; he's now at American University). We were having lunch in DC, saying, "Oh, it would be great for us to work on something together," and that competitive edge came into the conversation. We thought: we've seen in HFC how humans with machines worked; what would be really interesting is humans versus machines, doing two things. One, challenging that status-quo thinking that machine learning is THE answer in prediction, when we were observing in real time that this is not the case in very specific arenas. And two, testing this kind of head-to-head competition in real time, making it fun, making it interesting. It was a kind of perfect opportunity for us.
Due to time limitations, we are unable to transcribe the rest of the podcast. Continue the podcast on YouTube, Spotify, or Podbean. Thank you!