Obscuring the scenarios
Most predictions would be better if they came in the form “if X, then Y”
I’ve noticed that people on social media like to make bold, very confident-sounding predictions, even about deeply uncertain events. That makes sense once you consider the incentives: we want likes, engagement, and all that, and hedging your bets isn’t the best way to get people interested in what you’re saying.
The curious thing is that this seems to happen even with accounts that care about what will actually happen and are trying to make good predictions. Maybe more people are poasting than I realize, and I’m just way too earnest.
The example that comes to mind is from this morning, when I saw people debating whether an AGI takeoff would be fast (“foom”) or not. Suffice it to say that there were very confident, minimally supported views on both sides. The issue was especially glaring on this topic because no one knows what will happen, and we don’t even know which variables will determine the outcome. We have multiple layers of ignorance to deal with. We’re not even wrong.
The best answer right now, in my opinion, is “it depends.” But that’s boring, so let’s leave that part unsaid. What we should be asking is what self-reinforcing improvement would actually require of an AI or AGI. If it requires designing and manufacturing more advanced chips, building better data centers, or running months-long training runs, those requirements rule out a super-fast takeoff to varying degrees. In contrast, if an AI could improve itself just by finding algorithmic improvements or generating new training data, that potentially lends itself to a fast iteration cycle. There’s still uncertainty (how long does it take to design new algorithms? how much testing is needed? how much data is required?), but in general, activities that happen purely in software can be iterated on much faster.
I don’t think this is a particularly blinding insight, but in Twitter debates, people often don’t talk about nuances like that. They just advocate for their take.
Anyway, like most of my writing, this is basically a note to myself. When making predictions, I should think through the different scenarios instead of offering a single point prediction: “If X, then Y. If J, then K.” And so on. And I should try to name the determinants of the outcome, too.
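
To make that concrete, here’s a toy sketch in Python. The scenarios and probabilities below are made up for illustration, not actual forecasts; the point is just what a scenario-conditioned prediction looks like when you write it down as data instead of a single hot take:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    condition: str      # the "if X" part
    prediction: str     # the "then Y" part
    probability: float  # how likely the condition seems; should sum to ~1

# Illustrative only: a point prediction ("foom!" or "no foom!") hides
# all of this structure.
takeoff_forecast = [
    Scenario("self-improvement requires new chips and data centers",
             "slow takeoff, years per iteration", 0.40),
    Scenario("self-improvement requires months-long training runs",
             "moderate takeoff, months per iteration", 0.35),
    Scenario("algorithmic or data improvements alone are enough",
             "fast takeoff, software-speed iteration", 0.25),
]

for s in takeoff_forecast:
    print(f"If {s.condition}: {s.prediction} (p={s.probability})")
```

Writing it this way forces you to state the conditions and admit how the probability mass is spread across them, which is exactly what a confident point prediction obscures.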

