Are solutions to AI risk and global catastrophic risk so different?

I could be wrong, but it seems you two may be advancing the implicit assumption that building solutions to global catastrophic risks will add no significant value to X-risk solutions. It seems plausible, however, that both types of risk would be mitigated by building up human capacity for coordinated problem-solving and cooperation. Might it be important to consider this counter-premise?