Where does analysis of power and incentives in AI play into the AI alignment roadmap?

It seems useful for understanding which hypothetical futures we are likely to end up in, and the probability distribution over what form or shape A(G)I might take. (Especially if people's timelines are short.)

Are folks actually building their alignment strategies from this kind of analysis?