What is the upper bound of A(G)I risk, with or without consciousness?

A) Does it actually matter for AI risk whether the AI is conscious or not?
B) What types of behaviours, and the risks they lead to, are eliminated if AI lacks consciousness?

Ideal speakers: Hareesh & Nate