Optimization goal that leads to alignment

Is it possible for an AI model, or a hypothetical AGI, to not ultimately be a “maximizer”? And if an AGI must be a maximizer, which optimization goals could we force it to have that would lead it to be “aligned”?
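
For concreteness, here is a minimal toy sketch (my own illustrative example, not any real system) of the distinction I have in mind: a pure argmax agent versus a quantilizer-style agent that deliberately does not maximize.

```python
import math
import random

# Toy action space with made-up utilities (purely illustrative).
ACTIONS = {"a": 1.0, "b": 5.0, "c": 3.0}

def maximizer(actions):
    """A pure maximizer: always takes the argmax-utility action."""
    return max(actions, key=actions.get)

def quantilizer(actions, q=0.5):
    """A non-maximizer in the quantilizer style: sample uniformly from
    the top-q fraction of actions rather than always taking the argmax."""
    ranked = sorted(actions, key=actions.get, reverse=True)
    cutoff = math.ceil(len(ranked) * q)  # top half here: ["b", "c"]
    return random.choice(ranked[:cutoff])

print(maximizer(ACTIONS))    # always "b"
print(quantilizer(ACTIONS))  # "b" or "c", picked at random
```

The question, then, is whether something like the second agent can remain stable at scale, and if not, what goal the first kind of agent could be given that yields aligned behavior.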
