| Question | Replies | Views | Date |
|---|---|---|---|
| How do LLMs like GPT-4 influence AI alignment research? | 1 | 50 | May 6, 2023 |
| What is the single best action to take now? | 0 | 35 | May 6, 2023 |
| What institutional structures do you think should be governing AI? | 0 | 35 | May 6, 2023 |
| Where does analysis of power and incentives in AI play into the AI alignment roadmap? | 0 | 36 | May 6, 2023 |
| What's the upper bound of A(G)I risk, with or without consciousness? | 0 | 33 | May 6, 2023 |
| Thoughts on the prospects for a global AGI non-proliferation treaty? (as an end goal of a Pause; needed until alignment is solved) | 0 | 23 | May 6, 2023 |
| Situation analysis & paths forward | 0 | 32 | May 6, 2023 |
| What if we can never solve alignment (50-100 years)?! Should we just not develop AGIs? | 0 | 29 | May 6, 2023 |
| What's the difference between alignment and governance? | 0 | 60 | May 5, 2023 |
| OSS vs walled gardens: the responsible path? | 0 | 36 | May 6, 2023 |
| What should people do if the idea that AI will kill us all sounds completely ridiculous to them? What arguments, videos, or people would you point them to? | 0 | 20 | May 6, 2023 |
| Do we need to reinvent mathematics to solve alignment? | 0 | 33 | May 6, 2023 |
| How do you see alignment policies/procedures interfacing with open-source development of generative AI? | 0 | 28 | May 6, 2023 |
| If superintelligent agents can destroy the Sun and Solar System, why have we not seen time-travelling superintelligent agents yet? | 2 | 40 | May 6, 2023 |
| AI-generated morality | 0 | 30 | May 6, 2023 |
| What if there's little marginal utility to another 500 IQ points? | 0 | 25 | May 6, 2023 |
| How do we quantify risk? Are idiosyncratic risk and systematic risk useful framings? | 0 | 54 | May 5, 2023 |