Should GFM R&D and its outputs be conducted closed source (e.g., OpenAI) or open source (e.g., Stability AI)?
What are the risk profiles of the two approaches?
Is there a better middle ground between these extremes? If so, how do we define it?