I’m now concerned that this proposal is out of scope for Manifund because it involves political advocacy, which I’m discussing with the team in the Manifund Discord. But I will take this opportunity to make the case for the proposal as it was written above as of end of day 7/7/23.
I left my job at Rethink Priorities to pursue moratorium advocacy because I observed that people in the AI safety space, both in technical alignment and policy, were biased against political advocacy. Even in EA Animal spaces (where I most recently worked), people seemed not to appreciate how much the success of “inside game” initiatives like The Humane League’s corporate campaigns (to, for example, increase farmed animal cage sizes) depended on the existence of vocal advocacy orgs like Direct Action Everywhere (DxE) and PETA, which stated the strongest version of their beliefs plainly to the public and acted in a way that legibly accorded with that. This sort of “outside game” moves the Overton window and creates external pressure for political or corporate initiatives. Status quo AI Safety is trying to play inside game without this external pressure, and hence it is often at the mercy of industry. When I began looking for ways to contribute to pause efforts and learning more about the current ecosystem, I was appalled at some of the things I was told. Several people expressed to me that they were afraid to do things the AI companies didn’t like, because otherwise the companies might not cooperate with their org, or with ARC. How good can evals ever be if they are designed not to piss off the labs, who are holding all the cards? The way we get more cards for evals and for government regulations on AI is to create external pressure.
The reason I’m talking about this issue today is that FLI published an (imperfect) call for a 6-month pause and got respected people to sign it. This led to a flurry of common knowledge creation and the revelation that the public is highly receptive not only to AI Safety as a concept, but to a moratorium as a solution. I’m still hearing criticism of this letter from EAs today as being “unrealistic”. I’m sorry, but how dense can you be? This letter has been extremely effective. The AI companies lost ground and had to answer to the people they are endangering. AI x-risk went mainstream!
The bottom line is that I don’t think EA is that skilled at “outside game”, which is understandable because in the other EA causes there was already an outside game going on (like PETA for animal welfare). But in AI Safety, very unusually, the neglected position is the vanguard. The public only just became familiar enough with AI capabilities not to dismiss concerns about AGI out of hand (many of the most senior people in AI Safety seem to be anchored on a time before this was true), so the possibility of appealing to them directly has just opened up. I think that the people currently in the AI Safety space (people trained to do technical alignment and the kind of policy research that doesn’t expect to be directly implemented) 1) aren’t appreciating our tactical position, and 2) are invested in strategies that require the cooperation of AI labs or of allies, which makes them hesitant to simply advocate for the policies they want. This is fine; I think these strategies and relationships are worth maintaining, but someone should be manning the vanguard. As someone without attachments in the AI Safety space, I thought this was something I could offer.