In Defense of Attempting Hard Things
and my story of the Leverage ecosystem
Nov 2022 edit: In the days, weeks, and months after writing this post, many people from the former Leverage project reached out to me. All expressed appreciation and many expressed care and sorrow, both for my difficult experience and for what we lost/how we failed. I think it’s useful context to know what some of the others who were there thought of my account. (I’m not sure how everyone feels about it, but this is the direct feedback I’ve received.)
These are quotes from each of the people who messaged me (except one that was complicated and personal and didn’t have something that would make sense to quote):
“I wanted to tell you that I really liked and really appreciated the essay you wrote about Leverage. It was so clear and such an amazing translation of all the weirdness we experienced into something that was understandable from the outside. I could never have done that. Thanks for doing it.”
“Hey Cathleen, saw your leverage post. Great work! Really puts a lot more perspective out there on what we were doing back then.”
“Wanted to say that I really enjoyed your piece on Leverage. It captured so much about what makes Leverage complicated, but also why it was, and still is, a project worth fighting for.”
“I appreciated your write up a lot. Thanks for doing that.”
“i don’t think i’ve thanked you for writing your post, but i did read it at the time and was grateful you wrote it. thank you for defending us. your post also helped me remember a lot of what happened at Leverage - brought up lots of fond memories, and it was good to just remember more of what Leverage was like.”
“I just wanted to write because I read a portion of your post on Leverage, and I wanted to say that I both really appreciated it on a personal level, and I also thought that it was really, really well-done.”
“Thank you for writing this. I've still not quite finished it, but it's really great, and I'm very glad you wrote it.”
“<3 <3 only read 1/4 of your thing so far, wow so long!!, but am very glad you wrote it so thank you
(partly bc it helps me understand leverage better + also bc it shows the good of leverage and I can share it with my family to help them understand what leverage was like)
[…]
I’ve read the rest and I’m really, really, really glad you wrote it <3 <3 <3 a huge positive effect for me has been gaining a huge amount of access to my buried feelings from the leverage period! pretty crazy.”
- Preface
- Sharing these things sucks, but I don’t see a better way forward
- The Leverage ecosystem
- What we were trying to do
- How we approached the problem
⏳ This post is long: depending on your reading speed, it’s about the length of watching one of the LOTR movies or Avengers: Endgame, or hosting an afternoon bbq in the park with your friends. It will not be as diverting though, and I encourage you to take advantage of the two intentionally placed intermissions. That said, I think it is fair to expect that there is currently no other activity that you could spend this amount of time doing that would better help you to understand this corner of the world.
Preface
“In discussions of this post (the content of which I can’t predict or control), I’d ask that you just refer to me as Cathleen, to minimize the googleable footprint. And I would also ask that, as I’ve done here, you refrain from naming others whose identities are not already tied up in all this.” [Note: if the above link doesn’t work, please read the section titled “We want to move forward with our lives” to better understand the context of the request]
This is an attempt to help people understand the early iterations of Leverage, Paradigm, and adjacent orgs/projects — basically the era spanning from late 2011/early 2012 through June 2019. It’s emotionally trying for me to go back into all this and my plate is currently filled with time-sensitive work, so I apologize for any disjointedness in the storytelling — it was written in little bits and pieces over a few weeks of “oh and also” moments and then edited together later (with a fair amount of additional relevant/necessary backstory) in an attempt to make it coherent enough to follow. It’s quite long, and because of how it was created, you’ll have to read it all to fit the pieces together, but I think it fills in important gaps and should give a lot more context to people who are looking to understand what happened.
This isn’t really meant to be a comprehensive account; it’s primarily just the response that rose to the surface as I reflected on some of the recent online discussion and confusion about Leverage and Paradigm.* I should note that I haven’t managed to read most of the comments on Less Wrong, Facebook, etc., and I haven’t gone to Twitch or Twitter or anything, so I can’t predict how relevant my comments will be or how much I might be repeating what others have said.
* Especially if you haven’t been part of these discussions (but maybe even if you have been), the entire first section of this document, where I provide a lot of background context for what was happening at Leverage and Paradigm, might feel weird. That’s because I initially wrote the later sections as a direct response (albeit piecemeal) to what I’d seen happening online; when I had finished, I realized that the things I had laid out and the points I wanted to make might not make sense without a lot of background understanding that most people don’t have. So I reorganized my writing and went back to fill in the missing context, but with much less certainty about what would be useful/necessary to say.
So here we go.
For those of you who don’t know me, a little personal background:
I was a part of this broad effort, which I’ll generally refer to as the project or the Leverage ecosystem, from 2012-2019. I first learned about the project from someone I met at the Singularity Summit in 2011; in early 2012 I started as a volunteer at Leverage Research before being offered a paid ops/admin position in 2013. In early/mid-2015, in an attempt to improve the overall project, I adopted the goal of trying to help Geoff more directly, providing him with various types of coaching and strategic counsel, and I co-founded Paradigm with Geoff and a few others at the end of that year. From 2017-2019 I was a member of the Leadership Team (as the head of operations), a coalition that spanned a number of subgroups and separate projects and organizations, for some of which I served as an officer. I had close personal relationships with a handful of people from the Leverage ecosystem, including Geoff, and while I am no longer in contact with many of them, I still care about them and want their lives to go well.
I think we attempted something that was brave and commendable, while also being novel and very hard to parse from the outside. It was high-risk, high-reward, and while I generally endorse the enormous amount of effort we put into trying to make the project work, my lived experience was often somewhere between painful and horrifying. I’ll offer some possible explanations for why the environment was so bad for me, but whatever the cause, the result was that I was largely crippled by the time the project wound down. Two and a half years after leaving, I’d say I’ve done a lot of healing and I’ve also been able to adapt my life to accommodate the various ways in which I still struggle. Despite all this, as you’ll see from my writing below, I still think it makes sense to try difficult things.
I guess for people who are worried about collusion, I should also note that since the restructuring/dissolution in mid-2019, I have only had one brief interaction with Geoff (in July of 2021) and have had no direct or indirect contact otherwise. A few former members of the Leverage ecosystem continue to be important daily fixtures in my life, and I have occasional friendly interactions with a half-dozen others from that era.
Sharing these things sucks, but I don’t see a better way forward
I’ve had mixed results trying to convey parts of my experience in the Leverage ecosystem to my closest friends and family, and it would never have occurred to me to post about it on the Internet. Maybe it’s the kind of thing that becomes a one-liner in a TED talk some day — one of many failures that paved the road to success — but for now it’s still a pretty raw and complex heartbreak.
It feels like the kind of story that should be shared in a space that has built trust, where hard things can be talked about without the fear of snap judgment, where people are willing to put themselves in someone else’s shoes and feel the discomfort of finding something relatable in their own experience in order to generate an empathetic response.
I don’t currently believe that I have access to a place like that. Certainly not on Less Wrong or Facebook or any online forum that I’m aware of. But it feels important to give more context to the discussion that’s been happening in these places, so I’m willing to try to share things that might be useful, knowing that they are unlikely to be received with the thoughtfulness that I would want.
My hope is that by sharing more about how the Leverage ecosystem worked and what it was like, we can improve the quality of the discussion around it, or at least make people think twice before coming to cynical conclusions based on relatively little evidence.
It’s painful for me to share all of this, and it’s also inconvenient. I have important work to do on my current project, work that a lot of people are counting on me to do carefully and quickly. That work is already pretty overwhelming for me in my current state, and it unfortunately can’t be paused while I figure out what might make sense to say publicly to steer people towards (what seem to me to be) more accurate beliefs.
I think this post alone probably isn’t enough to allow people to come to informed views, because the project was so strange and novel, and I only saw my part of it, through my own eyes. But maybe this will give enough background that other people can use it as a reference point and won’t have to build out as much context when sharing whatever slice they feel like sharing about their own experience.
It’s hard for me to guess how all this will land, so I’m reluctant to encourage others to go through a similarly difficult process until they’re ready. But I do grant that it would probably be better if there were enough information/perspectives out there that at least an honest and curious observer could get some accurate understanding, and the more outlandish framings and accusations that surface from time to time could be moderated.
The Leverage ecosystem
(I don’t know how authoritatively I can speak for the various individuals and projects that spanned so many years, but I’m going to do my best, and hopefully others will add their perspectives over time to flesh out the full picture and correct me where I’m mistaken -- in this first bit, I’ll try to lay out the basic premises and then I’ll say more about my experience.)
This wasn’t a group that grew out of the Rationality community or the Effective Altruism movement -- I wasn’t around at the very beginning, but the lore has it that after years of doing his own work in academia with various collaborators, first in philosophy and then in psychology, Geoff Anders decided to form a group focused on solving problems in the world. He originally tried to work within the MIRI community but was rejected, and so he built out the original group of volunteers independently. The group developed adjacent to the AGI safety community in 2011, and after receiving its first $100k grant to support an initial year of operations, it largely switched away from being volunteer-based and brought on a few full-time collaborators on small stipends (some of whom helped to found and grow the EA movement beginning in early/mid 2012). They moved from Boston to New York and rented a brownstone in Brooklyn to house the team in a central hub and serve as a cost-efficient live-work space (think start-up founders living together and using their house as an office).
I generally think there was a lot of overlap in values and ideology between all of these groups, which were trying to reduce suffering and existential risk and broadly improve the state of things.
What we were trying to do
When I first encountered the group, it was clear that something different was going on. They were pretty much crap at all of the conservation behaviors that, in my circles, meant that you cared about improving the state of the world: they didn’t recycle, they’d turn on the kitchen sink and then walk away to go get something, they’d hold the fridge door wide open while they tried to carefully answer a question about the limited types of goals they’d seen in people’s motivational setups -- I don’t even know if they were registered to vote! But that was almost symbolic of their determination to not flinch away from the size and complexity of the problems that humanity doesn’t seem to be on track to solving: they weren’t pretending that taking public transport and reusing shopping bags would handle problems of such magnitude; they weren’t resigned to never solving them, or in denial that they existed -- they were spending all their attention genuinely trying to figure out whether there might be any counterintuitive ways that they could end up successfully addressing these very real and very hard problems.
I think the MIRI people (back then, Singularity Institute people) have this too: where most people don’t think the risks from AI are that big of a deal, the MIRI community thinks they’re real and is working on solving them with what seems like a very counterintuitive approach (at least from the standpoint of mainstream society). It’s maybe worth noting that while their views and goals and strategies may not have changed much over the last 5-10 years, their relationship to the rest of society has shifted dramatically as people have started to have concepts for advanced AI, AGI, and AI safety, and are closer to being able to recognize and even appreciate MIRI’s efforts as something other than nerds who’ve been sucked into a crackpot delusion.
(And just so the conservationists don’t spend the rest of this post cringing, or worrying that these were signs of being uncaring or self-important: when those early individuals found out that their behaviors gave this Californian goosebumps, they were perfectly happy to pay attention to saving water or electricity, not just as a show in front of me, but generally, whether or not I was around.)
When I refer to these big problems, I’m talking about things like unfriendly AI and other existential risks such as nuclear or biological war or climate change, and the big EA causes: poverty and all the issues surrounding it, including malaria and other easily preventable causes of suffering and death for humans or animals. But people were also concerned with many other issues, like aging/anti-aging, civilizational collapse, or barriers to humans getting along with one another or being able to have deeply connective, empathic experiences. It seems like they weren’t confident that MIRI’s strategy would work and end up solving everything with friendly AI, so they had set out to figure out what else might be possible. There was an interest in finding an alternate route, or perhaps a diverse set of routes, to fixing many problems in the world, and those solutions didn’t have to be based on physical or computer technology. One of the people involved used the term “vintage utopia” for the idea of a world where all of the technology stays exactly the same as it is today, except that everyone somehow gets along with each other and is physically and psychologically healthy.
Something important and perhaps unique about their approach was that there wasn’t any particular limitation on what kind of problem was fine to try to address -- if it was a big problem that stood in the way of flourishing for humans or animals, it was fair game. Some organizations in this reference class limit themselves to tackling issues and impacts that are measurable, or problems that are on track to being solved. There are often good reasons for this, but it seems to me that there are likely some intractable problems in the world that are super important and that society might be completely bungling, and if that’s true, then we should try to figure out how to identify which ones deserve our attention and develop strategies to address them.
A lot of people go through life believing that the government will handle things. And/or that humans are resilient. Or that the planet will heal itself.
But what if we’re not on track to having things turn out ok? What if things are already going off the rails?
The people that I met in that New York brownstone had basically decided to try as hard as they could to have the world actually end up in a good state.
It was clear that there wasn’t going to be agreement across the board on what exactly constituted a flourishing world, but it was also clear that there were plenty of things that are straightforwardly good (e.g. causing humans to generally be mentally healthier through resolution of past psychological trauma or preventing AI from destroying the world) and so we should try to do those things.
How we approached the problem
or “the convergence on a plan to train effective, cooperative, innovative people”
From the start, we acknowledged that the world’s problems were too big and complicated for our small brownstone crew to handle. We also quickly came to believe that the bottleneck was not money -- plenty of people from tech startups who had made hundreds of millions if not billions of dollars were actively trying to deploy their capital in order to dramatically improve the world, and they weren’t getting traction. Even now there are plenty of high-profile examples, from Bill Gates struggling to eradicate polio to Elon Musk challenging the UN to show how they could end world hunger with $6B. Is funding the bottleneck for anti-aging research? For AI safety? For preventing nuclear holocaust? For healing psychological trauma?
Prospective funders reported that the bottleneck was a lack of promising projects. As far as we could tell, the lack of good projects was the result of a lack of effective, benevolent, and coordinated people.