and my story of the Leverage ecosystem
Nov 2022 edit: In the days, weeks, and months after writing this post, many people from the former Leverage project reached out to me. All expressed appreciation and many expressed care and sorrow, both for my difficult experience and for what we lost/how we failed. I think it’s useful context to know what some of the others who were there thought of my account. (I’m not sure how everyone feels about it, but this is the direct feedback I’ve received.)
These are quotes from each of the people who messaged me (except one that was complicated and personal and didn’t have something that would make sense to quote):
“I wanted to tell you that I really liked and really appreciated the essay you wrote about Leverage. It was so clear and such an amazing translation of all the weirdness we experienced into something that was understandable from the outside. I could never have done that. Thanks for doing it.”
“Hey Cathleen, saw your leverage post. Great work! Really puts a lot more perspective out there on what we were doing back then.”
“Wanted to say that I really enjoyed your piece on Leverage. It captured so much about what makes Leverage complicated, but also why it was, and still is, a project worth fighting for.”
“I appreciated your write up a lot. Thanks for doing that.”
“i don’t think i’ve thanked you for writing your post, but i did read it at the time and was grateful you wrote it. thank you for defending us. your post also helped me remember a lot of what happened at Leverage - brought up lots of fond memories, and it was good to just remember more of what Leverage was like.”
“I just wanted to write because I read a portion of your post on Leverage, and I wanted to say that I both really appreciated it on a personal level, and I also thought that it was really, really well-done.”
“Thank you for writing this. I've still not quite finished it, but it's really great, and I'm very glad you wrote it.”
“<3 <3 only read 1/4 of your thing so far, wow so long!!, but am very glad you wrote it so thank you
(partly bc it helps me understand leverage better + also bc it shows the good of leverage and I can share it with my family to help them understand what leverage was like)
I’ve read the rest and I’m really, really, really glad you wrote it <3 <3 <3 a huge positive effect for me has been gaining a huge amount of access to my buried feelings from the leverage period! pretty crazy.”
- Sharing these things sucks, but I don’t see a better way forward
- The Leverage ecosystem
- What we were trying to do
- How we approached the problem
- My experience of Leverage
- Early self-directed experimentation
- Allowing for wide-ranging plans and goals
- What to do when society is wrong about something?
- The Paradigm era
- Background stressors
- Expertise assessment and attribution
- A diverse group splits up into diverse groups
- How it ended
- What came out of it
- Early basic research is very difficult to appreciate from the outside
- Intermission - end of section I
- What happened?
- What was hard
- Deciding to remain in a difficult environment
- It’s hard to know when to call it quits
- Harms due to the scarcity of similar projects
- Harms from the surrounding community
- We didn’t know how to address people’s discomfort
- Further notes on having ambitious plans
- How should the efforts at Leverage & Paradigm be viewed?
- Are lofty goals to be sneered at?
- Is there no legitimate reason to try so hard?
- Or just no legitimate reason to think your efforts might pay off?
- Is it unacceptable to toil towards goals that you are unlikely to reach?
- Leverage’s trajectory & uniqueness
- Should people be trying to “cancel” Leverage, Paradigm, Geoff, and the surrounding orgs?
- Recognize and try to limit desperation in yourself and others
- What to learn next?
- Intermission - end of section II
- Why this is hard to talk about
- I don’t want to fight over narratives with my friends in public
- Novel organizational structures and the trap of dismissing them as “cults”
- Weird experiments and terminology result in sensational claims and rumors
- We lost our friends and our lives fell apart
- We weren’t (and still aren’t?) sure if it’s good for society to share some of our discoveries
- It sucks to deal with people’s misunderstandings
- Illegible or unknown causes of trauma
- We disagreed about a lot of stuff and probably still do
- Personal hurt and healing
- Conflict with some EAs and Rationalists and the role they’re playing
- Re: risks of sharing information
- We want to move forward with our lives
- Final notes
⏳ This post is long: depending on your reading speed, it’s about the length of watching one of the LOTR movies or Avengers: Endgame, or hosting an afternoon bbq in the park with your friends. It will not be as diverting though, and I encourage you to take advantage of the two intentionally placed intermissions. That said, I think it is fair to expect that there is currently no other activity that you could spend this amount of time doing that would better help you to understand this corner of the world.
“In discussions of this post (the content of which I can’t predict or control), I’d ask that you just refer to me as Cathleen, to minimize the googleable footprint. And I would also ask that, as I’ve done here, you refrain from naming others whose identities are not already tied up in all this.” [Note: if the above link doesn’t work, please read the section titled “We want to move forward with our lives” to better understand the context of the request]
This is an attempt to help people understand the early iterations of Leverage, Paradigm, and adjacent orgs/projects — basically the era spanning from late 2011/early 2012 through June 2019. It’s emotionally trying for me to go back into all this and my plate is currently filled with time-sensitive work, so I apologize for any disjointedness in the storytelling — it was written in little bits and pieces over a few weeks of “oh and also” moments and then edited together later (with a fair amount of additional relevant/necessary backstory) in an attempt to make it coherent enough to follow. It’s quite long and because of how it was created, you’ll have to read it all to fit the pieces together, but I think it fills in important gaps and should give a lot more context to people who are looking to understand what happened.
This isn’t really meant to be a comprehensive account; it’s primarily just the response that rose to the surface as I reflected on some of the recent online discussion and confusion about Leverage and Paradigm.* I should note that I haven’t managed to read most of the comments on Less Wrong, Facebook, etc. and I haven’t gone to twitch or twitter or anything, so I don’t know how to predict how relevant my comments will be or how much I might be repeating what others have said.
* Especially if you haven’t been part of these discussions (but maybe even if you have been), the entire first section of this document, where I provide a lot of background context for what was happening at Leverage and Paradigm, might feel weird – that’s because I initially wrote the later sections as a direct response (albeit piecemeal) to what I’d seen happening online, but when I had finished, I realized that the things that I had laid out and the points that I wanted to make might not make sense without a lot of background understanding that most people don’t have, so I reorganized my writing and went back to fill in the missing context, but with much less certainty about what would be useful/necessary to say.
So here we go.
For those of you who don’t know me, a little personal background:
I was a part of this broad effort, which I’ll generally refer to as the project or the Leverage ecosystem, from 2012-2019. I first learned about the project from someone I met at the Singularity Summit in 2011; in early 2012 I started as a volunteer at Leverage Research before being offered a paid ops/admin position in 2013. In early/mid-2015, in an attempt to improve the overall project, I adopted the goal of trying to help Geoff more directly, providing him with various types of coaching and strategic counsel, and I co-founded Paradigm with Geoff and a few others at the end of that year. From 2017-2019 I was a member of the Leadership Team (as the head of operations), a coalition that spanned a number of subgroups and separate projects and organizations, for some of which I served as an officer. I had close personal relationships with a handful of people from the Leverage ecosystem, including Geoff, and while I am no longer in contact with many of them, I still care about them and want their lives to go well.
I think we attempted something that was brave and commendable, while also being novel and very hard to parse from the outside. It was high-risk, high-reward, and while I generally endorse the enormous amount of effort we put into trying to make the project work, my lived experience was often somewhere between painful and horrifying. I’ll offer some possible explanations for why the environment was so bad for me, but whatever the cause, the result was that I was largely crippled by the time the project wound down. Two and a half years after leaving, I’d say I’ve done a lot of healing and I’ve also been able to adapt my life to accommodate the various ways in which I still struggle. Despite all this, as you’ll see from my writing below, I still think it makes sense to try difficult things.
I guess for people who are worried about collusion, I should also note that since the restructuring/dissolution in mid-2019, I have only had one brief interaction with Geoff (in July of 2021) and have had no direct or indirect contact otherwise. A few former members of the Leverage ecosystem continue to be important daily fixtures in my life, and I have occasional friendly interactions with a half-dozen others from that era.
Sharing these things sucks, but I don’t see a better way forward
I’ve had mixed results trying to convey parts of my experience in the Leverage ecosystem to my closest friends and family, and it would never have occurred to me to post about it on the Internet. Maybe it’s the kind of thing that becomes a one-liner in a TED talk some day — one of many failures that paved the road to success — but for now it’s still a pretty raw and complex heartbreak.
It feels like the kind of story that should be shared in a space that has built trust, where hard things can be talked about without the fear of snap judgment, where people are willing to put themselves in someone else’s shoes and feel the discomfort of finding something relatable in their own experience in order to generate an empathetic response.
I don’t currently believe that I have access to a place like that. Certainly not on Less Wrong or Facebook or any online forum that I’m aware of. But it feels important to give more context to the discussion that’s been happening in these places, so I’m willing to try to share things that might be useful, knowing that they are unlikely to be received with the thoughtfulness that I would want.
My hope is that by sharing more about how the Leverage ecosystem worked and what it was like, we can improve the quality of the discussion around it, or at least make people think twice before coming to cynical conclusions based on relatively little evidence.
It’s painful for me to share all of this, and it’s also inconvenient. I have important work to do on my current project that a lot of people are counting on me to do carefully and quickly, which is already pretty overwhelming for me in my current state, and which unfortunately can’t be paused while I figure out what might make sense to say publicly that will steer people towards (what seem to me to be) more accurate beliefs.
I think this post alone probably isn’t enough to allow people to come to informed views, because the project was so strange and novel, and I only saw my part of it, through my own eyes. But maybe this will give enough background that other people can use it as a reference point and won’t have to build out as much context when sharing whatever slice they feel like sharing about their own experience.
It’s hard for me to guess how all this will land and so I’m reluctant to encourage others to go through a similarly difficult process until they’re ready, but I do grant that it would probably be better if there were enough information/perspectives out there that at least an honest and curious observer could get some accurate understanding, and the more outlandish framings and accusations that surface from time to time could be moderated.
The Leverage ecosystem
(I don’t know how authoritatively I can speak for the various individuals and projects that spanned so many years, but I’m going to do my best and hopefully others will add their perspectives over time to flesh out the full picture and correct me where I’m mistaken--in this first bit, I’ll try to lay out the basic premises and then I’ll say more about my experience)
This wasn’t a group that grew out of the Rationality community or the Effective Altruism movement -- I wasn’t around at the very beginning, but the lore has it that after years of doing his own work in academia with various collaborators, first in philosophy and then in psychology, Geoff Anders decided to form a group focused on solving problems in the world. He originally tried to work within the MIRI community but was rejected, and so he built out the original group of volunteers independently. It developed adjacent to the AGI safety community in 2011, and after receiving its first $100k grant to support an initial year of operations, the group largely switched away from being volunteer-based and brought on a few full-time collaborators on small stipends (some of whom helped to found and grow the EA movement beginning in early/mid 2012). They moved from Boston to New York and rented a brownstone in Brooklyn to house the team in a central hub and serve as a cost-efficient live-work space (think start-up founders living together and using their house as an office).
I generally think there was a lot of overlap in values and ideology between all of these groups which were trying to reduce suffering and existential risk and broadly improve the state of things.
What we were trying to do
When I first encountered the group, it was clear that something different was going on. They were pretty much crap at any of the conservation behaviors that in my circles meant that you cared about improving the state of the world: they didn’t recycle, they’d turn on the kitchen sink and then walk away to go get something, they’d hold the fridge door wide open while they tried to carefully answer a question about the limited types of goals they’d seen in people’s motivational setups -- I don’t even know if they were registered to vote! But that was almost symbolic of their determination to not flinch away from the size and complexity of the problems that humanity doesn’t seem to be on track to solving: they weren’t pretending that taking public transport and reusing shopping bags would handle problems of such magnitude, they weren’t resigned to never solving them or in denial that they existed -- they were spending all their attention genuinely trying to figure out whether there might be any counterintuitive ways that they could end up successfully addressing these very real and very hard problems.
I think the MIRI people (back then Singularity Institute people) have this too -- where most people don’t think that the risks from AI are that big of a deal, the MIRI community thinks that they’re real and are working on solving them with what seems like a very counterintuitive approach (at least from the standpoint of mainstream society). It’s maybe worth noting that while their views and goals and strategies may not have changed much over the last 5-10 years, their relation to the rest of society has shifted dramatically as people have started to have concepts for advanced AI, AGI, and AI safety, and are closer to being able to recognize and even appreciate MIRI’s efforts as something other than nerds who’ve been sucked into a crackpot delusion.
(And just so the conservationists don’t spend the rest of this post cringing, or worrying that these were signs of being uncaring or self-important: when those early individuals found out that their behaviors gave this Californian goosebumps, they were perfectly happy to pay attention to saving water or electricity, not just as a show in front of me, but generally, whether or not I was around.)
When I refer to these big problems, I’m talking about things like unfriendly AI and other existential risks like nuclear or biological war, or climate change; the big EA causes: poverty and all the issues surrounding poverty including malaria and other easily preventable causes of suffering and death for humans or animals; but people were also concerned with many other issues like aging/anti-aging, civilizational collapse, or barriers to humans getting along with one another or being able to have deeply connective, empathic experiences. It seems like they weren’t confident that MIRI’s strategy would work and end up solving everything with friendly AI, so they had set out to figure out what else might be possible. There was an interest in finding an alternate route, or perhaps a diverse set of routes, to fixing many problems in the world, and those solutions didn’t have to be based on physical or computer technology. One of the people involved used the term “vintage utopia,” which was the idea of a world where all of the technology stays exactly the same as it is today, just that everyone somehow gets along with each other and is physically and psychologically healthy.
Something important and perhaps unique about their approach was that there wasn’t any particular limitation on what kind of problem was fine to try to address -- if it’s a big problem that stands in the way of flourishing for humans or animals, it was fair game. Some organizations in this reference class limit themselves to tackling issues and impacts that are measurable, or problems that are on track to being solved. There are often good reasons for this, but it seems to me that there are likely some intractable problems in the world that are super important and that society might be completely bungling, and if that’s true, then we should try to figure out how to identify which ones deserve our attention and develop strategies to address them.
A lot of people go through life believing that the government will handle things. And/or that humans are resilient. Or that the planet will heal itself.
But what if we’re not on track to having things turn out ok? What if things are already going off the rails?
The people that I met in that New York brownstone had basically decided to try as hard as they could to have the world actually end up in a good state.
It was clear that there wasn’t going to be agreement across the board on what exactly constituted a flourishing world, but it was also clear that there were plenty of things that are straightforwardly good (e.g. causing humans to generally be mentally healthier through resolution of past psychological trauma or preventing AI from destroying the world) and so we should try to do those things.
How we approached the problem
or “the convergence on a plan to train effective, cooperative, innovative people”
From the start, we acknowledged that the world’s problems were too big and complicated for our small brownstone crew to handle. We also quickly came to believe that the bottleneck was not money -- plenty of people from tech startups who had made hundreds of millions if not billions of dollars were actively trying to deploy their capital in order to dramatically improve the world, and they weren’t getting traction. Even now there are plenty of high profile examples from Bill Gates struggling to eradicate polio, to Elon Musk challenging the UN to show how they could end world hunger with $6B. Is funding the bottleneck for anti-aging research? For AI safety? For preventing nuclear holocaust? For healing psychological trauma?
- not to say that money can’t help; in many cases it can! The situation with polio is so much better because of the resources that have been poured into it, and I’m sure that billions of dollars could prevent a lot of suffering and death from starvation this year, but I guess what I mean is that if we just optimize for adding more funding to all of the important issues in the world, it seems really unlikely to suddenly (or even gradually) cause humans and animals to start thriving (though it would be great if it did!)
Prospective funders reported that the bottleneck was a lack of promising projects. As far as we could tell, the lack of good projects is a result of a lack of effective, benevolent, and coordinated people.
It doesn’t seem like there’s a lack of benevolence or altruism -- there are plenty of people who want to solve these problems, but either they can’t figure out what to do, or what they try never really works, or they settle for tackling smaller issues that they think they can actually resolve.
There are a few abnormally effective people: Elon Musk is the celebrity poster child of effectiveness -- love him or hate him, it’s really impressive what he’s been able to accomplish. If you randomly selected a friend you went to high school with and challenged them to any one of Elon’s feats, they most certainly wouldn’t be able to figure out how to do it. Why is that? It’s not like he went to a special academy where they taught him how to be effective, otherwise we’d have scores of people like him.
As a society, we just don’t know how effectiveness works. Even more fundamentally, we don’t know why we are who we are. Is it nature? Is it nurture? Even if you believe that experience shapes you really dramatically, is it the experiences when you’re a toddler that matter most? Or the ones in high school? Collectively, we don’t have a way of thinking about this, but it seems like a crucial question to be able to answer.
So we oriented ourselves towards the goal of understanding effectiveness. Can we actually figure out how minds are shaped? Can we figure out why some mental configurations can cause people to be extremely effective/productive when most people don’t reach that level? Can we use that knowledge to produce a large number of highly effective people? And can we get them to work well together?
We didn’t start out believing we could do that. But when looking for counterintuitive ways that we might be able to handle a lot of the big problems in the world, this avenue emerged as a contender. It’s an issue that the rest of society is resigned to never solving and/or is pretending doesn’t even exist.
I think Geoff was already very interested in this direction when he started putting the initial team together (given his background in psychology and his prior time spent puzzling about how to tackle hard problems) but I’m pretty sure even he was skeptical about the possibility of massively boosting people’s effectiveness on a large scale through psychological interventions, and was looking at other ways he could use his psychology theory to make progress in case that didn’t work.
You may have heard that we were aiming to create “2,000 Elon Musks.” We wouldn’t actually want Elon clones; he’s combative and erratic and quite difficult to coordinate with (which incidentally seems to be relatively common for abnormally effective people who emerge seemingly randomly in the wild), but imagine 2,000 highly effective benevolent people who can get along with others. In the context of tackling problems that don’t currently seem on track to being solved, that seems pretty compelling.
As the project scaled up, we didn’t place all of our hopes in psychology and training. We recognized that we needed to understand how societies work. And even just to be able to continue to function and coordinate effectively, we’d need to understand how small and mid-sized groups work. We also knew that things can’t all just be figured out by thinking and planning; you have to try things out and bump into reality in order to learn how things work under real-world conditions.
So our efforts ended up being broad: at any one time we’d be pursuing many different avenues of investigation, and even within our psychology and sociology research areas, people were given a lot of autonomy to be able to explore a wide range of possible interventions.
There’s a thing here that feels important: we were able to give that much autonomy because the people trying to figure these things out actually cared about the end result. It wasn’t just for the love of research or the satisfaction of discovering new things or displaying skill and accomplishment (it was often unrewarding work), and certainly not for the paycheck -- but because they deeply cared about what we were trying to do. People would sometimes angrily and fiercely debate because they cared so much. We didn’t have to try to set up weird incentives to trick people into putting in effort. We just found people who were very intrinsically drawn to trying to make this possibility work.
We had high standards and high demands for epistemic rigor (more on this below). We filtered for people that we felt could be taught to have high epistemic skill even if they didn’t start off with it, and at the same time we were acutely aware of the importance of entertaining weird counterintuitive interpretations as part of the early-stage scientific process.
We weren’t going after marginal improvements to existing solutions. We recognized that if we wanted to end up with different outcomes, with different tools that were more effective in helping people become healthy and effective individuals, we couldn’t just rely on the existing body of knowledge in psychology or related fields.
There were no limits on what you were allowed to explore. You could research all sorts of bizarre-seeming past traditions and generate new theories and interpretations of whatever it was you were observing, of whatever type you wanted.
There was a recognition that in the process of forming new ideas one would often go through sometimes extended periods of wonky theory that would likely get entirely thrown out later in favor of something more honed that emerged from that first wonky thing. We encouraged people in the group to be willing to go through this theorizing process of starting off with theories we were pretty sure were wrong and iterating on them to reach counterintuitive conclusions that were right. Social permission to be a wonk* was an important part of our research process.
* Important distinction between wonks and crackpots: a crackpot has no way to tell what is accurate and what is not, allowing them to spin weirder and weirder narratives about what’s going on forever. A wonk (as I’m using the term in this piece, this was not a Leverage term) entertains weird ideas, sometimes for extended periods of time, but is able to check them against reality/the evidence, so ends up tracking the truth. If you don’t have access to a wonk’s evidence stream and don’t understand their epistemic process, they may seem like a crackpot.
From the outside (and even sometimes from the inside) this would look like unproductive delusion, but in fact it was intentional and managed theoretical exploration. And it led to an enormous amount of what many in the group came away believing were accurate and groundbreaking theories of how the mind works and how a personality is shaped by life.
I should note that for people who have heard about some of the stranger things about our project (specifically from the area that’s being referred to as “intention research”), it might be hard to reconcile those stories with the claim that we had good epistemics. I think the correct explanation is that the theories that came out of that final year were from early in the research cycle, which was cut short by the dissolution of the project. I don’t think anyone involved will claim that we had settled on any agreed interpretation or paradigm of what we were investigating. We can’t judge the outcome of the process as coherent or not because that research cycle never completed.
So when you see me say things like “we still don’t know whether the process that was kicked off by this research collaboration is going to have been worth it” it’s because we did discover a bunch of really interesting stuff and we can see that if the research is distributed and/or continued, we might actually make substantial progress toward the objective of making people mentally healthier and more effective (and it’s also possible that the people from the project who benefited from this research may go on to have large positive impacts).
My experience of Leverage
I spent many many years in the thick of things with Leverage and then Paradigm, and I poured myself into helping the project succeed in any way I could. For various reasons, some obvious, some still unknown to me, I had a pretty mixed (and sometimes really terrible) experience while there and have been slowly recovering in the years since. But I still believe that it was a worthwhile thing to have attempted and I’m proud of what we accomplished and I hope that many/most of the individuals will continue to try to make a lasting positive impact on the world.
Initially Leverage was just a small nonprofit with very limited funding – we received grants primarily from a handful of successful tech entrepreneurs who were interested in Geoff and interested in our research and who were looking to use their capital to make bets on fringe projects like ours. I volunteered my time for the first year before being offered a stipend like the other members of the project and officially joining the group. I was invited to join as a “skill hire” (based on evidence that I would be able to usefully contribute given my level of expertise) instead of as a member of the core research group, because at the time of my offer, they hadn’t been able to verify whether or not I had a history of significant dedication to self-improvement.* Basically, most people who joined were thought of as candidates to self-improve a lot and that was a big part of their objective, and some people were brought on for their existing abilities, in order to help run the operation along the way.
* The basic idea with the importance of verifying that new hires were genuinely interested in self-improvement was that in the course of trying to solve difficult problems, we were going to encounter things that we didn’t already know how to handle, so to actually have a chance at accomplishing difficult things and to prevent people from becoming demoralized, we needed people who were on board with gaining new skills and new understandings of things.
I feel like the word “nonprofit” is misleading, despite being true. I had spent a lot of my early career working in the nonprofit world, and maybe those experiences were of late-stage nonprofits and the difference was that this was an early-stage nonprofit. It had the energy and urgency of a startup (and the typical close-coordination of co-founders living and working together), but instead of focusing on product-market-fit or runway or staying “lean,” people were doggedly focused largely on solo or pairwise research, finding more researchers, and long-term planning.
I had explored enough career paths at this point to recognize that most of my opportunities to positively affect the world were limited by the organizations or institutions that were already established or by my own ability to affect them from within, and I didn’t like my prospects. Government agencies, nonprofits with limited scope, politicians, think tanks -- I couldn’t find any employers that matched my level of ambition while also being self-reflective and self-critical and thus willing and able to adjust and pivot as they proactively learned more about the shape of the problems in the world (there are a lot of constraints out there).
So when I found this small group of people who shared my goals but were relatively unfettered, I tentatively threw in my lot with them. I think my greatest hesitancy came from them being both very different from me (difficult to sync with and unlikely to fulfill my social needs) and not seeming to be interested in being a team in the sense of everyone helping out where they could and relying on each other to carry part of the load; most of them were very independent and it felt like over 50% of the people were there not to help Geoff with his particular plan, but to use Geoff in order to gain skill themselves for their own world improvement plans. In the early days, the pitch was that you could join the project with a small stipend (with the idea that our budget could be primarily used to get people out of the rat-race who would want to work on world-improvement activities full-time if they didn’t have to work at a normal job) and the only real requirement was to think hard about the best thing to do and then do that thing to the best of your ability. But only a limited type of thing rose to the top of people’s lists by default (generally rather abstract topics like “how can you know what you underlyingly believe?” but sometimes things like growing the EA movement or looking for other multipliers), and that didn’t cover all of the more immediately useful/necessary pieces for keeping the group going e.g. logistics and admin and funding and systems for coordinating, and so I picked up the slack in those areas where I could.
Even as the years went by and the project grew and became more complex, only a couple people ever deliberately tried to gain skill and help out in those areas, so we were often over-burdened and falling short in those domains, which added to a certain level of overall strain not just for the people doing that work, but for everyone relying on them.
Whether or not you have a clear understanding of your surrounding infrastructure: how it works, who maintains it, how decisions are made, or what the constraints are (budget, limited space for conflicting/competing use-cases, considerations for having visitors in a dual purpose live-work environment, safety policies, legal constraints for international employees or requirements for research nonprofits, rules set by the landlord, PR concerns, etc. etc.), all these things still affect you. If we fail to raise money, if we fail to find space to accommodate growth, if we don’t get around to updating the website, if we can’t accommodate your diet, if we don’t recruit people to help out in your area, or if we need to clear out half the building to host a training program, these all add stress. And similarly, if we don’t have enough time to invest in system design, there will be added strain from interacting with Slack or scheduling or reserving meeting space or cars in choppy, unreliable ways, or you’ll have uncertainty about how to make requests for various things (or even whether you’re allowed to make requests).
A lot of my early efforts went into saving people time and stretching every dollar by finding ways of fitting more people into whatever building we were living and working out of,* or doing grocery shopping and cooking huge pots of chili, or setting up systems to share cars and request or offer rides. In some ways I was completely over-qualified for the work but in some ways I wasn’t up to the task. Supporting a live-work team is complex and challenging and centers around people more than objects.
* a note on housing:
In the beginning, there were just a few people; they got a house together in NY with common spaces to work in -- it was easier to brainstorm together and stay synced up as people had new ideas and needed feedback, it was more cost-efficient, it cut down on commute times, and it provided a central place for interested people to stay during their visits.
When relocating the team to California in 2013, we got another big house and converted it into 10 bedrooms -- at that point we started experimenting with more structured ways of coordinating and sharing knowledge via regular meetings and presentations.
After out-growing that space in 2014, we moved to a big 13-bedroom, 4-unit apartment building in an area that had plenty of rental properties within easy walking distance, to accommodate the growing team.
In 2016, once we had more funding and had embraced presentations, meetings, and one-on-one training as a replacement for staying synced up by just spending lots of time together, we encouraged people to move out of the main building. We wanted to reduce interpersonal conflict (we hadn’t originally factored “would you be a good housemate?” into our recruitment criteria) and lower the burden on Ops, while making room for workshops, visitors, and new hires who could benefit from a few months of being more centrally located and having more logistical support.
Some people found nearby houses or apartments, but we had a hard time getting others to move out. For the next few years, the ops team actively engaged in housing searches in order to help coax new waves of stragglers out of the building. At some point we had everyone tour an entire empty complex that we’d come across where everyone could’ve lived separately in one or two bedroom apartments and we could’ve given up the main building, but it turned out that people didn’t want to switch away from having our big shared spaces with shared logistics.
By 2019, the team was spread across at least a dozen external houses and apartments in addition to the main 4-unit apartment building and we were working out of three different buildings comprising five separate office suites.
In the early years, we were a completely flat organization with no one officially in charge of anything or anyone: big decision-making (like recruiting) was done by consensus* and, despite (perhaps unfounded) expectations that certain things would generally be handled (like fundraising and the paying of bills and stipends and filing of tax returns), we hadn’t even established any kind of norms around recognizing or deferring to anyone’s expertise. And we also had no system for handling the work that people didn’t want to do (I didn’t buy groceries because it was my job; I went to the store because the fridge was empty and I wanted to help) or how to handle people saying that they were going to do something and then not following through. Unsurprisingly, this produced a fair amount of ambient conflict as we added people to the team with different (and sometimes strong) opinions about how we should live and work together.
* Assessing new hires:
As the team grew, so did our track record for making solid numbered arguments for a candidate failing or passing specific recruitment criteria. We also diligently adjusted the criteria as we learned more about what qualities would cause someone to be successful in our environment. Consensus became unwieldy and unnecessary, so people started opting out of hiring decisions and eventually the responsibility for assessment fell to a remaining handful of people who were willing to put in the long hours needed for these technical evaluations. They also took on the constant work of updating the criteria to account for what we’d learned, in order to make good hiring decisions. (Our turnover rate was super low, and I think that’s because we developed really good models of which people would actually want to do this crazy thing with us).
And another side-note:
As you can probably already tell, we intentionally didn’t adopt outside business norms and structure from modern society.
This decision drew a fair amount of criticism from the outside:
Q: why do you live with your coworkers?
A: it’s the most efficient way to share knowledge and to stay synced up.
Q: why do you work all the time?
A: we think we have a chance at dramatically improving the world and as long as we believe it’s possible, we’re motivated to continue trying to make progress.
Q: why do you argue so much?
A: because each of us brings something different to the table and the ideas have to stand on their own merits, regardless of who they come from.
Compounding this is the fact that most of the people who joined the project were people who had not had much success or experience in the traditional work world. To even find the project you’d have to be looking pretty hard or get lucky by bumping into the right sorts of organizations or events. And then, at least for the first few years, you’d have to be willing/able to live on a stipend in a big shared apartment/house/apartment building — this filters out most people busy running their own startups, most people working at regular jobs, and most people with established families and mortgages. It leaves you with a pool of often-young people coming directly from various stages of academia who haven’t latched onto something yet, or people who aren’t the career type and maybe don’t like having a boss — musicians, actresses, philosophers, designers, as-yet unsuccessful entrepreneurs, theoretical mathematicians who want to change the world, etc. So they generally didn’t come from workplaces or households where they’d needed to recognize and abide by standard operating norms in order to be successful -- this was probably useful for the range of the experiment, but had its drawbacks.
I remember a year into living in our first house in Oakland while the group was still relatively small, someone asked me if I knew where the garbage and recycling bins were kept -- they’d been living there for an entire year and had never taken the trash or recycling out (while regularly cooking in a kitchen whose trash needed to be taken out every couple days), nor had they ever taken the bins to the curb or put them back after trash pickup.
Linens were also a contentious issue: we had plenty of towels and sheets, but people would just come get fresh ones without washing their dirty ones, so we’d end up without any clean sheets or towels for guests. We couldn’t persuade them to stick to handling their own sets (I think maybe they meant to, but many had trouble with concrete tasks like that?), so the solution was to set up a laundry bin next to the shelf of clean linens so that people would at least be willing to forfeit their linens to be washed, and I took over that part of the process for them.
And to be clear, while this was frustrating, I get that we all have different strengths and I know that I have areas that drive other people crazy. I think we did a pretty decent job of trying to fill in the gaps for one another.
Early self-directed experimentation
In a general pursuit of learning more about effectiveness (with the basic idea that for the huge problems society faces, we’ll need a lot of benevolent, motivated people who are more effective than the average Joe), we experimented with all sorts of things not strictly limited to psychology.
Just taking one individual as an example: between 2013 and 2014, they spent months trying out different polyphasic sleep cycles (as part of a large opt-in group experiment), different types of scheduling (as a two-person collaboration), one-on-one skill transfer (as a two-person collaboration with a different member of the group), and different types of management/facilitation (as part of a group-intervention/experiment). They helped to organize the first and second EA Summits as well as the first EA Retreat, ran the first (and only?) EA Burning Man camp, as well as a Burning Man camp organized under the principles of capitalism (as another experiment in managing groups and fairly incentivizing and compensating those who actually do the work that no one wants to do). They even tried building a small for-profit venture to learn about motivation and incentives and the difficulty of various things.
(This is a tangent, but for context in reading through the rest of the post, it might be useful to take a moment to consider how busy this person would’ve been, and then multiply that by the number of people in the project and the number of years we worked together. There was really a lot happening. And then add in all the additional stuff that comes from keeping everyone in sync and securing funding and facilities and acquiring visas and finding and onboarding new people and resolving conflict and cultural issues, etc. etc.)
As we directly learned things about what worked and what didn’t, we slowly added pieces over time, shifting expectations for the people who were already part of the project and having a new proposition to offer new arrivals (and new criteria to filter them by). Through this process, by the end of 2014 we had started experimenting with collaborations and task-forces on specific areas which we called “teams” (e.g. methodology, training, coordination, operations, memetics, etc.), and by the end of 2015 this process resulted in the creation of Paradigm, an angel investor supported startup which I’ll talk more about in a later section.
Allowing for wide-ranging plans and goals
I think by the start of year two, even before I was an official member of the team, the environment was taking a pretty serious toll on me — I have various ideas about why it was such a difficult one for someone like me in particular. I think one important piece is that because we were evaluating candidates on a specific very clear set of criteria, despite hiring by consensus, we filtered way less for cultural/social “sameness” than maybe any other effort I’d been a part of. It felt kind of miraculous (and maybe ultimately misguided) to have so many people from such different backgrounds with such different personality types and social norms all working towards their own individual version of world improvement.*
* (I mentioned the diversity of areas of study above and I’ll list some of their plans below which hint at ideological diversity, which was maybe influenced by the fact that ~30% of our team was born outside of the US, with people originally hailing from at least 13 different countries and at least that many regions of the US – despite being based in the SF Bay Area, for a while we had more Canadians than Californians)
And because psychological-self-improvement was such a core part of people’s life and work, the stakes felt a lot higher. This wasn’t a normal job with an office and normal working hours and a clear way to judge whether people were pulling their weight and then a process for firing them; this was a research facility where the researchers were their own subjects and the feedback loops were tight and frequently opaque, whether they were working solo or in pairs. It was an incredible environment for learning and discovery, but also a very vulnerable one with ambiguous signs for personal or collective progress and constant evidence of each of our own internal failings and psychological messes that normally get swept under the rug in favor of professionalism and productivity, but in our case were quite relevant to our research as well as our underlying sense of how difficult/possible our task of creating highly effective people was.
I don’t recognize or relate to many of the public claims that have been made about particular Leveragers’ ambitions, and I think that’s probably because those were goals/plans from some subset of people who I didn’t have much contact with. Some people believed that AI was the most important issue; some thought it was avoiding civilizational collapse. Some people believed that it would take many generations to make significant progress in improving the world and were making plans for how to pass knowledge down; some were slowly coming ‘round from believing that there was no way to bridge the politicized divisions between the far right and the far left; and some were focused on training, in hopes of having benevolent people who could overcome these various problems and, perhaps most importantly, coordinate effectively with a large and diverse group of people.
For whatever reason, we ended up recruiting very few people who I would consider particularly conscientious and who shared my more practical/less abstract worldview and areas of expertise (and resulting concerns).
This lack of people “like me” led to me spending the better part of a decade trying my very best to hand oars to people in an effort to row in the same direction, where it’s possible that they didn’t even agree that we needed a boat. The group also skewed pretty hard toward more classically masculine traits (even though we were more gender-balanced than most in terms of actual number of men and women) and as someone who gets a lot of value out of small thoughtful signs of recognition and care, my tank was often running on empty.
What to do when society is wrong about something?
It might be too hard to bridge the inferential gap on this one, but I think it’s worth trying:
Let’s say you have society, with all its implicit expectations, some of which are good and supportive and fit together nicely, and some of which are harmful and shaming and at odds with pursuing your goals, even while society deems them good.
Let’s say you also have a friend group, and they all agree that e.g. the standards for female beauty in the US are ridiculous. When you hang out with them, you don’t need to paint your face, you don’t need to iron your hair, you can wear shoes that would be practical for being able to spontaneously go on a muddy romp with said friends — you feel accepted and relaxed and you feel like you’re with “your people.”
But when you wave goodbye and get back in the car, you’re no longer in that safe space. You might worry that you’ll be spotted by someone from the Parent Teacher Association on the way home, or you decide not to pick up some milk at the grocery store because of how you might be judged. It’s not that you’re always under this stress — most of the time, you’re putting in the time and effort to meet society’s standards, which you understand and are perfectly capable of doing. It’s not ideal, certainly not as liberating as how you can be with that particular friend group, but it’s something you've had ingrained in you and that you accept about the world and your place in it.
Now let’s imagine that you get together with those friends and you propose that you start a business together — now you can relax your standards for beauty and grooming not just when you are hanging out, but essentially 24/7 (except when interfacing with clients or perhaps other professionals). And maybe now that you’re spending so much time in this protective bubble, it chafes a bit more when you need to go through the whole rigamarole to present yourself in the way that society expects.
And as you get more comfortable with yourself in your more “true” form, you’re maybe coming to be able to see the strangeness of the way that you used to think it was so important and natural for you to put so much effort into how you looked. Maybe you still like dressing up for date-night, but otherwise resent the pressure to perpetually maintain this façade of modern femininity.
But maybe your partner’s thoughts aren’t evolving in the same way — society is still squarely the arbiter of goodness for them, and society will judge them not only by how well they perform, but by how well you play your role as an object of desire that they have claimed for their own. (They might not endorse this — they care about you, even consider themselves a feminist, and want you to be comfortable in your own skin — but they’re in the societal Matrix and they can’t change their underlying desire to fit in).
And maybe the members of the PTA also notice you’re not putting in as much effort — you feel justified on an intellectual level and don’t feel as much underlying shame as you would’ve on that day you were driving home after hanging out with your friends — but they’re uncomfortable/uneasy, both about you doing something that doesn’t fit the mold, and also about the possible implications about what it means about the societal rules and the way they’re still going along with them.
It’s not clear to me what the solution is:
- Should people blanketly play along with the things that society demands of them?
  - (How far should they go? Nose jobs and boob jobs? Or at least high heels and face paint?)
- Should people fight to change societal standards so that everyone can be liberated at the same pace?
  - (If so, what is the right strategy other than noncompliance and trying to shake others out of the Matrix?)
- Should people seek more complete refuge in protective bubbles?
  - (Choosing a partner in the bubble and maybe creating a school and a PTA within the bubble to reduce friction with people who are still actively trying to meet the standards of broader society?)
(Bubbles sometimes get a bad rap, but they can be powerful tools for good — since leaving the project, I’ve cut my world down to about a dozen people (family and a few friends who are chosen family), carefully selecting for only the ones who don’t need me to be any different than I am right now, even if I never fully recover. I don’t like to imagine what life would’ve been like for me if that hadn’t been possible. My bubble gives me enough space to exist, despite what anyone on the outside might think of me.)
I think that we were like that friend group, a safe-haven for people who were aware of many of the suboptimal pressures of society and who were more able to be themselves within our bubble. At the beginning, when it was just a handful of us, maybe we were just more able to think about out-of-the-box solutions to problems that society would have us generally believe either aren’t problems or are problems that will naturally be solved by something like: education → technological innovation → solutions to everything. Or being able to squarely look at the way that some people are just obviously more effective than others (which isn’t really a narrative that’s societally sanctioned).
I think it was good and maybe necessary to create a bubble that allowed people to escape a lot of the normal societal constraints that they would have to deal with by default. I think freedom to be a wonk was a really important piece of our ability to do good research, and it would have been quite difficult to manage that as an individual trying to fit themselves in with societal expectations of what science looks like and where the limits are on what is acceptable for a researcher to observe and record. (The societal guardrails on how to do science probably stop a lot of crackpots from being able to confuse the population, which is good. But they also probably stop intelligent wonks from being able to do their thing within the normal system, which is bad.)
But as the group grew, sometimes a new person would bring some negative thing from society with them, threatening the people who were counting on the bubble as a safe space (or some new person would join and bump into something negative in the bubble that was already there). And because we hadn’t figured out how to cleanly fix or remove those deeply ingrained societal pieces in ourselves, there could be large effects in individuals even if they didn’t intellectually buy into that particular piece.
Sticking with the beauty standards/societal expectations of women example: I think we did an impressive job of limiting certain types of sexism in the Leverage ecosystem (I’m not sure exactly why, but I’d suspect it was due to a combination of Geoff having very little tolerance for a number of isms, the explicit recruitment criteria that cut out a lot of the implicit biases that might favor one gender over another, and the broad range of acceptable areas of focus). We had many driven, intelligent, thoughtful women in the group who were widely respected due to their obvious effectiveness – the clear majority of the people steering the project were women, the majority of subgroup leaders were women, I’d argue that most of our top researchers and trainers were women, and most of our clearest examples of people significantly leveling up were women (with a fair amount of overlap in each of these categories).
I think basically all of the women in leadership positions would agree that the way that society often has professionalism for women tied up in their sexuality is B.S. (some might find an aspect of it aesthetically pleasing – I’m imagining something like the secretaries in Mad Men or the Emirates flight attendants – but I don’t think they’d be on board with it as a norm for them to try to meet). But while we shared a distaste for the societal standard, we didn’t share a vision for how professionalism and sexuality should relate.
For me the ideal was essentially desexualization in the workplace, which harkened back to my years at women’s colleges where your self expression was almost entirely divorced from your status as an object of desire. It was wonderfully liberating and seemed to remove an entire subliminal/subterranean layer of competition and judgment from our interactions and relationships.
For others, the way that they eschewed the standard link between professionalism and sexuality resulted in a desire to show up as a fully sexualized person, not hiding or shelving it, despite the professional context.
Their ideal bubble of safety would allow them to be sexual people regardless of what role they were playing in life (not acting inappropriately/forwardly/nonconsensually in a work situation, but also not constraining the signals/displays of their hotness exclusively to romantic or explicitly social circumstances) -- but this then conflicted with my ideal bubble of safety, which involved everyone being able to keep their attention on the task at hand and everyone being able to have close caring interactions without needing to guard against sexual overtones. (This isn’t a new topic of conflict – high schools across America are constantly trying to figure out how to balance these competing goals, in a society that makes it very clear that women should be judged by their value as a potential conquest/mate.)
So we again run into the problem of a diverse set of people with diverse norms and paths and goals, all interdependent. If we were just regular co-workers, the issue wouldn’t be that big of a deal, but because we’re in this bubble together, the stakes are higher on trying to figure out which view makes more sense or whether there’s a compromise to be made. And it’s not just a straightforward answer like “why don’t you do your thing and they can do their thing?” because humans have enormous effects on one another and almost everyone has their own plans that relate strongly to sexuality. Women key off one another in determining whether sexual desirability is a dimension they need to be competing on/optimizing for. Men tend to also respond strongly to cues — sometimes just being distracted by desire, sometimes being threatened by the shame or guilt caused by having their attention pulled away from their work (especially if the woman in question isn’t available) — and some also get into a competitive status-driven frame on completely unrelated topics.
It could be that the best solution would be to segregate the bubbles by gender or sexual orientation as they do in some educational environments – this was discussed amongst some of us who were trying to identify and solve problems in the project, and maybe if we had kept going, we would’ve settled on a plan like that, but as it was, this was just one of many problems where we didn’t yet have an answer/didn’t know what to do.
I raise this not as a key object-level issue that we grappled with (despite it being important to me in particular), but as a pretty concrete example of the complexities and pitfalls that come from creating a space where people are able to sidestep traditional societal norms and pressures. This side-stepping is often straightforwardly good, but it foregoes the benefit of broader society having weighed in and having provided some guidelines/guardrails (that are imperfect and often stifling/repressive, but at least relatively easy to follow), and instead forces individuals to hash issues out with one another more directly.
Given that our bubble was organized around something broad enough to essentially encapsulate the majority of all issues: cultural and social, but also epistemic, philosophical, moral, etc., this meant that a lot of things needed to be analyzed and understood before we could settle on an acceptable resolution and add those values to our community ethos. This is part of what I’m gesturing at when I talk about the stakes being high; I think the type of unresolved conflict I describe in my example above was pretty rampant across many domains, and I think different people were able to handle it to different degrees (e.g. people with more implicit models or objections might have a harder time conveying their concerns, and people who were more attached to traditional societal norms might struggle to articulate those ingrained objections beyond “but that’s just not how it’s done”).
Many people had a strong vested interest in the bubble adopting norms and values that were good for them as individuals (for both their social comfort and their social status) and it’s hard to judge what kind of negative effects this dynamic had on the whole group.
But FWIW, I think that the community that we created is of the type that would’ve been able to exist in the social climate of the early 1800s and be much more likely to recognize the evils of slavery, and wouldn’t have been sucked into the insanity of McCarthyism in the early 1950s – I think the insulation from mainstream society gave us the ability to see many things more clearly and was/is a good thing, but also presented many challenges both within the group and in interfacing with people and organizations outside of our bubble.
As far as I can tell, in the very beginning, there was a ton of transparency. When I first visited the Leverage website (before it was even at leverageresearch.org), it was full of interesting detailed information about their research and plans and even had a designated area on the front page that was being updated weekly to describe what they’d accomplished and what they were currently up to.
I found it really exciting and compelling, and I think that was intentional: broadcasting information like that is a great way to attract talent that’s likely to be a good fit.
But near the end of 2011, an individual from the Rationalist community who had been exploring the idea of “intelligence amplification” raised concerns about the possible world where the research at Leverage might actually produce powerful effects. If that happened and all the steps along the way had been broadcast on the Internet as they currently were, there would be no guaranteeing that the technology would only be used by benevolent actors.
I think Geoff took those concerns seriously; after doing some thinking and weighing costs and benefits, the group dramatically pivoted and adopted a principle of caution around things that might be problematic to share.
But limiting information-sharing came with a bunch of downsides: it made recruiting and PR much harder and produced cultural, personal, and interpersonal problems.
We worked hard to try to mitigate those problems with a policy that was quite simple (in hopes of preventing people from potentially rounding off a more complex set of directions to “don’t say anything about anything”), and we set up an approval system via Skype (and later, Slack) for people to ask questions or request permission to share things so that the process would be quick and easy. This public anonymous comment from a person who was on the team paints a picture of what we were going for, and I’m glad it worked at least in this one case:
I didn’t find the information policy I signed overly stringent. I’ve signed confidentiality agreements with multiple normal for-profit companies (that aren’t affiliated with Leverage, EA, or Rationality), and this policy was less restrictive than those. It allowed for personal blogs as well as sharing Leverage training techniques and research piecemeal (without approval required). It required permission before publishing the organization’s research online or starting an extended training / coaching relationship with anyone. It also prohibited sharing personal information about hires or information a trainer learned about a client during training / coaching. These rules seemed sensible to me. I had two different outside-of-Leverage romantic partners while I worked at Leverage, and I saw an external counselor. I discussed my experiences at Leverage (and Leverage’s research) with both and didn’t feel I was in violation of the information policy. (source)
But psychology is weird and I don’t doubt that some people felt much more restricted than was warranted by the policy itself. As a concrete example, I know of at least one case where someone felt that the existence of any information sharing policy would be an unacceptable barrier to them being connected to any romantic partner they might have who was not also part of the group.
But the downsides notwithstanding, I think it made sense (and still makes sense) for us to be cautious with the distribution of our research. Notably, that does not mean not sharing anything. Even at the time we were doing the research, we sometimes chose to go ahead and share full detailed written documentation of theories (I can think of two main examples). And now that we have the clarity of hindsight, I think it’s even easier to deem a bunch of stuff innocuous. For all that stuff, the challenge just becomes how to explain it in a way that will actually make sense.
(I’ll say more about different kinds of blocks to a broader kind of transparency in a later section on the risks of sharing information.)
The Paradigm era
Raising a seed round for Paradigm, a for-profit startup, marked a new era. It signaled that we had found a promising line of research that we believed (and angel investors tentatively believed) could be a profitable business model with enough continued R&D. The planned business model was basically: you receive training* from Paradigm over multiple years, and in exchange you agree to share a portion of your future earnings (though, only if you ended up earning more than some amount per year; the number thrown around at the time was $250k). Our thinking was that the kind of ambitious people we wanted to bring in would sometimes earn very little money working on important things that aren’t profitable/marketable, but in some cases they would become millionaires or multi-millionaires as part of their world-improvement efforts. So we offered investors the opportunity to financially support that happening in exchange for a cut of the eventual proceeds. One way to think about it is a venture-funded school/incubator paid for by alumni-giving, similar to a coding bootcamp or something like Y Combinator, which aligns the incentives for helping people succeed.
A note on the Paradigm business model and its relation to Reserve:
We never really got to the point of implementing this business model in full force, since we were pretty much still in the R&D phase when Geoff made the call to wind things down, so most of the people who were part of Paradigm were never asked to sign an agreement for a cut of their future earnings. The exceptions were one entrepreneurial training recipient close to the end, and a dozen or so Paradigm staff members who started Reserve, a stablecoin project, in 2017-2018; it is still growing and thriving today, though with very few people from Paradigm left on the team.
One public post states that Reserve has a “weird financial agreement with Geoff” — to clarify: early Reserve employees who had received significant Paradigm training while part of the Leverage ecosystem had an agreement with Paradigm that was based on the business model described earlier. Geoff is a shareholder in Paradigm, along with several other co-founders, employees, and investors, so it will be a multi-party decision what to do with the cut of tokens that Paradigm will receive from former members of the project. Separately, Geoff put a small amount of personal time into Reserve and will receive tokens for that (along with dozens of other advisors who helped in the early days), but not as many as the Leverage ecosystem folks who actually worked directly on Reserve.
I don’t really like describing these kinds of private financial arrangements in public and I don’t think that Reserve, Geoff, or Paradigm should have any obligation to disclose any of this, but I don’t like the idea of people being upset about how much money Geoff is going to get based on a false impression and I don’t like the implication/insinuation that Reserve has not been straightforward in allocating tokens.
I’ll talk more later on about the use of words/concepts that don’t quite fit novel circumstances — they’re useful for getting people to understand the basic idea, but they need to be clearly flagged as referring to something which is “not-actually-that-thing.” Two words this applies to are “training” and “trainer.” I tried to come up with a better descriptor for this post, because it comes up a lot, but anything more precise or less concrete sounded euphemistic and weird. E.g. “psychology facilitator” or “mental life coach” both seem to have more connotations to cancel than “trainer.” So I’m going to stick with the word trainer (which is what we used), but I’m going to try to paint a picture as a way of clarifying what I’m referring to.
In the Leverage ecosystem a trainer or a set of trainers had a unique role and relationship with their trainee that maybe most closely matched an athletic coach or a physical trainer – but instead of trying to help you become physically fit and agile, our trainers were trying to help you become more mentally healthy, sharper, and more effective. Just as with an athletic coach, a trainer often wouldn’t need to be a better performer than you in any particular area, but they could still help you change your psychological setup or become more effective, guiding you in doing something that they might not be able to accomplish themselves. In fact, trainers would rarely have expertise in any given area that the person was trying to improve in. The thing that they were skilled in was helping their trainee to grapple with problems themselves: sometimes by using psychological tools/methods to untangle things (debugging), sometimes through somatic methods of identifying or resolving issues (bodywork), sometimes by just talking things through and sharing a different perspective or coming up with a better plan for handling a problem, and sometimes by actually giving them new/better tools or understanding within the trainer’s area of expertise. (I’m mostly referring to general training here, but there were also trainers who helped people gain skill in specific domains where they had specialized knowledge such as sociology, philosophy, theorizing, introspection, etc.)
I heard one person reluctantly describe the general trainer role as almost that of a parent (though I’m also reluctant to add this because people’s perception of the role of a parent probably varies widely, and in practical terms we typically related to one another as peers), basically: there to help the person along the way, removing obstacles where possible, attempting to be a positive influence, while allowing them to grow into their own person; wanting them to be successful and happy and taking some amount of responsibility for them turning out to be a good human. And I guess I’ve known some teachers to have this kind of attitude towards their students, but that would come along with an idea of perhaps the student being in class in a compulsory way and the teacher needing to keep their distance and maintain a relatively narrow focus and not play favorites or give individualized attention — none of which applied to the people who were part of the Leverage ecosystem.
Training was generally a coveted resource and was only received or offered by people who wanted to be helped/wanted to help, and the particular pairings might look quite different based on the goals of the trainee and the skills of the trainer. I do think it’s important to keep in mind that everything was quite experimental and many general trainers were psychology researchers who were not only motivated to help boost their teammates, but who were also motivated by their interest in gaining experience and knowledge of how different people were configured and what things might help in different situations.
Early on, there were attempts at assigning rank to trainers as they gained skill, e.g. by essentially having them earn merit badges once they had worked with at least 10 different people for a total of 100 hours in a single modality (maybe sometimes measured by how many issues they’d resolved?). At some point someone organized a sort of trainer bootcamp with lessons and challenges, with some people receiving points and certificates and things. Trainers could use their techniques to help others, but individuals would often also study or develop different training practices with the goal of applying the skill to themselves, rather than assisting others. Over time we were using so many techniques that skilled trainers were sometimes distinguished by which techniques they were primarily using (often highlighting the newer/more cutting-edge ones or the ones that they were developing by studying with internal or external practitioners). People would sometimes gain a reputation for particular types of skill, such that they would be brought in to help crack particularly difficult cases. This reputation is also what eventually led to some more experienced trainers emerging with their own set of people working on developing training skill under their tutelage.
Over the course of time I personally had more than a half-dozen trainers (starting in late-2016 after about a year of participating in piecemeal workshops as we were experimenting with more systematic skill acquisition through the development of Paradigm training). Trainers usually approached me with the offer to take me on (partially to help me and partially to gain experience in whatever technique they were practicing) and made a personal commitment, sometimes for a set period of time so that I could count on their continued support. Often I would have only one or possibly two working with me, maybe meeting once or twice a week. At one point I had three or four working with me solo or in pairs (sometimes a more experienced trainer overseeing a novice, or two more experienced trainers with different skill sets working in tandem). At the peak, when we were trying to find a way through some of the really difficult interpersonal issues immediately prior to the dissolution of the project, they were meeting with me multiple times a week and then coordinating with me and my other trainers via slack or shared documents, tracking what progress we’d made in the discovery or resolution of issues, and what our plans were for what to tackle next.
It was fine working with lots of different people and approaches, especially in the context of them wanting to help, but in some cases trainers stopped working with me in a way that was pretty hurtful. And if there were issues you couldn’t figure out how to resolve on your own, it could be stressful being reliant on training relationships with your peers.
This could be exacerbated if e.g.:
- You didn’t have visibility into the issues yourself.
- The issues were implied to be barriers to being viewed as a good collaborator.
- You had a complex personal relationship with your trainer.
This was an imperfect system run by imperfect humans, and I will discuss some of the pitfalls of our training relationships later on.
In this new era, as Paradigm was getting started in early 2016, we were all under more natural pressure/direction from the evidence that we had accumulated about what research avenues would move us forward in the development of our training, and we also had more concrete external expectations because instead of donors just looking for research progress, we had investors who also expected eventual returns.
I think there were growing pains for people who weren’t adept at what we’d ended up focusing on (but who wanted to continue being part of our group and our mission) and who also weren’t able or willing to contribute in other ways (e.g. fundraising or recruiting or day-to-day operations).
But up until this point, we’d never had to cut anyone from the team, and we hadn’t built out a model for how to handle people no longer having a productive and valued role. As a flat organization essentially composed of a bunch of independent leader-types, we had generally just individually pivoted on our own, and it wasn’t clear what to do if someone didn’t find something useful to pivot to.
Firing people is hard even under normal circumstances. You don’t want to signal to your friends and teammates (who have sometimes been in the trenches with you for years) that they don’t have value or that they have no place in the project, but you also want to use your resources in the most effective way that you can. You also don’t want people pushing themselves too hard in order to be viewed as productive, just to keep being “part of the team.” As an attempt to address this, we separated out being funded from being part of the project; so that even if it didn’t make sense to use limited financial resources to continue paying you, you wouldn’t be cut off from the continued research progress or the social connections that you’d formed. For some obscure reason (and probably not thinking that we’d be writing about this publicly in the future), we called this the “no rabbit left behind” policy, and hoped that by making clear that you could still be a part of the bubble of community and ideas, people who had started to come to depend on being part of that social group or receiving training as part of their life plan would have less fear of the scenario where they weren’t performing well enough to warrant a paid position. And we also set aside funding to provide people with 3 months of runway, to make the defunding transition (or decision to leave, if that’s what they determined was best for them) easier.
But defunding people is not that much easier than firing people, and to complicate things, there had been a few cases over the years where people were relatively unproductive for long periods of time and then came out of their rut and became top contributors (as well as the other way around), so it made decisions like this more fraught. In addition to not wanting people to feel demoralized, we also didn’t want to give up on someone’s ability to turn things around. So, we actually didn’t defund people very often at all.
Despite having this policy and the relative infrequency of defunding people compared to any normal company, there was still a high degree of anxiety over being defunded for many on the project. It turned out that being told you weren’t good enough to move the project forward but would still be allowed to hang around in your free time was not an acceptable outcome for most people. Looking back, that makes a lot of sense: you’ve joined a group that is trying to solve big problems in the world, and so naturally status is allocated based on who seems to be leading the way in making that actually happen. The rest of society doesn’t recognize your group’s value since it’s so fringe, so you double down on the idea that the group’s internal status hierarchy is the one that matters, instead of the one that says that what’s important is to make a lot of money or hold a high position in the government or go to a good university or raise a family. But then that group puts you at the bottom of their status hierarchy. Not fun.
Another thing that changed after we completed our fundraise is that we finally had enough money that we could budget for reasonable salaries, and we didn’t need to share/provide housing anymore. So we encouraged people to move out of the central 13-bedroom building and get their own apartments nearby (which became an expectation for all new staff after completing their initial 3-month trial period), and we were also able to acquire dedicated office space (we literally had a party where we all just unboxed office supplies like it was Christmas).
At the time it felt like a real boon, but I’ve since wondered if it was actually bad to have a more normal-seeming work environment without changing our underlying outside-the-box ethos. I think as the project progressed, these more standard professional pieces made it harder for individuals to clearly keep track of what we were doing, what our priorities were, what our constraints were, and even where the line was for what part of people’s lives the project should take responsibility for.* It also made it easier for people to join the team who had a lower personal tolerance for weirdness than the people who joined in the early years when most of us were sharing rooms and potential recruits would sometimes just end up sleeping on the floor in one of the common areas during their visits.
* I think this type of environment-induced confusion about what should be someone’s personal responsibility isn’t that uncommon -- the fact that health insurance in the US is often tied to people’s jobs is a strange artifact of history, but it changes the relationship that people have with their employers, who I’d argue shouldn’t be the ones managing the physical health or personal financial decision-making or risk management of the people they employ.
I don’t know that we ever really had much of an “employer/employee” relationship on the team, but some of the same principles apply: having a live-work environment muddies the waters, as does providing catered meals and cars for people to use and access to a concierge doctor and personal finance workshops and fitness classes and tax return studyhalls… in my mind, I was just helping the group in ways it seemed they would benefit from, using economies of scale when possible, keeping people healthy, allowing people to worry less about these life necessities and instead spend their time in ways that seemed more useful to them. But while I think the individuals in our group were less likely to subscribe to the default assumptions of work-related social norms and structures, I think when they did encounter things that didn’t fit their expectations, it was more jarring for them, maybe because they hadn’t yet built out their own framework for how to think about what parts of their lives were by default theirs to manage (with these extras just making things easier) and what parts of their lives they had maybe inadvertently delegated responsibility for to the project (and by extension, the leaders within the project). (If you’ve become accustomed to having lots of things provided for you, I think it can be hard to navigate the cases where those things aren’t actually what’s best for you: it might feel like you need to take on the daunting task of convincing the entire group (or the leaders within the group) to change how they’re approaching something, rather than simply building out your own personal solution.)
And I think this part took a toll on me as well, as I was often trying to strike a balance on my own, or with my small ops team, and there weren’t structural pieces to allow us to clearly define/defend where our areas of responsibility (and assumed authority) started and stopped. As I mentioned before, navigating the complexities of a live-work environment with a diverse group of people can be really difficult. Some people want to forge their own path and not coordinate with anyone and some people are very happy to have the vast majority of the logistics and system design handled for them. Some people want to do all their own cooking, some people are happy to find a fridge full of prepared food. Some people want to leave the kitchen dirty until the end of the day, while some people may find that unacceptable and end up cleaning up after other people every day when they break for lunch. Some people want to be taken care of. Some people don’t want to be “momed.” Some people are happy as long as they’re able to focus on their work, some are fine as long as everyone’s treated equally, and some need to be sure they’re getting special treatment. And some people find any kind of intervention to be tyranny.
I’m someone who tries very hard to have things go well, cares a lot about fairness (probably more than would be ideal--it’s often an easier path to spend extra resources on the squeaky wheels), and when there isn’t enough to go around, I tend to make trades that cut against my time/energy/resources as the easiest puzzle piece to move (or reshape) in order to get things to fit. This is partially in an effort to be helpful and minimize dustups, but is also a (relatively subconscious) strategy to make sure that there’s no question that I’m pulling my weight -- I’ve found that it can be easier to convince people that you are cheap to keep around than it is to convince them that the things that you do are important (but if you yourself believe them to be crucial, you may have to lighten one side of the scale to make sure you can keep playing your role), which is one reason that I’ve spent as much of my career as a volunteer as I have. Money is a useful tool, but it also confuses things -- I am lucky enough to have spent most, if not all, of my working life doing things that I would continue to do even if I weren’t paid for it, but I’ve been quite surprised by how differently people treat you if they think you’re being paid by someone for your work--maybe being less sure about your motivations? I work best as part of a team that works together closely, giving lots of support and feedback and encouragement and gratitude -- this particular environment was a pretty constant struggle for me.
I think my concept of “team” causes me to pitch in a lot more than people who see themselves either as pursuing their own independent goals, or who see the world through a more transactional lens (I find both categories tend to do a pretty bad job of estimating how much other people or systems are helping them—like the people who ignore the benefits they get from interstate highways or a literate population or medical advances or the Internet when making claims about what they’ve done “all on their own.”)
While I suppose my most concrete responsibilities lay in managing the financial and legal aspects of the relevant entities and also in overseeing the ops team, because of my “teaminess” attitude, I ended up wearing many hats. People would come to me with health concerns (sometimes to be connected to the team doctor but often just because they knew I could help them triage (I don’t have medical training, I’m just the kind of person who accumulates a lot of practical knowledge about common things that go wrong and have to be dealt with)). When we had international students who wanted to intern with us, I served as their fiscal sponsor on their visa applications. In the early years, if people didn’t have the cash to cover travel to visit their partners in other countries awaiting visas, I loaned them money to cover their flights (and I’m still not quite ready to write off the loans that haven’t been paid back yet). For one person who needed a dependable representative, I took on the responsibility for their family trust. One member of our team was deaf, and we would use my phone number on forms so that I could answer calls for them.
I’d also be called in if someone broke a glass* or spilled wine on themselves at an event or if people couldn’t figure out how to get the surround sound to work when switching from the video game console to projecting a movie from their laptop. Or if someone had ripped the only pair of jeans they felt comfortable in and needed me to mend them. And some of my favorite incidents involved being brought in to find a stubbornly lost object (always a crowd pleaser).
This might be a good illustrative example in case you or others are either thinking of this as an easy task or thinking that most people have this level of concrete understanding and will to execute — it actually takes a lot of skill to manage cleanup in a situation like this without risking anyone getting hurt in the process or afterwards:
- Glass shards can spread way further than most people assume—15’ maybe? If the glass was dropped in the kitchen, there will likely be a few shards in the far corners of the neighboring room, which might get stepped on immediately or might get stepped on the next day by someone completely unsuspecting.
- If people try to help (which they definitely will), someone is almost guaranteed to cut themselves or acquire a glass splinter.
- If you don’t have a shoe rule, some people are in immediate danger of stepping on a (potentially almost invisible) piece of glass.
- Getting glass embedded in your skin is worse than other types of splinters because (1) it’s pretty much invisible and (2) the glass is often sharp enough that it might work its way into your foot, rather than out, especially under pressure from walking around as usual.
- Even with my aura of authority, I have frequently ended up needing to switch into first aid mode after explicitly asking guests at a party to stay where they are and to *not* attempt to pick up the glass pieces while I go get the proper tools.
- It is very difficult for a well-meaning human to stand idle when there are large pieces of glass on the floor that they feel perfectly capable of collecting. But then by the time I return, someone has a cut (that they’re embarrassed about and will maybe try to hide from me).
- Practically: I recommend a Swiffer with a couple of paper towels installed to do a first pass and get all the big pieces into a dustpan (you can also pick up the big pieces with gloves, or layer a few paper towels and push them around the floor with your hands), and then a full second sweep of the areas with a fresh couple of paper towels that are damp, throwing them away instead of rinsing and repeating like you might normally.
It’s totally possible to me that for some, it’s not worth worrying about the tail end of shards that made it into the other room, or the ones that might be caught on the bottoms of people’s shoes, or the ones that wouldn’t be picked up without the damp paper towel stage.
But this is maybe a good example of the way that I care *a lot* about being able to maintain an environment for people to live and work where they don’t need to worry about the possibility of there being tiny shards of glass on the floor or in the carpet. And not everyone is going to agree that that should be a high priority.
Maybe you think that’s an appropriate standard, but consider that 40+ people would all need to have these models and this level of dedication, and would need to be willing to take the social hit of being “that guy” at an event where a guest drops a glass and everything needs to be put on pause while the glass is completely cleared from the area.
We encouraged people to handle issues that they were well-equipped to deal with, and we encouraged people to rely on those of us with more skill and experience when the circumstance warranted. If I weren’t around, it would be fine for someone to do their best to clean up carefully, but I would want them to send a quick note to our slack ops bot (the “Ops Fox”) or to our dedicated email, so that one of us would be aware that there had been an incident that might need our team’s attention in the morning.
In general I’m happy to help and I recognize that I had a lot of qualities that made it make sense to ask for my help or for me to offer it to cause better outcomes. But as the team grew, the load of all these small extra things became overwhelming — I eventually had to put a sign on my bedroom/office door to redirect visitors to slack or other channels because people would knock more frequently than I could answer and then without a follow-up, I’d sit there in a meeting worrying that maybe someone was hurt, when maybe the person just couldn’t find their favorite cereal in the pantry. (And of course there were always the more critical diverse tasks of helping with planning and strategy and recruiting and project management and training senior staff and supporting new entities as the project diversified, but those were less obviously tasks that could’ve been handled by any number of other people).
I can only really speak to my own experience and struggles, but in observing startup environments over the last decade, I think that this might be a common type of issue: when people have useful skills, it’s tempting to call on them and it almost feels unnatural to exclude them from a situation they’d be well-suited to resolve. In teams, without a clear division of labor or a strong culture of pitching in, people can get overwhelmed pretty quickly. Unless you’re working closely with someone, or overseeing their workload, it can be hard to track how much bandwidth they have available. When you’re in a group, it can be easy to default to modeling their busyness only by tracking how often you yourself are helped by them, rather than considering what happens if you multiply that by the number of people on the team. If you don’t put effort into keeping the group synced up on how full people’s plates are, that can easily lead to people in higher skill or leadership positions being overburdened in a way that might be opaque to their teammates. Without this background visibility/understanding, from each individual’s perspective, they don’t have a reason to have an empathic or even sympathetic response to the ways that the people in leadership positions might be falling short. And of course it’s different for different people: maybe it’s problem solving, maybe it’s strategic advice, maybe it’s mediating conflicts, or helping someone with psychology work.
There’s a way in which people tend to glorify leadership positions when in reality those people are essentially in a customer service role but with the added pressure of being the ones responsible if things don’t go as planned (not just for the project, but for any individual whose plans didn’t work out within the project).
And I think you have to be careful about getting into a Giving Tree situation (where the tree gives to its friend until it’s just a stump), especially if you don’t factor in the uneven giving and receiving that can emerge if you have a team that accommodates quite varied levels of contribution to the whole. It’s also difficult to navigate when your project is running on a long time scale. If you’re helping someone with their psychology (which can be really draining work), you might not expect to see the kinds of results that would cause them to be able to lighten your load for a number of years. So I would predict that there was an extra layer of pain for the people who were more focused on helping others during their time with the project, with the hope that people (maybe in particular the ones who had spent their time and energy investing mostly in themselves or their own pods) would eventually be able and willing to give back in a way that would perhaps balance out the investments that some individuals had made in supporting the group at the expense of their own advancement.
There’s another thought that’s nearby about something like “the lack of gratitude” or “lack of acknowledgement.”
There might be something broader happening here where certain types of work aren’t valued in the same way as other types. This is not quite handled by “low status work vs. high status work” but there’s overlap.
I expect this to broadly follow gendered stereotypes of men’s work and women’s work. And with those, the further cultural things built in like women needing to make their work seem effortless. So it ends up being a lot of work that people who *don’t* do it can’t really recognize or estimate the cost of on their own, and then even if they do look, the amount of effort that goes into that work is also downplayed by the ones doing it.
Despite the almost constant overwhelm, there were always others also trying to keep the whole thing afloat, and I was also fortunate to often have one or two teammates cut from the same cloth to help with the more concrete parts of the project. During times that we could spare the funding, we also made a number of skill hires to create a physical ops team (who were selected on a completely separate set of recruitment criteria and considered to be regular professional staff with professional standards, a strict management hierarchy, a traditional 9-5 M-F work week, a no-fraternization policy, etc.). They helped manage things like food and facilities, and I’m still proud of the zillions of solutions we came up with over the years. Tile trackers for the car keys so we could find them even if someone left them in their jacket pocket instead of putting them back, car training videos and tests for being cleared to safely drive the cars, recycling bins with cutouts to make it hard(er) to put trash in, an infinite supply of forks and plates so that we wouldn’t run out even if people hoarded dishes in their rooms or offices and/or never ran the dishwasher (turns out that a surprising number of coordination problems can be solved by just getting an enormous quantity of practical objects), gym equipment safety training and calendar reservation systems, installing keyless entry to handle people losing keys to the building, etc. etc.
But it was rough going. And while I’m sure the ops people of the world are shaking their heads in recognition, I think the level of difficulty here was much higher than normal because of our relatively extreme diversity coupled with a lack of hierarchy and there never being a sense that your “job” or your “status” in the group would ever be affected by how cooperative you were. Other than environmental solutions, the only tool we had was our own understanding of human motivation and persuasion (which in my case is largely limited to trying to explain things to people and eventually including an explanation of how bad it felt personally to spend my Sunday mornings -- my only scheduled time-off -- cleaning up after other people before being able to make breakfast in my own kitchen).
(And like, clearly I don’t really think that people should feel oppressed by being asked to store personal food in a different fridge than communal food, but I can’t just make their upsetness go away by pointing out that I think it’s unreasonable or irrational, and I also can’t make them feel like I fully approve of them when I really wish they would change their perspective, and so disharmony ensues).
I realize that some of you may feel like this is off-topic. It’s maybe airing grievances, which you were hoping for, but it’s not sensational in any way and it’s not directly addressing any of the claims that have been made about Leverage. But here’s the thing: while it was very hard for me to spend years surrounded by people who fell short of my expectations in this relatively practical way, I don’t think it was just about the messes or forgetfulness. I was getting evidence that this group of people who I was relying on to level-up and discover important truths and work hard to build up to a better world were nowhere near as conscientious as I needed them to be. And this was upsetting not just because of the way that it impacted my Sunday morning routine, but because of the larger implications for my plans. And in a similar way, I’m sure that other people were threatened by the fact that I cared so much about how they acted and how they did or didn’t factor in other people to how they navigated our environment, because that didn’t fit in with their plans. So in thinking about what happened, I think it’s important to consider these ever-present background stressors -- not as something that was created intentionally to mold the group in some nefarious way, but just as a natural consequence of us all being very different and codependent. There was not really an option of just ignoring the other people the way you may do in a normal workplace where most people aren’t all that relevant to you in the big picture.
I should also note that obviously the disagreements weren’t restricted to the mundane -- I think that’s the easiest to talk about because it’s something we’ve all experienced. But these disagreements extended to all sorts of different parts of the project (even for people who hadn’t made explicit plans, they had implicit ones that would get messed up by someone else’s conflicting action). For example, some people believed that bringing more people on would ultimately lower their own burden by distributing the load while others were concerned with the drain on financial and training resources.
On many dimensions the intellectual and social environment was rich and rewarding with interesting projects and research, fun events, insightful people, etc., but there was a lot going on, and it was also often tense/intense.
Expertise assessment and attribution
I’ve seen claims of something like Geoff as a gate-keeper for what theories were seriously considered or adopted by the rest of the group. In fact, many people in the Leverage ecosystem developed novel theories and practices that were utilized by their pods or by others within the broader group, though I do think that Geoff’s assessment was taken seriously and his endorsement of a new way of thinking about something was more likely to propagate within the group, in part due to his willingness to put effort into creating presentations and documentation of new content.
I can see how that kind of weighted opinion might be a negative feature or even a flag in groups where everyone has roughly the same abilities, or perhaps in groups where everyone’s views are treated as equal, regardless of the subject matter. By the end of 2015, we had largely embraced and propagated a shared understanding of the importance of assessing people’s levels of expertise, and being more (or less) willing to defer to them in each area depending on their expertise level. So while we had started out as a flat organization, things shifted somewhat to account for people’s areas of skill/knowledge/models (though not everyone was good at assessing these and we struggled with deferring to others, so there was still plenty of contention).
So in our case, the deference to Geoff in the area of theorizing was a natural consequence of Geoff being recognizably much more experienced and skilled at developing theories than most others. (Just as people would naturally defer to me in the areas of legal compliance or emergency preparedness or event planning). While he does have blindspots and areas where he is stronger or weaker, I think his expertise is real. In practice, it meant that you had to be willing to take on a new area of study (or, in some cases, arrive with knowledge of a new area), get into the weeds, set up experiments or practices, try your best to figure it out, even develop good new material, and potentially then have Geoff show up, start working in the area, and spit out his own theory before you’d gotten there. For the overall project progress, this was probably good, but for individuals trying to get recognition for their skill and efforts, it probably often felt bad. (And as certain people leveled up, or arrived with some level of background experience, it probably felt somewhat worse because it wasn’t as clear to them whether they should be deferring to him or even inviting him to poke around their niche area.)
I often think of a quote from Harry Truman: “It is amazing what you can accomplish if you do not care who gets the credit.” Even as a behind-the-scenes person, I find myself hesitating sometimes when one path will go on my record and another path might work better or faster, but someone else would get the credit or people wouldn’t see that I had put in effort. And I admire the people I work with when I see them push past that and just cleanly do the more effective thing. But I think that’s easier when you’re confident that your position isn’t on the line and/or your reputation is more closely tied to the actual success of the whole thing rather than just your bit and/or you’re so productive that any particular marginal credit won’t tip the scales.
Unfortunately, I don’t think our environment was good for giving people a sense of being socially or professionally(?) secure. Expertise assessment is difficult and I think there were complicated factors that made people reluctant to publicly acknowledge other people’s skill or effort or contributions. One issue is that I think Geoff had strong beliefs about the importance of attributing credit for important advances (of any type that moved the project forward: hiring someone excellent, acquiring office space, developing a procedure for shortening feedback loops, etc.), but did not give partial credit for effort and didn’t always agree with others about which advances were important or even who should get the credit. Another problem was that often when people develop skill, they do it in an uneven way, so you might be doing quite well in one specific area, but be an (accidental) bull in a china shop in another, causing people to be threatened by the idea of you gaining status or authority in a more general way. And because of our nonstandard loosely merit-based org structure, most people couldn’t just rely on their job title or their recognized seniority to maintain their status.
Compounding these insecurities, skilled trainers would often build out very precise models of someone’s mental blocks, and the trainer might then become pretty resigned to a person never developing those complementary areas (at least given the current level of skill that that particular trainer had achieved). This could cause trainers to develop a negative view of the person or not want to invest in the person or even want to marginalize the person (e.g. if they came to believe that the person’s blocks were going to prevent them from being compatible with the trainer’s plans within the project). It’s pretty hard to defend yourself in a circumstance like that, and I think these ambient judgments heightened the pressure to be seen developing interesting and useful theories (even if you were never explicitly told that someone had concerns about you).
So let’s say you have someone trying very hard in the area where they’re getting traction, perhaps in the area that seems most important to them: while I think sometimes it would be exciting to have Geoff take an interest in their work as they explained what they understood and what they had learned thus far (Geoff is generally down to try to learn from anyone or anything), if they weren’t really secure in their abilities and their status, it could end up feeling threatening. And not just threatening because Geoff might now have his own theory that might differ from that person’s, but because their special area of contribution might feel diminished.
I also think there was a related problem in this area: if you believed that something was important, e.g. gratitude, but you hadn’t proven yourself to have special insight in the eyes of your peers, it might be very challenging to get others to take the issue seriously. Whereas if Geoff determined that gratitude was key to some part of his theory on group dynamics, he might have a much easier time convincing the group to put effort into the area, partially due to his track record in coming up with coherent theories, and partially due to his influence with key members of the group. And so if you wanted people to pay attention to something, your best option might be to try to convince Geoff to move it up in his queue, and that could be frustrating.
A diverse group splits up into diverse groups
This next bit is my read on a bunch of dynamics that I wasn’t as directly a part of and I don’t really know how well my perspective matches the reality of the situation.
As I’ve mentioned before, we didn’t filter for cultural sameness, so we ended up with determined people coming from very different places and aiming at very different versions of world-improvement. At this point, in addition to the core work on training and effectiveness, we had a thriving sociology research program as well as a handful of other independent and small-group endeavors, led by diverse and strong-willed individuals.
I think Geoff’s presence was the only thing that allowed that to happen. His acceptance of them and their perspective plus his ability to defend them within the group (not just by being the one “in charge,” but by making sound arguments) caused something like an uneasy truce. Many people who would never have imagined collaborating with each other were willing to work under the same roof, basically with the idea that they would be able to convince Geoff to step in if any individual or pod started doing something that they thought might pose a threat to the project or might have strong negative effects for other people within the group.
As individual researchers gained skill and knowledge, they also increased their ability to recruit other people from within the group to pursue their line of study. Simultaneously, their reliance on Geoff, both for direction/approval and defense, lessened. And because Geoff is in fact difficult to sync up with (though very rewarding if you’re willing/able to put in the effort), their plans increasingly diverged from his and they stopped coordinating as closely with him. But I think this also took them further from the umbrella protection of Geoff’s neutral zone, which then led them to be in a more adversarial relation to other groups pursuing other avenues and I think increased their overall sense of threat.
Interestingly, instead of trying to squash or marginalize these new pods that were emerging on the outskirts of the project, Geoff had their leaders name their groups and he formalized their existence by displaying them on our equivalent of an org chart. Before long, this structure led to a new development where everyone in the project needed to be within one of these subgroups or they would need approval from Geoff to run an independent solo project (which was also represented on the org chart). He also singled out the emerging leaders and invited them to a new weekly meeting to help maintain coordination and keep the peace.
Every month, Leadership Team (as this new set of group leaders was called) ran through a list of everyone on staff both to make any adjustments in the warning/defunding system based on their group leader’s assessment of their trajectory,* but also to make sure everyone was doing ok and to check if there was anything any of the other group leaders could offer to help.
* During the time that we used this system, “grade inflation” was rampant – group leaders were very reluctant to indicate that any of their people were reaching a status worth monitoring. I think this was understandable: they cared about the people in their groups and they didn’t want to risk losing these people who they’d invested so much in. But I think it sometimes had the unfortunate side effect of keeping individuals around who maybe would’ve been better served by moving on, even if it was painful.
These groups/subgroups have been sometimes referred to as factions, but I’m not sure whether it makes sense to define them by their disagreement with the larger group, especially as new ones were formed over time -- most of them continued productively as separate endeavors after the dissolution of the Leverage ecosystem, some continuing their research, some as growing startups or boutique consulting firms, some creating and distributing interesting intellectual content, and some disbanding and/or providing services one-on-one. But something important to note is that the leaders of these groups had little managerial experience, if they had any at all. They had joined Leverage or Paradigm originally as individuals wanting to improve the world and over time they built skills in their subject area and they’d likely led small experiments, maybe even by coordinating people in their pods, but I think they would all call themselves managerial n00bs. They had not been hired into management positions nor had they been promoted into management positions—they just found themselves needing to support the people who had chosen to come follow their research path/be part of their pod.
I remember at some point an advisor telling us that she’d heard that for every negative piece of feedback, you need to give five pieces of positive feedback because of the outsized effect of negative updates. I don’t know that anything like that would’ve been possible in a project where people were so dedicated to rooting out flaws in themselves and others and in the project itself. I think this is partially because of the trained focus of the group leaders’ attention on mistakes in thought and action and partially due to the defensiveness that comes from being in a low-grade antagonistic environment and being simultaneously faced with your own internal and social shortcomings without being certain that you can fix them.
In addition to the lack of seasoned managers, as I described back in the Leverage section, we didn’t have a pool of people who themselves had much experience in the working world, so many of them also lacked skills in navigating different working environments and managerial styles.
People talk a lot about building leadership skills; you don’t hear as much about the importance of cultivating “followership.” But trust me, it’s a crucial skill set, especially in situations where coordination is hard and the leaders/managers are inexperienced.
As time went on, there was a further push to give these subgroups more autonomy. It was clear that the dependence on centralized systems for hiring and funding was leading to a high coordination burden, and increasingly unproductive tension over time as group leaders became more invested in their own projects and were either unwilling or unable to devote time and resources to the central functioning of the larger group. It was also difficult to continue raising donations and investments to pay for the growing project. So, in an effort to make the project more economically resilient and give parts of it more control over their own destiny, subgroups were encouraged to look for ways to self-fund and directly support their members.
By the end, as the groups gained marketable skills and started taking on paid clients (a few of them establishing their own separate legal entities), they also needed to start making their own budgeting decisions, outside of the more centralized funding and defunding system. As they became more independent, the groups had to factor in not only who was on track to do high quality research but who was on track to providing clients with high quality services in a way that would support the group’s work.
These dynamics are hard even under normal conditions. Add that in many cases the topic of study is your own mind and the test is to see what intentional positive changes you can make, and also that success can potentially cause you to gain status and failure can cause you to be judged to be too mentally stuck to be worth investing more time/energy/resources in, and you can see how things could go off the rails for people with certain starting mindsets. If all involved parties are consenting adults who are engaging in the activity out of a shared desire to make progress towards one another’s goals (and they want to stick it out together rather than cutting their losses and moving on), it’s not obvious to me how to fix that dynamic except through learning and revision.
How it ended
I almost didn’t include this section, because it’s still pretty confusing for me. I hope that others who were more involved will be able to do a better job of telling this part of the story.
A few things stand out:
- In 2018 the psychology researchers discovered new phenomena that had occasional strong negative effects on people in and around the project. There was a scramble to better understand the phenomena, to determine whether the effects were volitional, and to mitigate the effects, but it quickly became a source of conflict.
- These discoveries, now referred to as “intention research”:
- caused some people to lower their overall assessment of people’s benevolence.
- affected people’s plans for their research as well as their plans for coordination with others.
- resulted in the generation of what some believed to be more effective psychological tools and practices.
- required a heroic amount of effort from trainers who were already stretched thin.
- resulted in novel work-in-progress models that people used to make judgments of others.
- In some ways they wanted to forge their own path without Geoff getting in their way.
- In some ways they wanted Geoff to ensure that everything was going well for them.
- IIRC Geoff himself hadn’t really wanted to be the leader of the project overall (while he was the original founder, in the early days he was casting around for someone else to take the leadership position, and he ended up taking on the role due to the lack of a better option) and he was burned out.
- I’m not sure of the details anymore, but I believe all Leverage employees were let go, with two fresh hires coming onboard under the new professional structure/academic focus, and I believe fewer than ten people stayed on at Paradigm: a couple people to help with admin and otherwise drawing primarily from a single subgroup that had been more coordinated with Geoff.
- (I’m uncertain whether Geoff understood the full extent to which this would actually end the project as we knew it).
* FWIW, I believe that Geoff understood that his personal flaws would be reflected in the group, because of his leadership position, and this was a big reason he dedicated so much time and energy trying to improve himself (which, ironically, might’ve contributed to others then pushing themselves too hard).
What came out of it
An astounding amount was discovered through our research, both from our researchers as well as from the experiment of running the project itself. I’ve considered what to say here, and I think that beyond what’s necessary to understand my perspective, there’s not really a way for me to summarize our research or even make a consolidated list of our discoveries, and even if I could, I’m not prepared to put them in context or explain their significance or defend them (though there is a handful of descriptions scattered throughout this post). But I do hope that at some point, if they determine that it’s prudent and can figure out how to convey their findings (not trivial for this kind of research), other people from the Leverage ecosystem will end up sharing more of the specifics of what was discovered and developed.
One thought I’ve had is whether someone could try to compile a library of titles of workshops/presentations and documents prepared by people from the Leverage ecosystem (working documents, not just ones that were prepared for publication), potentially with attribution (assuming we get to the point where it makes practical sense to be affiliated with the project). Some might be jargony and hard to parse, but that type of thing might give a more concrete window into the breadth and depth of work that we were doing and would also give people a sense of who they could reach out to in case they were going to interact with similar research subjects themselves.
The project ultimately shut down before we were able to start launching initiatives, but I think I’d be remiss in not explicitly pointing out that all this effectiveness research and training was building up to causing effects out in the world.
We spent many years in preparation mode, building capacity and trying to see whether we could get into a position to effectively do things with real-world impacts — we were often criticized for being too meta, for spending too much time learning and preparing, but we did have concrete plans to run campaigns in areas we thought would be worth putting effort into, scaling up over time (I personally thought our timelines might be too fast, but I’m a pretty cautious person).
A lot of people on the project were counting on our expanded network of alumni to eventually run large scale efforts to address AI risk (maybe via regulation, education, mediation, etc.), or to run massive anti-aging research programs — I share these two as easy-to-understand (for our surrounding community) examples, but there were many other visions of what to go after as well.
Because the plug was pulled, we never got to try to go do any of those things, which would’ve been much more recognizable than the fringe research-based things that we had been primarily focused on. But some of our alumni are already many years into their first projects and I expect many others to go on to do notable and worthwhile things.
Early basic research is very difficult to appreciate from the outside
I’m not a historian, but it seems to me that many scientific advancements aren’t legible until that new knowledge has been applied to the development of concrete technologies that have reached mass adoption. So unless you are also an early-stage psychology researcher, you may have a difficult time parsing the research results even if they are painstakingly shared with you.
I expect that it was hard for people to calibrate their expectations, given that legible results from our basic research would rarely emerge within the first 5-10 years (not because we were trying to hide our work, but just because that’s how basic research works — I think it’s not literally impossible to convey, but it’s importantly difficult).
Belief Reporting as an example:
This particular introspective discovery/technique can be understood, taught, and used by many people in a way that makes it relatively easy to convey immediate value. (And after determining it to be safe, useful, and easy to learn, we actively tried to teach it to interested people in the broader communities). It allows people to discover/check their underlying/gut-level beliefs more reliably than many other introspective techniques and I think people have gotten a lot of value out of adding it to their arsenals.
But importantly, while we also used it internally to great effect in uncovering underlying beliefs, Belief Reporting’s significance as a research breakthrough wasn’t just as a useful tool for introspection.
The importance for us is that prior to its development, when we made specific belief changes, we would have to wait for evidence that the belief change had held (e.g. waiting to see if the person stopped smoking or if they stopped procrastinating). Once we gained the ability to directly check beliefs, we didn’t need to “wait-and-see” — we could just verify that specific belief change and move on. (And if the habit or behavior did resurface, we could quickly check to see whether the belief in question had reverted, or if it had held and something else had shifted.)*
For making research progress, this significantly shortened our feedback loops when making changes to belief structures and testing new techniques and theories.
* Note: please don’t take this tl;dr as an indication that we placed an inappropriate amount of faith in the accuracy of Belief Reporting – this technique was calibrated over time as we used it in conjunction with other methods of checking/verifying changes, and even in the later years, was constantly scrutinized by trainers (who spent a great deal of time and effort building out nuanced models for how Belief Reporting can go wrong and how to detect the probability that that’s happening in any particular instance).
So while Belief Reporting is something that the broader community appreciated and gave us credit for, most people didn’t understand the way it dramatically shortened our timelines on psychological research progress, so I wouldn’t expect them to see it as the important advance that it was. They would just put it on a short list of “valuable things Leverage has produced,” without realizing what it meant about our expected increased rate of progress.
And that’s with something that lay-people could actually basically understand, which was quite rare for most of our advances. If Belief Reporting hadn’t been so user-friendly, or if it hadn’t had the side-benefit of helping individuals interested in self-improvement identify issues, I don’t think it would have been recognized in these nearby communities as something of note. So imagine that we had many important updates like that, that furthered our progress, but that weren’t tools we could easily teach or explain to people outside our research program.
Intermission - end of section I
You’ve likely been sitting in one place for a while – there’s a saying in the ergonomics community: “the best position is your next position” – I highly recommend you get up and do a quick stretch and maybe get some water and/or a snack.
I know a lot of this is a little heavy, so if you want a pick-me-up, you can watch my favorite music video
(It's a trailer/promo for a full-length documentary of a bunch of artists who hiked the full length of the John Muir Trail together over 25 days, absorbing it and reflecting it back as art. But I think this is actually better than the film. And the song is great. And the idea of being out on the trail with friends, and toughing it out together – quite excellent)
or, difficult dynamics we had to deal with
I don’t know if this section will interface well with the people still reading, but I would encourage people to be in a mindset of learning rather than trying to gain evidence for some agenda you have. And again, these are just the thoughts that arose from seeing some of the public discussion around the Leverage ecosystem, so I’m going to spend a disproportionate amount of time looking at the more challenging parts of what we did (I’ll say more about this unfortunate dynamic in a later section on the risks of sharing information). It is not meant to be a comprehensive account.
Also: as I share context and provide my read on the causes/contributing factors for various things, please keep in mind that
- I only had a limited view into other people’s experiences
- I believe quite strongly that explanations ≠ excuses
- I am not trying to (re)allocate blame – I think I have deep/fundamental confusions about what to do with blame given all the things I believe about the way that people act given their understanding of themselves and the world, so I’m generally trying to shift those understandings.
While I do use illustrative examples, I’m intentionally trying not to get into any personal conflict or ways that I was hurt by specific individuals (or ways that others were hurt by me), and I hope that we can keep this dialogue on the level of systems and circumstances that made it hard for individuals and the group overall to succeed, so that lessons can be learned. Maybe apologies or reparations to specific people are due, but I don’t believe that the public Internet is the venue where we should hash those out.
If you think I’ve misrepresented my part of the story or your own part of the story in a way that will cause the reader to arrive at a false conclusion, please reach out to me privately and I will do my best to resolve the issue.
A relevant piece of background context for the following section:
Zoe Curzi was a member of the Leverage ecosystem from early 2017 (about a year after Paradigm was founded) until the dissolution in mid-2019 and recently posted about the highly traumatic impact it had on her and her interpretation of why that happened. In the remainder of this post, I’m going to refer to Zoe’s experience and harms that she mentioned, so if you haven’t already, you may want to read her post for context.
It’s very difficult to figure out what to say and who to address. Zoe is my friend -- I got to know her in early 2016 by taking an intimate small-group evening class she offered (as a visiting fellow) to people in the Leverage ecosystem on breath and body awareness; for a while at some point the two of us had a standing date to play Just Dance 2 using the big conference room TV on Wednesday mornings before I started my workday; she often coordinated with me in planning parties/concerts/other events in the building and she’d also invite me to parties/gatherings at her house offsite; she is the kind of person who would notice when I was upset and would make an effort to try to comfort me. I was really sorry to learn how hard things have been for her since the dissolution. I care about her and I don’t want to say anything that might slow her recovery.
But she has made public claims with a framing that I disagree with pretty strongly and it seems worth putting in the effort to limit people’s confusion, given that I have as much context as I do. It also seems to me that if I don’t try to fix misconceptions/misunderstandings that I think naturally arise from her post, there are other people who I also care about who will likely suffer more than they would otherwise -- both because of the way neighboring communities might attempt to punish(?) them, as well as the lingering/lasting harms that might come from there being a negative valence cast on them by anyone in the future associating them with the Leverage/Paradigm projects (in the world where not enough people speak up about the dedicated earnest effort and positive/incredible/impressive things that were accomplished).
While it doesn’t surprise me that people have had a lot to work through since the dissolution -- as I’ve said, it’s been very hard for me personally; I have experienced almost all of the trauma symptoms that Zoe describes, and more that didn’t make it onto her list -- I have a different perspective on why that is and I think it’s worth the risk of trying to convey some of my thoughts, even if they cast doubt on some of Zoe’s assertions and conclusions which, as far as I can tell, rely mostly on strong narratives.
Many/most of the facts that Zoe describes match my memory (for the ones that I was aware of), and I think it’s a sign of her integrity that she included as many as she did. But I think the overall frame will give people the wrong idea about the Leverage ecosystem, especially because there is a general preexisting lack of context and knowledge about what our lives were like and what we were doing -- if you fill in the gaps with only what Zoe has shared, you might come away thinking that things were sinister and that the people involved should be shunned and ostracized.
Seems valuable to point out here:
If you have a bunch of weird(?) people experiment on their own minds and also each other, you would maybe imagine that could lead to bad effects and/or things might fall apart at some point. Perhaps this is why some people found Leverage to be a bad idea from the outset. Well, it took ~8 years (and we learned a lot in the process), but things did fall apart. We did know that going in though, and were aware that things might not work out (though I suppose people were also pretty committed to it working, and planning on that maybe more than they were planning on it falling apart quite so spectacularly).
Mental development was a known risk that we felt was worth exploring (and most of us as individuals were already exploring, prior to joining the group).* If this sounds like a bad idea to you, I get it — pretty sure this is one of those “don’t try this at home” situations. But again, I think it made sense to try and I also think it was a big part of what ultimately led to the unresolvable conflict and eventual breakup of the project.
* (I’ll say more on this later, but if people joined the project and either didn’t actually understand the risks or didn’t actually want to self-improve (especially if they directly participated in our psychology research), that was a failing that should’ve been recognized and remedied more quickly, at least by the people making hiring and funding decisions.)
What was hard
I imagine that there were probably a couple main types of hurt that people experienced: (1) direct hurt while they were striving and stressed and then (2) hurt afterwards from a combination of their plans having failed, their social and emotional support evaporating, and/or their feeling of rejection or having little value (maybe from not having been successful or from not being tended to in the aftermath or not being chosen by any of the half-dozen groups that did move forward as independent projects, or from no longer having access to the training support that they thought would make them valuable).
In hindsight, I think there were many contributing factors to people struggling to flourish in the environment, including a few pretty big pieces:
- There was the inherent difficulty of the task at hand: we didn't already have the skills and knowledge necessary to accomplish our goals, and significant self-improvement/psychological change is really difficult.
- It’s maybe impossible to have your whole self be onboard with finding and changing any given aspect of yourself (you’ve become the person who you are for reasons) and the process of getting yourself onboard or overriding some parts of yourself can be intense and exhausting. Add ever more powerful tools and techniques to the mix as well as the worry about your peers’ assessment of you and the possibility that your efforts could have a lasting impact on the world (or that you won’t be up to the task), and you’re setting yourself up for burnout.
- I also suspect that the structure/lack of structure (while meant to encourage the development of independent thinkers and allow for learning about orgs while not inadvertently importing bad standard practices around “work”) was challenging to navigate for many.
- In some ways hierarchical structures can be disempowering and bureaucratic, but there is also clarity: you know exactly what you’re supposed to do and whose priorities and judgment matter when evaluating your work, you have a chain of command to reach out to if there’s a problem with your peers, and you have someone to advocate for you with people who are higher up.
- It also has the benefit of allowing you to narrow your scope of concern and specialize, if you trust that there’s a structure in place that is guiding the project well.*
- Needing to be your own advocate, needing to keep tabs on the overall shifting strategy/plans/trajectory of the project, being uncertain of the expectations people have of you or your perceived value and the stability of your status...that’s also exhausting and draining and a recipe for burnout (even for highly skilled/senior people).
- For some people, the pieces of structure that they did interact with were pretty tangled, which might intensify the effects of conflict.
- E.g. if you chose to join one of the two psychology pods, you would be learning from the leader of that pod, who would also be training/debugging you and would also need to make assessments about your trajectory that could affect your funding status.
- The project didn’t weigh in on people’s personal lives -- there weren’t rules about who you could be friends with (or partners with) or who you could live with -- so in some cases, people would end up in pretty complicated layered relations with others, which sometimes went pretty well, but when it didn’t, it could cause stress on many fronts simultaneously.
- (I get that there are good reasons that professional workplaces sometimes have prohibitions against these types of things, and if we could’ve avoided these entanglements, I think we would’ve; it was just really unclear how we could’ve managed that given how many of us were friends.)
- And it was pretty crappy to have your (hard) work be too illegible for people outside the group to appreciate.
- Especially given that some people in nearby communities considered it a status hit to work with us.
* I should note that many people did in fact trust some combination of key staff and group leaders to keep things going and guide the project well, such that they could focus on their own particular projects or research. They weren’t concerned about whether we were hiring the right sorts of people, they weren’t worried about running out of funding, they weren’t worried about cultural problems, they weren’t worried that we might be focusing on the wrong areas – or if they were, they were willing to let others try to address those issues. I was so deeply involved in trying to identify and fix problems and course-correct that it was sometimes disorienting interacting with folks who were just cheerily sailing along (or at least grappling with issues that were less existential in nature).
I think for many, they had a largely positive experience in a really fantastic intellectual and social environment. The individuals we recruited were an amazing group of people: driven and fun and open to new ideas. People gained knowledge and skill and new perspectives on the world and built valuable collaborations and personal relationships – even a couple of (seemingly excellent) marriages came out of the group (and there’s at least one new human that’s been subsequently added to the world).
Because the seed of this post was in response to negative public framing, throughout this post I’m looking at things that were hard for people, but I think it’s important not to discount these other experiences when considering the whole.
And then, as I noted above, there were also many factors that made the dissolution and the aftermath difficult for people.
I cover some of this later in the discussion of things that might affect the interpretation of what happened to cause some of the ill-effects that I’ve experienced and that others have reported.
The causes of harm are an interesting topic for me because, as I mentioned at the start: I was largely crippled by the time the project wound down. Two and a half years after leaving, I’d say I’ve done a lot of healing and I’ve also been able to adapt my life to accommodate the various ways in which I still struggle. Time has probably helped more than anything, though I also made very calculated and intentional decisions about how and where and with whom I’ve lived and interacted since leaving, and I’ve received significant support from a couple close friends from the Leverage ecosystem.
But I wouldn’t expect my experience to be the norm, and in fact, it seems to be at least an order of magnitude worse than anything reported to me by any of the dozen people who I’ve had contact with.
So the question becomes: why did some people suffer more than others?
I read somewhere that roughly 1 in 3 people will develop PTSD after a traumatic event (a car accident, sexual assault, an unexpected death, etc.) -- there are lots of theories, but they don’t really know why.
Given that background, here’s some speculation:
I think it’s quite unlikely that Zoe and I happened to have suffered the same harms in the same way. My understanding is that she sees her trauma response as being the same as people who were damaged by their involvement in cults and is arguing that this must therefore have been a cult. Well, I think if she cast a wider net, she’d find a lot of people with the same symptoms who got there via a difficult home environment, work burnout (I was surprised to learn how bad this one has become), a failed relationship, or more extreme things like surviving a war (I don’t mean to minimize her suffering -- I think all of these things are shitty parts of life that some people end up having to deal with to extreme degrees and I think we should be trying hard to help prevent these harms and help people to heal). I do think that my trauma comes from my involvement in the project, but not due to any part of it that seems especially cult-related (though I don’t know much about cults or what definition she’s trying to apply, and it wouldn’t surprise me if there are similar (if less-intense) pitfalls with all attempts at groups of people trying to or needing to coordinate, e.g. activist groups or group houses or startups, especially those that are attempting something novel). It doesn’t seem like there are enough parallels in our experiences of the project itself to cause me to update towards the group’s existence being inherently bad and the direct (intentional?) cause of my suffering.
From concrete dynamics within the group: I was in a very different position within the project than Zoe; I had already been trying to make things go better for 5 years before she came aboard, I did almost no explicit work on my own mind using our tools, I was central/indispensable for keeping the legal entities in good standing, handling admin/bookkeeping/compliance/payroll, etc., I was part of a 3-person team making financial decisions and policies for the group, and I also oversaw our physical ops team. I was on the Leadership Team but otherwise had relatively minimal contact with the other group leaders (and their groups), and I spent a large percentage of my time and energy trying to help (and argue with) Geoff directly.
From life plans/paths: I guess I don’t know what Zoe’s basic life plans were before she encountered the project and what they switched to. What was going to cause her to be thought well of by society and what was going to cause the world to end up in a better state? How was she going to make friends and be a valued part of a community? But it also seems unlikely to me that her instrumental goals or plans would’ve been very similar to mine.
But maybe there are things that had less to do with the particulars of the structure or the people or the ideas in the group; issues that were more dependent on how those things interfaced with our own internal setups. Maybe we were more in-tune with other people and so the underlying conflict was eating away at us. (I do think I’m more attuned to people’s states and attitudes than most people and I do think this quality in myself made things harder for me in sometimes indirect ways). Maybe external validation/appreciation is important to both of us and in our roles we were starved for it. (I know that gratitude is a big one for me and it was difficult being in an environment where that was rarely expressed). Or maybe we were both people-pleasers and pushed ourselves harder than our level of safety and support warranted; taking criticisms seriously, but not being able to resolve them. Or maybe we’re both very loyal and didn’t allow ourselves to build out backup plans for the (very real) possibility that things wouldn’t work out, and so we didn’t leave as early as we should’ve and then when this path dissolved, we were left without a lifeline.
Or maybe the reasons why it was traumatic for Zoe were just different from the reasons why it was traumatic for me.
In case Zoe reads this, I want to make it clear that I do not think that hypotheses that factor in things about yourself imply that you bear the blame for bad outcomes. I do not think that only weak or sensitive people take damage in difficult situations (though I also don’t think that these terms should be used as a slur -- we are all affected by things in different ways in different contexts). I think it takes courage to be vulnerable and to put your energy into things that aren’t squarely in your safe-zone. In a perfect world, I would hope that we could learn to identify traits in one another that might cause issues and then adapt our plans and practices to accommodate those features, and if that’s not possible, I would hope that the people around us would be well-resourced enough to be willing to set boundaries in ways that protect people while bearing the cost of losing that person’s gifts and also disappointing that person.
While I think that interpersonal dynamics were a large contributing factor in people’s experiences of the Leverage ecosystem, both good and bad, I think the public discourse hasn’t yet done a good job at clarifying what aspects of those dynamics were imposed by structural elements dictated by an established hierarchy.*
For example, I don’t think I ever heard a term like “manager” or “supervisor” used at Leverage or Paradigm – (I’m assuming that when people use them in describing what it was like for them to work on the project, it’s due to the difficulty of not having the vocabulary to discuss something novel and nuanced with an outside audience).
* As has probably become apparent by now, we didn’t have a typical organizational structure that was mapped out in advance which we then used to recruit individuals to place into that structure in specific roles -- we hired anyone who fit our criteria and then allowed them to slot in wherever made the most sense to them, changing their focus and role over the years as they gained skill and as the project grew and developed. We were basically an unofficial meritocracy (though as I’ve noted, assessing individuals and causing a large group of people to consense on a similar assessment is no easy feat -- not only because it can be difficult to make assessments in cases where you yourself lack background knowledge, but also because it’s not obvious which skills or knowledge matter more than others, especially in a project like ours). So that meant that we typically didn’t have job titles, and people generally didn’t defer to people because of the power conveyed to them by a title or a role, but based on their basic beliefs about that person’s competency (and probably their beliefs about other people’s assessment of that person’s skill/importance). As I’ve described, more structure developed over time, with certain individuals developing their own subgroups, which gave them influence over the people who chose to join their pod, but the project was defined largely by hiring only people who were intrinsically motivated to make progress toward our shared goals, in whatever form that happened to take, and there was no stigma around doing your own independent research vs. joining the efforts of a subgroup, as long as Geoff approved your research proposal.
In Zoe’s particular case, having joined one of the two psychology research pods, it would probably be more appropriate to refer to her group leader as her trainer or teacher or “sensei” and for Zoe to be referred to as an apprentice. These subgroups, especially the ones aimed at furthering an understanding of psychology or sociology,* were a two-way street: the leader of the group would need to consider carefully before deciding to take someone on and the apprentice would have to be sure that they were willing to make a serious commitment to their teacher.
We learned pretty early on that group training essentially doesn’t work, because each individual shows up with a unique pattern of blocks. At least at the point where I left the project, training was a process requiring large amounts of individualized energy and attention from the trainer; group leaders were then apprehensive about devoting their time to people who either weren’t easily able to learn from them or people who were unlikely to repay the investment by helping them with their research going forward.
* (this was mostly not the case for the subgroups that were more focused on concrete things like running EA events or startups)
Her group leader likely made an enormous upfront investment in trying to help Zoe gain enough skill to become a productive researcher, by helping her work through difficult mental blocks directly, by helping her to understand how to make progress on her own using various tools, and also by passing on a specific body of knowledge. It seems likely that both parties would feel pressure: the teacher feeling pressure to support the apprentice and help her gain skill to keep her funding status in good standing (while also balancing their investment in a new researcher against the pull to continue their own research or invest in others), and the apprentice feeling pressure to make personal progress, help her pod-mates, and display her commitment.
(I think my earlier comment about the importance of developing both leadership and followership might resonate more in the context of a sensei and their apprentice than a manager and their employee, and I think this is also relevant context for considering the expected patterns of doing psych work, given their respective skill levels.)
And then add some of the surrounding context:
- external stressors from the emergence of new psychological phenomena
- the conflict that was created (or exacerbated) interpersonally and between subgroups
- a struggle for status or recognition within the broader group
- the funding crunch adding more urgency and pressure to be productive
- (initially to allow the group leader to justify the budget allocation from Paradigm for retaining and funding the apprentices in their pods, and later to be on track to generating independent revenue to enable the pods to make their own hiring and funding decisions)
It’s not surprising to me that Zoe (and others) would have a rough time. And that people would end up pushing and being pushed too hard. And that it might lead to hard feelings and burnout.
To be clear, I’m not trying to engage in victim-blaming here. It sounds like Zoe had a really bad experience working with her group leader and I don’t think that’s her fault. But it’s also hard for me to blame her group leader, who I imagine was doing their best to figure out how to provide support while also maintaining expectations and boundaries.* And it’s even harder for me to blame Geoff, who I believe genuinely wanted the lives/experiences of people on the team to be good, and, like the other people trying to steer the project, recognized many of these issues and was actively trying to understand and fix them.
In sharing my models, and making guesses about some of the dynamics involved, I’m trying to paint a broader picture than I think has been described so far, drawing on the background that I have from working with these individuals for such a long time, though it could also be that many things that happened with Zoe were basically just totally inexcusable and/or malicious and/or very poorly thought out, and I don’t want to deny the possibility of that being true.
Something to note about boundaries: I think it’s natural to think that boundaries just protect one side of a dynamic. That it’s an individual’s responsibility to set clear boundaries and it’s other people’s responsibility to respect those boundaries in order to protect the one who set them. And if the first person allows their boundaries to be breached (e.g. a trainer over-investing), whether by coercion or just a desire to extend themselves to help someone they care about, that they’re the only one who might be negatively affected.
But boundaries protect both parties and need to be set/supported/defended by both parties. Because too often, if someone overextends themselves, they’ll be in a position to be hurt or disappointed, but the other person will also have unknowingly/nonconsensually participated in that hurt.
This is sheer speculation, but it sounds like Zoe might’ve been part of this kind of dynamic — where maybe her group leader overextended in initially providing support or training, but without setting or maintaining clear boundaries and expectations. It seems plausible to me that this could’ve caused them to (unfairly) blame Zoe or think of her as manipulative, ultimately resulting in both parties getting hurt/feeling betrayed.
From Zoe’s retelling, it sounded like she was not truly on board with causing the kinds of mental changes that many were trying to cause in themselves and those around them. (There are many good reasons to be skeptical that it’s the right thing to do, so I don’t fault her if she was not totally game.) If that’s true, then it seems like the greatest failing was not recognizing that from the outset and making sure she didn’t join one of the two subgroups that was explicitly dedicated to psychological change, or, once she had joined such a group, recognizing the mismatch then and either finding her another place within the project or helping her to feel ok leaving. But it seems like her trainers either didn’t really understand that she didn’t endorse changing herself or others, or she didn’t have that understanding herself until much later.
I think for me, one really hard piece was the lack of safety/lack of reliable social support -- I think you can maybe model this like you do the calculations for a crane: if you need to extend the crane out far and/or lift a heavy load, you need a particularly secure base with a lot of counterweights to keep from toppling over. I recognized a lot of myself in this post from Duncan which touches on some of the consequences of being diachronic rather than episodic; seems like maybe people who are further on that side of things would have a harder time recovering from the breaking of connections/relationships and paths/plans. Stability is already a big challenge in an environment with many people engaging in painful psych work and, if they’re successful, also actually changing as people. Add in tradeoffs and often extreme constraints on time and then also the complication of wanting to maintain relationships across factions or other divides and I think it’s easy to see why people had trouble being loyal and supportive friends and teammates (even in cases where they put in a fair amount of effort). And the constant calculus added pressure to push through your own difficult emotional states in order to be fun or functional enough for them to choose you.
(I once had an incident in our big house in the Oakland Hills where I’d been sharing a room with a couple of friends. After one of them moved upstairs into a walk-in-closet to have their own space, I didn’t see as much of them. At some point I went up to visit and mentioned that I wished we could hang out more, to which their explicit response was that it wasn’t any good to point that out: if I offered something more entertaining/interesting than their other options, then I’d get to spend more time with them. I think that sometimes when you start trying to act on explicit models without including more of your implicit content, you can forget that the things you’ve named explicitly might not be the only things that you value. My sense is that a lot of people in these communities have more of a “what can you offer me now” or “what do I think you’ll be able to offer me in the near-term” perspective and maybe overlook the benefit of things like building shared history and trust in order to get secondary things like safety and comfort and the ability to let down your guard. Or maybe I just need more of that than others).
And I think this was later intensified by the amount of assessment that was happening essentially 24/7. I touched on this in the section on expertise assessment, but once training became a central piece, our need and our ability to carefully assess people’s current skills and trajectories increased dramatically. People (often including what might be called senior staff) were assigned trainers or training groups, and their progress and difficulties were discussed* regularly: to better understand our experimental data, to make plans for next steps, and to troubleshoot.
* Important note on training discussions: we had strict internal information sharing procedures (which in my experience were adhered to quite strongly), where sharing permissions were set in advance at the preference of the trainee. So in some cases, a person would elect to have multiple people that worked with them in a training context share notes and discuss, in some cases they would ask that information about them be shared with other collaborators or friends that they didn’t work with in a training context, in some cases they would allow their data to be anonymized and shared with other researchers, and the default was that information learned in a training context would not be shared with anyone at all.
As I mentioned earlier, because of how interdependent we were, a trainer’s assessment of your mental configuration and blindspots might actually be triggering for them and/or be used as rationale (along with more traditional views of you as a teammate) for dialing back the amount of time and energy they’d be willing to invest in training you,* or for trying to influence what your role in the project should or shouldn’t be.
* Training investments weren’t determined by way of a centralized mechanism, but the majority of the skilled trainers were within one of the two primary psychology-focused subgroups and their leaders could usually choose what and whom to prioritize (typically investing in their own members first, but deploying extra capacity as fit with their own strategic assessment, which could be influenced by Geoff or others in the broader group).
Another note about the dynamics between trainers/trainees:
While the areas of investigation or improvement were usually driven by the trainee (maybe a difficulty they were having in being able to think clearly about a topic like fundraising or budgeting, or maybe a problem they’d run into with not being motivated to make progress on something important to their work), unlike with traditional professional therapy, your trainer might have a personal or professional interest in the topic — maybe they are depending on you to provide funding for the group or maybe they had been counting on you making faster progress with whatever area you’ve been working on.
So while your trainer might be genuinely trying to bring out the best in you and help you reach your goals,
- they might inadvertently skip over possible solutions to your problem like letting someone else worry about funding or pausing your current work and coming back to it in 6 months or a year.
- especially if they’re a newer trainer, they might not be confident that they will be able to help you and so might start stressing about funding or not getting the benefits from your current area of focus.
One interesting thing that seems to happen with belief updates is that if there’s a lot of pressure to stay quite close to your original plans and commitments, the amount of possible change is sort of artificially limited.
Maybe you could think about this like the dynamics in untangling a knot in the fringe on a blanket or in your hair — if you try to keep all the strands quite close to their original positions, it’s going to be pretty darn difficult to work the knot out. But if you give the strands room to separate and maybe even go off in other directions, they’ll often be able to pull apart and end up where they were before, but just no longer in a big knot.
For good/stable psychological change there needs to be a lot of space for different routes and possibilities, even if you don’t actually pursue them; once you’ve worked the knot out, you might end up continuing to do something quite similar to what you were already doing.
But in an environment where there were lots of concrete demands from reality to move things forward and solve problems, where there was also pressure from the group (or your subgroup) to be productive and be on a good trajectory, and maybe interpersonal pressure to be a certain way or play a certain role, there was often not enough slack to really allow for that necessary room to readjust.
Sometimes this sort of thing wasn’t handled well, but over time I think trainers did start moving toward thinking/suggesting that people should focus more on themselves and their own goals, consider things like perhaps leaving the project, etc. I got the sense that those trainers who had made it this far in their thinking were much less thrown off by the dissolution of the project than the people who were maybe more committed to staying the course.
While I’m trying to describe/explain some of these hard things, I think it’s important not to lose track of the fact that this was all happening in the broader context of us all being really motivated to try to reach our shared goals – we wanted to make the effectiveness training work, we wanted to scale up the organizations, we wanted people to level-up and explore new areas and launch new projects, etc. We recognized that a lot of it wasn’t fun and we recognized that there were a lot of problems that we hadn’t yet been able to fix, but this was the stuff we cared about figuring out.
I think for a long time, in trying to work through these hard things and solve the seemingly infinite number of problems and puzzles, I had been like a frog boiling in water, but near the end, something shifted more dramatically.
For years, I had been working an insane number of hours, often doing really uncomfortable work, including helping other people do their own uncomfortable work. Tense and triggering interactions were not uncommon, some even famously and dependably so, such that we had dozens of iterations of methods for tackling these hard things in an attempt to make them less painful. That was tolerable for me only because the people who I was working with directly on these hard things were really grateful, even if a part of them resented me for making them face those difficult things that they would’ve preferred to gloss over.
They were willing to put in effort to explain to others the importance and overwhelming nature of my many roles in the project and, as was gestured at after the fact, defend me in cases where there were complaints that were never brought to my attention.
Something happened to switch that, and I’ve never figured out what or why.
The people I’d been supporting seemed to stop appreciating my efforts and were quick to come to harsh judgments about my motivations and limitations that led them to exclude me or circumvent me and blame me for perceived shortcomings instead of looking for solutions together.
Negative implications of things my trainers had come to believe about my mental setup were sometimes hinted at but never made explicit. So maybe advancements in our psychology tools or effects and perceptions from the intention research were a contributing factor? Maybe I had started doing uncooperative things but wasn’t aware of them? Or had been doing uncooperative things all along, but suddenly other people became aware of them? Or became more threatened by them?
A couple notes relating to the intention research near the end:
I think there’s often a mistake people can make as they gain a lot of understanding in a new field. I think it can be hard to know how vast a subject might be, and if, over the course of a few years or even over the course of a few months, you gain more understanding in an area than you had previously thought possible, some people might make the mistake of assuming that they have a pretty good handle on things. That they basically understand what’s what and can trust their reads more than might be warranted.
What’s the expression? Just enough knowledge to be dangerous?
I was looking in from the outside, but at least during this early part of the research cycle, it seemed like a number of people were overconfident in their beliefs about what was going on with individuals and between individuals and with groups and between groups. In both what *was* happening and what *wasn’t.* And as those assessments caused lines to be drawn in the sand, I think the subgroups were then faced with an entirely new challenge as they needed to make plans and make policies within their groups and negotiate complex diplomatic relations with other individuals and subgroups.
Another thing I would caution people about (specifically people who end up in situations where there’s a fair amount of non-explicit friction existing between people -- which I think happened with Zoe and her group leader and also likely happened with me in my dynamics with various people in the larger group) is to be very careful when deciding how to assign blame for that friction.
E.g. if your partner gets home late for the third night in a row and you’re feeling neglected and insecure and your response to their apology is “it’s ok, I’m learning to navigate single life again,” that might feel to your partner like you just stabbed them. On some level, you were hurt and scared and were throwing up what you thought was a wall to protect yourself, but they perceived it as a stabbing knife.
I think you’re in pretty bad shape if your partner goes the route of treating it as if you meant to stab them and essentially *did* stab them. But ignoring their feelings and perceptions is also bad. On a practical note, I’d recommend that they instead adopt a different tack and respond with something like “the story that I’m telling myself is that you’re so angry with me that you’d be willing to stab me if you could” and then that gives your partner something less triggering or blaming to then try to work through with you — where it’s not an argument about what did happen: what was said, what was intended; instead it can be about the way something made them feel and the narratives that each of you actually endorse and want one another to have.
The other route too often involves defensiveness and gaslighting (sometimes involving blindspots on one or both sides due to the heightened threat level and need to be “the good one” in order to get the “other one” to stop their hurtful behavior). But defensiveness and gaslighting are both disconnective and can make the other person start to doubt themselves and their perceptions and intuitions in a way that I think often leads to some of the more haunting symptoms of trauma.
It was deeply destabilizing to have some of my closest collaborators and friends seem to update so negatively towards me with no apparent way that I could interface with those negative views or the resulting change in how I was treated.
I think the first and possibly most important step I took after leaving was to try my best to unseat the people from the Leverage ecosystem as the arbiters of my value as a human being (and this is something that I’ve had to work hard to maintain).
I wouldn’t be particularly surprised if it turned out that Zoe’s public call-to-arms was helpful in reducing the significance of their assessments/judgements for her, and I’d be in favor of trying to preserve that part if possible.
On the advice front: be careful creating situations where people are in vulnerable relationships with people who don’t/can’t accept them as they are -- interpersonal dynamics are so complicated that it’s often not enough to care about the person or even to want what’s best for them.
Deciding to remain in a difficult environment
Despite recognizing that you’re taking damage, there can be compelling reasons to stay while continuing to decline:
- Not having a “plan b”
- There was a talk at EA Global 2017 on maintaining motivation while attempting ambitious projects and I think one of the best pieces of advice was to always have a backup plan. I think the main point there was about the way that your mind might ignore flaws in your mainline plan because it needs a viable plan and that’s the only one it has, but this strategy also allows for an escape hatch when things reach a point where it would probably be best for you to throw in the towel. With something as unique as the Leverage ecosystem, a backup plan is a pretty tall order, but since I’m here trying to share lessons, I’m including it on the list.
- I’d made a ton of progress with some of the training I’d been facilitating, but if I had left before a “save point” it seemed really likely that a lot of my work would be lost or reversed.
- I think in projects like these where people are working at the cutting edge, you can run into an issue where you’ve taken a lot of damage getting to where you are, and you’re continuing to take damage, but the people who seem to be most likely to be able to hone or develop the tools to identify and address the problems are the same ones contributing to the issues…
- If you do decide to give up and retreat to a less punishing environment, you’re basically hoping that regular interventions will eventually be able to heal you, or that the cutting edge research will continue to progress without you and they’ll circle back down the road to try to help once their tools are out of beta -- neither feeling like high-probability outcomes.
- If you can see the progress they’re making and you’re also in a position to increase their skill either directly or indirectly (and they’re motivated to help if they can), it can be very hard to know when to pull the plug -- you’d be giving up on getting better in the short-run, but at least you’d staunch the bleeding.
I was doing all I could to boost people up onto the top of the wall, with the plan that they would then reach back and pull me up.
For me, and I was very clear about this with myself and the people I was trying to help, it was a situation where it was fine if this version of the project failed — it was an experiment after all — but if we didn’t pick up the pieces and try the next thing, if instead we just gave up, the amount of struggle and sacrifice that had gone in would be unbearable. And that was a serious gamble: it was not clear how things would turn out and the further down the road we went, the higher the chances for success but the more damage had been dealt, and the deeper the devastation if we couldn’t pull through.
Ultimately the adversity became too overwhelming and I had to press the eject button and deal with the heartbreak that was so many years in the making, but in the same way that the research was ended mid-cycle, some of us had plans that were also cut off mid-cycle in a way that was pretty hard to grapple with and recover from.
It’s hard to know when to call it quits
I don’t know exactly how long you have to try hard at something before sunk costs become a thing. And they’re not just real for the person who should maybe leave; they’re real for everyone involved.
There are plenty of things I could be criticized for — while I’m a helper at heart, I can be stubborn and somewhat relentless in my attempts to root out problems and make things better, and it’s difficult for me to cleanly defer to how other people want things done.
But I think one thing that might have been straightforwardly laudable along the way but which might’ve had the greatest real-world harm/damage was that I worked so hard to keep everything going, when maybe it would’ve been better to let this version fail much sooner to then be able to build the next version without as much collateral damage.
I’m open to the possibility that I myself should’ve left early on, as I got indications that my particular contributions wouldn’t be appreciated without a lot of hard work to convey the importance of the things that I valued, but the assembled group of people, despite their deep-rooted differences, were always just cooperative enough and just open-minded enough and just earnest enough to tip the scales in favor of staying and trying to convince people of the parts of reality that I thought I could share with them or help them navigate. But it was an incredibly difficult slog.
Even a normal job is surprisingly like a romantic relationship: you look for jobs like you look for mates—maybe you’re introduced by a friend, maybe you use an online platform. Dates are like interviews, there’s a trial period, and eventually both parties are investing a lot in one another and in their shared venture. As it starts to not work there are two choices: double down and invest further in problem-solving and conflict-resolution, or make a clean break while things are still going relatively well -- give up on a lifetime together but stay friends and reduce the potential harms and opportunity cost of a more drawn-out period of discontent. I don’t think the correct choice is obvious in relationships and I don’t think it’s obvious in a professional collaboration either—and the parallels are even clearer when your work is so personal and you and your teammates are so mission-driven.*
* For some, the metaphor might not click – you’re an accountant or a consultant and the commitment to a particular employer/workplace doesn’t really seem like a significant investment. But maybe you had feelings like this in choosing your career and having doubts about whether it would be fulfilling?
For me, maybe because of the way that I put so much of myself into my work – I think I’m a founder at heart and I do the work that needs doing out of a desire for the venture to succeed, rather than it being in my job description – essentially every collaboration I engage in maps onto this metaphor pretty cleanly.
So would it have been better to end the relationship earlier? Maybe? But if it was really primarily that last year that proved too challenging, that seems to me like a reasonable amount of time to attempt a number of interventions, realize that disbanding is the correct move, give people notice, and wind things down.
In looking back for an earlier fork in the road, I see a number of discrete moments where I should probably have moved on, but I don’t see another obvious choice point for ending the whole project. And I think in this case, a lot of good came out of the collaboration even though it was difficult, and, as I’ve said, I think it’s too early to judge its overall impact. It will likely be a number of years before we can all see the direct effects of the knowledge and tools that were built and the extended impact of the projects that have been (or will be) launched by people within the Leverage ecosystem who made use of our training. In Silicon Valley it’s expected that startup founders will need to dedicate a minimum of 10 years to a project, so we’re still pretty early in that cycle, even if their first attempts succeed, which is not itself typical of even the most renowned entrepreneurs.
Was it bad that Geoff pulled the plug when he did? I don’t think so. But I think it probably would’ve been better if there had been more support for the people whose paths/plans had been broken in the process.
But just like with relationships, it can be hard to get that support from a former partner when the hurt and disappointment on both sides is still so fresh. Maybe having a supportive community would’ve been a more fitting solution, but I know first-hand how difficult it can be to maintain those personal connections when all your energy is going into making the thing work. And unfortunately the nearby Rationality and EA communities seemed to have built up animosity towards us over the years (which I’ll touch on later), so I’m sad but not surprised that the support wasn’t there when it was needed.
I think it’s worth pausing to appreciate just how bad the conflict leading up to the dissolution, as well as the dissolution itself, was for a number of people who had been relying on the Leverage ecosystem for their life plans: their friends, their personal growth, their livelihood, their social acceptance, their romantic prospects, their reputations, their ability to positively impact the world.
Until reading Zoe’s post, I only had evidence of my personal experience and that of the couple of friends I remained close with. I was surprised to hear how bad it had been for her, though I’ve often wondered how people dealt with the aftermath, and I think it was likely tough for many of them.
Up until the day I left, I was working with a small group of trainers pretty much every day as we tried our best to resolve interpersonal conflicts that had spiraled out of control -- these were people I’d collaborated with for years, people I’d cried with and people I’d celebrated with, people who I think genuinely cared about me. On the day that I left, I had expected to return after taking a breather for a few weeks. I only packed a small carryon bag. It didn’t occur to me that I’d never walk back through my bedroom/office door again and I might never see my friends again. But within a couple days, it became clear that it was finally time to cut my losses, and stop struggling to fit myself into a circumstance that was so inhospitable to me. I knew I couldn’t return to the Bay Area for the foreseeable future; I’d need to replan my life and figure out where to go and what to do with myself while I got back on my feet. It was a devastating time, but/and it took a month before one of my closest friends and trainers texted to see how I was doing. In the intervening years only one other person from the Leverage ecosystem has reached out to check in on me (though others have been friendly when they email to ask for help with things or in their responses to me reaching out). I didn’t really know how to interpret that at first; at the time it felt like reaffirmation that even my closest collaborators believed that I was dispensable as a person, but I think it’s reasonable to conclude something more basic like: 2018 was hard. 2019 was hard. 2020 was hard.
If your plans had depended heavily on collaborating with this particular group of people and that didn’t work out -- whether in 2014 or 2019, whether you were relatively new like Zoe or whether you had been there from the start like Geoff (let’s not forget that it was probably terrible for him too) -- I’d expect it to take a while to reorient and figure out a healthy and positive way of interacting with your old friends and teammates. For many of us, I think our lives had essentially fallen apart, some more intensely or more quickly than others, and some long before the official dissolution -- it could be that the most supportive thing we had to give one another was respectful distance.
Harms due to the scarcity of similar projects
or, what to do when it’s not a good fit
It seems like there’s an inherent problem with being a unique and in-fact-good project that was not in a position to utilize everyone who might want to be a part of it.
Over the years there were a number of cases where someone either 1. didn’t quite meet our recruitment criteria or 2. met the original criteria but didn’t end up having a useful place in either organization as time and our criteria progressed (essentially people who had been grandfathered in), and it had much stronger negative effects than I would’ve predicted.
I think because of the ambitious kind of people who were drawn to attempting to massively improve the world (and the fact that we ended up with ones who had basically rejected all other opportunities they’d come across before discovering us), our project was unusually compelling.
(I did sometimes wonder if, for some people, it wasn’t that they thought our world-improvement strategy or the composition of our team was unsurpassed or even particularly likely to succeed; it was that they had a path wherein they themselves would gain skill, knowledge, and power in order to directly impact the world, and we were the best option they had found for leveling up on the relevant dimensions.)
Both types of a not-quite-fit were sometimes awful (for the prospective person and for the rest of the team). In case (1), rejecting people was painful all around. Sometimes when people would find us, they would recognize the way that we were much closer to what they were looking for than any other option available. They’d be excited about our culture, excited about our research, excited about our plans, excited about the prospect of improving a lot in our environment, etc., but if we determined that it wasn’t a good fit, the disappointment could be intense. Even when we tried to craft bespoke solutions to have the person instead join as a skill hire (like I had), the feeling of rejection and/or exclusion was sometimes really really terrible/unacceptable to them. And sometimes people who didn’t receive an offer at all seemed to hold active grudges for years, some of them still popping up to criticize us any time they’re reminded of our existence, despite having had very little original exposure and often minimal, if any, contact in the interim years.
In case (2) where someone did join but didn’t end up being able to contribute, the person wouldn’t want to leave and they’d keep trying to be useful in some way, but keep running into demoralizing evidence that their talents just weren’t going to be valuable or valued in the way that would actually be fulfilling for them. And while the concept of being defunded without being excluded was created to address some of the badness that might otherwise come from being cut off from the project and/or the social connections they’d made, I don’t think there was a way to change the strong interpretation they had of not being important/special/valuable if they weren’t judged by the group to be worth paying a salary to. Even before the point of defunding discussions (there was a tiered system to allow for months of advance warning), I think people could clearly tell when they weren’t on track to being an integral part of the project, and for some personality types I think that was completely crushing (and it was simultaneously really non-obvious what to do about it).
(I touch on something related to being a unique and in-fact-good project in a later section on ambitious plans)
Harms from the surrounding community
(Let me make it clear that I have many friends in the EA and Rationality communities and many more that are ~facebook friends that I keep at a distance but that I value being connected to. I have a close friend at CEA and a couple buddies at MIRI who are important to me, and I also have friends who are significant donors in both spaces. I also understand that these communities have grown such that the subset of people and organizations who have had these negative effects are probably a tiny fraction of the full communities, despite their outsized influence on my experience. I’m pointing at things that seem bad/harmful to me, but I’m not trying to cast judgment in a more broad sense.)
Something that has always puzzled me about the groups of people I’ve met in EA and Rationality and to some extent also in the Leverage ecosystem is (what feels like) 1. an unnatural reluctance to ascribe credit to certain groups or individuals, and 2. an unnatural reluctance to proactively cooperate.
Maybe it’s because these groups believe that if you acknowledge receiving something of value from someone, you’re in their debt in a real way that might be called upon later, whereas other people are more willing to just live in a world where people help each other out and there’s no debt accruing if the scales are unbalanced?
Or maybe other people are confident that they’ll be able to repay the debt and people in these more intense communities are constantly worried about being constrained by obligations and are trying to avoid them in this sideways manner?
But as someone who regularly goes out of her way to help people and cause things to go better, the way those efforts often end up being brushed off or in some cases erased from the record is quite painful. I think it is true that a job well done is often its own reward, and the occasional lack of recognition and gratitude has not stopped me from almost pathologically trying to improve everything I’m within reach of, but this is still a hurtful part of my story at Leverage and Paradigm and also in the EA community.
I remember vividly sitting in the audience at EA Global 2016 for the opening keynote when one of the founders of CEA gave a fairly comprehensive history of EA and he didn’t even mention Leverage Research — despite the fact that we had spearheaded the first EA student network in 2012, and back in 2013 we had founded the event that he was speaking at.
(Brief aside, for some relevant backstory: in early 2012, I met up with Geoff at the Leverage house in New York to talk about where my skills could be best put to use. Because I had helped a friend run the front end of the Singularity Summit in 2011 (serendipitously, but that’s a story for a different day), he thought I’d be in a good position to put together a similar but smaller event with the organizations involved in Effective Altruism that summer. One of our teammates went to the UK to meet up with the groups over there, but they returned empty handed, having failed to convince them of the value of such an event, so I instead focused my energy on building the EA student network (which the central EA orgs decided not to participate in because they wanted to exclusively build support in elite universities as part of a branding decision). The following summer, after the team had moved to Oakland, we decided to try again and were met with similar resistance. Maybe important to paint a picture: we had a really small team and a tiny budget -- deciding to put on an event like this didn’t mean hiring an event manager, it meant putting our other priorities on pause for weeks with everyone pitching in to handle food and sleeping logistics and workshops and speakers -- reflecting on it, that was maybe the teamiest thing the Leverage ecosystem crew ever did together! In order to persuade the main EA orgs to come, we ended up fundraising from dozens of individual donors in order to pay for their travel costs. The week-long event was a huge success and I think everyone was really glad they decided to participate. We tripled the size of the event in 2014 and split it into two (one smaller 5 day event for leaders in the EA communities and one large weekend event open to everyone interested in EA).
Even when we passed the torch for the event to CEA in 2015, we gave tons of support and advice before the event, someone on our staff MC’d the event, and a few of us ultimately ended up stepping in to run mission control during the actual event to handle a number of fires that had cropped up. In 2016 we invited the event organizers to use our building as their HQ for the final push, Paradigm made a donation to support the event, and when we got an SOS from the volunteers, we arrived at the registration table carrying printers and other supplies from our office — the same event where our contributions were stricken from the record and just a year prior to EA Global 2017 when we were told that we were not welcome to set up an information table at the event (despite having been asked to participate in a number of panels as well as giving individual talks as we had at all prior events) — thankfully we were able to rectify the information table situation, but it still stung.
At some point, I remember being surprised to find that one of the EA orgs had posted a photo on their website of our staff talking to interested people at EA Global 2016, but they photoshopped “Leverage Research” off of the sign (here’s the original).
And I remember someone wanting to join our team, going through the interview process, and being made an offer, but then ultimately deciding against joining, for the sole reason that the people in the EA orgs that they’d been working with were so hostile towards us. They weren’t willing to die on that hill (which I think is perfectly reasonable, despite it being pretty awful that they were put in that position to begin with).
It seems clear that people wanted us to contribute. Our talks at these events were often standing-room only; people wanted our training and coaching. They wanted our help. But they also seemed to want us to not exist — maybe they were fine with us as individuals, but not as a group? But, like, would they endorse that?
No matter how much effort we put in, or how much value we provided, it never felt like it tipped the scales.
There’s a way in which this kind of thing is crazy-making. And it makes it harder to want to put in effort to bridge the gap. It almost felt like people were unwilling to act in a friendly cooperative way because they thought they’d then have to be friendly or cooperative in the future and they weren’t willing to commit.
I don’t really know what’s going on, but my hypotheses above regarding debts and constraints are the best guess I’ve come up with.
And if something in this area is in fact happening in individuals and perhaps in groups, it might make people actually resentful of your contributions, especially if they are already antagonistic towards you for some independent reason that would make it even worse to find themselves in your debt or to be confronted with evidence of your value.
And if that’s the case, then trying to offer more might exacerbate the situation (where they don’t want to have had it be that they received value from you). Which, in an antagonistic situation, might be stacked on top of them already being threatened by evidence that you in fact have value to offer (which doesn’t fit into their safer narrative that you’re unskilled or confused or otherwise expendable/unimportant such that they shouldn’t coordinate with you or shouldn’t make accommodations for you or even treat you with respect or thoughtfulness).
I don’t know that this is part of what’s happening, but I offer it as a perspective to consider on both sides. I think this pattern has developed on occasion for me personally and it was/is very counterintuitive (and painful) that the right thing to do was to stop trying to help or even be cooperative.
(Separate from the harms that are caused to the people putting in the effort (via this lack of recognition of their value and usefulness and good deeds), I expect this dynamic to also harm the people/orgs with the blindspots that are not acknowledging these contributions, because they will fail to correctly identify nearby sources of value and will therefore miss opportunities and misallocate/squander/fail to protect resources that would’ve otherwise helped them achieve their goals.)
And then there’s the more overt hostility from some of the Rationalists.
I’m still pretty confused by how much backlash Geoff got for sharing his psychology theory with the Rationality community. I have found it exceedingly helpful in predicting individual behavior, giving clues for where to expect blindspots or irrational thoughts (which is useful on its own but also for making targeted belief upgrades), and generally making people’s behavior (including my own) make more sense to me. When I’ve shared the model with people outside these communities, they tend to get pretty excited and more often than not, I get feedback down the road about what a difference it made for them in navigating their own life and relationships (in my experience, having a reasonable explanation for irrational behavior can take a ton of pressure off of interpersonal dynamics, and sometimes just opening things up like that can make a resolution more possible).
But that’s not been my experience in the EA or Rationalist communities. If people ever talk to me about it in-person, it’s not to take advantage of the opportunity to ask clarifying questions (which I’d be happy to answer); instead they tend to present the fact of the theory’s existence as something I need to defend in order for them to not write me off—not based on them using it and finding holes, but based on something like “the people in my community who I defer to think that it’s bad so I think it’s bad, and further, that Geoff’s a crackpot—change my mind.” (Which is a pretty lame type of conversation to be presented with since they have no objections themselves and can’t seem to present the objections of others in a coherent way.)
One thought I’ve had is that perhaps the Rationalist community in particular is uncomfortable allowing outside theories in without first insourcing them, but they don’t have a structure for processing and vetting content other than through their leadership, and their leadership is busy and hasn’t opted-in to serving that function. So instead there’s a cursory “eh, looks wrong” and then there’s not really anywhere to go from there for either actually engaging with hard-to-prove content or clearing the status hit. Or maybe the issue is more status-based to begin with--some kind of allergy to confidence?
In other parts of the post I mention people trying to cut off funding – it might seem like that was happening just by trying to limit people’s exposure to us, or even spreading negativity/talking smack,* but some people actively sought positions where they had control (or at least veto power) of distributing EA or x-risk funding, and were/are determined to block us from receiving even small grants. I remember at some point one of these bodies rejecting all of our proposals, but giving $20k to an individual to take some time off and reflect on burnout (which seems fine/good, but which I think indicates either an intentional snub, or a gross misunderstanding of the scope and quality of our research team as well as the real-world impacts of limiting our access to funding).
I’ve come to believe that there are a few people in the surrounding communities who are so triggered by our efforts (and/or by Geoff personally) that it would be appropriate for them to recuse themselves from situations that determine the allocation of funds from 3rd parties. And at a minimum, they should disclose their prejudice to the donors who are entrusting them with these decisions, so that the donors are able to decide whether to independently assess those specific projects which would otherwise never receive funding.
* People sometimes complain publicly about not seeing results from our years of work, with the implication that we never made any progress.
Instead of assuming that because you can’t see our progress, it doesn’t exist/doesn’t have value, it seems like people should be more curious — recognize that they might be missing something important. “Why are so many smart and strategic people investing financial and human capital into this project?”
We were never funded by masses of individual rationalists or EAs—which makes sense given that there’s a very high barrier to entry in understanding and evaluating our work and our trajectory (and that individuals tend to defer to leaders or orgs in deciding what to fund). We were funded by a few highly successful donors and investors who were willing and able to put in the time and effort necessary to understand our work and/or understand that evaluating it would look very different than many (all?) other types of projects they funded.
It’s quite strange to me that people would look at the pattern of evidence and conclude that the funders with inside knowledge are the ones who have been mistaken and the people from the outside looking in are the ones who are well-positioned to recognize that those funders aren’t making worthwhile decisions.
This is a different point, but FWIW, I think Reserve is poised to feed a large amount of money into EA/x-risk causes (through the tokens allocated to investors and other individuals in the EA & x-risk communities), so if you had concerns about Leverage being a financial drain on EA/x-risk, I think it’s looking more like a financial boon.
I also think that there might’ve been a problem where people were trying to pattern-match us to something they were familiar with — maybe the proximity of CFAR and the fact that we ran beta training workshops meant that people assumed we were trying to actively develop and distribute rationality hacks and were just doing a really poor job of it?
Another thing that’s always been puzzling to me has been the weirdness around recruiting. It feels like people were always upset at the idea that we would be proactively talking to and assessing people in case there might be a mutual fit — maybe this was the reason we were asked not to put up an information table at EA Global? But such a tiny fraction of the population matched all of our criteria (with current skill not being one of them) and we didn’t have the funding to outbid other offers, so it doesn’t really seem like it would be that great of a threat to surrounding organizations, even if they did perceive themselves as competing for talent. And given that we’re all basically goal-aligned (and that necessarily one of the criteria was genuinely wanting to join our project such that finding a match would be mutually beneficial for a prospective candidate) I don’t know why there would be concern.
I’ve seen a couple mentions of us recruiting young people with a tone of disapproval -- I think the explanation for younger people joining our (poorly-funded fringe moonshot) project more frequently than mid-career professionals is pretty straightforward, and as people work on the project, it seems reasonable that they would age and then be on average older than new hires (I was in my 20s when I started as a volunteer and in my 30s when I left as someone who could’ve been referred to as senior staff -- though I’ll note that as a ~flat organization, my salary was never any greater than what we offered to new full-time hires). But I also wanted to flag that it feels like there’s some ageism at work here – we only ever hired adults.
I don’t know that I can cite a specific post, but I’ve gotten the impression over the years that some people have been concerned that while we’re maybe not dangerous or otherwise actively bad, they don’t believe we’re producing useful research, and so the people on staff are being wasted when they could be off doing other things (perhaps at nearby orgs). Maybe the point is moot because we are almost all off doing other things now (not sure if those who had been concerned now feel satisfied? Or if they’ve made any job offers to the people who had become free agents?), but in case there’s lingering resentment, I think it makes sense to look at things like this concretely so that we’re on the same page.
It was quite rare that we would hire someone who would’ve been deemed a desirable candidate at any adjacent org. We had a motley crew who came in with various skills from all sorts of backgrounds, many of whom didn’t explicitly draw on their cached talents in their work with us. Some of them had achieved mastery in their domains and continued to develop during their tenure with us, but as far as I’m aware, their skill sets wouldn’t have slotted into any nearby organization. There’s only one case I’m aware of where someone from the extended network of Leverage-associated projects hired someone away from an x-risk/EA org (out of 60+ hires), but that person had previously been at Leverage in the early days and was ultimately snagged by MIRI, so I think they were a bit of a special case.
One thought I’ve had is that perhaps people were increasing in skill while they were with us and that gave a false impression from the outside that we were high-grading the talent pool instead of developing it. Or maybe the employment/dating metaphor is strong and when talent is the bottleneck, people are blanketly opposed to there being any other eligible suitors around.
We didn’t know how to address people’s discomfort
This fits in with my comments about the ways that we were, I think, poorly treated by some of the surrounding communities and orgs. Not wanting to acknowledge our contributions, not wanting to give us a platform, not wanting us to hire people, and sometimes being pretty attacky.
One relevant aspect is perhaps competition for resources: limit our status & visibility in order to limit our access to funding and talent.
But I think it’s likely that there are other upstream things that made people uncomfortable/uneasy that maybe fed into a sense of being in a more competitive/adversarial position in the first place – some of the effects of which seem to be self-reinforcing.
The primary one was perhaps the way that we were pretty clearly a group of individuals that were/are much less likely to allow societal norms to limit our actions (which also created a fair amount of friction internally, as I’ve described in earlier sections). The way we were willing to live and work together, the way we were willing to subsist on stipends, that we made some decisions by consensus, that we recruited people without specific jobs for them to do, the non-standard ways that we evaluated those people, the things we were willing to try to figure out about humans, etc. etc. There were a lot of indications that we were a group of people who might not stay in our lane and do predictable standard things. (I understand that this made people uneasy, but I also worry that this aversion to “weirdness” is sometimes used to push away or suppress an important segment of the world-improvement space).
And maybe as an extension of that one, I think people couldn’t really trust that they were getting the whole story in any given interaction or collaboration.
I think there’s at least one reason for this, which has had multiple effects:
- Over the years, we learned many things and developed many theories that informed how we approached problems, how we made prioritization decisions, what areas we decided to get involved in or conduct research in, etc. But whenever we needed to interact with external parties, this background body of knowledge meant that there might be really large gaps between how we viewed the circumstance and how that other party viewed it. When you have large gaps in a collaboration like this, there are basically three options for moving forward:
- (a) You can invest a lot of time and energy trying to reconcile your differences upfront (which might require sharing knowledge/models and might also require some amount of psychological change), or
- (b) You can establish a clear hierarchy such that one party will cleanly defer to the other without first understanding and integrating their considerations (if there’s not an outside arbiter/person in charge, you might have to work this one out using expertise assessment and/or trust), or
- (c) You can work around one another, trying not to in-fact do anything that would be bad for the other party, but not first making your plans explicit and coming to consensus.
For (a) I think that some members of the project (Geoff included), were less optimistic that potential collaborators would be able or willing to quickly and efficiently resolve their differences. (In Geoff’s case, I think some of that came from early failed attempts to sync up with leaders of EA and Rationalist organizations.)
For (b) I think the overall skepticism and lack of trust between leaders in these communities made deferral unacceptable, with individuals often believing the other party was wrong or psychologically blocked or acting in bad faith.
So I think option (c) becomes the most attractive path forward. It avoids potentially arduous and fruitless attempts to get on the same page, it avoids fighting over who should be calling the shots, and it allows the collaboration to continue — hopefully achieving goals on both sides.
I’ve seen what looks like this type of decision-tree play out a number of times inter-organizationally as well as interpersonally, with varying results.
- Scenario 1: Sometimes things are basically fine – all parties benefit from the collaboration, maybe not even caring or recognizing whether the other party was pursuing something they hadn’t made explicit.
- Scenario 2: Sometimes people recognize that the other party is pursuing some further thing, and while they don’t actually have a good reason to be opposed to that as a goal, they’re upset that they weren’t consulted first.
- Scenario 3: Sometimes the non-explicit goal is in fact bad for one party and they’re upset about the negative consequences and also upset that they weren’t given the chance to stop it.
(There’s a term that we used to describe this sort of working-around-the-rules: “munchkinism.” And we, including Geoff, often felt the need to be munchkiny, e.g. in converting the Oakland Hills house into 10 “bedrooms,” or in hosting the first EA Summit there (with about 60 people staying at the house for a week) – we did it because it was really high value and we thought that we could do it without causing any damage to the landlord or their property, we were willing to cover the costs if something did go wrong, we knew how to handle fire safety, etc…and because we were pretty confident that the landlord would say no if we asked.)
From my personal experience of being on the receiving end of scenario 3 more than I’d like, I’d say that when it comes to collaborations, option (c) generally looks better on paper than it plays out in real life. I think it erodes trust (which I think I might believe is a bigger deal than most people do, perhaps because of my general inclination towards teaminess and because one of the lasting effects of my experience on the project is an unshakable feeling of being unsafe on a gut level), and I think people overestimate the likelihood of ending up in scenario 1 (maybe partially by underestimating how much non-verbal communication (and competition?) is actually happening between individuals).
My models are years out of date at this point, but historically I think Geoff in particular was overly pessimistic on being able to execute option (a) or option (b) with his own friends and teammates – essentially believing that they were less cooperative than they were, which I think led to a fair amount of preventable harm. For me I think this was exacerbated by me (and my models) being a bit of an enigma to him, causing his execution of option (c) to often go quite poorly.
I also think that other people ended up behaving in a munchkiny way in part because they weren’t able to manage to collaborate straightforwardly, and maybe in part because they were culturally influenced by his attitude.
It’s harder for me to assess whether Geoff’s pessimism was better calibrated when dealing with outside (often at least mildly antagonistic) parties. Maybe he was sometimes too quick to give up on engaging directly, resorting to something munchkiny; to the extent that he was, that’s on him. But I don’t think it’s fair to round it off to us being uncooperative because it seems to me that a number of the relevant individuals and groups are/were actually unreasonable and would’ve blocked us from doing things that made sense to do. If you disagree, and think people are more cooperative than it seems, that is a possible intervention point for people who are wanting to improve coordination with Geoff.
But I see the difficulty in the decision-tree, and I’m sympathetic to the desire to take actions in the world that you think will be good for your goals and will generally cause the world to be in a better configuration, while navigating around perceived blocks/conflicts/confrontations.
As what might be an example: I don’t know the details of certain conflicts like the one surrounding the Pareto Fellowship, but from the outside it looks like a number of parties believed that the program could provide a lot of value to the general community/movement (while boosting a bunch of young people who might go on to have outsized positive impacts), and they didn’t think they could come to consensus with all interested parties about how to do that (partially because some people’s pre-existing animosity towards the Leverage ecosystem couldn’t be reconciled with other people’s pre-existing excitement about their training), and so they went out and made it happen in the way they thought was best.
The organizers were totally inexperienced at running anything like this and I think they were very smart to tap into the nearby resources that were available to them (including convincing me to allow them to run it in our building, which then also got them all sorts of bonus logistical and training support). I put in a ton of work to help the organizers, my ops team put in a ton of work to help the participants, we risked violating the terms of our lease (which was actually another example of a successful execution of option (c) – our landlords probably wouldn’t have approved of it if we had asked, but it ended up causing them no harm), and we called in favors to be able to accommodate more participants in an external apartment than could fit in the building. A bunch of people from the Leverage ecosystem delivered workshops on many different topics and had evening chats with participants (which would’ve been much harder to get them to do if it hadn’t been so conveniently/centrally located). And I think most participants got a lot of value from the experience.
But there was some upsetness around how the program was run. I heard things about the organizers asking weird questions during the selection process for the Fellows (which, knowing the organizer in question, doesn’t surprise me), and I’ve heard of some people feeling uncomfortable about the questions (which, if the organizer was actually trying to make predictions based on the Pareto Principle in judging people’s potential on a video call, also doesn’t surprise me).* I heard rumors that some people at CEA were upset that our trainers provided the majority of the programming (though I don’t know of any specific critique based on quality or content). I’ve heard of people being upset that interested fellows were encouraged to interview after the program ended and that some of them accepted offers, though I continue to be confused about why recruiting is viewed as something bad. (I think three fellows (two of whom had been interested in the Leverage ecosystem prior to the fellowship) ultimately joined Paradigm, one had received an offer from Leverage prior to being accepted as a fellow, and another ended up working at Leverage for about a year – I’m pretty sure all of the new hires would consider that to have been a pretty valuable outcome for them and I would be surprised if they had the sense of receiving special treatment during the fellowship).
* a quick note about this. According to CEA’s website, the summer program received 500 applicants and chose 18 fellows. On their mistakes page [subsequently updated to include more explanation, following my post] it seems to round the entire program off as a mistake (and implies that the organizers were fired), but it only lists one concrete negative aspect, which was the reported discomfort experienced by some applicants during their interview (which I believe took place over zoom). I don’t know what questions were asked, but I’m concerned by the idea that so much weight would be placed on a filtering process which I’d assume had been intentionally designed to weed out candidates who would have a bad experience participating in an 8-week residential training program designed to boost people’s chances of making an outsized impact on the world. It feels to me like an example of some EA orgs being too willing to sacrifice effectiveness in favor of what’s comfortable and professional.
So again, I don’t know the details of the conflict, but it seems like the program went really well (much better than should’ve been expected, given that there were only two people assigned to run the whole thing and they were doing it for the very first time), and so I’d guess that this particular conflict was primarily of the type described in scenario 2 where the upsetness is coming from something like a violation of expectations or trust, rather than from something bad actually having been done.
I think I’d expect this type of violation and resulting upsetness to arise primarily in areas where people have trouble locating and resolving the crux of ongoing (perhaps not-exactly-explicit) disputes like “is recruiting bad?” or “is Paradigm training bad?” or “is Geoff bad?” or even “has this group discovered relevant true things?” which then make it difficult to sync up on things that might be downstream.
And until those disputes have been clarified and handled (in a way that’s clearly visible from the outside), I think it’s reasonable to assume that people who want to cause good things to happen in the world (who are pretty confident that those types of contentious things are in fact quite valuable), will do mildly munchkiny or sketchy things in order to avoid direct conflict or direct blocks to the (good) things they’re trying to do. And I think that will make some people uncomfortable, and reinforce the barriers to being able to collaborate more straightforwardly.
Further notes on having ambitious plans
I believe everyone needs some sort of underlying gut-level plan where everything’s going to end up being ok -- which is hard to accomplish if, e.g., you’re going to end up dying and/or the world that you’re working to improve is still clearly in pretty bad shape.
From the Wikipedia article on Wishful thinking:
Wishful thinking is the formation of beliefs based on what might be pleasing to imagine, rather than on evidence, rationality, or reality. It is a product of resolving conflicts between belief and desire. (original source)
Christopher Booker described wishful thinking in terms of:
"the fantasy cycle" ... a pattern that recurs in personal lives, in politics, in history—and in storytelling. When we embark on a course of action which is unconsciously driven by wishful thinking, all may seem to go well for a time, in what may be called the "dream stage". But because this make-believe can never be reconciled with reality, it leads to a "frustration stage" as things start to go wrong, prompting a more determined effort to keep the fantasy in being. As reality presses in, it leads to a "nightmare stage" as everything goes wrong, culminating in an "explosion into reality", when the fantasy finally falls apart. (original source)
Something I think mainstream psychology hasn’t figured out yet but which is true and highly explanatory is: wishful thinking isn’t voluntary, and you can’t just turn it off by being aware of the phenomenon. Even if you really truly intend to avoid all wishful thinking, the pull is inescapable on some level, and will end up shaping at least some part of your worldview. You can take steps not to act on that part of your worldview, and many successfully do, though often even that active self-governance can only go so far, and the fantastic beliefs creep in and end up shaping how you relate to the world and how you go about life.
In my interpretation of any person and thus any group of people, I’m assuming that this intense form of wishful thinking is going on; that each individual has at least some part (and often a large swath) of their belief system warped in some strange way in order to accommodate their personal plan for everything ending up ok, and that they are not at fault for that condition, since that’s just how human beliefs seem to work.
I think humans have a basic set of things that they need handled so that their life story is acceptable to them — not with their rational endorsed beliefs, but with their underlying “am I ok” beliefs. If you have things that you need in order for things to be ok for you, but that you don’t have a clear and reasonable path to achieving, it’s quite likely that your mind will warp around the obstacles that you can’t handle. In addition to “wishful thinking,” maybe you’ve seen this kind of thing referred to as “motivated reasoning.” E.g. if you need it to be that you’re a good person, but you get evidence that you really hurt someone (and your belief system doesn’t account for mistakes or redemption), then you might find yourself doubting whether that person really did get hurt, or doubting that they were hurt by you, or maybe you’ll find yourself believing that they deserved it (which can then cause you to stray even further from the truth, as you then justify that belief).
If things are generally going ok for you: you have a cute girlfriend, you have a high-status job, your parents are proud of you, your friends think you’re fun and clever, your college football team is doing well, etc., you might not face too many constraints and resulting warping. That might not be true if you find yourself accused of having white privilege or some other societally threatening thing, and you might be quite irrational when it comes to the question of whether you should be distressed about factory farms and climate change and global poverty, but otherwise you’d basically be leading a pretty chill existence.
If instead you find yourself trying to accomplish something that’s incredibly difficult, working long hours, putting off investing in relationships, struggling to get people to understand or value your work, managing a sibling rivalry or critical parents, it’s going to be a greater challenge to have good enough plans and backup plans to avoid running into places where your mind strays from the truth in order to find an acceptable path forward. Maybe you’ll come to believe that the work that you're doing is more significant or more promising than the evidence warrants, or perhaps you’ll shorten your timelines so that it fits in better with your other life plans. Or maybe you’ll come to believe that the most difficult part of the task doesn’t actually need to be handled, so you can dismiss it. Maybe you’ll come to believe that the people who don’t value your work are themselves irrational, and if you’re competing for resources or status, but need to believe that you’re more deserving, maybe you’ll find yourself believing they’re not only irrational, but dangerous.
How well you fare while leading a life that’s full of difficult challenges is dramatically affected by how high your self-efficacy is in any given area. Difficulties don’t automatically cause irrationalities (and subsequent missteps) -- they arise when you can’t see a way to navigate a path forward. In my first example where you might have hurt someone, if you are confident that you can recover from mistakes and that people will forgive you, your mind will be much less likely to end up trying to shirk potential responsibility. This confidence could come from skill in conflict resolution, or it could come from being part of a community that demonstrates a tolerance for mistakes and puts in a good-faith effort to restore the status of members who acknowledge their errors in judgment.
I find it very useful to consider the constraints that people might be under as well as the paths that might be available to them (or blocked from them). It can help to make sense of otherwise mysterious behaviors, help to predict future behaviors or areas of irrational beliefs, and, maybe most importantly, it can show you what acceptable paths you might be able to unblock for the person or group of people that will shift their behavior away from the more destructive path they’ve resorted to.
When you have a lot of ambitious, relatively uncoordinated people trying to do complicated interconnected things -- especially if their strategy has been to consolidate their goals/plans for efficiency, perhaps at the expense of diversification in avenues to achieving personal goals -- the number of total constraints can be very high, and I think it’s reasonable to expect there to be cascading effects from individuals having and then acting on their warped beliefs.
My advice to leaders would be to encourage people to take care of as many of their personal goals outside of the project as possible: take a yoga class, visit your family for the holidays, write poetry and post it on your instagram, make time to stay connected with your friends, figure out what sparks joy and make sure you’re getting enough of that. Because the more pieces of people’s plans that have to go through the project (or really any centralized thing), the higher the stakes and the more conflict will arise from people’s paths being in a frequent state of threat (even from aspects that you wouldn’t have guessed would conflict with someone’s implicit/underlying plans).
I think I’ve presented a fair case for it being non-obvious how to allocate blame or credit. Though maybe it’s natural and fitting to judge those in leadership roles when weighing the good and the bad.
One thing to note is that because we didn’t hire people into leadership positions, the ones who ended up steering the project were the ones who cared the most about identifying and working to solve problems in the group and their leadership roles were a reflection of how much they had attempted to shift things within the various groups/projects/orgs. So while I think it might make sense to hold them accountable, as they were also the people trying the hardest to fix things, maybe they should be supported rather than censured?
Even if after a careful analysis you decide that the project was net negative (which I think is very unlikely) and you decide that it’s the fault of the people who were trying to help the project succeed, I don’t think that means that we’re bad people. And I think the outside audience should be very careful in how they judge and potentially punish people who attempt to do hard things like this -- I think you’d be hard-pressed to look out at the world and confidently assert that we won’t need to try anything new or difficult in order to fix everything that’s going wrong.
(Maybe you think that we can actually reliably avoid the potential negatives like the ones that we encountered along the way, but whether you try to innovate in psychology or governance or technology, I think it’s going to be impossible to lower the risk of harm down to zero.)
If we do need some people to be willing to attempt challenging novel feats, necessarily risking failure, it seems quite counterproductive to publicly shame the people who actually put in that effort.
How should the efforts at Leverage & Paradigm be viewed?
Theodore Roosevelt gave a speech in 1910 that includes a passage referred to as “The Man in the Arena” which feels relevant when I’m considering how our accounts are/should be received by a broader audience. It maybe requires people to believe that the group of individuals in the Leverage ecosystem were trying very very hard, sometimes at great personal cost, to ultimately be able to dramatically reduce suffering in the world. If you’re participating or trying to learn from this discussion and you don’t believe that premise, I’d encourage you to try to get clarity there before continuing. And if you feel yourself becoming defensive while reading this quote...try not to take it personally and instead try to see the truth in what I’m gesturing at.
It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.
We were in fact trying very very hard to do a good thing. I’m quite sure many involved have spent a great deal of time and effort trying to understand our errors and failures, in order to be able to do better in the future. Collaborative critiques from others who are trying similarly hard to do similarly good things are quite welcome.
While the entire span of the Leverage ecosystem was difficult for many in many ways, I think it was really that final year that got pretty rough for a lot of people at once: critical mass of people out of their depth.
And while that last year presented a unique challenge, with the discovery of a lot of new and unpleasant psychological phenomena and the resulting/coinciding increased conflict between the subgroups as well as between individuals, I think that the fact that we had 6+ years of history behind us at that point should help people to see that the final year (which did not bring out the best in us, but which we weathered/navigated better than I’d imagine most groups of humans would) is not the only era by which we should be measured.
If the story had been that we (Geoff? Everyone who ended up in a leadership position?) had been trying to maliciously brainwash and manipulate and control people for 6 years and finally succeeded after year 7 and then shut it down…well, that seems like a pretty odd explanation. I think the story that we were a very diverse group of people trying to understand how to increase effectiveness and ran into unexpected challenges that caused conflict that was too hard for us to resolve without disbanding makes more sense, and is what I believe basically happened.
Are lofty goals to be sneered at?
Is there no legitimate reason to try so hard?
Or just no legitimate reason to think your efforts might pay off?
Is it unacceptable to toil towards goals that you are unlikely to reach?
Historically, the Rationalist and EA communities have experienced a fair amount of derision for seriously considering unpleasant aspects of reality and factoring them into their plans—from powerful AI (building it or being concerned about it being built) to whether it’s immoral to have children or buy expensive handbags when those resources could go to save lives elsewhere.
In these communities, it’s a widely experienced struggle to figure out how to relate to friends and family who are leading relatively normal lives when you have done the math and considered the self-sacrifice and have decided that it actually does make sense to devote a significant majority of your time and energy to the enormous task of reducing human suffering (and not just by working for something legible and acceptable like The Gates Foundation, or even by earning-to-give at Google, but (at least for some subset of us) by attempting something unproven, or even a moonshot). How do you continue to connect to those people given that you have limited bandwidth due to your other commitments and given the natural undercurrent where they either think you disapprove of their life choices as selfish or insignificant, or they think you’re delusional or full of hubris for even attempting such an ambitious task?
Humans are social animals and once you’ve helped your friends or family or colleagues to grok how much suffering is happening and you’ve also shown them a way that they might be able to help, it can then be pretty uncomfortable for them to explicitly acknowledge that they’re not going to change their current plans in order to attempt to stop the suffering. It can simultaneously be uncomfortable for them to get evidence that their current plans aren’t going to have as significant of an impact as they thought they might have.
We all want to be loved and accepted for who we are without doing anything special to earn that love or acceptance, but as a society, we don’t have a good framework for that—there are built-in assumptions about needing to contribute/accomplish as well as needing to be special (which adds a tricky comparative/competitive element) and I think those dynamics add complexity to thinking and talking about and participating in ambitious plans to improve the world.
(It seems like the person Zoe mentioned who seemed to strongly disapprove of the way that acting was part of her plans was maybe responding to some of these things -- believing that being an actress isn’t a very effective way to end suffering in the world, or maybe being threatened by evidence that Zoe might not have been as dedicated to world improvement as he needed her to be, in order to make their group’s plans work. His response sounds shitty, especially given that he clearly knew that this part of her plan was not easily shifted, but it seems like they were both pretty threatened by the incompatibility of their plans.)
I don’t really want to get into it because this might be an important healing step, but I think something has happened to make it hard for Zoe to accept the viability of ambitious plans and so she has disavowed her own as well as those of her former teammates (or if she never actually had those goals and was just trying to improve her acting or teaching of acting in an interesting environment: oof for all involved). Different people have more or less stable or rational sources of high self-efficacy and that was also true amongst the people in the Leverage ecosystem. I don’t know how early people found Trump’s presidential aspirations credible or how many people were surprised when an EA became a multi-billionaire in the span of about 4 years, but it’s pretty clear that people can achieve remarkable things with wide-ranging impacts and given that, it seems like we shouldn’t artificially limit ourselves.
A lot of Zoe’s most negative claims hinge on us all having been deeply wrong and confused about reality (and she herself only recognizing that when looking back years later): how unique Geoff might be and how calibrated we were on our trajectory. But as a reader, you aren’t presented with any of the evidence; you can only use your confidence in her ability to assess philosophical positions, psychological theory, psychological practice, and sociological models, or you can rely on your own priors. If you start off assuming that nothing impressive or noteworthy was happening in the Leverage ecosystem, or with Geoff, then her narrative makes sense. But what if that’s not the case?
It feels to me like maybe there’s a general ‘pushing away’ that society has with ambition generally, not specifically with regard to projects, but overall. And I feel it even more strongly in the Rationalist and EA communities — where people seem determined to make sure that everyone knows their place and no one dreams too big.
Maybe it’s from an association with greed or other types of antisocial behaviors? Or maybe there’s a strange status/one-upmanship thing happening? Or maybe encouragement and support were never highlighted as important community values?
But I think it’s wrong to blanketly condemn ambition or to lose track of its necessity for accomplishing great feats.
This may be stating the obvious, but every Olympic athlete, every world leader, every Nobel laureate, every Michelin-starred chef, every Steve Jobs and Jeff Bezos, they were all born tiny helpless babies, barely distinguishable from any other.
Maybe they had trainers or tutors, maybe their families had wealth or connections — certainly not all of them, and certainly not in a way that puts them in a completely different reference class from the people reading this post. But one thing they do all have in common is ambition. And from the stories that I’ve followed, they also tend to have people who believe in them — in the early days, often more than they believe in themselves.
I get why people are sometimes put off by people with dreams and I get why we tend towards skepticism — in circumstances of scarcity, we might be threatened by the extra attention they might garner and if the person in question is wrong about their odds, and if they don’t build out backup plans, they can end up in a place that’s much worse off than otherwise, potentially disappointing the people who had hoped they might succeed.
If you’re a high school kid thinking that you’re going to become a professional athlete and you squeak by with grades only good enough to qualify to keep playing ball, unless it’s obvious that you’re already the best of the best, I can see the usefulness of giving you a reality check and discouraging you from putting all your eggs in that basket, especially given that there are probably more likely ways for you to be able to achieve wealth or fame (or if the goal is social acceptance or the attention of girls, there are probably even easier ways, and if the goal is being able to continue being part of a team or any number of great things that come from sports, there are probably other ways to fulfill those goals as well).
But if the person’s goal is to dramatically improve the state of the world, unless that’s simply an instrumental goal on the way to fame and fortune, or unless it’s demonstrably poorly thought out, then it’s not clear that it makes sense to discourage them. We need people who are willing to pursue these sorts of aims with the same obsessive dedication that made the Williams sisters the world’s best tennis players. Maybe we can provide advice or guidance when we see that a particular strategy isn’t going to work, but as a community, it really doesn’t seem like we should be loudly doubting ambitious people or deriding them or shaming them for their missteps.
There just aren’t that many people willing to put in the hard work to become top-tier on the relevant dimensions; if anything, we should be cheering them on and providing support as they train to climb mountains that haven’t yet been summited.
As I said before, I don’t know of anyone who had the explicit plan to “take over the US government” but if we’re just looking at all the possible interventions for improving things, I wouldn’t want all the good guys to write off plans that have steps in that general area for fear of seeming too ambitious. There are a bajillion things wrong with the US government. It’s not something I’ve looked at closely, but I can think of plenty of examples: the incentive structure around campaign financing, the military industrial complex, the unwillingness to implement universal healthcare, the mass incarceration of our own citizens, the way we are or aren’t leaders on the global stage when it comes to issues like climate change or genocide or child marriage — if it’s possible to help all of those things to go better, then that seems like a noble aim. There are a bunch of regular humans currently running the government. It doesn’t seem crazy to imagine a world where those people understand x-risk and understand their own blindspots and are willing and able to coordinate across the aisle.
I think it’s important to have environments where people are able to consider ideas like these, whether or not they ultimately decide that that’s the exact right strategy. Aren’t we trying to increase our ability to see clearly and think clearly in order to positively impact the world?
Leverage’s trajectory & uniqueness
I’m sure it was very different to have been around from the beginning vs coming in at the end. Coming in at the end, you didn’t get to track the developments over time, more natural structure and hierarchies were forming quickly as more promising research avenues were emerging and less promising ones were discarded, and with diverse independent groups pursuing different things, it was much harder to be synced up with everyone and to be confident in your own assessment and understanding of the whole picture, our progress, and your place within it.
For someone coming in once the subgroups had started forming, I’d bet that their experiences (both good and bad) were quite varied – I’m sure that the leaders of the different groups had different management/leadership styles that were good for some people and maybe terrible for others—I know I would not have been able to work productively under a number of the more skilled researchers that ended up creating their own groups.
So I’m not surprised that Zoe would doubt whether we were actually on track to make important progress toward our ambitious goals, especially looking back after it had all fallen apart.
There are definitely sources of knowledge to be found in many corners of the world: ancient traditions and modern practices. And in many cases, going and studying them will be fruitful. (Though I’m sure there are difficult dynamics out there as well—the roads to enlightenment are notoriously fraught and in my limited experience, masters tend to come with baggage :/ )
Zoe claims that you can go elsewhere to learn equally effective psychological tools, but I think for our purposes, a much more important question is whether you can go other places and learn to invent equivalently effective psychological tools.
For a small group of untrained people to independently derive/discover so much in a handful of years does, I think, indicate something quite unusual about Geoff’s ability to design a productive research program. And if you believe that the way forward is dependent on as-yet undiscovered truths and understandings of the world, that should weigh pretty heavily in your calculus.
I recently came across this post from a conversation Eliezer had with Geoff that highlights the way people might have come to expect that if this were truly a heroic tale, the surety of success would have been evident all along the way and we’d already know how it all ended up. But reality is much more uncertain and takes a lot longer.
But when you read about that in a book, even if the hero spends years of book-time to get their results, *you* get to read about the fruits of their effort fifteen minutes later.
You also get to be certain that they won, whereas in real life you've got to stick to things for years not certain that your efforts are going to have any effect. Even in the early parts of the book, heroes are very rarely presented as having to wonder if they're working on even remotely the correct problem.
Should people be trying to “cancel” Leverage, Paradigm, Geoff, and the surrounding orgs?
Interestingly, as I’ve tried to explain, I think the more constrained the environment, the more interdependencies, the more likely trauma will ensue. If there had been enough funding to keep paying people who weren’t contributing or if we had access to enough talented people to reduce the pressure on both the productive people and the less productive people, if the outside community and outside world had been supportive and curious and collaborative, I think the risk of becoming overly dependent on this particular plan and on being a productive/important teammate within the project might have been significantly lower (and I also think this would’ve lowered the friction/fallout from other people in the group needing any individual to be a particular way).
Essentially (perhaps counterintuitively): unless you believe that the intent is malicious, or unless you believe that you can (and should) stop Geoff from attempting to improve the state of the world, if you’re interested in preventing psychological harm like what Zoe and I experienced, adding more resources to his pool is probably net beneficial. (That probably goes for many/most of the people from the Leverage ecosystem as well.)
This should be obvious to anyone who has actually paid attention to Leverage or Paradigm over the years, but while Geoff is difficult to coordinate with and also has a bias toward action which can exacerbate the problems from not being coordinated with him, he is basically a really good person trying very hard and often (like many of us) at great personal cost to dramatically reduce suffering in the world.
And if you’re just trying to assess the risk of repeated damage, I would guess that Geoff himself is even better positioned to prevent harms of the type that occurred than he was previously and as far as experimental research organizations go, I think he was already top-tier for having safety rails (though this is assuming that the work is valuable enough to accept some risk rather than fully optimizing for producing no harm – I grant that he was maybe more willing to push the envelope to see what results might arise than the average bear, but then was also more willing to pull the plug than most would’ve been).
It could be that you have other lessons that you’d like Geoff to learn and you think that the way to teach those lessons is by humbling him, and the tools you have at your disposal involve cutting off resources and status. I’m sympathetic to that, but I’m not very confident that that’s an effective teaching tool in this circumstance.
(Either way, if you find yourself armchair quarterbacking, I would at least ask that you be careful not to dismiss him, dehumanize him, attempt to hold him accountable for things beyond his control, etc. Again, perhaps counterintuitively, mistreating the people who you feel threatened by is probably counterproductive. If you find yourself having trouble knowing whether or not you’re being thoughtful and giving him the benefit of the doubt, maybe hold in mind the fact that he has a loving mother (I met her at the parents’ weekend — she is sharp and nerdy and fun and is currently over halfway to level 41 in Pokémon Go), and imagine her reading any comment or message you might send.)
Beyond canceling Geoff, it seems like there’s a push in the direction of trying to tarnish the reputation of everyone who was ever involved with Leverage and Paradigm, by way of condemning the projects themselves. I’m not sure why that’s happening, but I guess I would ask that the people involved look carefully at where that motivation is coming from — whether it’s a personal agenda or if it’s just being carried along by group-think. And whether or not you think the recent online posts have simply reflected a sensible pursuit of truth, I’d ask you to double-check the effects of the avenues that are being pursued to make sure you understand and endorse them.
Recognize and try to limit desperation in yourself and others
There’s something I saw in the Leverage ecosystem and something that I think is often at play when people don’t show up as their best selves (which it seems might be happening to otherwise thoughtful people who are taking part in this inquisition) — when people have important goals but limited paths to achieving them, desperation can cause them to do things that they would normally balk at.
If important steps along someone’s path (e.g. of dramatically improving the world) are threatened by some outside force or even by too many considerations to carefully tiptoe through, I think it’s pretty natural for people to decide that the collateral damage is just a regrettable but necessary part of doing what needs doing.
I think this type of thing hurt me more than anything else -- probably in part because I myself present a fair number of constraints via my high standards for conscientiousness/accommodation of others, and also because as time went on, I became increasingly vulnerable to damage—even now the people in my life who want to take care of me have to factor into their plans the way that I require special handling. People hurt me in pursuit of varied goals, but often with me relegated to the category of “regrettable but necessary” collateral damage.
In the last couple of years I have learned a lot about recognizing and interrupting the patterns of interaction that stem from people (often myself) being low-resourced or overly constrained by outside forces: feeling like they are at risk of not being able to accomplish their goals, worried that they’re losing people’s respect and admiration, feeling like their inherent lovability is in jeopardy (or maybe not buying into the idea that they have inherent worth, which makes lovability/acceptance even more fragile), etc.
As I said before, one natural way of limiting damage is to limit desperation, e.g. by reducing the number of constraints on people: try to help them have good backup plans, don’t cut off resources or try to lower their social standing or self-efficacy by shaming or blaming, etc. And this is not just relevant for how to limit damage from other people or projects -- you also need to limit desperation in yourself or you are likely to take actions that are ultimately bad for both you and others.
What to learn next?
While I think there are valuable lessons to be learned from understanding how people were hurt or helped by the ways they interacted with some of the people and ideas and structures within the Leverage ecosystem, and many more lessons to be learned from the more explicit research that was undertaken, if I could only pick one thing for others to learn from, I would have you instead listen to Brené Brown.
This may feel like a non sequitur, but consider it: it’s much easier to understand, digest, and implement (she’s worked hard to become a master at turning research into relatable stories); much less triggering for everyone involved; and I’d argue much more likely to be applicable to the situations you’ll find yourselves in.
If you are one of a select few who are planning on spending many years diving deep into researching human psychology, it probably makes sense to try to convince people from the Leverage ecosystem to mentor you: providing some guidance, warnings, suggestions, and stories of success and failure.
If you are going to try to build a complex world-saving organization, especially one without a clear hierarchy and a clear (and narrow) strategic plan that your teammates signed up for on day one, or a project that is dependent on people needing to undergo extensive self-improvement, it probably also makes sense to try to convince people from the Leverage ecosystem to mentor you.
But everyone else (even those who might be in one of those two categories but perhaps in a more followership position): Brené Brown can help you to maintain healthy boundaries, build skill in stopping yourself from doing things you don’t endorse, give you strategies for talking through difficult things with your partners and colleagues and friends, and recognize when you’re getting in your own way. I highly recommend exploring her work and the work of people she draws on.
Intermission - end of section II
Maybe good to get up again for a snack/water/stretch break?
That last chunk was probably heavier than the first, so here are some relatively chill cat videos:
If you don’t already know him, Maru is maybe the most famous cat on the Internet and he has a passion for boxes.
(you’ll also find bonus content like Maru trying to fit into boxes that are too small and Maru trying to deal with a box that is too big)
Or maybe that’s the wrong vibe.
Why this is hard to talk about
I don’t want to fight over narratives with my friends in public
Covered in earlier sections – I care about a lot of the people involved, and these are really complex and nuanced topics.
Novel organizational structures and the trap of dismissing them as “cults”
Or: the problem of using an existing lens from society to understand what happened—what was significant, what was intentional, what was meaningless, what must’ve been imaginary or propaganda, etc.
While I think that Zoe’s approach to sharing her story likely caused more damage than she had taken into account, I’m glad that some people seem genuinely interested in understanding more about what happened. And I can see that she probably wasn’t in a position to carefully consider the impacts beyond her concern about blocking Geoff (though in reading her post, I wasn’t able to understand what caused her to place the blame for her bad experiences and subsequent trauma-response at his feet in particular -- if you read through it carefully, it doesn’t seem to actually make the case for Geoff being the cause of all of the problems, so maybe I’m missing an inferential leap that’s coming from a different lens that she’s using?).
I hope that talking about her experience and her perceptions of injustices makes it easier for her to move past this really awful time. I admire how hard she tried to make things work and I am simultaneously sorry that she pushed herself too hard and was pushed too hard by others (whatever the exact causal structure). I am genuinely glad that she seems to have broken out of the loop that was causing her to believe that she needed fixing. If I could push a button and have people in my life blanketly switch from believing that there’s something wrong/broken/weak about themselves that’s causing their suffering and instead believe that someone or some situation caused them suffering unjustly, I think I’d push that button. It’s an easier hole to dig out of and a better jumping off point for being able to have a conversation about what happened and what could be/should be done differently.
I do think it’s regrettable that she chose to label the project as a cult, both because I think that’s obviously and demonstrably false (assuming she meant to refer to “a group that tends to manipulate, exploit, and control its members,” as the anti-cult movement defined it in the 1970s a la wikipedia* though there seem to be many different possible meanings, many of them referring to things that actually seem worth defending), and because that word has a strong negative emotional charge (it’s basically used as a slur in mainstream society) and that is likely to:
- dehumanize the real-life people involved
- inject a bunch of unstated assumptions into the discourse that are very difficult to articulate/surface and resolve
- cloud the judgment of those who are part of this discussion
- cause future people who might bump into her post to (re)label everyone associated with Leverage or Paradigm without looking any deeper (who are then subject to items 1-3)
When people hear the word cult, they think of destructive cults like Heaven’s Gate, NXIVM, Jonestown, the Manson Family, or maybe less violent ones like hippie cults or communes, maybe Scientology, or even Mormonism – often marked by physical abuse, sexual abuse, financial exploitation, polygamy, etc. In case it isn't obvious by now, let’s be very clear: the Leverage ecosystem does not belong in this reference class.
Because the accusation is so vague, I’m not sure what would be useful to add.
To the best of my knowledge, there was no deception or chance for confusion in the recruitment process – new hires went through a lengthy process where we described the project and the nature of the research, explained that being part of psychological research is risky, confirmed that they would be comfortable not sharing confidential information, explained their compensation and benefits, etc. After accepting an offer, they participated in a 3-week orientation onsite along with a handful of others in their cohort, where they were introduced to various researchers and other subgroup leaders (as a chance to start considering what area they might be interested in) and got the opportunity to test out many of our training and psychology methods. This was followed by a 3-month trial period, where they were also provided housing (to ensure it was a good fit before potentially getting locked into a year lease on a new apartment – since almost all new hires were moving here from out of the area).
If there was any pressure to stay on the project past the trial, I think it would come from the people on the team who liked a new hire personally and thought they were promising. From an organizational standpoint, it was very costly to retain someone who wasn’t a good fit.
I don’t know that I’d say that all criticism of people in leadership positions was welcome, but I will definitely say it was rampant. And as far as I recall, Geoff was probably criticized more than anyone else. Many people believed that he was uniquely talented, but I don’t think that’s propaganda, I think that’s just true. And I think that most people understood that his talents were lopsided – he was incredibly sharp in some ways and incredibly obtuse in others, and certainly not imbued with some divine power.
* If you’re a casual reader, it’s fine to gloss over this bit.
If you are someone who has spent many hours already and anticipates spending still more time trying to understand the circumstances in the Leverage ecosystem as well as the dynamics that are playing out in the online discussions, I think you should probably take the time to read through the relevant wikipedia sections. There is a whole body of research about this and it’s probably worth familiarizing yourself with it:
People want to put things in boxes that they understand.
Labels and metaphors can be useful shorthand, but they can also cause you to lose a lot of nuance and ability to see and think clearly about a topic.
In some of the public discussion, people have labeled the Leverage ecosystem as a cult. (They usually say things like “I wouldn’t quite call it a cult” or “it’s problematic to use the word cult” or “maybe a better word than cult is [e.g. high demand group]” but the actual impact is still that people walk away feeling like it’s been called a cult, and I assume that most of them are using the pejorative connotation of the word, rather than the denotation.)
People don’t have a natural referent for “ambitious experimental psychology & sociology research community.” And people aren’t used to interacting with groups that are attempting really difficult things that are more consequential than profit or loss. If, instead of trying to pattern-match it to a cult, you try to pattern-match it to a PhD program, or a start-up, or an established corporate job, or an R&D lab, or a Buddhist monastery, different sets of things become more relevant or stand out more as out-of-place given the expectations you have for that environment. I would probably taboo the word “cult” to be able to observe & discuss without a distorting lens that ascribes particular intent.
A couple examples from Zoe’s post:
- If the reason for so many hours of work was intended to affect people’s ability to think clearly, why was leadership subjecting themselves to even more hours of work?
- Doesn’t the distinction between defunding and firing/exclusion imply that the designers of the system cared about whether people felt included and worthy of collaboration, even if they weren’t able to contribute enough to the research project to justify funding?
Rephrased or reframed, some things sound like pretty standard operating procedure:
There were many useful things that needed doing; if you weren’t willing or able to do any of those things, you would no longer receive funding. If you joined a subgroup focused on psychology and your teammates needed debugging/training help and you needed practice debugging/training, it makes sense to me that you’d be asked to do that. And if you didn’t feel comfortable doing that, it seems good that you’d be given the opportunity to try to resolve the issue with your group leader or go find some other way to contribute.
If it looked like you weren’t on track to contribute and your funding status was in jeopardy, it seems good that someone would give you a heads up that it was probably not a great time to take a lengthy vacation.
You could go take courses elsewhere but the project would reimburse half your costs only in the case that you followed through on bringing that new knowledge back with you.
Humans seem to put a lot of weight on something like “what they’ve seen before” or "what's normal" and it becomes very difficult to look clearly at things that haven’t yet been sanctioned by society. Some examples are maybe tired, but true: is drinking alcohol in fact totally fine, whereas smoking marijuana is bad and should be punished with incarceration? In broader society, we have trouble even agreeing that they’re both drugs.
Capitalism itself looks terrible if you take it out of context (and maybe even if you don’t): essentially indentured servitude until you’re old and then maybe 10 years to enjoy your freedom before you die (and that’s not even factoring in the actual sweatshop conditions that exist in other countries to provide objects for casual consumption in the developed world). But then with the lens of capitalism, maybe we’re fine with CEOs working 100+ hours a week and not taking a vacation for years at a time, but if the person doesn’t have a C-level position, even in a seed-stage startup, we assume if they work hard, they’re being exploited.
I’ll take a moment to expand on this friction with people trying to judge the project from the outside:
People take cues from society in a really strong way, often not recognizing this in themselves. If you just took a bunch of facts about the jobs and roles and challenges people take on and laid them out without context, I think people would strongly object to those supposed working conditions, but once those facts are slotted into the socially sanctioned puzzle, people accept them without question:
Astronauts abandon their families and friends for months if not years at a time; Olympic hopefuls train and push their bodies to the limit as if it’s a full-time job (often before reaching adulthood); monks must give up their worldly possessions, shave their heads or otherwise limit their self-expression, and rise hours before dawn; Catholic priests take vows of celibacy; soldiers take jobs where they are trained and paid to kill other humans; PhD students work long hours, often on other people’s research, for years without a promise of a degree or a job; women shave off their body hair, don shoes that bend their feet at an unnatural angle, and paint their faces (often in order to meet professional standards in their workplace); hundreds of people band together to create movies that portray gruesome acts in literally horrifying detail -- and these are relatively high-status roles that people aspire to.
Apart from the women who are pressured into modifying their bodies and low-level soldiers who are conscripted due to a lack of other opportunities, all of these examples are people intentionally and explicitly choosing these paths for themselves, knowing the tradeoffs.
So when we see people working on ambitious projects, enduring personal suffering and conflict, working long hours, missing family gatherings or delaying starting a family of their own -- I think it’s important to counter the natural inclination one might have to dismiss their drive as foolhardy or to assume that they’ve been duped.
There are things in the world worth making sacrifices for, even if those things have a good chance of failure or have not yet received the societal stamp of approval. And in fact, I believe that some of the causes that are most worthy of working hard for will be ones that haven’t yet grown large enough or successful enough to reach the mainstream and/or aren’t yet obviously going to reward you with status/resources/knowledge etc., especially when examined from the outside.
Weird experiments and terminology result in sensational claims and rumors
Crystals? Demons? Seances?
Note about my own beliefs:
I believe things like:
- Living with a cheating partner will erode something in you even if they show no discernible surface-level signs of their betrayal.
- Even if you do and say all the right things, if you in-fact resent your kids, they’ll pick up on that and develop differently because of it.
I don’t know that you have to be on board with particularly “woo” things to agree with me, but if you do agree with the above claims, you might want to consider the possibility that people are communicating significantly more to one another than just with their explicit words. And I think it’s also worth considering that it’s going both ways -- that information is both being sent *and* received by people at a pretty high rate. And probably much higher for people who work or live together than e.g. the people who interact briefly at the grocery store. And probably much higher for people whose plans are very dependent on one another such that they pay close attention to each other. If my partner holds their phone in a slightly different way, I might suspect that they’re trying to hide something from me, or if I’m on a work call and someone pauses for just a fraction longer than I was anticipating, I might get nervous that there’s a disagreement that’s not being voiced, and that might snowball as the other party picks up on my stress.
Psychology is a wild thing and as far as I can tell, it is as vast as it is deep.
From conversion disorders (formerly hysterical blindness) to chronic depression to falling in love. From otherwise loving and accepting couples fighting about leaving their clothes on the bedroom floor to some people building companies that launch humans into space while other people contentedly play video games and drink beer with their buddies. From devout Christian Trump supporters to subconscious racism in the progressive Left. From people upset about anti-aging research to people incapable of accepting or addressing global warming. From people being able to enjoy a vacation in an “exotic” location rife with poverty and oppression to young girls firmly believing that they’re going to marry Justin Bieber or screaming women mobbing the Beatles. From multi-millionaires deciding to scam people to get even more money to people gambling away their life savings, to suicide bombers and genocide. From people refusing to wear a seatbelt to people sending death threats to strangers on the Internet to people being up-in-arms about euthanizing a handful of dogs as part of medical research but then turning around and contentedly eating bacon for breakfast.
And what about the powerful emotional impact of music? It’s become almost impossible to imagine a movie or show without a soundtrack. And why is dancing such a thing? And what’s happening with laughter, and why do we find things funny?
And those are all basically things affecting individuals. Add this complexity to small or large groups, ideological or otherwise, and then take it another level up and try to understand societies and the institutions that hold them together.
And if you’re trying to actually figure it out, both in vastness and in depth, it seems quite likely that you’re going to discover very strange things along the way. And it also seems quite likely that it will be hard to keep the general public synced up with your research before you’ve developed a concise theory that explains what you’ve found and also fits in nicely with what society is ready to accept.
I get the impression that a lot of the more experimental therapy that currently exists has practitioners who don’t know *why* or *how* something works, just that it (sometimes) does. For the purpose of helping people process things in their lives, that seems generally good/useful, but that’s not really the level of holistic mechanistic understanding that I think we’ll need to get to in order to reliably cause the type and magnitude of effects that will be needed to help people not only in addressing suffering and trauma, but also in addressing blocks to effectiveness and coordination.
Zoe casually mentioned a number of therapies/theories that have made it to the mainstream (perhaps with a background implication that their development didn’t face any of the challenges that our research group faced--or maybe she wasn’t considering their development and was just thinking of easier ways to learn those practices or benefit from them as they currently are without the need to derive them using a scientific process). One thing that I feel pretty sure about is that if I tried to explain a number of accepted and seemingly useful techniques and paradigms to my grandparents (who’ve all been dead at least 20 years and so didn’t get the chance to slowly acclimate to some of these ideas), they would think I was pulling their leg.
“You want me to believe(?) that I have a bunch of different people living inside my head??” (Internal Family Systems: IFS)
“You’re going to solve my trauma from the war by having me do something with my eyeballs while I think about what happened??” (Eye Movement Desensitization and Reprocessing: EMDR)
“You think my mental state has something to do with storing my anger in my left thigh??” (Somatic therapy)
Outside of the puzzle of societal narratives and acceptance by the sphere of professional therapists, all the above things seem completely outlandish.
The things that we were researching were similarly outlandish, but without being tempered by social sanction and without time to develop comprehensive explanatory theories or even time for our internal group to become accustomed to the new interpretations of psychological phenomena that were being developed, much less to present them to people outside our research bubble.
Explaining the phenomena our researchers were trying to figure out was too complex/too unbelievable/too inciting.
That's a big part of why we didn't really discuss it externally. That's why it was amazing that we had a safe internal space to discuss things, where there was a lot of room for interpretation/ideas/internal trust. As Zoe noted, in some ways that trust and safe space did erode and it made things harder. But in some ways it was maintained, and we kept things (mostly) internal.
Zoe's public post totally erodes that space though. Terminology, concepts, ideas, that were not formatted, not carefully explained, not carefully contextualized, etc. for a public audience, have now been put into a public space, in a pretty non-careful way.
I respect that Zoe felt she needed to do this and thought it would be healthier for at least her, but the process of doing so led to very quick and intense mischaracterizations of what was happening (I would not blame someone reading Zoe's post with no context for thinking that things/the people in the Leverage ecosystem were completely batshit).
So, if you encountered her post initially, you have been haphazardly let into a really really complex (and new/budding/incomplete) internal research space. With that in mind, here are a few things about topics Zoe mentioned:
While I never heard of anyone in the Leverage ecosystem experimenting with séances (or doing something that looked like a séance from the outside), I wouldn’t be that surprised if it had happened and I wouldn’t be surprised to learn that that might have an interesting effect. (I *would* be quite surprised if their intention was to communicate with the ghost of a dead person, though I guess if IFS has us act as if we have different versions of ourselves inside us and if somatic therapy assumes that we store psychological content in our bodies, then it doesn’t seem *that* wild to entertain the idea that we might’ve unintentionally stored some mental model of our deceased mother that would be valuable to try to engage with. :shrug: And I guess I’d have a similar response about “demons” -- seems really unlikely that people would’ve been referring to little horned creatures, but not that difficult to imagine finding complex negative content in people).
I’ve never studied rituals, but I can see that they’re a big part of the history of humanity. The Rationalists themselves have tried to weave ritual into their community through the Rationalist Solstice with candles and group singing and we see it in communities of faith in things like Passover Seder, as well as in non-religious weddings and funerals in cultures all around the world.
I don’t know the mechanism involved in the power of ritual, but it seems silly to block off that whole area of inquiry because it’s weird or reputationally tied to groups of people who aren’t known for bringing critical thought to bear. I’d probably tread carefully as I would in any area where I don’t understand the mechanisms, but if we’re after true understanding, it seems like we can’t just leave arbitrary stones unturned.
Also, if you find yourself still balking at the idea that people with good epistemics would/could seriously consider things that you consider too “woo” — maybe take a minute to think about what evidence you could receive that would shift your attitude. E.g. going to 10 energy workers (who share the same vocabulary/referents as best you or they can tell) and having at least 6 of them identify an issue with a particular “chakra”? Or going to an energy healer with chronic pain and experiencing pain relief through physical touch? Or perhaps without physical touch? Because I would bet that the people in the Leverage ecosystem encountered compelling evidence as they sought out or were introduced to masters of various fringe practices and tried to learn from them (while building more robust theories than perhaps those held by those practitioners as to the cause of the effects that they found).
I’m also pretty confident that multiple significant donors to EA and Rationality have themselves benefitted from the knowledge and tools that came out of our research (woo or otherwise) -- not just tools or frameworks for thinking about or changing their own beliefs, but useful insights for running happier, healthier, and more effective teams (we learned from both our successes and our failures). Though I suspect that they will either not be aware of this conversation, or if they are, I would guess that they would be reluctant to chime in given the strong negative framing from Zoe’s public post and the witch-hunt vibes from the cash bounty that was subsequently offered up for other stories of the Leverage ecosystem similar to hers.
And while we’re here contemplating all the diverse puzzles and difficulties that exist because human minds are so complicated, I encourage you to consider whether your plans for world improvement currently handle this entire dimension. To me it seems obviously crucial for executing and coordinating on plans for world improvement and also for accounting for human psychology in the creation of the plans themselves. Maybe this is why people gravitate to AGI solutions. Maybe it could end up being a problem you can basically solve by just getting an infinite number of forks and plates instead of changing people’s attitudes or behavior intrinsically…but given the amount of imagined scarcity that I’ve seen in my life, I wouldn’t put all my eggs in that basket.
Oh, and crystals...yeah, I don’t know. I never saw or heard of any subgroups doing anything with crystals, though we did have a professional practitioner use one of our office spaces for a few of her clients for a while and she set up little arrangements of crystals and sea shells and things as part of a cleansing ritual for her practice. The people I interacted with raised an eyebrow, but I never met her or heard whether she had a productive collaboration with the group who had set up the arrangement with her, so I don’t know whether any useful theories about the use of physical objects like that were gleaned from that relationship. At the all-hands meeting where Geoff announced that he would be winding down the project, the group who had brought that practitioner in gave Geoff a crystal orb as a parting gift representing clarity of thought or something, but it seemed like a symbolic gesture and definitely wasn’t presented with instructions.
I should note that this was not my area of research and I hope that I am not misrepresenting things in a way that will make it more difficult for the researchers to be able to share their models. I think if someone who was involved says something happened within the boundaries of our research collaboration, it probably did. But let’s not lose touch with the fact that our most commonly used tools were laptops, Slack, box-and-arrow diagrams, Google/Confluence docs, and whiteboards.
We lost our friends and our lives fell apart
Covered in an earlier section above, but you can see how this might make it painful to discuss all of this on the Internet.
We weren’t (and still aren’t?) sure if it’s good for society to share some of our discoveries
Covered above in the Secrecy section. And it’s still unclear for some of the content if it’s a good idea to discuss it publicly in full detail.
It sucks to deal with people’s misunderstandings
As an example, despite trying quite hard, it really seemed like there was nothing we could put on the Leverage website that would satisfy people.
Obviously, we didn’t have a PR person (and I’ll be the first to admit that I have essentially no understanding of how PR is supposed to work) — we just had our motley crew, trying our best to handle all the things that needed doing. Each time we wanted to change the website, I would have to book time with Geoff and maybe another couple people over some number of weeks (to come up with a plan, to produce content, etc.), all of whom were really busy trying to put out fires internally and make progress in our understanding and execution in a bunch of key areas.
One hypothesis I’ve had is just that people don’t think mechanistically/concretely about how a lot of things happen; they assume that we could just push a button and they would perfectly understand, so they’re frustrated/suspicious about why we’re not pushing that button.
Like this post, trying to represent what we were doing takes a lot of time and consideration — but you can’t expect someone visiting your website to be willing to read through 3 hours of background context.
- If we shared explicit examples of things, people complained that we were too weird or that we didn’t have enough justification.
- When we removed the things that offended people (and caused more conflict and vitriol than we knew how to handle), they were upset that we were trying to hide something.
- When we completely redesigned the website to represent what we were doing on a higher level, people were upset that there wasn’t more substance.
- When we redesigned again and shared more content, people were upset that there wasn’t enough of it.
- And apparently when they redesigned after the dissolution to emphasize/reflect the complete change in the composition and focus of Leverage 2.0 (specifically the research nonprofit), some people again accused them publicly of trying to hide something.
I haven’t looked at their website since I left, but it sounds like there’s been a fair amount of sharing there, and similarly in other avenues on social media, but the vibe I’ve gotten is that people are still upset.
During my time there, I found it exceedingly frustrating trying not to freak people out via even the smallest change to our website, while also having an overwhelming number of internal things that felt much more important than putting endless effort into trying to correct people’s misconceptions.
What I would’ve given for someone to show up with an open mind, genuinely curious to learn more!
Note: if it’s not the case that people were upset by these things, or if it turns out that like 5 very loud people were upset by these things, and everyone else was chill… I don’t know. I guess that would be information I’d be really interested in, both in understanding the dynamics in the conflict between these communities and in updating my models for future scenarios. And I guess if I did learn that, I would also then try to push harder to cause chill people to make their voices heard, because I expect that these sorts of things cause a *ton* of stress, and it would be pretty great if we could reduce that just by coming up with ways of having visible feedback more accurately represent how the majority of people are actually responding to things.
One reason that I haven’t talked to people much about what happened is that I think people on the outside, especially those who care about me and who see that I was hurt, will try to ascribe their own meaning and labels in order to make sense of things for themselves.
That’s perhaps a relatively innocuous problem, but it makes me feel more disconnected; I don’t want to be the cause of greater misunderstanding in the world (or in myself) and I also don’t want to spend my energy correcting their misconceptions, especially because that often puts me in a position of defending the actions of people who hurt me e.g. by taking the time to share more context about the constraints those people were under and the beliefs that they held that caused them to act as they did.
So while people in my life have found it very difficult to understand what went wrong (and rightly so), they can agree that the environment was quite bad for me and we can all share in the relief of that being in the past now.
The basic explanation that people seem to be able to accept is something like: I invested a lot of time and energy and hope in a number of individuals and in building up infrastructure to support them, and the particular way that it fell apart/was dismantled was really devastating.
But I grant that it would be a lot easier if society had an understanding that I could refer to. If my marriage had ended in infidelity and a messy divorce or if I’d been a soldier at a POW camp or if I had lost a child to cancer, people still wouldn’t really understand, but we would have a shared reference point when they saw signs of my trauma or my grief.
Most of the people I know and probably most people reading this post have never worked in an environment like the one we created, so there is an incredible amount to explain for it to start making any sense at all (she says, 45,000 words in). I’m really uncertain whether it’s even possible to reach a point where I could give casual descriptions of things and cause people to gain more true understanding rather than the misinterpretation that my statements typically generate.
It’s pretty scary to try to describe something that’s important to you to people who matter to you (or publicly on the Internet) and anticipate that they will have a good chance of coming away deeply misunderstanding while thinking they have figured it/you out.
Illegible or unknown causes of trauma
To be honest, even if I never had to interface with anyone else, it would still be easier for me if I had a point of reference with which to interpret the trauma that I carry around. In reading Zoe’s account, I see her suffering and I see her grasping for an explanation. It’s really not obvious that anything that happened would cause such a strong latent response. Was it overwork? Was it adopting the tenets of self-improvement in a way that cut out self-acceptance? Was it having a leadership figure who she looked up to and who she believed understood her deeply then turning around and doubting her value and goodness? Some side-effect from being around people who themselves might’ve been delusional or who had a strongly adversarial view of the world? A slow erosion of her ability to trust herself?
It’s really interesting to me that the intense badness only surfaced after the dissolution of the Leverage ecosystem (for her and others she mentions). Was it losing a life plan where all her goals were set to be achieved in one fell swoop? Was it losing faith that (even without her contributions), the group would continue to exist and would have a chance at dramatically improving the world? Did it have to do with needing to fit into the normal societal hierarchy again? Or was it that the negative effects from these bad training interactions were being held at bay through the belief that debugging or trainer support would be able to resolve all her issues, but then that path evaporated?
For me things were quite bad for a couple years before I left; I can point to times when things were particularly toxic or even name specific traumatic incidents, but here we are, years later, and the symptoms persist -- intellectually I know I’m not in any physical danger (and I never was), but my body and my subconscious are much less certain of that.
Speaking from first-hand experience: it’s uncomfortable not knowing why you feel so bad/unsafe and not having a plan for how to fix it.
We disagreed about a lot of stuff and probably still do
I feel like I’ve gestured at this a couple different ways already, but I can feel the contradiction that might make it hard to accept. In some ways we were a loose coalition of people with very different plans and goals and worldviews. But we were simultaneously bound together in believing that this would be the group of people we would continue to coordinate with for many many years. We cared about what people thought of us; we wanted to be respected, we wanted people to cooperate with us, and we wanted our band of misfits to succeed. It’s hard for me to imagine any group lasting very long with so many strong-willed people trying to exert their influence: trying to poach talent, trying to level-up, trying to gain followers, trying to have their pet-projects or issues shift the focus of the group, trying to gain status and funding, trying to discredit others, some working within the system (such as it was), some unable to interface with structure or confrontation and instead going behind people’s backs -- but all I think pretty genuinely believing that their actions were important and justified, which they maybe were.
We had deep disagreements about culture and timelines and funding and recruitment and the odds that any particular avenue would bear fruit. Many people thought we needed to make changes, but no one agreed on what changes to make, so people had to push their agendas separately, often in direct opposition, e.g. someone believing that what we needed was for people to take bold action, and someone else believing that we needed people to be more thoughtful and considerate and collaborative. I think this was a relevant factor in the way that conflict ended up boiling over, and while I’m sure the passage of time and circumstance has mellowed some of that, I think there’s good reason to think that those disagreements still persist.
I, for one, am more concerned about drawing the ire of non-Geoff people from the Leverage ecosystem, though I am considerably less concerned about them than I am about people from the neighboring communities that are explicitly or implicitly on the attack. I think the way that groups tend to pile on when they smell blood in the water is much more threatening than a former teammate having a beef with me.
With the claims of other people being afraid of retaliation from Geoff, I have a clarifying question: in Zoe’s most extreme example of someone’s fear, is the idea that she endorses that level of physical threat based on her experience of working with him? (which seems quite unlikely to me). Or is she saying that people who are normally sane can sometimes become delusional/untethered to reality on the topic of Geoff? (which seems more likely given my overall experience witnessing various people’s & group’s responses to him, but which seems to add more questions rather than answer them). If the claim is in fact only that that’s not a normal response and that we didn’t have a “normal work environment,” then that seems indisputable.
I don’t think it’s particularly unreasonable that someone might be worried about Geoff’s reaction to them smearing the project publicly; though given the people who have tried to hurt him and the project and watching his responses over the years, I would bet heavily against the world where he would redirect his energy in order to attack a former member of the team out of something like spite. That said, having Geoff pissed at you isn’t a thing I imagine anyone aspiring to.
As far as reactions go, I myself went through a lot of different states when I found out about Zoe’s post and her decision to frame our years of grueling work and sacrifice as a nonsensical delusion-fest orchestrated by Geoff (I don’t know if I endorse this summary, but that’s the current state of processing that I’m in -- earlier states included sadness for what she went through, anger at the way that some of the psychology researchers decided to hold people accountable for mental content they didn’t have access to, sadness (and disappointment) that she was only able to find the kind of support that she needed by diving into the former-cult-member community, and also worry at the thought that the diaspora might’ve had overly negative interpretations of their social value if they didn’t know to factor in things like the trauma that Zoe and I experienced, which I would assume had a cascading effect via limiting our ability to support one another as much as people might’ve expected given our earlier relationships).
But I guess I should add that if any person’s intention is to attack Geoff (and the rest of us), and they are actually having a negative impact, then I would expect him to try to defend us and fight back. It doesn’t really seem like he should be faulted for that, or that people would need to factor that in if they were just sharing their stories without it being part of some larger malicious agenda.
Personal hurt and healing
One uncomfortable piece for me in particular is that I have been very careful to create distance between myself and most people who were part of the Leverage ecosystem, as well as the broader communities, and the primary tool that I have for that is limiting what they know about me and my life, in order to have their judgements carry as little weight as possible in my own assessment of whether I’m a good person, worthy of care, respect, and belonging.
It could be that this won’t make sense to people who haven’t experienced intense trauma like this, but I’d prefer if they didn’t know where I am in the world (or where I’ve been), I’d prefer if they didn’t know what effects they’ve had on me, I’d prefer that they not know who I have contact with, or how I spend my time, or even what I look like now. For the people I was close with, I don’t want to give them the chance to be casual or dismissive in a way that denies or sullies the importance of our shared history, and for the rest, ideally I just wouldn’t cross their minds until the point where they can relate to me without internal dissonance.
So, in sharing my experiences and thoughts in this post, I’m exposing myself to a lot of potential judgment from communities I’m not even a part of, as well as from individuals who were involved in the Leverage ecosystem, some of whom surely have different perspectives than me, and none of whom I’ve had a chance to resolve interpersonal conflicts with yet. I’m doing it because it still seems net-positive (though I have questioned that many times in the process of compiling this post).
Conflict with some EAs and Rationalists and the role they’re playing
As I mentioned above, there has been ongoing conflict with some EAs and Rationalists, and that sets the stage for this section, which is directed at a particular subset of people from the nearby communities (as well as from Leverage 2.0, to some extent) who are trying to influence how this all goes in the public eye (if you’re a casual reader, feel free to skip to the next section):
When I first saw the bounty for negative information about Leverage (which has been subsequently edited), I was struck by how far it seemed from truth-seeking. This isn’t a situation where a large unidentifiable swath of the population may have been affected by asbestos and you need to erect billboards: “if you or someone you love has suffered from the effects of working at Leverage Research, call us today!”
We’re right here. You know us. You coordinate with us not-infrequently. We’re well-spoken and relatively clear-thinking. If the thing you’re after is truth, there are plenty of available ways of seeking it that don’t involve pushing people to dig up the worst of what happened and point fingers publicly for a chance at a cash reward.
But then I read Zoe’s post and saw that she emphasized that no one from Leverage 1.0 should be contacted. So factoring that in, my updated hypothesis is a little less damning, but I still think that if information is worth $100k to you, you should probably override one person’s blanket request involving dozens of people (most of whom she herself has no contact with) and then just go try to learn things. Or even if you’re unwilling to go seeking information, you should probably be much more explicit about what it is that you’re trying to ascertain and for what purpose.
You could even make a poll and send it out with the questions you’re trying to answer:
- Were people hurt? Yes.
- Was it due to an intentionally manipulative or exploitative structure? No.
- Was it due to the involvement of Geoff such that any future coordination with Geoff would result in similar harm? I don’t think so?
- Did the research yield a large volume of fascinating, relevant, and also surprising (and sometimes negative) results? Yes.
- Were important lessons learned from the way that the experiment failed? Yes.
- Did it fail gracefully? No.
- Is a similar structure likely to occur if Geoff receives funding? No. (You can fault Geoff for a lot of things, but he is very unlikely to run the same costly failed experiment twice.)
- Should people be allowed to run large-scale psychology research projects with willing human subjects? Probably? This one is harder to answer because I think that Geoff was actually better positioned than most to recognize important safety considerations and it still dealt a lot of damage. But I think we aren’t yet in a position to judge how much value came out of the whole endeavor, so it’s hard to make a call. In general, I think the world is in pretty tough shape and we should be allowed to take personal risks as we attempt to get to a position to be able to help things.
It makes sense to me that people at Leverage 2.0 feel like they need to respond to some of the concerns that have been raised as well as correct some of the misinformation and misrepresentations that have been posted about the Leverage ecosystem. (And from what little I’ve seen, I appreciate the way that they have defended our efforts in some of these online forums.)
I’m also not surprised that some individuals or groups would be interested in gathering and perhaps publishing more information.
But I really don’t think it makes sense for other people to tell our stories for us.
It took me 7.5 years of living this experience, 2.5 years of processing it, and 6 weeks to draft and edit the post that you’re reading, which still only tells a fraction of the story.
(It tells an even smaller fraction of my own story, because we’re probably at least another year or two out from the natural timeline of when this would’ve been ok for me. It seems like people are undervaluing giving us space to talk about all of this when we’re actually ready.)
While I do think that some of the people who want to share our experiences mean well, I think it’s pretty misguided.
If there are particular time-sensitive things you need to know for some particular purpose, it seems fine to try to gather that information. And if a community representative needs to then post something like “I don’t see evidence of malicious intent or high-likelihood of harm coming from the current and planned activities of Leverage 2.0 or Paradigm” that also seems fine.
But otherwise I think the respectful and reasonable thing to do is to let us tell our own stories at our own pace in whatever way we choose — I think I’ve laid out sufficient explanations for why we have not been more forthcoming already.
(As a datapoint: it added quite a bit of additional stress to writing this, knowing that at least one outside party had committed to publishing their summary report in short order, without my having any way to speed up my own process or to get that person/organization to wait for me.)
And if for some reason you want to have our stories come out sooner, then I would suggest being very explicit about why that is and simultaneously trying to legibly improve the environments where you’d like those stories told.
Re: risks of sharing information
I don’t remember seeing or signing the information sharing document that was created and distributed after the dissolution of the Leverage ecosystem, though it was described to me as helping to make explicit the norms that would be reasonable to abide by, given what a tumultuous and sad and uncertain time we were in. It makes sense to me that we would make an effort to not engage in mud-slinging or use intellectual property developed by other people without attribution. I don’t believe that I have personally been less willing to speak publicly about my experiences during the project because of it (though maybe I would’ve felt different if I had signed it? I find myself continuing to be uninterested in naming and blaming, so perhaps not).
In the beginning -- back in 2011 and 2012 -- I was very excited and open about Leverage and the people there and the things they (and later, we) were trying to do. I posted photo albums on fb and I explained AI safety risk to strangers. By the end of 2012, I was becoming more hesitant because of the blurred lines between people’s private lives and their relation to the project -- when you live and work together and there are no set working hours, how can you tell if you’re sharing a work thing or a life thing? If you’re friends, maybe it doesn’t matter, but how can you tell whether you’re just friendly but not actually friends? If someone joins you to pick up ice cream down the block, is it just a chance to get some fresh air, or did they not want to pause the conversation about gamification, or do they genuinely want to go have a little adventure with you? What photos are fine to share and what photos are bad for any reason: someone doesn’t ever like pictures of themselves, someone is dating someone new but doesn’t want their recent ex to feel bad; if you essentially work all the time but only post pictures of what happens in the breaks, are you sending the wrong message about how dedicated people are?
Then in 2013 and 2014 there were a few incidents where people spread misrepresentations about us publicly in ways that felt really awful -- some were malicious and some were maybe just thoughtless, but the way that the EA and Rationalist communities responded to one attack on fb (piling on at first and condemning us before eventually recanting and expressing remorse once witnesses came forward to clear things up) and the way that my friends in the Bay Area responded to what was likely careless hyperbole spread by someone’s then-girlfriend (essentially feeling really uncomfortable and wanting to distance themselves), caused me to update to believing that unless someone had a strong foundation of really liking us, or unless they were able to approach us or our ideas independently (without the views of their in-group clouding their judgment), the odds were pretty high that someone seeing a post would find something about us to take issue with.* And then further, that once someone did take issue, there would be plenty of people ready with pitchforks.
(e.g. I would be (pleasantly) surprised to find that the comments referencing Zoe’s post are careful to phrase things like “If the things that Zoe claims are true, it seems that some troubling dynamics developed during the year or so immediately prior to Leverage 1.0 being dissolved, and if possible, it would be good to figure out whether they are true and whether there was malicious intent” because given my experience of this sort of thing (including the fb post where I learned about the bounty), I would expect to find many more posts saying things like “Well, now that we have confirmation that Leverage is/was quite bad, _______”)
* This is maybe a good moment to point something out about how these public posts can work, if you’re not careful — near the beginning of this post, I told you that in the early days when we didn’t have enough room, visitors (sometimes people who wanted to join the team) would occasionally end up sleeping on the floor in some common space (in Brooklyn, at least once in the kitchen and perhaps most memorably on the stairwell landing). At the time, no one batted an eye (well, except for me -- I was somewhat horrified at our lack of proper accommodations/hospitality). People hung out talking until late, grabbed a pillow and some blankets, and found a place to crash. But now that I’ve stated this, months or years from now, a person could make a post claiming that at Leverage 1.0, prospective hires were forced to sleep in public spaces, insinuating (though not explicitly claiming) that it was some sort of strange hazing tactic. And if someone from the Leverage ecosystem piped up to say that they didn’t experience that themselves, and that they didn’t see that happen to anyone else, someone might ask them if they were 100% sure that it never happened. And now there’s a strange dynamic where if they concede that maybe it did happen (which is true), readers might then update, not to the truth which is that sometimes the house was overflowing with visitors and people who wanted to stay the night had to improvise, but instead, because of the frame of the claim (and thus the frame of any rebuttal) readers might update to believing that at least at some point in the past, Leverage 1.0 engaged in shady hazing practices — which is completely untrue.
With my example above about astronauts, olympic hopefuls, soldiers, horror movie film crews, etc., if you strip away the context, you’re left believing that the “facts” are describing a negligent parent and partner, a delusional masochist, a murderer, and a group of what you would probably think of as twisted psychopaths. You *need* the context, especially for unusual circumstances, or you and your audience will be quickly led astray.
(It actually seems likely to me that the Rationality community would have a name for this kind of thing, and if there isn’t one, then there should be – essentially: saying true (or almost true) things with framing or lack of context that causes people to believe false things. I think this is a major tactic that people have used to attack us, and I’m hoping that I will have provided enough context here that it will be noticed/called out more frequently. I also saw people within Leverage, including Geoff, use this tactic.)
This meant that I didn’t think that I could casually share things without risking further attacks or costly misunderstandings (we had no spare time to allocate to arguing with people on the Internet and when we did try, it didn’t seem to have much of an effect), so I switched to only talking about what we were doing one-on-one at events that we hosted or at other events around the Bay Area, social or otherwise, and I’d try to catch my family and friends up when I’d see them at the holidays.
But the pace of discovery and development and changes in the structure and composition of the team was too fast to allow for people to actually keep up unless they were in the thick of it with us; I started to dread holiday visits because I couldn’t get myself to give up on having the important people in my life understand what I was doing, but trying to explain all the necessary pieces over some pumpkin pie and spiked nog was a fool’s errand. It was a little easier talking to people at events -- people typically were there because of a shared interest in improving the world and/or in better understanding human behavior and effectiveness, so we had common ground. And it was better than any kind of public forum because despite having much less reach, I could handle people’s concerns in the moment and I could change topics or excuse myself if it felt like someone was just looking for conflict or if they had already made their minds up about us and our work.
And I think I was not alone in this general belief that some fraction of people would respond badly no matter what we shared. In addition to working long hours, we also did all sorts of fun and interesting things that would normally be plastered all over social media: a dozen of us went to the World Economic Forum in Davos, we had holiday parties with competitive games like “pin the snowman parts on the snowman,” a few times a year we would cook an obscene amount of food and put on a fancy dinner party that lasted all night, we hosted a “Parents Weekend” where about 25 of our parents traveled to Oakland to learn more about our work and our goals and try out some of our training tools. We had dozens of birthday celebrations of different types – once someone brought in a tango instructor to give us all a class, and sometimes we’d eat a ton of someone’s favorite food and watch their favorite movie with them, or share the homemade birthday cake that someone’s mom would mail every year, and once we put together a slideshow of pictures from someone’s whole life and we all hung out while they did an AMA; I think there was even a piñata at some point. We had a team bonding excursion to play paintball together, and there was a whole phase where if you looked out across the street on any given afternoon the odds were good that you’d see people practicing their swordwork. We had a large group that gathered on Sunday evenings to improve their voices by way of singing sea shanties (with coaching by one generous and patient soul). We put on a few really excellent concerts and one really fantastic series of short one-act plays (led by Zoe). At some point, for no reason at all, someone gifted all of us with a free-form movement class that was a total blast, and someone else took a bunch of us out dancing at a club in SF with their favorite DJ in what ended up being a truly unforgettable night.
There was even a gorgeous wedding and reception a few years ago, complete with performances from people in the group as well as lengthy and heartfelt toasts.
Tons of pictures were taken across these events and many others, but almost nothing was ever posted online and while I don’t remember people really talking about it, my guess is that it was largely due to the background belief that people would find some way of judging us poorly. None of this even has anything directly to do with any part of the project that I’ve ever heard critiqued; it’s more like there was enough evidence of us not being accepted by the outside communities (as a group or as individuals) that it was better to not try to share even the innocuous parts of our lives.
This also relates to the issues that arise with people trying ambitious things (that I talked about in the section on how the efforts of the Leverage ecosystem should be viewed) and I also think that part of it was an extension of the problem of there being too much to catch up on. If you haven’t been keeping your fb friends updated, suddenly sharing a picture of yourself in Switzerland may require more of an explanation than people have the time and energy for (especially if the subsequent discussion turns to talking explicitly about the existence of one’s ambitious goals).
I think you can probably understand my hesitation in sharing photos of these things here -- I have many of them that are easily accessible and I think it would probably help to paint a more complete picture, especially for those who only really know about us via posts on LW -- but I’m not confident that people could receive them in a neutral way, and I’m also quite worried about doing anything to publicly reveal the identities of anyone who was part of the group or who attended any of our events, because I don’t currently trust that they won’t be defamed directly or via association.
I guess maybe the right thing is to ask people to opt in. If you’re from the Leverage ecosystem or if you attended events with us and you’d be comfortable with me sharing photos (or video) here that include you, just shoot me an email with your preferences. If there’s enough willingness, I’ll put together something that I think will (re)humanize this group of people in a more visceral way.
Once someone has made strong negative claims, in a desert of information, there’s no quick and easy way to add a small amount of additional information to then resolve those claims. And if all you do is address the claims, you’re forced to speak entirely from their particular limited narrative. But if you just speak from your own, then it can seem like you’re being avoidant. It’s taken me the better part of a month to write this relatively choppy response, sneaking in an hour or two here or there, cutting into my sleep time in the evenings or my meeting prep time in the mornings (I think it’s reasonable to assume that the majority of people who were working hard in the Leverage ecosystem are currently working hard wherever they find themselves today), and I still haven’t addressed all the public claims or even told all the important parts of the story.
I think it can be easy to assume that maybe there’s been an admission by silence, given all the people who could speak up but haven’t. I promise you: the task of giving a measured public response in a circumstance like this is really really hard.
And the project was so diverse that I expect when people from the Leverage ecosystem see Zoe’s interpretation and description of her experience, they may recognize that they have a very different view of their own experience of the project (a bit like the blind men and the elephant) but not feel comfortable sharing it in a way that might be interpreted as pushback—especially if they care about her, which I believe includes everyone in the relevant category.
We want to move forward with our lives
(and not be canceled)
Maybe the people in the Rationalist community have determined that LW is the only relevant audience. That this is where trials should take place and where judgment should be passed. But for people who intend to interact with the broader world, even the decision to have this type of unmoderated conversation in public may ruin dozens of people’s ability to engage with that broader world. It was already the case that I decided to remove Leverage from my LinkedIn many years ago — before things started to get really tough. I didn’t do it because I was ashamed of all my years at Leverage, but because there were enough haters on the Internet that I didn’t want to risk my affiliation causing trouble for the startup team I had started working with, especially while they were just getting off the ground.
For those of you who are trying to curate a community, I would look hard at the ways that you might be giving ill-will a platform and allowing people to throw shade under the guise of keeping people informed or pursuing the truth.
You may still be uncertain about what happened in the Leverage ecosystem and who is to blame, but I don’t think this should be contentious: while I am far from perfect, I am generally a very high quality teammate and the world would be a worse place if I were blocked from contributing to other efforts to improve the state of the world. And this holds true for dozens of others from the Leverage ecosystem.
In today’s climate, hosting this information-poor conversation in a googleable forum might well compromise people’s chances at a life in public service, or their ability to secure funding, or may cause them to be more broadly “canceled.” It seems like Zoe might not have been in a position to think through the way that the framing of her post would damage everyone with ties to the project: employees who joined what was essentially a new traditionally professional project after the dissolution of the Leverage ecosystem, people who have worked tirelessly to build really impressive things in the intervening years, and even people she might consider victims (not sure how she would classify someone like me in her analysis) — but the people in the EA and Rationality communities (as well as the friends who are supporting her) should be savvy and distanced enough to recognize that.
In discussions of this post (the content of which I can’t predict or control), I’d ask that you just refer to me as Cathleen, to minimize the googleable footprint. And I would also ask that, as I’ve done here, you refrain from naming others whose identities are not already tied up in all this.
So, yeah, this whole thing was quite complex and challenging and there are lots of reasons that it’s neither appealing nor easy to talk about this stuff, especially on the Internet.
And it doesn’t seem to me that the people who are seeking information are doing a very good job of making it easier for us to talk about what happened.
I guess I hope that the background context I’ve shared will help outside parties to reorient and will make room for people from the Leverage ecosystem to share their experiences, both good and bad, maybe publicly, or maybe just privately, with people who have earned their trust.
I really don’t think that any of us should be ashamed of what we tried to do, or even ashamed of the ways in which we failed (though I do expect us to have learned valuable lessons). We are a bunch of imperfect humans who made a valiant and ultimately heartbreaking attempt at doing something really important and special, and I’m sorry that the world hasn’t been kinder to those of us who’ve had to pick ourselves up and dust ourselves off over the last couple of years.
It’s been really hard for me to write this, and I can’t imagine that it will be easy for others in the project, but for avoidance of doubt: it’s totally fine with me if you want to tell your story in a way that contradicts mine or if you just want to point out things that I missed or things that I framed differently than you would’ve (I’m also happy to make edits if there’s something I can fix) – as I’ve said, these are really just the things that felt useful to share when I found out about the online discussion that’s been happening, and it’s not meant to represent the whole picture.
I feel bad that this post spent so much time slogging through all the hardest parts – I would really love it if future posts could highlight more of the fun and excitement and discovery and camaraderie and optimism of our efforts, because that’s an important part of our story as well and I expect there’s a fair amount of implicit confusion about that built into the online discussion thus far.
It’s a slow painstaking process for me to put my thoughts into writing like this, so I’m probably not going to have the time or emotional stamina to engage with comments or reactions to what I’ve shared here (though if you’re a friend/consider yourself a friend, feel free to reach out), but I hope you found it useful.
Until next time, as Brené would say: “Stay awkward, brave, and kind.”
12/22/21: quoted a request from the section “We want to move forward with our lives” in the preface for greater visibility