Some thoughts on recent Effective Altruism funding announcements
[Linkposted from my blog, Understanding Social Change]
Epistemic status: Quite weak. I wrote this fairly quickly and I’m probably much less informed than those actively involved in the EA funding landscape.
It’s been an interesting couple of weeks for the EA movement, to say the least. The FTX Foundation, thanks primarily to Sam Bankman-Fried, has announced the launch of the Future Fund, whose giving will focus on improving the long-term future. The Future Fund says it wants to deploy $100 million this year alone, with the potential to increase up to $1 billion if the opportunities are good enough. On top of this, Open Philanthropy is now hiring for four different roles within their longtermist community building team, which gave $60 million to long-term community building in 2021 and is very likely looking to increase this number significantly, given the expansion of the team from four people to potentially eight.
The announcement by the Future Fund was particularly interesting, as they’re taking a more decentralised approach to grantmaking than most existing foundations. Specifically, they have a long list of projects they’re interested in hearing proposals about, with a competition to source more ideas, as well as a regranting challenge. I believe the regranting challenge is particularly needed, as it’ll build the grantmaking capacity of people within the EA movement, which will be crucial going forward when deploying even larger sums (I expand on this further down).
If that wasn’t enough, Open Philanthropy also announced they were hiring a program officer for community building around their Global Health and Wellbeing portfolio, which focuses on global health and poverty, farmed animal welfare, scientific research, and more. They state here they expect the program officer to allocate $10 million in funding in their first year and that “funding could grow significantly from there depending on the volume of good opportunities they find.”
So, what does all this mean for Effective Altruism as a movement, or for individuals who are trying to do the most good? A few things potentially:
1. The funding landscape for long-term vs near-term opportunities seems to have changed significantly, with approx. 5x more funding available for long-term community building relative to near-term community building.
2. We need to scale up our grantmaking capacity, as funding might increase by 8-10x by 2025, relative to 2019 levels.
3. The EA movement needs to consider how having the influence and resources to grow certain fields by up to 10x will influence the wider ecosystem of organisations. The movement has traditionally been based around marginal allocations of money, but more complex coordination dynamics might now appear.
And what’s been said many times, but is still true:
4. We need entrepreneurs, founders and other people who can kickstart projects
5. We need to massively scale up our ambition
-
1. The funding landscape for long-term vs near-term opportunities seems to have changed significantly
A seemingly impossible and never-ending challenge is how we allocate resources across a variety of different worldviews. This is a particularly hot topic when it comes to near-term vs long-term causes: how do we allocate resources to help people alive now vs people who might live 10,000 years from now? And how do animals fit into the picture?
To think about how the funding for near-term vs long-term cause areas has changed, we can use community building, or more ‘meta’ interventions, as an example. This assumes that the funding the EA community allocates to community building is broadly representative of the total funding allocated to near-term vs long-term causes. This might break down slightly, as it’s generally acknowledged that there are more tangible near-term opportunities to give to (e.g. GiveDirectly could probably still productively use $350 million, although the effectiveness of this is 5-10x lower than other GiveWell top charities). On the other hand, longtermist funders are constrained by the number of good opportunities and proposals they receive. Because of that, it seems reasonably obvious that the longtermist community would spend more on broad movement building, to build the pipeline of people capable of starting successful and impactful projects to improve the long-term future, whereas this is comparatively less of a priority for the near-term EA community.
If we just look at the recent Open Philanthropy updates on their community building spending, we can get a sense of how this looks for Open Philanthropy, which controls a significant portion of committed EA funding. One note is that since this post by Benjamin Todd, the major donor situation has changed slightly, with Dustin Moskovitz’s net worth dropping to $13 billion and SBF’s increasing to $24.5 billion. This means that the FTX Foundation might be the biggest EA funder going forward, so it’s probably more important to see what their priorities are. In both cases, the trend is towards growth in longtermist community building relative to near-term efforts:
The Open Phil program officer for near-term community building is expected to give $10 million in their first year, whilst the Open Phil longtermist community building team already allocates $60 million and is hiring an additional four people, so it could very realistically reach $100 million with the potential doubling in team size. An important caveat is that there is likely a good amount of overlap between the two portfolios, but the difference in funds is still apparent.
The Future Fund, the longtermist fund of the FTX Foundation, has committed to spending $100 million - $1 billion in the next year. Conversely, their arms focused on near-term efforts, FTX Community and FTX Climate, have made no similarly large promises (yet!), having committed $18 million and given $3.8 million to date. I do expect this to change at some point, but again, I think it’s fairly apparent that the long-term side of the foundation is being prioritised over the near-term opportunities, which is an indicator of overall priorities and funding to come.
| | Lower bound estimate (in $ millions per year in 2022) | Median estimate (in $ millions per year in 2022) | Upper bound estimate (in $ millions per year in 2022) |
|---|---|---|---|
| Open Phil longtermist community building | 100 | 120 | 140 |
| Open Phil near-term community building | 10 | 20 | 30 |
| Ratio of OP community building, longtermist / neartermist | 10 | 6 | 5 |
| FTX Future Fund | 100 | 400 | 1,000 |
| FTX Community Fund | 18 | 50 | 100 |
| Ratio of FTX, longtermist / neartermist | 6 | 8 | 10 |
In short, it’s tough to place an exact number on the ratio of allocated funding for near-term vs long-term causes, especially before the FTX Foundation announces its commitments to FTX Community and FTX Climate. At a rough best guess, this ratio seems like 1:7 (near-term : long-term) when considering community building for Open Philanthropy, assuming a doubling in size of the longtermist community building team causes a 66% increase in funding distributed. For FTX, it could similarly be around 1:8, or even up to 1:10, provided the longtermist opportunities are good enough. A caveat again is that the projects proposed by the Future Fund and those funded by Open Phil will likely overlap with near-term community building efforts (e.g. improving operations within EA, more competition in the EA ecosystem, increasing diversity within EA, etc.), so this isn’t so clear cut, and could be more like 1:5. Another thing to keep in mind is that the EA Infrastructure Fund (via Peter Wildeford) is also quite keen to fund neartermist community building, so the difference might be reduced further, depending on the size of projects.
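To make the arithmetic behind these rough ratios explicit, here’s a minimal sketch. All of the inputs are my own estimates from the text above, not official Open Phil or FTX figures:

```python
# Rough, illustrative arithmetic behind the ratios above (all inputs are my own estimates).
op_longtermist_cb = 60 * 1.66   # ~$100m: current $60m, assuming a team doubling adds ~66% in funds distributed
op_neartermist_cb = 15          # midpoint of the $10-20m estimate for the new near-term program officer
ftx_longtermist = 400           # median of the Future Fund's $100m-$1bn commitment
ftx_neartermist = 50            # my median guess for FTX Community / FTX Climate spending

print(round(op_longtermist_cb / op_neartermist_cb, 1))  # ~6.6, i.e. roughly 1:7
print(round(ftx_longtermist / ftx_neartermist, 1))      # 8.0, i.e. roughly 1:8
```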
Interestingly, Ben Todd finds that resources allocated to broad longtermism are around 9x smaller than what EA leaders think they should be. If my ratio of roughly 1:8 is broadly correct, this additional funding increase would bring funding for broad longtermism much more closely in line with the ideal portfolio from the EA Coordination Forum in 2020.
In addition, according to the same post by Ben above, neartermist grants were a much larger proportion of EA’s giving in 2019 relative to longtermist giving (71% to 29% if you include animals in the near-term bucket). Overall, this funding increase by the Future Fund brings little overall change in the ratio of funding for longtermist vs neartermist causes for 2022, as it is counterbalanced by GiveWell allocating an estimated $400 million in 2021, up from $172 million in 2019. This is somewhat surprising, as I always assumed longtermist grantmaking was already much greater than the neartermist portfolio, yet this post confirms otherwise. However, it’s quite likely that longtermist grantmaking will outpace the neartermist portfolio over the next 3-10 years, as the total funding committed to longtermism is seemingly much greater (definitely by the FTX Foundation, if not also Open Phil).
| Worldview | $ millions per year in 2019 | % of total in 2019 | $ millions per year in 2022 (estimated) | % of total in 2022 | Growth from 2019 to 2022 |
|---|---|---|---|---|---|
| Near-term | 230 | 57 | 640 | 60 | 2.8 |
| Long-term | 117.8 | 29 | 318 | 30 | 2.7 |
| Animal Inclusive | 55 | 14 | 110 | 10 | 2.0 |
| Total | 402.8 | 100.0 | 1,067.8 | 100 | 2.7 |
A note that my table above is quite simplistic, and I could be off by quite a lot, as I’m working solely with public information, which isn’t always perfect or up to date. I calculated the 2022 values using Ben Todd’s previous funding allocation post, adding $400 million (Open Phil + GiveWell) to near-term funding efforts, adding $200 million to long-term funding ($100m from the Future Fund + $100m from Open Phil, which is roughly doubling their 2019 figures), and increasing animal funding by 2x, all of which I think are somewhat conservative/reasonable. It’s quite likely that there have been changes in other donors within the EA community that might have altered these numbers as well. However, it’s also fair to assume that Open Phil, GiveWell and FTX are the biggest funding sources within the EA community, so it might not be too far off, provided I’ve estimated those amounts correctly.
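As a quick sanity check on the table, here’s a small sketch that recomputes the 2022 shares and growth multiples from the table’s own figures (the underlying estimates are still just my rough, public-information numbers):

```python
# Recompute the 2022 percentages and growth multiples from the table's own estimates.
funding = {                       # $ millions per year: (2019, 2022 estimate)
    "Near-term":        (230,   640),
    "Long-term":        (117.8, 318),
    "Animal Inclusive": (55,    110),
}

total_2019 = sum(v[0] for v in funding.values())
total_2022 = sum(v[1] for v in funding.values())

for worldview, (y2019, y2022) in funding.items():
    share = 100 * y2022 / total_2022
    growth = y2022 / y2019
    print(f"{worldview}: {share:.0f}% of the 2022 total, {growth:.1f}x growth since 2019")

print(f"Total: ${total_2019}m (2019) -> ${total_2022}m (2022), {total_2022 / total_2019:.1f}x growth")
```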
All in all, I think it’s safe to say that if you want to launch a longtermist community building project (e.g. like one of these), now is probably a pretty good time to go for it!
2. We need to scale up our grantmaking capacity
The amount of funding committed to Effective Altruism has grown dramatically in the past few years, with an estimated $46 billion currently earmarked for EA. With this significant increase in available funding, there is now a greatly increased need for talented and thoughtful grantmakers who can effectively deploy this money. It’s plausible that yearly EA grantmaking could increase by a factor of 5-10 over the coming decade, based on the FTX Foundation and Open Philanthropy scaling up their funding, which they’re planning on doing (I believe). Some quick numbers on this:
EA deployed $400 million in 2019
GiveWell is expected to deploy around $1 billion per year by 2025
FTX could deploy $1 billion as soon as this year, but more likely over the next few years
Open Phil gave $330 million in 2021, and this could reasonably increase to $1 billion per year by 2025 (they’re planning on giving $500 million to GiveWell’s recommendations alone in 2022 and 2023). A small caveat is that Facebook’s stock, where most of Dustin Moskovitz’s money is held, has fallen by almost 40% since that post was written, so that might affect future spending if Facebook’s recent stock performance persists.
Overall, that means by 2025, the EA community could quite reasonably be deploying around $3 billion per year, which is 8-10x larger than 2019 figures. Whilst the number of grantmakers doesn’t have to scale up by 8-10x in line with this, especially if the Future Fund is taking a more decentralised approach, it’s plausible to assume that we’ll need significantly more grantmaking and project vetting capacity, possibly by a factor of 3-5x. Obviously it’s no longer 2019 and we’ve probably increased our capacity in this field since then, but I’m doubtful it’s grown by more than 2x in the past 3 years.
80,000 Hours also agrees that grantmaking is a significant bottleneck to effectively deploying funding. They also touch on the difficulty in entering this field, especially for people earlier in their careers, which most EAs tend to be. This implies that we need some structural solutions to overcome these barriers, whereby people can both a) test if they would be a good fit for grantmaking, on a satisfaction and aptitude level and b) build trust with philanthropists or foundations if they were a good fit.
As someone who’s quite interested in exploring grantmaking, I’ve found no opportunities to proactively test this on a small scale (with the exception of the regranting program by the Future Fund, which I’m applying for). This seems like something we should be working harder to address and as per my comment on important projects that FTX could fund, I believe there are some ways we can develop this grantmaking capacity. Some ways to build the grantmaker pipeline might be grantmaking fellowships, grantmaker mentoring, more frequent donor lotteries, more EA funds-style organisations with rotating fund managers, more regranting challenges (if successful), and more. Specifically, I would be excited about some version of this pipeline:
Selective grantmaker fellowships (in the format of a 6-8 week course) organised by an organisation that’s either focused on upskilling EAs (e.g. Training for Good) or grantmaking specifically. This would almost definitely need to be run by at least one experienced grantmaker for it to be worthwhile.
The best candidates from the fellowship are invited to take part in a regranting challenge by a foundation, or invited onto EA Funds (or a similar org) as a part-time guest manager.
The best candidates again can be offered more permanent roles within foundations / with philanthropists.
These candidates (or ones that didn’t meet the bar earlier on) have 1-to-1 mentoring with experienced grantmakers within their field, to further hone their judgement and develop best practices.
Step 2) onwards could also happen for someone who performs particularly well at a donor lottery. People within the EA community generally believe that donor lotteries are useful, yet we only have 3 per year. Having something closer to 5-10 would allow many more interested people to try out grantmaking, which is seemingly one step on the journey of how Adam Gleave became a fund manager on the Long Term Future Fund.
Generally, I think another EA Funds style organisation with rotating fund managers would be extremely high value, as you could offer approx. 4 people the opportunity to test their fit per cause area, as well as increasing funder diversity by a good margin.
3. Does the EA movement have to shift from marginal thinking to more complex coordination dynamics?
Many EAs use marginal thinking when considering their donations, which might look like asking: what’s the specific additional impact of my donation, given that $X has already been donated to this charity? This worked well when EA was relatively much smaller and had less leverage over the wider ecosystem of a given cause. However, with the significant increase in committed funds for EA, there are already signs that EA funding is shaping certain fields towards EA priorities. As giving is plausibly expected to grow by up to 8-10x by 2025, relative to 2019 figures, this will likely become even more pronounced. For example, approximately $40 million was being spent on biorisk and AI alignment in 2019, and I’m almost certain that if the opportunities were good enough, this could easily be $400 million per year for those two causes within the next couple of years. However, increasing the funding size of a field by up to 10x might introduce some complex dynamics, such as:
Making a field more narrowly focused on one or two programs/projects that are key EA priorities, rather than expanding more generally. Crudely, it might cause a brain drain of talented labour away from certain fields or subfields. A fictitious example might be that EA funding towards climate focuses solely on nuclear fusion innovation, so attracts the top researchers there, which means green hydrogen, advanced geothermal and fission become much more neglected. This also assumes that researchers can easily move between certain sub-fields, which isn’t always true due to specialisation, but at the least it might affect the trajectory of early-career researchers who do have some flexibility.
This could be harmful if the narrow focus is contingent on one primary theory of change. If then that theory of change doesn’t work out for some reason, it means we’ve potentially altered the priorities of a field towards a certain approach, which then leads to a dead-end. Instead, I generally think it’s good to pursue multiple promising theories of change, so our strategies are more robust to the various ways the world could turn out. See more about this in a previous blog I wrote, in the context of animal advocacy.
Strong financial incentives, where EA donors/organisations offer more money than existing organisations in that space. This could cause the leadership or other key people to leave existing organisations, making them weaker overall (or even worse, destabilising them completely).
Power dynamics that affect epistemic integrity: Whilst EA generally prides itself on transparency, humility and the ability to change one’s mind in light of evidence, it’s still a common fear that your funding might be cut if you don’t go along with some of the beliefs of your funders. This could lead to pursuing suboptimal or even harmful projects, for the sake of having a stable salary. Whilst comments by EA funders indicate they are very much in favour of funding critiques of dominant EA worldviews, this is still a very real concern for a lot of people.
Unknown unknowns / more things I can’t think of
Some tangible examples of these effects already happening might be:
Farmed animal welfare: A post from ACE that I can’t currently find calculated that in 2018/19 (roughly), around 25% of all funding for farmed animal welfare came from EA donors. That’s a significant portion of farmed animal advocacy funding, and will likely influence the programs that organisations pursue or expand. For example, it’s quite likely that Mercy For Animals scaled up their corporate campaigns, due to large EA funding in this area, much more than their work on public engagement. In short, it’s very plausible that strong support from EAs for corporate campaigns has shifted a lot of the farmed animal welfare movement towards expanding projects in these areas and focusing less on other programs. Whilst this isn’t necessarily a bad thing (as cage-free campaigns have been extremely effective), it’s certainly a consideration that EA funders should keep in mind.
Biosecurity: I’ve only heard things informally about this, which is that funding by EA donors is attracting researchers towards less likely but much more severe pandemics, rather than ones that are more likely but less of an existential threat if they do occur. I have no idea if this is true but it seems plausible, and again, a potential cause for concern.
AI Safety: EA probably funds almost all the orgs in this space, so it’s almost certainly affecting the priorities, but I’m not involved enough to say exactly how.
An important caveat is that I’m somewhat confident that people at Open Phil, the FTX Foundation, EA Funds, etc. are thinking about this already (I hope!). And as mentioned above, these dynamics aren’t necessarily bad, as they potentially mean people are working on more effective interventions or more pressing problems. However, they may lead to some of the unwanted and potentially negative consequences listed above.
In short, what this means for major funders is that they might now need to have a more holistic view of the cause they’re working on, to ensure that it is developing well across the board, and ensuring additional EA funding doesn’t lead to any blindspots or adverse consequences. Broadly, this means investing in an ecology of change. This is potentially an over-simplification of a very complex coordination problem, so take it all with a pinch of salt.
The final two points have been spoken about much more within the EA community, yet the issues still remain:
4. We need entrepreneurs, founders and other people who can kickstart projects
In essence, it’s now widely acknowledged that funding has grown faster than the number of people involved with EA. With this increased amount of funding, one of the main bottlenecks to greater allocation per year is the number of good proposals, which is very closely related to the number of founders willing to launch them (as we have no shortage of ideas). This means that entrepreneurs, or people willing to start nonprofit or for-profit organisations, are in extremely high demand and could have huge leverage in unlocking more funding (especially by building scalable projects).
How could we make more of this happen? A couple of very preliminary ideas:
More incubators: Charity Entrepreneurship has done amazingly well, launching over 15 high-impact charities over the past 5 years. However, the longtermist version of this project was less successful, although it could be tried again. Specifically, there could be a lot of value in cause-area specific incubators, for issues such as animal welfare, global health, nuclear risk, pandemics, and so on. This would allow for greater cause-area learning and specialisation, potentially increasing the success rate of the charities that do launch.
Bringing non-EA entrepreneurs into EA: This would go hand in hand with an increased push on outreach, advertising and marketing by Effective Altruism as a movement. We’ve generally been focused (and still are focusing quite a lot) on engaging young people, primarily university students. However, this has potentially led to the problem we have now, where we have a shortage of experienced leaders and entrepreneurs relative to the highly capable young people we have. Specific outreach, whether that’s programs, incubators or online advertising, tailored to non-EA (but still value-aligned) entrepreneurs might lead to an influx of new capable leaders who can start the ambitious and altruistic projects we have in mind. This could be aided by refining EA communications and framing, to attract this slightly newer audience.
5. We need to massively scale up our ambition
It’s hard to add more to the post linked above, but in short:
We have an incredible opportunity to do good, with the amount of resources being allocated towards making the world a better place.
The world has so many problems that could be easily fixed (or improved with enough effort), which would help countless lives, both human and nonhuman.
It’s down to each and every one of us to go out there and do all in our power to bring about these changes. The world isn’t going to improve itself.
Is there much debate on this? I’d expect most EAs to answer ‘no’ and ‘discount rate=0’.
I’d expect more debate over the tractability of longtermist interventions.
As an EA group facilitator, I’ve been a part of many complex discussions talking about the tradeoffs between prioritizing long-term and short-term causes.
Even though I consider myself a longtermist, I now have a better understanding and respect for the concerns that near-term-focused EAs bring up. Allow me to share a few of them.
The world has finite resources, so when you direct resources to long-term causes, those same resources cannot be put towards short-term causes. If the EA community was 100% focused on the very long term, for example, then it’s likely that solvable problems in the near-term affecting millions or billions of people would get less attention and resources, even if they were easy to solve. This is especially true as EA gets bigger, having a more outsized impact on where resources are directed. As this post says, marginal reasoning becomes less valid as EA gets larger.
Some long-term EA cause areas may increase the risk of negative outcomes in the near term. For example, people working on AI safety often collaborate with and even contribute to capabilities research. AI is already a very disruptive technology and will likely be even more so as its capabilities become more powerful.
People who think “x-risk is all that matters” may be discounting other kinds of risks, such as s-risks (suffering risks) due to dystopian futures. If we prioritize x-risk while allowing global catastrophic risks (GCRs) to increase (that is, risks which don’t wipe out humanity but greatly set back civilization), that increases s-risks because it’s very hard to have well-functioning institutions and governments in a world crippled by war, famine, and other problems.
These and other concerns have updated me towards preferring a “balanced portfolio” of resources spread across EA causes from different worldviews, even if my inside view prefers certain causes over others.
This is directly captured by the ITC framework: as longtermist interventions are funded and hit diminishing returns, neartermist ones will have the highest marginal utility per dollar. (Usually, MU/$ is a diminishing function of spending, so the top-ranked intervention will change as funding changes.)
Yes, my bad! This is actually what I meant, i.e. the epistemic uncertainty around longtermist interventions makes it challenging to determine funding allocation. Will amend this, thank you!
We (Training for Good) are actually developing a grantmaker training programme like what you’ve described here to help build up EA’s grantmaking capacity. It will likely be an 8 week, part-time programme, with a small pot of “regranting” money for each participant and we’re pretty excited to launch this in the next few months.
In the meantime, we’re looking for 5-10 people to beta test a scaled-down version of this programme (starting at the end of March). The time commitment for this beta test would be ~5 hours per week (~2 hrs reading, ~2 hrs projects, ~1 hr group discussion). If anyone reading this is interested, feel free to shoot me an email cillian@trainingforgood.com
Nonlinear is launching a longtermist incubator! Given my background co-founding Charity Entrepreneurship and nobody else moving forward on the idea, I thought it was a good fit for us.
Details to be announced soon.
More precisely, longtermists have zero pure time preference. They still discount for exogenous extinction risk and diminishing marginal utility.
See: https://www.cambridge.org/core/journals/economics-and-philosophy/article/discounting-for-public-policy-a-survey/4CDDF711BF8782F262693F4549B5812E
Your point on biosecurity and marginal thinking was discussed in this forum post.
The point about bringing non-EA entrepreneurs into EA is a good one (I think, but I think that I think that because I have also been thinking about it recently!). One idea I’ve had is whether it’d be worth hosting some type of conference where we bring together HEA university students with enterprising and ambitious (not required) business school students to exchange ideas and brainstorm, with the aim of allowing co-founder pairs to enter some type of competition with seed funding as the prize.
Great idea, at TFG we have similar thoughts and are currently researching the best way to run a program like this. Feel free to PM to provide input.
By “pure time preference” I mean discounting future lives simply because they are in the future.
Can that be expressed in non-utilitarian terms?
Hi James,
I understand you received EAIF funds to explore social change.
That seems like a challenging and important project.
Following links in this post, you have made a website for your Social Change Lab and have yourself as Founder and Director, and hired another person as “Research Manager”.
It would be valuable to get details on your new institution and “object level objectives” and work. The link (“initial research”) on your website goes to the initial post you made on the EA forum before you received the IF grant, and doesn’t address issues with that post that others brought up.
Can you describe what your plans have been, how your ideas or theories are playing out, or succeeding or failing? What have you learned or expect to learn?
To be clear, I don’t think you need to be successful in any one grant to be a very valuable EA or person (as long as the externalities are low). In fact, this person here received $40,000 from the EA Animal Welfare Fund, a much more funding-constrained area, and for a project one might expect a tangible output from. It’s not clear what happened there, as there is no legible output at all from him on the EA Forum about his project.
The subtext of this comment is complex and negative towards you. TL;DR: I think it’s OK to try to steer or criticize EA, and even take EA resources when doing so. I’m concerned about this being done in a certain way, occupying or tagging niches in a high-trust environment.
This question seems like it should be a private message? I don’t see how it’s relevant to the post you’re replying to.
The EA forum is complex and probably unique. I think there are several important features:
It’s performative, as EA has various audiences to whom maintaining tone and norms is important. It’s also part work forum, like a company intranet. Every funder and collaborator can see everything you ever write, so breaking norms, such as being negative or confrontational, is costly (while certain actions may be risky or have no personal reward).
The forum is a way to communicate and try to find the truth about important causes or decisions. However it does this in a funky way—you can confront ideas with extreme aggression (and get authority for doing so), yet you might not even be able to indirectly suggest that there are issues about someone’s relevant credentials or ability (even when they use these explicitly or you suspect they have arrogated themselves).
The forum has strikingly different reactions based on insider or outsider status: content from newcomers and many critics is treated well, even when it’s pretty bad, or they make direct personal attacks. At the same time, people who occupy meta positions or places of authority regularly encounter hostility. This is probably a feature, not a defect. However, it’s possible someone could straddle the space between these roles, and shield themselves by the norms of one while using the other to advance their goals.
There are more prosaic issues. Like other forums, it isn’t always representative and can acquire constituencies with their own views. Issues or grievances (that are very real or neglected) can be hard to explain or confront, and can exist for long periods of time without challenge or solution.
There’s some other features that are relevant, but this is too long already.
If you thought people were exploiting these features in some way, I guess you could write a short form post or something directly denouncing people personally. But that seems hard and bad for lots of reasons. It also doesn’t give you a chance to collect more information.
If for some reason, you had a pretty deep model where you thought someone was doing this, you might write a comment that isn’t that confrontational, and whose consequent reaction gives more information to you.
Note:
A compelling reason to do this is if you think you were involved (or held back in some earlier process) and might bear some responsibility, especially if the underlying issue involves entrenchment.
You might be worried about making a mistake and the comment having various negative externalities. If in the past you interrogated the reactions of the EA forum to negative comments, you might have information that suggests these externalities are small.
Hi Charles, I’m quite confused by this comment (especially the subtext) and messaged you directly to hopefully sort this out.
Hi James, I think my comment is reasonable, I don’t see why you can’t answer the question raised.
I view your private message (which asserts my comment is inappropriate and strongly suggests I only raise them in private to you), strong downvoting my comment and pointing to my subtext (which exists whether I write it or not) as negative.
I’m happy to answer your questions, we’re working on our introduction post now so it’ll be up by the end of next week hopefully. For the record, I didn’t strong downvote your comment or “assert” anything but I’m not sure this conversation will be a productive dialogue anymore so I’ll send you the document once we’ve finished it.