EA Poland—details of online book club

In this post we share our experience of running an online reading group, in the hope of helping others who run similar projects. This is part of our Community Building work in EA Poland.

You can read more about our other activities in our recently published post.

TLDR

We’ve been running an online reading group in Poland for 9 months, meeting once every 2 weeks. We invite people through our social media (Facebook, Instagram) and our newsletter. Some meetings attracted many people (up to 15), some did not. We had two objectives in mind:

  • Community building: using the reading group as a low-commitment entry point to EA. This probably failed. Although a few new people joined our Slack after the meetings, they are no longer active.

  • Creating and strengthening bonds between EAs. Limited success. Success, because just hanging out together and exploring tough topics together made us friends. Limited, because there are only a few regular participants, so the bonding is confined to this small subgroup of the EA community.

Our current plan is to continue organizing those meetings.

Our previous experience

In this post I describe how our reading group has worked since November 2022, when we decided to open our meetings to everyone. Before that, from March to September 2022, we held internal book club meetings (only for EAs, not announced on our social media), in which we read Doing Good Better and The Precipice. From those meetings we drew a few main insights:

  • Weekly meetings are too time consuming for us.

  • Reading whole books does not work for us because

    • People who missed a few meetings are less likely to join in the middle of the book.

    • Over time, we get tired of discussing the same topic.

  • Moral claims are much more likely to start a discussion than factual ones. We learnt this while reading The Precipice: chapters 1 and 2 led to fiery discussions on longtermism, but the later chapters contained mostly factual information about particular x-risks, and we found there was really nothing to talk about.

How it works

Organisation

  • We meet every 2 weeks, online.

  • The meeting lasts 1h15min. We started with 1h and then agreed that we’d prefer longer meetings.

  • Each meeting is announced on Facebook, Instagram and in our newsletter.

  • I prepared a Doc with the key actions for organizing the meetings; it might be helpful for other organizers.

Selecting readings

  • We decide on the next reading together. I collect 4-5 diverse titles and write a very short description of each of them. Then, I post them on our community Slack channel and we start voting.

  • We choose readings that are easily accessible—either free online access or available in most public libraries.

  • Topics are related to EA, ethics, and philosophy. We just read whatever seems interesting to us at a given moment.

  • We try to select readings that take 1-1.5 hours to read. We noticed that with longer readings we sometimes fail to complete them.

  • In the Statistics section at the end of this post, I have gathered all our readings and our general impressions from the meetings.

How it worked out

We had three main goals:

  1. Having fun ourselves and learning new stuff together.

  2. Community building: using the reading group as a low-commitment entry point to EA.

  3. Strengthening bonds between Effective Altruists.

We think that we indeed 1. had fun and 3. got to know each other better. We also think that these meetings can serve as an entry point for non-EA people; however, we do not find support for this in our statistics. A few people did join our community Slack because of the reading group, but they are no longer active users. In January 2023 we started organizing an Intro Fellowship, so it is possible that it attracted newcomers who would otherwise have joined our reading group.

Lessons learnt

  • Long books can be too long: participants drop out, no new people join in the middle of the book, and regulars may get bored of the topic.

  • Readings on morality are more discussion-friendly than readings on pure facts.

  • It might be tempting to plan a lot of reading for a single meeting, but in the long run we simply failed to complete it.

  • Meeting once a week was too frequent for us to sustain in the long run.

What’s next?

We plan to continue organizing the meetings, changing only the frequency. Instead of one meeting every two weeks, we would like to try one meeting a month. We expect this could increase attendance, because it would be easier for busy participants to find the time to prepare and attend.

Statistics

Here are some statistics and comments from our previous meetings. This can perhaps be useful for other book club organizers, just to see how a particular reading worked out for us.

| Date | Reading | Impressions/reception | # participants |
|---|---|---|---|
| 02.11.2022 | Peter Singer, “Famine, Affluence, and Morality” | This topic worked really well, with a fluent discussion flow. | 14 |
| 16.11.2022 | Peter Singer, “Famine, Affluence, and Morality” | | ≥7 (forgot to write down) |
| 30.11.2022 | Peter Singer, *Animal Liberation* (chap. 1) | This topic worked really well, with a fluent discussion flow. | 11 |
| 14.12.2022 | Peter Singer, *Animal Liberation* (chap. 1) | There were only 4 people, who basically agreed on the topic, so some non-mainstream cases were discussed, e.g. feeding dogs, moral offsetting, or the moral status of farming “happy” animals. | 4 |
| 11.01.2023 | Holden Karnofsky, *The Most Important Century*: 1. All Possible Views About Humanity’s Future Are Wild; 2. The Duplicator; 3. Digital People Would Be An Even Bigger Deal | Fluent discussion flow; no need to use the prepared questions. Some discussion of “playing god” and what we ought to do; minor discussions on the technicalities of duplicating people and the role of AGI in our future. | 15 |
| 25.01.2023 | Holden Karnofsky, *The Most Important Century*: 4. This Can’t Go On; 5. Forecasting Transformative AI, Part 1: What Kind of AI?; 6. Why AI Alignment Could Be Hard With Modern Deep Learning | There were only 3 people, but all of us had some knowledge of AI safety (one of us quite a lot), so we discussed more technical topics like mesa-optimizers and shard theory. | 3 |
| 08.02.2023 | Holden Karnofsky, *The Most Important Century*: 7. Forecasting Transformative AI: What’s The Burden Of Proof?; 8. Forecasting Transformative AI: Are We “Trending Toward” Transformative AI?; 9. Forecasting transformative AI: the “biological anchors” method in a nutshell; 10. AI Timelines: Where the Arguments, and the “Experts,” Stand | We shared our thoughts on forecasting; a nice discussion, far from saturated. | 4 |
| 22.02.2023 | Holden Karnofsky, *The Most Important Century*: 11. How to make the best of the most important century?; 12. Call to Vigilance | The good thing about small-group meetings is that you can go in depth on a particular topic. | 3 |
| 08.03.2023 | Foerster, Thomas. “Moral Offsetting.” *The Philosophical Quarterly* 69.276 (2019): 617-635. https://philarchive.org/archive/FOEMO | Fluent discussion. After the meeting, one participant said it helped him clarify his thoughts on the topic. | 3 |
| 22.03.2023 | Thomas Nagel, “What Is It Like to Be a Bat?” | Everyone agreed that this essay was hard to read and didn’t give much insight. We had somewhat related discussions on what it might be like to be another animal and how they differ from us physically. Then we went through the 4 arguments as in the document by Prof. JeeLoo Liu. | 5 |
| 05.04.2023 | Nick Bostrom, “Are You Living in a Computer Simulation?” | Cancelled: too few participants. | 2 |
| 19.04.2023 | Eliezer Yudkowsky, Ethical Injunctions (https://www.lesswrong.com/s/AmFb5xWbPWWQyQ244), first 4 posts: Why Does Power Corrupt?; Ends Don’t Justify Means (Among Humans); Entangled Truths, Contagious Lies; Protected From Myself | Fluent discussions. We decided not to continue the series: we guessed we had already got most of the ideas from the first 4 posts, and preferred to move on to a new text. | 6 |
| 03.05.2023 | - | Break due to a public holiday in Poland. | - |
| 17.05.2023 | Nick Bostrom, “Are You Living in a Computer Simulation?” | Fluent discussions on assigning probabilities using the indifference principle, and on the probabilities people assign to each of the 3 outcomes. | 4 |
| 31.05.2023 | Scott Alexander, “Meditations on Moloch” | It seems most participants got the point and agreed that we’re in a bad place. We discussed some ideas for what we can do about Moloch and digressed into related topics, e.g. AI safety and surveillance in China. | 7 |
| 14.06.2023 | - | Break due to EAGxWarsaw. | - |
| 28.06.2023 | Tobias Baumann, *Avoiding the Worst: How to Prevent a Moral Catastrophe* (chap. 1-6) | Fluent discussions. | 6 |
| 12.07.2023 | Tobias Baumann, *Avoiding the Worst: How to Prevent a Moral Catastrophe* (chap. 7-11) | Cancelled: too few participants. | 1 |
