New & upvoted


Posts tagged community

Quick takes

46 · tlevin · 3d · 3
I think some of the AI safety policy community has over-indexed on the visual model of the "Overton Window" and under-indexed on alternatives like the "ratchet effect," "poisoning the well," "clown attacks," and other models where proposing radical changes can make you, your allies, and your ideas look unreasonable. I'm not familiar with a lot of systematic empirical evidence on either side, but it seems to me like the more effective actors in the DC establishment are much more in the habit of looking for small wins that are both good in themselves and shrink the size of the ask for their ideal policy than of pushing for their ideal vision and then making concessions.

Possibly an ideal ecosystem has both strategies, but it seems possible that at least some versions of "Overton Window-moving" strategies, as executed in practice, have larger negative effects (by associating their "side" with unreasonable-sounding ideas in the minds of very bandwidth-constrained policymakers, who lean strongly on signals of credibility and consensus when quickly evaluating policy options) than positive effects from increasing the odds of ideal policy and improving the framing for non-ideal but pretty good policies.

In theory, the Overton Window model is just a description of which ideas are taken seriously, so it can indeed accommodate backfire effects where you argue for an idea "outside the window" and this actually makes the window narrower. But I think the visual imagery of "windows" struggles to accommodate this -- when was the last time you tried to open a window and accidentally closed it instead? -- and as a result, people who rely on this model are more likely to underrate these kinds of consequences. I would be interested in empirical evidence on this question (ideally actual studies from the psych, political science, sociology, econ, etc. literatures, rather than specific case studies, given reference-class-tennis-type issues).
Trump recently said in an interview (https://time.com/6972973/biden-trump-bird-flu-covid/) that he would seek to disband the White House office for pandemic preparedness. Given that he usually doesn't give specifics on his policy positions, this seems like something he is particularly interested in. I know politics is discouraged on the EA Forum, but I thought I would post this to say: EA should really be preparing for a Trump presidency. He's up in the polls and IMO has a >50% chance of winning the election. Right now politicians seem relatively receptive to EA ideas; this may change under a Trump administration.
I might start doing some policy BOTEC (back-of-the-envelope calculation) posts, i.e. posts where I suggest an idea and try to figure out how valuable it is. I think I could do this faster with a group to bounce ideas off. If you'd like to be added to a message chat (on WhatsApp, probably) to share policy BOTECs, reply here or DM me.
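For illustration, a policy BOTEC of this kind might look like the minimal sketch below; every input is a made-up placeholder rather than an estimate from this quick take.

```python
# Minimal policy BOTEC sketch -- every input is an illustrative placeholder.

cost_of_campaign = 2e6   # assumed cost of the advocacy effort, in dollars
p_policy_passes = 0.10   # assumed chance the policy change happens because of it
annual_benefit = 5e7     # assumed yearly benefit if it passes, in dollars
years_of_effect = 5      # assumed years before the policy is superseded

expected_benefit = p_policy_passes * annual_benefit * years_of_effect
benefit_cost_ratio = expected_benefit / cost_of_campaign

print(f"Expected benefit: ${expected_benefit:,.0f}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.1f}x")
# With these placeholders the idea returns about 12.5x its cost in expectation;
# the point of a BOTEC is to see whether an idea clears a rough bar,
# not to nail the inputs.
```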


Recent discussion

Context: These are very rough notes I took for a memo session I presented at the UGOR '24 retreat.

 

Why I think this is important:

  • A good social vibe is (in my opinion) a precondition to a successful EA group. 
  • Having friends in your group is an incentive
...
Continue reading

I really like the spectrum videos. I often think about how to get that kind of awareness of how we agree and disagree in an online setting. My tool, viewpoints (viewpoints.xyz), is one kind of push at this.

But there is something really fun about just seeing people share their views on specific points and then move on.

I sense that if we did it a lot, we'd be a healthier community.

Greg_Colbourn posted a Quick Take 2h ago

(EA) Hotel dedicated to events, retreats, and bootcamps in Blackpool, UK? 

I want to try to gauge what the demand for this might be. Would you be interested in holding or participating in events in such a place? Or in working to run them? Examples of hosted events could...

Continue reading

This announcement was written by Toby Tremlett, but don’t worry, I won’t answer the questions for Lewis.

Lewis Bollard, Program Director of Farm Animal Welfare at Open Philanthropy, will be holding an AMA on Wednesday 8th of May. Put all your questions for him on this thread...

Continue reading

Are there any interventions that are especially promising for increasing the fraction of philanthropic and governmental spending on animal welfare, which is currently tiny?

4 · Vasco Grilo · 4h
Is there a 2nd generation of corporate campaigns for chicken welfare in the works which would build upon the success of cage-free campaigns (hens) and the Better Chicken Commitment (broilers)?
2 · Vasco Grilo · 5h
Hi DanteTheAbstract, you may want to check Open Philanthropy's grants to support farmed animal welfare in Asia.
Thomas Kwa posted a Quick Take 5h ago


Not sure how to post these two thoughts so I might as well combine them.

In an ideal world, SBF should have been sentenced to thousands of years in prison. This is partially due to the enormous harm done to both FTX depositors and EA, but mainly for basic deterrence reasons; a risk-neutral person will not mind 25 years in prison if the ex ante upside was becoming a trillionaire.
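To make the deterrence arithmetic concrete, here is a minimal sketch; the probabilities and dollar values below are purely illustrative assumptions, not figures from the take.

```python
# Deterrence BOTEC for a risk-neutral actor -- all numbers are illustrative assumptions.

p_fraud_pays_off = 0.05   # assumed chance the fraud succeeds and is never punished
upside = 1e12             # the "becoming a trillionaire" outcome, in dollars
p_convicted = 0.95        # assumed chance of getting caught and convicted
sentence_years = 25
value_of_free_year = 1e7  # assumed dollar value the actor places on a year of freedom

expected_value = (p_fraud_pays_off * upside
                  - p_convicted * sentence_years * value_of_free_year)
print(f"Expected value of attempting the fraud: ${expected_value:,.0f}")
# Roughly +$49.8bn under these assumptions: the 25-year penalty barely dents the
# calculation, which is the take's point about why such a sentence does little
# to deter a risk-neutral actor.
```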

However, I also think many lessons from SBF's personal statements e.g. his interview on 80k are still as valid as ever. Just off the top of my head:

  • Startup-to-give as a high EV career path. Entrepreneurship is why we have OP and SFF! Perhaps also the importance of keeping as much equity as possible, although in the process one should not lie to investors or employees more than is standard.
  • Ambition and working really hard as success multipliers in entrepreneurship.
  • A career decision algorithm that includes doing a BOTEC and rejecting options that are 10x worse than others.
  • It is probably okay to work in an industry that is slightly bad for the world if you do lots of good by donating. [1]

Just because SBF stole billions of dollars does not mean he has fewer virtuous personality traits than the average person. He hits at least as many multipliers as the average reader of this forum. But importantly, maximization is perilous; some particular qualities, like integrity and good decision-making, are absolutely essential, and if you lack them your impact could be multiplied by minus 20.

[1] The unregulated nature of crypto may have allowed the FTX fraud, but things like the zero-sum zero-NPV nature of many cryptoassets, or its negative climate impacts, seem unrelated. Many industries are about this bad for the world, like HFT or some kinds of social media. I do not think people who criticized FTX on these grounds score many points. However, perhaps it was (weak) evidence towards FTX being willing to do harm in general for a perceived greater good, which is maybe plausible especially if Ben Delo also did market manipulation or otherwise acted immorally.

16
7

EA is very important to me. I’ve been EtG for 5 years and I spend many hours per week consuming EA content. However, I have zero EA friends (I just have some acquaintances).

(I don't live near a major EA hub. I've attended a few meetups but haven't really connected with ...

Continue reading
1 · defun · 2h
There's a small local group in my city but I didn't click with any of the attendees (mainly because of different levels of ambition).

Perhaps you could start a group that does something slightly different? Or speak with your national organisation (if you have one) and collaborate with them in starting a national cause or profession-based group? Or a company group? 

For example, in the Netherlands, in addition to our city/student groups, we have a policy and politics group, a new group at ASML, and a new animal welfare group. And then we also have the Tien Procent Club. They're inspired by Giving What We Can and run events focused on effective giving. They started in Amsterdam but the... (read more)

2 · Answer by Nathan Young · 7h
I met people via in-person events and parties, but also via Twitter and, to a lesser extent, Substack. I sense that either I meet people and figure out who I work well with, or I produce content that draws like-minded people to me.

Epoch AI is looking for a Researcher on the Economics of AI to investigate the economic basis of AI deployment and automation. The person in this role will work with the rest of our team to build out and analyse our integrated assessment model for AI automation, research...

Continue reading

Thanks for posting this! In case you didn't notice, you haven't mentioned a deadline. (I wouldn't have thought it weird if you hadn't included the text "Please email careers@epochai.org if you have any questions about this role, accessibility requests, or if you want to request an extension to the deadline.")

My (working) Model of EA Attrition: A University CB Perspective

Background/Why I’m writing this post:

I've been co-organizing an EA student group at Queen's University in Canada for about a year now. When I first joined Queen's Effective Altruism (QEA) on campus as a general...

Continue reading

Solid piece. I like lists of things and I appreciate you taking the time to write one.

I sometimes wonder how to combine many qualitative impressions like this into a more robust picture. Some thoughts:

  • Someone could survey groups on attrition rates
  • Someone could ask group leaders how many people they recall being in each group type
3 · DavidNash · 4h
One category that you didn't include is people who agree with the ideas and take action, but don't want to or are too busy to attend lots of EA meetups.

I made a quick (and relatively uncontroversial) poll on how people are feeling about EA. I'll share if we get 10+ respondents.

Continue reading

Currently 27-ish[1] people have responded:

Full results: https://viewpoints.xyz/polls/ea-sense-check/results 

Statements people agree with:

Statements where there is significant conflict:

Statements where people aren't sure or dislike the statement:

  1. ^

    The applet makes it harder to track numbers than the full site. 

I've said that people voting anonymously is good, and I still think so, but when people downvote me for appreciating little jokes that other people post on my shortform, I think we've become grumpy.

Continue reading

In my experience, this forum seems kinda hostile to attempts at humour (outside of April Fools' Day). This might be a contributing factor to the relatively low population here!

I get that, though it feels like shortforms should be a bit looser. 

Siao Si commented on Why I'm doing PauseAI 3h ago

GPT-5 training is probably starting around now. It seems very unlikely that GPT-5 will cause the end of the world. But it’s hard to be sure. I would guess that GPT-5 is more likely to kill me than an asteroid, a supervolcano, a plane crash or a brain tumor. We can predict...

Continue reading

If we are correct about the risk of AI, history will look kindly upon us (assuming we survive).

Perhaps not. It could be more like Y2K, where some believe problems were averted only by a great deal of effort and others believe there would have been minimal problems anyway.