10 Comments
ZlasoPoblima1907's avatar

1. What is the "anti-Christian" charity exactly? I haven't been able to identify it in the list.

2. In all nations, everywhere, the advancements of modernity (better education, healthcare, lower rates of child death, increases in longevity) have universally translated to lower fertility. This is true in Africa as well: fertility, while high, is decreasing basically everywhere. EA charities which help with Africa's development in this manner are thus leading to a long-term decrease in the number of African people. While there will be a temporary increase (those who would have otherwise died do not), the new general conditions come about more quickly and decrease fertility sooner. Saying that these charities "create" Africans is wrong.

3. No EA I have seen is a simple hedonic utilitarian. They care about all of the higher aspects of living you care about - they all fall under "well-being" and the ability of humans to pursue higher goals. It just so happens that it is much harder to be beautiful, strong, virtuous and faithful when you are malnourished, diseased, or dead. These are the obvious first things to fix, to bring humans who need it to a better baseline from which they can follow other pursuits (and even if they do not - while not dignified, a person who feels base pleasures at a consistent rate is better off than someone miserable or dead, if they do not cause harm). Also, only a few select monks and mystics are beyond pleasure and pain - you are not, I am not, an overwhelming majority of us are not. And that is fine - we can enjoy life and try not to suffer from it (EDIT: while still striving for more meaningful aspects of life, which give us pleasure. You seem to use the word "pleasure" to strictly refer to hedonic enjoyment, and not aesthetic appreciation or religious belonging or the satisfaction of doing something meaningful for humanity and the world etc. All of these fall under "pleasure", and because you do not properly define a lot of terms I do not know whether my interpretation is correct.) If we have a basis for this, we will still have those monks and mystics to serve as examples and paragons. Taking forced misery out of the world will not change that.

4. Shrimps have different, more complex nervous systems than worms. There are stronger indicators of them being in pain than worms. This is not an arbitrary distinction (EDIT: I have found a detailed, multi-post discussion of this very topic - what constitutes pain in animals, how it can be recognized, what features of nervous systems indicate pain etc., in invertebrates less complex than shrimp: https://forum.effectivealtruism.org/s/sHWwN8XydhXyAnxFs - point is, EAs as a collective think deeply and scrupulously about this stuff, and would give you answers were you to bother checking or asking them). Furthermore, a worm causes infinitely more pain to a human than a shrimp can. Even if the worm were similar in its experience of qualia, the harm it does to humans would likely outweigh its own pain. You treat EA arguments with incredible facility, and have not asked them any of your (fair) questions - you just assume their answers are obviously wrong and stupid.

5. Lower-IQ people do still have empathy (less than high-IQ people, true). Everyone, to a large extent, wants factory farming because it is a cheap source of good food, and a higher percentage of geniuses will only partially alleviate that. Most reasons for opposing veganism that are not rational/philosophical (gut reactions, basically) happen across the board: scope insensitivity, being scared of confronting the possibility of having done great harm, convenience etc. Also, the work low-IQ people do, which you gloss over, is the reason economies of scale and all their wonders you enjoy can exist, and also why you have food to eat and heat to not die of cold (besides the research and ingenuity of high-IQ people, of course). There is a point where you cannot automate everyone out of existence.

6. You dismiss all concerns of alignment people out of hand: the fact that their arguments work without sci-fi grey goo (which only really Eliezer cares about anymore), the fact that current LLM trajectories do change forecasts and timelines, that we cannot currently tell if this peak/dip is truly the beginning of a new winter, that alignment concerns extend beyond LLMs and that it is better to be prepared in all cases etc. Also, using terms like "Messiah" and "faith" is very strange, since alignment people do not want AGI to happen as things stand, as they believe it will be misaligned and cause chaos. They are not waiting for it - many would be extremely relieved at a new winter and a longer timeline.

7. If AI will produce "a bit of research humans cannot understand", then it has basically hit human intelligence, and if it has hit human intelligence, there are essentially only computational barriers for it to become super-intelligent. You seem to think this is a large gap - do you have a specific reason for this? Most discussion (which I agree with) argues the opposite.

Leon Voß's avatar

1. the anti-Christian one is the one that tries to get Christian donobuxx to go to EA stuff, which is intrinsically non-Christian.

2. We can only hope that nature will shield us from the consequences of rationalist actions. The point remains that if bantus were immune to demographic transition, these people would still do what they're doing. In fact, they're not sophisticated enough to predict that bantu TFR will become <2. It would be more secure to explicitly do eugenics on bantu populations -- uplift, but at the cost of demanding a one-child policy for those who benefit from charity.

3. Many people are far less beautiful, strong, virtuous, intelligent, and faithful than others, independent of their nutrition and upbringing. Most variance is genetic. We should maximize the existence of the good genes instead of wasting money on nourishing bad genes.

4. One, I don't buy that measuring qualia in animals is accessible to EA bloggers. Two, it's speciesism to care about human qualia over worms. Might as well not blow money on shrimp welfare then. Overall, I've never been impressed by yeuer epistemology and have always found it inconsistent. https://www.leonvoss.com/p/against-west-coast-epistemology Basically ye just come up with stuff that sounds good to a 110 IQ blog reading audience with particular predispositions that aren't right, ie mostly liberal stuff like veganism and blank slate, and then ye just do circle jerky confirmation bias and blow money on shrimp welfare.

5. I disagree completely here, we will automate them out of existence, just like horses, and there are way too many pöbel right now, you could probably cut their numbers down to 10% of what they are now and the rate of research wouldn't fall at all. the pöbel x pöbel economy would disappear, for example rap music, sportsball, trash TV, but stuff high IQ people like would be fine because <10% of the present pöbel pop supports that stuff, including food and housing work etc. This would be epic, there would be as much or even more intelligence and high culture and research as before, but 1% of the crime, 1% of the factory farming, 1% of the pollution, 1% of the war, and 1% of the socialism (eg schools, welfare).

6. If you believe in apocalyptic AI, that's isomorphic to grey goo. Most smart people don't believe in apocalyptic AI at all.

7. Well if you scale up the amount of cats you don't get fire and cooking. If you scale up 100 IQ people you don't get 145 IQ results. If we have a 130 IQ research machine, just covering the planet in them won't produce 190 IQ results. There are things that are inaccessible to lower levels of intelligence, no matter how much you parallelize the lower level of intelligence.

It may be that to make a machine smarter than the smartest humans, you need an intelligent designer, ie something smarter can only make something dumber. In this case you have to breed smarter people to get smarter machines.

ZlasoPoblima1907's avatar

1. Charity to the poor and loving your neighbour are Christian principles. Not the core, of course (EA is secular), but is there something else about EA that makes it intrinsically non-Christian (which is not the same as anti-Christian)?

2. I am not entirely sure about that. I have seen some discussion of Malthusian traps and the like in EA circles, though not much. Also, I appreciate the humanity of your proposal here :)

3. I agree, but remember that a lot of the genetic gap in the West shows because environment has mostly been equalised. In Africa, random deadly events like war and disease just lead to people who could have become beautiful etc. dying or being crippled for life, and I do not think these events are precise enough filters for what you are getting at. Also, there are EA charities which invest in embryo selection and genetic engineering for the benefit of humanity.

4. We cannot access the qualia of anything or anyone, not even our fellow humans. There are, however, correlates, intuitions, and indications of pain that guide EA analyses. A lot of them are intelligent enough to read biology papers. As for speciesism, a lot of EAs are concerned with not being that, but might still make the argument that humans suffer disproportionately more, all things being equal. I cannot access your article, but I feel you might be conflating Moldbug's writing (which is pure word vomit, I agree) with the rest of the rationalist sphere, which, while also very wordy, in my experience is a lot more structured and carefully argued. I do find your insistence on them being blank-slatist somewhat strange, as hereditarianism has always been somewhat in the background of the subculture, to one degree or another (even Scott Alexander has talked about it recently; he even referenced one of your responses to Gusev).

5. The rate of research would fall, as global supply chains requiring growing food and extracting energy and mining minerals and manufacturing still currently depend on humans. Civilization would collapse. Food and resource extraction are not a "pöbel x pöbel economy", and rap and the other things you mention make up a minuscule amount of the global economy. Yes, it would be better if everyone was more intelligent, obviously, but that does not mean you should treat those who are less intelligent with genocidal seething hatred for their crime of being born and existing. Like it or not, they work hard, and to a degree you depend on them in the present moment (just like me), even if not counterfactually. You can be a eugenicist without wishing to see oceans of blood, and there are pathways to a better, more eugenic future (genetic engineering, perhaps those Hansonian credits he touted) without such things. But perhaps this is a fundamental disagreement between the two of us.

6. There are non-sci-fi apocalypses possible (engineered pandemics, nuclear war etc.). I guess grey goo is isomorphic, but you mentioned grey goo to make fun of the whole thing, while the more realistic pathways towards catastrophe do not lend themselves to that. As for the smart people argument: most smart people in the 20s and 30s believed that Communism was the best thing ever - you would not say that they were right, would you?

7. Your argument is fair (there are alignment proposals which entail that what you are describing can work, and I am skeptical of them), though I was saying that a smart human with much more time to think (while still being fast), many more cross-connections, and better-organized memory could be thought of as super-intelligent. There is a correlation between higher IQ and brain size for a reason. Parallelization is important, but sheer scale is also part of it.

I do not agree with the point about only being able to create dumber things. Evolution as a mechanism is dumb, but it has created smart humans. We have made narrowly superhuman AI for certain tasks (chess, go, protein folding), and we have even made AIs which do things humans cannot (that simple CNN which could tell the sex of a person from a photo of their eye, which humans cannot reliably do).

There are EA charities investing in genetic engineering, partially in the hope of having humans smart enough to solve alignment in time, but they would certainly use their intelligence for smarter machines, correct. Saying, however, that a 190-IQ person can make a 190-IQ AI but a 180-IQ person cannot, is somewhat arbitrary, as small changes in parameters and architectures can move the AI from one level to another. The main issue is whether AI will even be able to reach the level of an average human in its capacity for memory, abstract thinking, and other marks of intelligence. That we will have to see.

Leon Voß's avatar

1. Yes, EA is atheist while Christianity is about supernatural belief

2. Ok. You guys will agree to a lot of things but then not take them seriously. I think this is where Aumann's agreement theorem fails. Maybe our posteriors converge, but because we're genetically different, and belief is only a small component of behavior, ye continue to worry about alignment while I worry about dysgenics. Our posterior credible intervals might overlap on these issues substantially, but ye think dysgenics is mean, so ye fail to take it seriously.

3. ok

4. They're blank slatist, because, in line with point 2, they accept hereditarianism when backed into a corner, but they don't really take it seriously. My work is what it looks like for someone who is smarter than most rationalists, and familiar with their body of work, to take hereditarianism seriously. Does their work look like mine? Nope.

5. I think you're just quantitatively wrong here, maybe I'll write a blog post on it. How many pöbel work in food supply? How many would be required if all of the extraneous pöbel which food supply feeds were no longer eating? I think it's currently ~10%, and would be maybe 10% of that 10% under my hypothetical. You say "oceans of blood" but I never said that, I'm with Galton basically, nobody ever proposed "oceans of blood" eugenics, that's a serious straw man. A one-child policy is not "oceans of blood", China proved this, I just want to apply it eugenically, ie to a segment of the population, with the opposite policy applied to a different segment, as opposed to doing it to everyone like in China. Your massive strawmanning of me is a strong indicator of your innate bias against this topic.

6. Smart people didn't think communism was great, communists were rather stupid. That's why the smartest nations never became command economies. In general, only low-IQ nations became command economies. The kinds of people who thought communism was great are the kinds of people in EA today. I bet a lot of their grandparents were CPUSA members, just like Moldbug's.

7. It seems that <120 IQ people can't pass a real analysis course, so no matter how long such an agent thinks, their thought will just never reach the level of complexity that is an undergraduate analysis textbook.

ZlasoPoblima1907's avatar

1., 7. That is fair.

2. Bostrom, and several discussions of existential risks in EA circles, have brought up dysgenics as a potential source of humanity's downfall, and I have also seen limited discussion of recent IQ declines (though most remain unconvinced and are waiting to see how long-term this is - if some are ignoring the issue because it is "mean", they have not made themselves known). In fairness, we are talking about a heterogeneous, semi-permeable group of people - of course most will not have similar priors. But I see where your position and concerns differ from the EA sphere.

4. The "politics is the mind-killer" thing has run deep, and I imagine they consider hereditarianism to be too tainted by broader political discussions - so they do not touch it, regardless of personal beliefs regarding it. This is an unfortunate effect of both the movement's origins and what hereditarianism has historically been associated with. Also, some smart individuals might have similar beliefs about it to you - they just believe that greater good will come from other means and are focusing their careers on non-genetics related things, which they believe can be more valuable.

5. I'd love to see that post. Remember, however - economies of scale. If 90% of the population were gone, industrial agriculture, resource extraction, energy production and supply chains would break, and the new systems would be a lot more local and more primitive. You need a baseline raw number of persons to be able to operate mines, trade routes, reactors and the like. I doubt a drastic reduction in population would sustain them (though I would be interested to see if you have numbers to the contrary).

Sorry for the straw man, I did not intend it. But tweets I saw from you calling people below 120/130 IQ "without souls" disturbed me deeply and led me to believe that you would be ok with outright mass death. If that is not the case, I am sorry, I misrepresented you. Your concrete proposals, though some are personally unpalatable to me (the sub-100 universal ban on procreation, not the others), do not reach that level, I agree. Thanks for clarifying, and I am sorry that I insulted you.

6. Most of the 120+ intelligentsia of the time was deeply attracted to communism, with its idealism, heady theory, and extensive, soporific economic analyses. And communism did nearly come to the smartest nations - France brushed against it in some form during the Paris Commune, and soviets were established in Bavaria, other German places, and even Limerick in Ireland (though, obviously, they went nowhere). And I am not sure if (historical) Russia, Hungary, and China really count as "low-IQ" in your conception - correct me if I am wrong.

Leon Voß's avatar

2 & 4: "politics is the mind-killer" means you can't challenge EA political assumptions, or else they get mad, not that it is actually hard to reason about politics. They just assume blank-slate things and get mad if you challenge it. I was banned from several rationalist forums for posting HBD, for example.

5. yeah but how many people work in supply chains? Then how many do you need if a bunch of their customers are gone? The answer is clearly a fraction of a fraction.

>have disturbed me deeply and led me to the belief that you would be ok with outright mass death.

I believe technically that they mostly have a lower type of soul like an animal soul. I like animals though, I just don't want them to vote and you have to be realistic about overpopulation. Imagine if there were 10 billion chimpanzees. That's like 10 billion <100 IQ people. They can't reason, like they can't even really pass an algebra class for example. Nietzsche writes about this in Also Sprach Zarathustra. It was inspired by his reading of Galton

Also, you were clearly mind-killed; I'm not though. I guess I just have a cooler head than EAs, which is why they can't reason about interesting topics. In part it feels like you are strawmanning me, since I have never supported violence any more than Paul R. Ehrlich has.

6. They weren't, and a couple of communes means nothing, America has had communes too. The Eastern countries are clearly low IQ and were even lower IQ 100 years ago, when they were peasant nations.

ZlasoPoblima1907's avatar

2., 4. Partially, yes. I can see why that happened, sadly.

5. True - I just am not sure what the minimum is.

I have met <100 IQ people, and they are still above chimpanzees and can reason well enough to get by. I am being emotional, but I lived in an area where those who were not smart enough for engineering remained unqualified mine workers, toiling day and night digging for coal. They could not pass an algebra exam, but they were warm and reasonable enough people, and I respect them deeply. I do not feel that they had an animal soul (unless they were wife-beaters or other types of awful criminals, though I feel the same even about high-IQ people who do that).

I do not consider myself mindkilled - I understood your actual proposals, and some might be necessary in the future (though I hope not). I just had a bad emotional reaction to your tweets (I am somewhat sensitive, I confess) and attributed unwarranted beliefs and character traits to you - again, I was wrong, I apologise.

6. Fair and fair. (EDIT 1/11: Though you are still not getting to the centre of the issue. For the average, non-pathological 70-80+ range, I do not believe that there are meaningfully heterogeneous differences between the brains of functional humans. There is not a range where you have additional areas of the brain or organs which let you understand calculus and which people below do not have - it does not make sense for something like that to evolve if it does not grant meaningful improvements to survivability, which additional general, raw brainpower could grant much better anyway. It is an issue of what I mentioned earlier: connectivity, storage, memory, parallelization and speed, all converging to give people the ability to do the same things as before, but better and at higher levels - again, larger brain size. Even dumb people can do simple maths and think somewhat analytically: at higher IQ levels, however, it is much easier to have longer, broader, more recurrent, more cross-domain, faster and less energy-draining thought patterns, which give intelligent people their abilities. Maybe a dumb person could understand high-level algebra, at least parts of it, but it would take years or decades - of course they will not waste their time. The human core is the main threshold an AI needs to hit - going above that is relatively trivial. Though I guess that the moment this happens is when we could test our different hypotheses empirically - if it does happen, of course.)

EliezerYudnerdsky's avatar

The Anti-Christian charity should be intimately similar , as you appear to be faggot ratsphere cuck. Juggle Scott Suskind's balls all you like, but please leave us out of it. The AI is gonna get out of the box!!! Better give Aella girl 100k to save humanity. Retard.

Rainbow Roxy's avatar

Regarding the funding analysis presented, I wonder how the ethnic categorisations truly serve an equitable distribution goal. It reminds me of balancing biases in AI datasets, a complex undertaking.

EliezerYudnerdsky's avatar

Great critique. Bantumaxxing is a classic term and I liked how you put in an Anti-Christian category for funding waste as well.

"GPT 5 was a major let-down compared to what Bay Area AI Messiah people were predicting. It basically proved their faith in AI is fake."

Gwern on suicide watch. If you see a 200lb red headed bearded man in a My Little Pony shirt with white stains on it in the Bay Area walking around, please contact a prevention center.
