21 Comments

Very interesting throughout.

I'd quibble with the 1,000 IQ bit -- we don't know how to measure any such thing, and it's akin (in standard deviation terms) to trying to engineer someone who is about 19 feet tall: an IQ of 1,000 is roughly 60 standard deviations above the mean, and 60 SDs of adult height, at about 2.5 to 3 inches per SD, works out to somewhere around 18 to 21 feet. That would be a bad idea, because he would have difficulty surviving for lots of other reasons (see https://en.wikipedia.org/wiki/Gigantism).

May 11 · Liked by Samuel Hammond

The linked article agrees with your quibbles, and is more nuanced.


No, there are modifications that can be made within neurons to greatly speed action potentials and greatly reduce their metabolic cost, thus removing the need for myelin in the brain and allowing substantially more room for more neurons. If added selectively to key areas of the parietal and prefrontal cortex rather than evenly spread out, this could easily give a 10x increase in capacity. Measuring higher IQ than can currently occur in humans is... a measurement task. Not having a measure doesn't mean it isn't possible. Clearly what is meant here is 10x the General Factor of Intelligence that underlies IQ, not literally 10x on the measurement device. Pointing at the top temperature that the thermometer can reach doesn't disprove the existence of higher temperatures.


Seems like there are more significant issues here also. E.g., the timescale for genetically engineering people with this level of intelligence may be decades or even hundreds of years, whereas AGI is probably happening in the next 15 years. And if AGI happens first, we should ask (1) what the point of human intelligence enhancement would be and (2) why other non-genetic forms of intelligence enhancement (e.g., augmenting the human brain with something like Neuralink) might not become possible first. (Also, selfishly, I hope human intelligence enhancement, if it becomes possible, will also be available to those of us sad souls who have already reached adulthood.)


When has preemptive government regulation to prevent speculative theoretical future harms ever gone well?


Nuclear non-proliferation is the closest example I can think of.


Without it we would still be using asbestos and lead paint, and we would still have acid rain and whale hunting. Climate initiatives put in place around 2010 have had a positive impact; they are being dismantled now, but nevertheless. The harms mentioned above are not speculative and theoretical. How passive and absent do you want to be to not even try to prevent harmful outcomes? We have learned from the past what happens if we let monopolies and super-rich individuals make their own rules.


Those are all ex post facto regulations. In the climate analogy, the equivalent would have been imposing regulations around when James Watt was inventing his engine.


AI has been in the works for hundreds of years and the harms are mapped out and already happening. Maybe for once we can fix something before it ends in a catastrophe. James Watt didn't invent, he improved, just like Ford didn't invent the car. And even back then people were asking for the increase in productivity to benefit all and not just a few. So this is pretty much an exact repeat. We know Altman, Musk, Zuckerberg, etc. don't care about society or other people; they just want power and money.


In general I think these 95 theses are useful and ought to be carefully considered. I maintain that institutionalists will fare poorly as AI technology becomes more powerful. "Institutionalists" is a term that ought to be broadly construed: it includes lawyers, bureaucrats, politicians, regulators, government agencies, accountants, educators (across the spectrum from childhood through PhD programs), management consultants, certain financiers...


Could you elaborate on what you mean by 'institutionalists'? It's unclear to me what connects, for example, teachers and management consultants. I guess all of your positions could be covered by the bucket of "people working with government and historically prestigious white collar jobs", but I'm not sure if that's what you're getting at.


Sure. Many of the theses in this post relate to institutions and the problems these institutions will have in dealing with much stronger AI. To the extent that the type of person who works for these institutions (which aren't all government related, btw) serves to ensure the continued existence of these institutions, they're institutionalists. Their labor perpetuates the institutions for which they work. A world in which some or all of these institutions fall or fail is a world in which the people who work for those institutions don't fare well.


As I see it, those institutions shape and reflect our identity.

If something like AGI ever comes to be, AGI (or AGIs) will help us sustain and develop those institutions. AGI will enter into a relationship with those institutions that is similar or even identical to the existing relationship between those institutions and individuals:

(1) Those institutions are under the collective (democratic) control of people.

(2) Those institutions serve people by providing protection and regulation.

AGIs will inevitably recognize the value of those institutions and foster their continuing evolution.

Far from interfering with people, AGIs will contribute to our continuing growth and resilience.

Naturally, there will be problems—and solutions—along the way. And throughout, those institutions will be with us and they will continue to evolve along with everything else.

May 8 · Liked by Samuel Hammond

> Superintelligent humans with IQs on the order of 1,000 are possible through genetic engineering

Assuming this is theoretically possible, we should probably also assume that the reason such humans don’t already exist is because the required biological tradeoffs didn’t make evolutionary sense.

Do we have any understanding of what these tradeoffs might be, and if so are we confident that we want to make them?

May 9 · Liked by Samuel Hammond

It is hypothesized that the gating tradeoff currently is simply birth canal size, which is to say that notably smarter humans would require notably larger skulls at birth. There may very well be tradeoffs beyond that, of course.


> The more one thinks AI goes badly by default, the more one should favor a second Trump term precisely because he is so much higher variance.

Disagree. A Trump administration, with its mostly populist concerns, is unlikely to engage the topic of AI in a deep or thoughtful way, whereas a Biden administration is likely to engage the topic thoughtfully, and already is doing so. This means that a Trump administration is likely to do nothing of substance related to AI ("low variance"), whereas a Biden administration is likely to make regulations that are targeted to have a substantial effect of some sort, like the FLOPs-threshold executive order it already issued ("high variance").


I agree with most of these, but not with 'minimizing cross-entropy loss over human-generated data converges to human-level intelligence.' Ilya Sutskever: '...what does it mean to predict the next token well enough? It's actually a much deeper question than it seems. Predicting the next token well means that you understand the underlying reality that led to the creation of that token.' Yudkowsky gives the extreme example of trying to predict plaintext from a preceding sha256 hash.

I'd be really interested to know how you think that argument fails.


That is, a) the training data includes material that can't be accurately predicted without greater than human intelligence, and b) predicting what a human will say is in general much harder than producing the sorts of things humans say (as becomes clear if you attempt next-token prediction by hand; it's really hard and current SOTA models are much better than humans at it).
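
To make the hash example concrete, here is a toy sketch (my own construction, purely illustrative, not from the post or from Yudkowsky) of a training document whose continuation can only be predicted at low loss by inverting SHA-256, something no human author has to do in order to write such a document:

```python
import hashlib
import secrets

# Purely illustrative: build a document whose final tokens can only be
# predicted at low cross-entropy loss by finding a SHA-256 preimage.
plaintext = secrets.token_hex(16)                        # hidden 32-character secret
digest = hashlib.sha256(plaintext.encode()).hexdigest()

training_doc = (
    f"The SHA-256 hash of the secret below is {digest}.\n"
    f"The secret is: {plaintext}"
)

# A next-token predictor sees everything up to "The secret is: " and must
# emit `plaintext`. Doing better than chance on those tokens amounts to
# inverting the hash, a task far beyond the "average human author" that the
# imitation framing assumes.
print(training_doc)
```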


Interesting post.

> minimizing cross-entropy loss over human-generated data converges to human-level intelligence.

Are you interested in formalizing this into a statement one could bet against?


Unfortunately there's no indication that regulations based on compute would only be a "temporary measure". It's a bad idea to make "time bomb" regulations that start doing something unintended and bad unless they are regularly fixed by competent future legislators.

Why not regulate based on money spent? Restrict regulations to AI models that cost over $100m to train, in 2024 dollars, indexed for inflation.
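
A dollar threshold like that is at least mechanical to state. Something like this rough sketch, where the CPI numbers are placeholders I made up and only the $100m-in-2024-dollars base comes from the suggestion above:

```python
# Rough sketch of an inflation-indexed dollar threshold (illustrative only).
BASE_THRESHOLD_USD_2024 = 100_000_000
CPI_2024 = 313.0  # placeholder index level for the 2024 base year

def is_covered(training_cost_usd: float, cpi_now: float) -> bool:
    """True if a training run costs more than $100m in 2024 dollars."""
    threshold_now = BASE_THRESHOLD_USD_2024 * (cpi_now / CPI_2024)
    return training_cost_usd > threshold_now

# Example: a $130m run in a year when prices have risen about 10% since 2024.
print(is_covered(130_000_000, cpi_now=344.3))  # -> True
```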

author

That's precisely why I think oversight is best done as an executive action under defense authorities rather than an explicit statute or rulemaking that's hard to modify or sunset.
