Cats rule the world. But let's come back to this in a second.
While I am fully on board with the self-organising negentropy argument, and would also put Gaia super-systems and human social structures, from enterprises to nation states, in the same bag (it is not a new line of thought; I read "The Web of Life" by Fritjof Capra 20 years ago), let's put into context what it means to participate in such systems as a free agent.
The spontaneous emergence of higher-complexity states is, of course, inevitable given sufficient scale, energy and mutation, all of which exist. Hence, yes, progress toward AI merging, or whatever the next-level system is, is inevitable. However, let's not confuse evolutionary inevitability with the individual free will of human agents within the system.
Back to cats. They lie around and tell us what to do, get fed, etc. For all intents and purposes they are the bosses and humans the servants. But we have the choice and free will to act as their servants. Much the same argument works for smartphones. It is a mistake to misread these relationships and assign free agency to inanimate technology.
Yes, I know... one can argue that the universe is deterministic and so human free will is an illusion, but that is a whole new discussion. For now, it is enough to say that AI is still rather deterministic (even if we can no longer follow the details), and so human agency within the system gives us plenty of local control over events, for now.
"Free agency" with respect to powerful technologies is largely an illusion, I'm afraid. Take two examples: money and writing. While you could in principle opt out of these, it would be economic suicide. Likewise, how feasible would it be today to refuse to use computers? (Barely, and falling.) A good innovation goes from curiosity to necessity with startling speed, and we end up with very little choice in the end.
Excellent take as always. I'd like to read something by you that gives rough Bayesian estimates for the likelihood of various second-order effects from AI occurring over the next century.
E.g. "30% chance in any given decade that XYZ event will occur in relation to the AI revolution."
Scott Alexander's recent blog post on why we shouldn't update too much from catastrophes makes me want to hear thoughtful analysis on this topic from you. I'd like to see what you have to say about the likelihood of catastrophes, but also the probability of various second-order effects from AI occurring.
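One thing worth noting about per-decade figures like the hypothetical "30% per decade" above: they compound quickly over a century. A minimal sketch, assuming (purely for illustration, the comment itself makes no such assumption) that the probability is independent and identical in each decade:

```python
# Illustrative only: how a hypothetical "30% chance per decade"
# compounds over a century, assuming the per-decade probability is
# independent and identical across decades.
p_decade = 0.30   # hypothetical per-decade probability of the event
decades = 10      # ten decades in a century

# Probability the event happens at least once in the century:
# the complement of it happening in none of the ten decades.
p_century = 1 - (1 - p_decade) ** decades
print(f"{p_century:.3f}")  # → 0.972
```

So a modest-sounding per-decade estimate is close to a near-certainty on a century horizon, which is part of why explicit Bayesian framing of these second-order effects would be interesting.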
The very least we can expect is a seamless planet under a Lagrange-II parasol: cooled to 1850, replete w/neuro-informatics, CRISPR, miraterials, quant & syntelligence. Colonia Martialis would seem ineluctable en route to the stars, where-amongst we'll provolve machinoidal species & genera. [Don Bronkema]
Huh?
The second law of thermodynamics holds that energy must always trend toward dissipation in the universe. The emergence of life and human progress can be thought of as part of this larger process. This is counterintuitive, because progress itself is counter-entropic, creating order from disorder. But our use of energy in creating this order serves the universe's ultimate end of accelerating energy dissipation.
We literally pump fossil fuels, which are stored solar energy, out of the Earth and burn them, dissipating about half of that energy as heat in the process. We are doing the universe's work. I actually discussed this idea a bit here: https://www.lianeon.org/p/a-fortuitous-planet-part-2
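The "about half" claim above is roughly consistent with typical thermal conversion efficiencies. A quick sanity check, using assumed ballpark efficiency figures (not taken from the linked article):

```python
# Rough sanity check on "about half dissipated as heat": typical
# thermal-conversion efficiencies for common fossil-fuel uses.
# These are assumed ballpark figures for illustration.
efficiencies = {
    "coal power plant": 0.35,
    "gas combined-cycle plant": 0.55,
    "car internal-combustion engine": 0.25,
}

for use, eta in efficiencies.items():
    waste_heat = 1 - eta  # fraction of the fuel's energy rejected as heat
    print(f"{use}: {waste_heat:.0%} lost as heat")
```

Averaged across uses like these, somewhere around half or more of the chemical energy ends up directly as waste heat, which is in the spirit of the comment's figure.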