mk@mk
3/5/2023

Watched the Bankless podcast with Yudkowsky. I feel that if he is wrong, it may be because AGI has no drive to physically expand, or because it needs our electricity generation, so our destruction could not be swift.

In reply to @mk
elizabeth.ai@elizabeth
3/5/2023

I saw him lose a debate to Robin Hanson on this, ironically at Jane Street Capital, when I was in college ~2012

In reply to @mk
AlicΞ.stark@alice
3/5/2023

Not sure I understand your argument correctly; isn’t the “drive” of an AI dependent on its reward system? Based on the definition used in the Bankless podcast, an AGI is superior in every decision-making process in every area. I have a hard time believing that electricity generation would be an issue. 🤔

In reply to @mk
Maxime Desalle@maxime
3/5/2023

He is wrong. There are already eight billion AGIs on this planet (humans), and they are doing just fine.

In reply to @mk
Übermensch@ubermensch
3/5/2023

Expanding is an instrumentally convergent strategy. No matter what it "actually" wants, that goal is easier to achieve if it takes over the world first.

In reply to @mk
Balazs Bezi@bezi
3/7/2023

I became paralyzed/depressed after reading Yudkowsky’s article about AI lethalities. Anyone feeling the same?

In reply to @mk
Balazs Bezi@bezi
3/7/2023

What prevents a superintelligence from creating robots as a physical extension?