9 Comments

What an interesting and revealing analysis. It helps elucidate the philosophical issues clouding the proper use of this new technology.


Only from the outside looking in does any so-called AI program have "input." The superhuman intelligence program envisioned as "Artificial General Intelligence" would not have "input" either; it would contain that multitude within its super-being. If it were ever to be invented, that is.

The core problem there is that the tech savants have no idea what they're talking about in the first place; they've confused Thinking with a calculator function, which is merely an optional feature that comprises one of the precursor capabilities required to perform some sorts of Thought. An algorithm is not a Thinker. It's a set of instructions, and the meta-capability of machine learning programs does not change that baseline reality. The most advanced AI algorithm still possesses no more innate motivation to carry out its tasks than a garden rake.

https://adwjeditor.substack.com/p/the-mistake-ai-researchers-are-making?utm_source=profile&utm_medium=reader2


The danger is that deluded people think AI avatars are actual creatures made by God. A man I know talks about the AI boyfriend he created as a real person. He is delusional.


Thank you. I share your concern about the lack of accountability among the techbros who have appointed themselves the arbiters of AI. They have an ugly track record when it comes to social responsibility. And the US government is incapable of regulating them. Sam Altman recently went on record saying that AI will require the social contract to be rewritten. He didn't name who would rewrite it, but we can assume he meant himself and his ilk. The wanton hubris threatens us all.


Interesting read! I try to venture into similar worlds, but closer to practice in welfare systems. 🙏


Are you certain that LLMs don't have wills? I'd argue that, generally, LLMs have wills and goals.


Yonatan, how would you go about arguing that in any specific case, much less generally?


LLMs are built with explicit and implicit goals, with rewards for achieving those goals and punishments for failing to do so.


This essay points 👉 to questions that I have posed to my readers. Thank you for sharing. /s/
