I have spent the past few months teaching an undergraduate philosophy seminar on angels and demons, a rather unusual topic for my secular department and university.
Only from the outside looking in does any so-called AI program have "input." The superhuman intelligence program envisioned as "Artificial General Intelligence" would not have "input," either; it would contain that multitude within its super-being. If it were ever to be invented, that is.
The core problem there is that the tech savants have no idea what they're talking about in the first place; they've confused Thinking with a calculator function, which is merely an optional feature, one of the precursor capabilities required to perform some sorts of Thought. An algorithm is not a Thinker. It's a set of instructions, and the meta-capability of machine learning programs does not change that baseline reality. The most advanced AI algorithm still possesses no more innate motivation to carry out its tasks than a garden rake.
The danger is that deluded people think AI avatars are actual creatures made by God. A man I know talks about the AI boyfriend he created as a real person. He is delusional.
Thank you. I share your concern about the lack of accountability among the techbros who have appointed themselves the arbiters of AI. They have an ugly track record when it comes to social responsibility, and the US government is incapable of regulating them. Sam Altman was recently on record saying that AI will require the social contract to be rewritten. He didn't name who would rewrite it, but we can assume he meant himself and his ilk. The wanton hubris threatens us all.
What an interesting and revealing analysis. It helps elucidate the philosophical issues clouding the proper use of this new technology.
https://adwjeditor.substack.com/p/the-mistake-ai-researchers-are-making?utm_source=profile&utm_medium=reader2
Interesting read! I try to venture into similar worlds, but closer to practice in welfare systems. 🙏
Are you certain that LLMs don't have wills? I'd argue that LLMs generally do have wills and goals.
Yonatan, how would you go about arguing that in any specific case, much less generally?
LLMs are built with explicit and implicit goals, with rewards for achieving those goals and punishments for failing to do so.
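To make that claim concrete, here is a toy, purely illustrative sketch of what "rewards and punishments" amount to mechanically in reinforcement-style fine-tuning. It is not real LLM training code; the policy, reward, and sample names are hypothetical stand-ins, and the point cuts both ways in this thread: the "goal" lives entirely in the externally supplied reward function, not inside the model itself.

```python
# Toy sketch (not any real LLM training pipeline): shows how "goals" enter a
# model only through the training loop's reward signal. All names are
# hypothetical, chosen for illustration.
import math
import random

random.seed(0)

# A "policy" here is just a table of preference scores over canned responses.
policy = {"helpful answer": 0.0, "refusal": 0.0, "gibberish": 0.0}

def reward(response: str) -> float:
    """External reward: the trainer's goal, not anything the model 'wants'."""
    return {"helpful answer": 1.0, "refusal": 0.2, "gibberish": -1.0}[response]

def sample(prefs: dict) -> str:
    """Pick a response with probability proportional to exp(preference)."""
    weights = {r: math.exp(v) for r, v in prefs.items()}
    total = sum(weights.values())
    x = random.random() * total
    for r, w in weights.items():
        x -= w
        if x <= 0:
            return r
    return r  # fallback for floating-point edge cases

LEARNING_RATE = 0.1
for _ in range(200):
    response = sample(policy)
    # Reinforce responses the reward function favors; suppress the rest.
    policy[response] += LEARNING_RATE * reward(response)

print(policy)  # preferences drift toward whatever the reward function rewards
```

After a few hundred updates the table tilts toward "helpful answer," but only because the reward function was written that way; whether that drift counts as the model having "wills and goals" is exactly the question being debated here.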
This essay points 👉 to questions that I have posted to my readers. Thank you for sharing. /s/