Many, many good points. But I doubt that computers are capable of responding to questions that the programmers have not anticipated. The only thing that I can imagine computers being reliably good at is storing information. Their ability to return all of that information is yet to be determined. Neither can they yet accurately rate the value, helpful or harmful, of their output. Human ability is limited enough in that respect. The story of Adam and Eve in Genesis 3 indicates that very clearly. Not only in their decision to eat the forbidden fruit, but in the ancient sage's conception of an omniscient God who 1) put the tree in the garden, 2) told the couple not to eat from it, and 3) created Satan, who could rebel. The argument that God would not have been satisfied with creating (functionally) robots can bear only so much weight. Surely an omniscient, omnipotent God could have found a solution.
One of the interesting aspects of “large language model” AI is that, because it is based on a huge statistical sampling of human linguistic patterns as they are formed into actual expressions of thought and ideas on the internet, these AIs can actually produce patterns that express reasonably accurate, human-comprehensible thoughts and ideas - and, given their access to all that knowledge, can even do so with a high enough degree of accuracy that we can learn from them. In short, as teachers, the AIs are already “smarter” than us - at least in their ability to respond accurately and completely to pretty much any question we can put to them, most of the time.
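To make that a bit more concrete, here is a toy sketch of my own (in Python - nothing here comes from an actual model, and real LLMs are vastly more sophisticated). It builds a tiny statistical table of which word tends to follow which in a small sample of text, then generates new text by sampling from those patterns:

```python
# Toy "language model": learn which word follows which, then sample.
# This is only an illustration of the statistical-pattern idea, not how
# real LLMs are implemented.
import random
from collections import defaultdict

corpus = (
    "the teacher answers the question and the student asks the teacher "
    "another question about the answer"
).split()

# Count, for each word, the words that follow it in the sample text.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate new text by repeatedly sampling a plausible next word.
word = "the"
generated = [word]
for _ in range(10):
    candidates = following.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    generated.append(word)

print(" ".join(generated))
```

Run it a few times and it produces different, always “plausible-sounding” word sequences drawn from the patterns in its sample - which is both why this approach can be genuinely useful at scale and why, as I note below, it can just as fluently make things up.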
And, for domains of knowledge where the information is both extensive and well bounded, such as computer science, LLM AIs can be very accurate tutors, with almost instant access to a much wider body of knowledge than any human teacher can achieve. I already encourage my computer science students to use AI as an interactive tutor, since this is one of those domains (as I’ve described above) where the AI very often produces answers that are accurate and helpful.
All that being said, the natural tendency of LLM AIs to “hallucinate” - that is, to make up stuff, which is actually a fundamental part of their design - also means that we need to be careful when learning from them, double-check what they teach us, and generally still go to human beings for genuine knowledge, wisdom, and insight. I don’t see AIs ever replacing teachers. But they can and already do leverage human knowledge in ways that can be both instructive and helpful.
God did not make us as robots, but one could perhaps argue that other living creatures with more limited “programming” (such as plants and single-celled organisms) are, in some respect, “automata”… perhaps not robots, exactly, but sophisticated yet much more limited intelligences without the degree of free will that we enjoy. That might be a helpful way to think of AI: as one of our limited but still more “intelligent” sub-creations.
I think attributing sentience to plants and animals is a stretch. I am watching a History of Earth series on the BBC that does that a lot, even attributing it to rocks and soil. It is just another way of trying to replace God's creativeness with evolution. Not that I question the ability of ant species to adapt to environmental changes in some rather dramatic ways. But I have yet to be convinced of any ability to transcend species identity. Denying God's use of adaptation as a tool of creation is, in my opinion, lacking in intellectual honesty.
Under the supervision of humans, I concede that AI might become a valid tool of education. But it, like the education industry, lacks the discipline of deep love that parents are capable of in their children's education.
Heh. I wasn’t intending to attribute sentience to plants or animals… rather, I suggested they have a form of intelligence that in some ways parallels that of an AI system: i.e., responsive in a semi-deterministic sort of way. When it comes to the question of whether animals have souls, I don’t go beyond what Solomon says: “Who knows whether the spirit of man goes upward and the spirit of the beast goes down into the earth?” (Eccl. 3:21)
With regards to your second paragraph, I think we are in complete agreement. :)
The immediacy of internet communication inherently makes communication more difficult, both in speaking and in listening. Unintentional inferences are nearly unavoidable. When I spoke of attributing sentience, I was thinking about the BBC series, not especially about your thoughts. What you said triggered my memory of things I heard, and reacted to, while watching that series. Whether or not there is anything in common between the two conversations might not be relevant to my wording. Ideas tend to bleed into each other in a subconscious way.
Anyway, what you responded to was not wrong to me. I understand both points of view. I am hypersensitive to what I see as errors in the thinking behind the theory, or theories, of evolution. Evolutionists tend to replace God's design with an intelligence in the material itself. I'm thinking there of plants' consciousness in struggling to evolve to the next level. It's as though they couldn't adapt without some kind of intentionality. It seems more logical to me that the levels of species would come from an omniscient creator than from blind chance--which is merely dressed up by attributing intentionality to, for instance, plants or even animals.
Sorry for the wordiness; that's just the way I think.
I really like the point of sci-fi helping inform us as we navigate and engage with these technological changes. I need to read more sci-fi! Everyone does!
I’m generally not a fan of the mindset that there’s an inevitability to how this will all go down; rather, I think we ought to participate in it with eyes open.
I like what Steve Jobs says, “Life can be so much broader, once you discover one simple fact, and that is that everything around you that you call ‘life’ was made up by people who were no smarter than you. And you can change it, you can influence it, you can build your own things that other people can use. Once you learn that, you’ll never be the same again.”
We’re not just watching something happen, we can influence it. We can bypass incentives and help push for a future that is most human!
A very hearty “Yea and amen!” to all of this, especially to the idea that our choices can and do influence how the world around us develops. That’s very much the idea behind my World of Code series: if we understand the World of Code that we live in, we can make more informed choices about how we engage with it, which will actually help to shape that world - hopefully in a more positive direction.
There are, of course, significant limitations to the degree to which we can influence the world around us, which we should also keep in mind: given that we are swimming in currents that are driven by literally billions and trillions of dollars of cash flow, voting with our wallets or our feet will have only a limited impact, even if we all work together. But even the choices that we make on the relatively limited level on which most of us are operating will have a significant impact, at least on those in our immediate vicinity.
In more practical terms, while we might not be able to stuff the AI genie back into the bottle, we can work together to make sure our wishes make the world a better place - that is, a more self-sacrificially loving rather than a more selfish one.
"Saruman believes it is only great power that can hold evil in check, but that is now what I have found. I found it is the small everyday deeds of ordinary folk that keep the darkness at bay. Small acts of kindness and love."