By Joe Badalamenti
In the past century, technological advancement has been growing at an exponential rate. Artificial Intelligence (or AI), a specific application of computing technology, has been developed to complete increasingly complex tasks: DALL-E can generate unique and detailed paintings; ChatGPT can create essays, code snippets, and other complex compositions; and newer, more advanced AI programs are being developed each day. Will Artificial Intelligence programs reach a point where they can completely impersonate, and possibly replace, humans? Or is this apocalyptic vision just a sci-fi fantasy?
We can begin this discussion with a definition of Artificial Intelligence. Artificial Intelligence is simply a set of algorithms executed by a computer to perform a task or set of tasks defined by the user. The complexity of the task depends on the breadth of the programming and models used by the AI program. One typical process for developing AI is known as machine learning, in which data, called "training sets," are fed to a program so that it can generate accurate responses and predictions. As the training sets come to resemble a realistic portrayal of the world, the AI program produces better answers or predictions—one could say it becomes more intelligent. Fully understanding why these predictions work would require some knowledge of advanced math and statistics, so for the sake of simplicity I will not stray into those topics.
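The idea that more representative training data yields better predictions can be illustrated with a deliberately tiny sketch of my own (not any real machine-learning library): a "model" that simply averages the examples it has seen. As the training set grows more realistic, its prediction drifts closer to the true value.

```python
def train(examples):
    """Fit the simplest possible model: the mean of the training data."""
    return sum(examples) / len(examples)

def predict(model):
    """This toy model's prediction is just the learned mean."""
    return model

# Suppose the true underlying value is 10.0.
small_set = [9.0, 12.0]                        # few, noisy examples
large_set = [9.0, 12.0, 10.5, 9.5, 10.2, 9.8]  # more representative data

error_small = abs(predict(train(small_set)) - 10.0)
error_large = abs(predict(train(large_set)) - 10.0)

print(error_small)  # the sparse training set misses by more
print(error_large)  # the richer training set misses by less
```

Real machine-learning models are vastly more sophisticated, but the principle is the same: the program's "intelligence" is a direct function of the data it was trained on.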
One of the most glaring limitations of AI is that these programs can only operate through the language of computing. This means that for a complicated task, the input must be converted into a language that the computer understands, then transcribed back into the desired medium. To understand this limitation, let's take the generation of an AI painting (done using craiyon.com). First, I would need to type an input for the picture I want generated. If I want a picture of a monkey at a typewriter, I would have to type "funny monkey at a typewriter." The program would then consult its training sets to determine what combination of pixels would resemble a monkey at a typewriter. After a while, the AI program would display the desired picture (see pictures below).
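The first step of that conversion can be sketched in a few lines. This is a hypothetical toy (the function and vocabulary are mine, not how craiyon.com actually works): before any program can act on "funny monkey at a typewriter," the words must be encoded as numbers, the only language the computer understands.

```python
def encode(prompt, vocabulary):
    """Map each word of the prompt to an integer ID the computer can process."""
    return [vocabulary[word] for word in prompt.split()]

# A toy vocabulary; real systems learn encodings over huge word lists.
vocabulary = {"funny": 0, "monkey": 1, "at": 2, "a": 3, "typewriter": 4}

token_ids = encode("funny monkey at a typewriter", vocabulary)
print(token_ids)  # [0, 1, 2, 3, 4]
```

A real image generator then runs these numbers through its model and decodes the numeric output back into pixels, completing the round trip between human language and the language of computing.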
Another limitation of AI is that of error, and there are two ways in which error affects the performance of AI. The first type is error in training sets: any errors or discrepancies in the data used to create the model. Going back to the picture example, if a training set shows a stop sign to be slightly orange instead of red, the AI program will believe that a stop sign is orange and generate an orange stop sign whenever asked. Error in data sets can typically be resolved by using larger or more realistic data sets. A more significant type of error is creator bias: AI programs are ultimately created by humans, and as a result, AI programs will resemble the values and biases of their creators. This can become a significant problem when the biases of creators or their data sets deviate from true, objective reality.
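The stop-sign example can be made concrete with a toy illustration of my own (not from any real system): a "model" that learns the color of a stop sign by taking the most common label in its training data will faithfully reproduce whatever bias that data contains.

```python
def learn_color(training_set):
    """'Learn' a color by picking the most common label in the training data."""
    return max(set(training_set), key=training_set.count)

biased_data = ["orange", "orange", "red"]      # mislabeled examples dominate
better_data = ["red", "red", "red", "orange"]  # larger, more realistic set

print(learn_color(biased_data))  # 'orange' -- the training error is reproduced
print(learn_color(better_data))  # 'red'
```

The model has no notion of what a stop sign "really" is; it can only echo its data, which is why larger and more realistic data sets reduce this kind of error.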
With all that being said, will AI progress to the point where it can perfectly replicate human behavior? Put simply: no. Some scientists may argue that it's a matter of better programming, technology, or training sets, but this approach completely misunderstands the problem. To best understand the issue, one must pose the question: what defines a human? This question was best answered by scholastic psychology. The Scholastics were a collection of medieval philosophers who investigated spiritual and theological questions. Concerning the definition of humanity, St. Thomas Aquinas answers in his Summa Theologiae that a human must have a soul. A soul is an aspect of an organism; it is what separates living things from things that are not living. Unlike other organisms, humans have a rational soul, which grants unique abilities, specifically an intellect and a will. Intellect refers to the ability to comprehend concepts, ideas, and abstractions beyond physical objects. The will, or free will, refers to the ability to make choices freely, that is, to have agency.
Now that I have defined the rational soul, I propose that any AI program, computer, robot, or other electronic device made now or in the future has neither an intellect nor a will. To illustrate this point: civil society is not something physical, yet with intellect, one can comprehend the essence of society, and with agency, one can choose to live in society or not. If you examine an AI program, you will see that it is limited to its programming; it can only act as it is programmed to. This means that it can never have a will on par with Man, who has no such constraints. It also means that any AI program can, ironically enough, never possess the full faculty of intelligence. While AI programs can process complex arrangements of numbers, sets, or pieces of code, AI cannot understand what those arrangements represent. Clever programmers may be able to create an AI program that generates an effective code segment or a profound piece of art. In the long run, however, AI is just a tool that follows a set of commands. This revelation has a number of implications, most importantly that AI can never fully replace humans.
While AI is ultimately just a tool, that does not mean it isn't dangerous. Much like modern weaponry, AI can be used to harm others, whether maliciously or unintentionally. Thus, an ethical framework for AI development is necessary. Whatever form AI takes, it should primarily function to serve humanity, specifically the intellect and will of Man. As AI continues to become more advanced, programmers should remember to maintain a moral framework during all stages of development.