Last week, the Vatican released a document entitled Antiqua et Nova about the ethical and philosophical implications of Artificial Intelligence (AI). It was initially written with technological experts and ethicists in mind but, as I read it, I realized that its reflections on intelligence, creativity, and human dignity apply to all of us. In fact, I found it to be a profound reflection more broadly on the human experience and its relationship with technological advancement.
What stood out to me was how the document encourages a thoughtful and hopeful approach to AI. Rather than seeing it as something to fear or blindly embrace, the Church invites us to consider how it can serve humanity when guided by ethics, empathy, and moral responsibility. The tone of the text felt inclusive and bridge-building, offering a perspective that welcomes questions and dialogue.
While the document is fairly extensive at 21K words, I would nonetheless recommend reading it in full. But if you’d like to start with the passages that spoke to me most, you can find them below:1
3. The Church encourages the advancement of science, technology, the arts, and other forms of human endeavor, viewing them as part of the “collaboration of man and woman with God in perfecting the visible creation.” As Sirach affirms, God “gave skill to human beings, that he might be glorified in his marvelous works” (Sir. 38:6). Human abilities and creativity come from God and, when used rightly, glorify God by reflecting his wisdom and goodness. In light of this, when we ask ourselves what it means to “be human,” we cannot exclude a consideration of our scientific and technological abilities.
5. [T]echnological advances should be directed toward serving the human person and the common good.
7. The concept of “intelligence” in AI has evolved over time, drawing on a range of ideas from various disciplines. While its origins extend back centuries, a significant milestone occurred in 1956 when the American computer scientist John McCarthy organized a summer workshop at Dartmouth College to explore the problem of “Artificial Intelligence,” which he defined as “that of making a machine behave in ways that would be called intelligent if a human were so behaving.” This workshop launched a research program focused on designing machines capable of performing tasks typically associated with the human intellect and intelligent behavior.
8. Since then, AI research has advanced rapidly, leading to the development of complex systems capable of performing highly sophisticated tasks. These so-called “narrow AI” systems are typically designed to handle specific and limited functions, such as translating languages, predicting the trajectory of a storm, classifying images, answering questions, or generating visual content at the user’s request. While the definition of “intelligence” in AI research varies, most contemporary AI systems—particularly those using machine learning—rely on statistical inference rather than logical deduction. By analyzing large datasets to identify patterns, AI can “predict” outcomes and propose new approaches, mimicking some cognitive processes typical of human problem-solving. Such achievements have been made possible through advances in computing technology (including neural networks, unsupervised machine learning, and evolutionary algorithms) as well as hardware innovations (such as specialized processors). Together, these technologies enable AI systems to respond to various forms of human input, adapt to new situations, and even suggest novel solutions not anticipated by their original programmers.
9. Due to these rapid advancements, many tasks once managed exclusively by humans are now entrusted to AI. These systems can augment or even supersede what humans are able to do in many fields, particularly in specialized areas such as data analysis, image recognition, and medical diagnosis. While each “narrow AI” application is designed for a specific task, many researchers aspire to develop what is known as “Artificial General Intelligence” (AGI)—a single system capable of operating across all cognitive domains and performing any task within the scope of human intelligence. Some even argue that AGI could one day achieve the state of “superintelligence,” surpassing human intellectual capacities, or contribute to “super-longevity” through advances in biotechnology. Others, however, fear that these possibilities, even if hypothetical, could one day eclipse the human person, while still others welcome this potential transformation.
10. Underlying this and many other perspectives on the subject is the implicit assumption that the term “intelligence” can be used in the same way to refer to both human intelligence and AI. Yet, this does not capture the full scope of the concept. In the case of humans, intelligence is a faculty that pertains to the person in his or her entirety, whereas in the context of AI, “intelligence” is understood functionally, often with the presumption that the activities characteristic of the human mind can be broken down into digitized steps that machines can replicate.
11. This functional perspective is exemplified by the “Turing Test,” which considers a machine “intelligent” if a person cannot distinguish its behavior from that of a human. However, in this context, the term “behavior” refers only to the performance of specific intellectual tasks; it does not account for the full breadth of human experience, which includes abstraction, emotions, creativity, and the aesthetic, moral, and religious sensibilities. Nor does it encompass the full range of expressions characteristic of the human mind. Instead, in the case of AI, the “intelligence” of a system is evaluated methodologically, but also reductively, based on its ability to produce appropriate responses—in this case, those associated with the human intellect—regardless of how those responses are generated.
12. AI’s advanced features give it sophisticated abilities to perform tasks, but not the ability to think. This distinction is crucially important, as the way “intelligence” is defined inevitably shapes how we understand the relationship between human thought and this technology. To appreciate this, one must recall the richness of the philosophical tradition and Christian theology, which offer a deeper and more comprehensive understanding of intelligence—an understanding that is central to the Church’s teaching on the nature, dignity, and vocation of the human person.
14. In the classical tradition, the concept of intelligence is often understood through the complementary concepts of “reason” (ratio) and “intellect” (intellectus). These are not separate faculties but, as Saint Thomas Aquinas explains, they are two modes in which the same intelligence operates: “The term intellect is inferred from the inward grasp of the truth, while the name reason is taken from the inquisitive and discursive process.” This concise description highlights the two fundamental and complementary dimensions of human intelligence. Intellectus refers to the intuitive grasp of the truth—that is, apprehending it with the “eyes” of the mind—which precedes and grounds argumentation itself. Ratio pertains to reasoning proper: the discursive, analytical process that leads to judgment. Together, intellect and reason form the two facets of the act of intelligere, “the proper operation of the human being as such.”
15. Describing the human person as a “rational” being does not reduce the person to a specific mode of thought; rather, it recognizes that the ability for intellectual understanding shapes and permeates all aspects of human activity. Whether exercised well or poorly, this capacity is an intrinsic aspect of human nature. In this sense, the “term ‘rational’ encompasses all the capacities of the human person,” including those related to “knowing and understanding, as well as those of willing, loving, choosing, and desiring; it also includes all corporeal functions closely related to these abilities.” This comprehensive perspective underscores how, in the human person, created in the “image of God,” reason is integrated in a way that elevates, shapes, and transforms both the person’s will and actions.
16. Christian thought considers the intellectual faculties of the human person within the framework of an integral anthropology that views the human being as essentially embodied. In the human person, spirit and matter “are not two natures united, but rather their union forms a single nature.” In other words, the soul is not merely the immaterial “part” of the person contained within the body, nor is the body an outer shell housing an intangible “core.” Rather, the entire human person is simultaneously both material and spiritual.
18. Human beings are “ordered by their very nature to interpersonal communion,” possessing the capacity to know one another, to give themselves in love, and to enter into communion with others. Accordingly, human intelligence is not an isolated faculty but is exercised in relationships, finding its fullest expression in dialogue, collaboration, and solidarity. We learn with others, and we learn through others.
19. The relational orientation of the human person is ultimately grounded in the eternal self-giving of the Triune God, whose love is revealed in creation and redemption. The human person is “called to share, by knowledge and love, in God’s own life.”
21. Moving beyond the limits of empirical data, human intelligence can “with genuine certitude attain to reality itself as knowable.” While reality remains only partially known, the desire for truth “spurs reason always to go further; indeed, it is as if reason were overwhelmed to see that it can always go beyond what it has already achieved.” Although Truth in itself transcends the boundaries of human intelligence, it irresistibly attracts it. Drawn by this attraction, the human person is led to seek “truths of a higher order.”
26. [H]uman intelligence becomes more clearly understood as a faculty that forms an integral part of how the whole person engages with reality. Authentic engagement requires embracing the full scope of one’s being: spiritual, cognitive, embodied, and relational.
27. This engagement with reality unfolds in various ways, as each person, in his or her multifaceted individuality, seeks to understand the world, relate to others, solve problems, express creativity, and pursue integral well-being through the harmonious interplay of the various dimensions of the person’s intelligence. This involves logical and linguistic abilities but can also encompass other modes of interacting with reality. Consider the work of an artisan, who “must know how to discern, in inert matter, a particular form that others cannot recognize” and bring it forth through insight and practical skill. Indigenous peoples who live close to the earth often possess a profound sense of nature and its cycles. Similarly, a friend who knows the right word to say or a person adept at managing human relationships exemplifies an intelligence that is “the fruit of self-examination, dialogue and generous encounter between persons.” As Pope Francis observes, “in this age of artificial intelligence, we cannot forget that poetry and love are necessary to save our humanity.”
30. In light of the foregoing discussion, the differences between human intelligence and current AI systems become evident. While AI is an extraordinary technological achievement capable of imitating certain outputs associated with human intelligence, it operates by performing tasks, achieving goals, or making decisions based on quantitative data and computational logic. For example, with its analytical power, AI excels at integrating data from a variety of fields, modeling complex systems, and fostering interdisciplinary connections. In this way, it can help experts collaborate in solving complex problems that “cannot be dealt with from a single perspective or from a single set of interests.”
31. However, even as AI processes and simulates certain expressions of intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. Human intelligence, in contrast, develops organically throughout the person’s physical and psychological growth, shaped by a myriad of lived experiences in the flesh. Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
32. Consequently, although AI can simulate aspects of human reasoning and perform specific tasks with incredible speed and efficiency, its computational abilities represent only a fraction of the broader capacities of the human mind. For instance, AI cannot currently replicate moral discernment or the ability to establish authentic relationships. Moreover, human intelligence is situated within a personally lived history of intellectual and moral formation that fundamentally shapes the individual’s perspective, encompassing the physical, emotional, social, moral, and spiritual dimensions of life. Since AI cannot offer this fullness of understanding, approaches that rely solely on this technology or treat it as the primary means of interpreting the world can lead to “a loss of appreciation for the whole, for the relationships between things, and for the broader horizon.”
35. Considering all these points, as Pope Francis observes, “the very use of the word ‘intelligence’” in connection with AI “can prove misleading” and risks overlooking what is most precious in the human person. In light of this, AI should not be seen as an artificial form of human intelligence but as a product of it.
38. Like any human endeavor, technological development must be directed to serve the human person and contribute to the pursuit of “greater justice, more extensive fraternity, and a more humane order of social relations,” which are “more valuable than advances in the technical field.”
40. Like any product of human creativity, AI can be directed toward positive or negative ends. When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used.
41. At the same time, it is not only the ends that are ethically significant but also the means employed to achieve them. So too is the overall vision and understanding of the human person embedded within these systems. Technological products reflect the worldview of their developers, owners, users, and regulators, and have the power to “shape the world and engage consciences on the level of values.” On a societal level, some technological developments could also reinforce relationships and power dynamics that are inconsistent with a proper understanding of the human person and society.
42. Therefore, the ends and the means used in a given application of AI, as well as the overall vision it incorporates, must all be evaluated to ensure they respect human dignity and promote the common good. As Pope Francis has stated, “the intrinsic dignity of every man and every woman” must be “the key criterion in evaluating emerging technologies; these will prove ethically sound to the extent that they help respect that dignity and increase its expression at every level of human life,” including in the social and economic spheres. In this sense, human intelligence plays a crucial role not only in designing and producing technology but also in directing its use in line with the authentic good of the human person. The responsibility for managing this wisely pertains to every level of society, guided by the principle of subsidiarity and other principles of Catholic Social Teaching.
44. An evaluation of the implications of this guiding principle could begin by considering the importance of moral responsibility. Since full moral causality belongs only to personal agents, not artificial ones, it is crucial to be able to identify and define who bears responsibility for the processes involved in AI, particularly those capable of learning, correction, and reprogramming. While bottom-up approaches and very deep neural networks enable AI to solve complex problems, they make it difficult to understand the processes that lead to the solutions these systems adopt. This complicates accountability: if an AI application produces undesired outcomes, it becomes difficult to determine who is responsible. To address this problem, attention needs to be given to the nature of accountability processes in complex, highly automated settings, where results may only become evident in the medium to long term. For this, it is important that ultimate responsibility for decisions made using AI rests with the human decision-makers and that there is accountability for the use of AI at each stage of the decision-making process.
45. In addition to determining who is responsible, it is essential to identify the objectives given to AI systems. Although these systems may use unsupervised autonomous learning mechanisms and sometimes follow paths that humans cannot reconstruct, they ultimately pursue goals that humans have assigned to them and are governed by processes established by their designers and programmers. Yet, this presents a challenge because, as AI models become increasingly capable of independent learning, the ability to maintain control over them to ensure that such applications serve human purposes may effectively diminish. This raises the critical question of how to ensure that AI systems are ordered for the good of people and not against them.
46. While responsibility for the ethical use of AI systems starts with those who develop, produce, manage, and oversee such systems, it is also shared by those who use them. As Pope Francis noted, the machine “makes a technical choice among several possibilities based either on well-defined criteria or on statistical inferences. Human beings, however, not only choose, but in their hearts are capable of deciding.” Those who use AI to accomplish a task and follow its results create a context in which they are ultimately responsible for the power they have delegated. Therefore, insofar as AI can assist humans in making decisions, the algorithms that govern it should be trustworthy, secure, robust enough to handle inconsistencies, and transparent in their operation to mitigate biases and unintended side effects. Regulatory frameworks should ensure that all legal entities remain accountable for the use of AI and all its consequences, with appropriate safeguards for transparency, privacy, and accountability. Moreover, those using AI should be careful not to become overly dependent on it for their decision-making, a trend that increases contemporary society’s already high reliance on technology.
48. [T]he use of AI, as Pope Francis said, must be “accompanied by an ethic inspired by a vision of the common good, an ethic of freedom, responsibility, and fraternity, capable of fostering the full development of people in relation to others and to the whole of creation.”
51. Viewed through this lens, AI could “introduce important innovations in agriculture, education and culture, an improved level of life for entire nations and peoples, and the growth of human fraternity and social friendship,” and thus be “used to promote integral human development.” AI could also help organizations identify those in need and counter discrimination and marginalization. These and other similar applications of this technology could contribute to human development and the common good.
52. However, while AI holds many possibilities for promoting the good, it can also hinder or even counter human development and the common good. Pope Francis has noted that “evidence to date suggests that digital technologies have increased inequality in our world. Not just differences in material wealth, which are also significant, but also differences in access to political and social influence.” In this sense, AI could be used to perpetuate marginalization and discrimination, create new forms of poverty, widen the “digital divide,” and worsen existing social inequalities.
53. Moreover, the concentration of the power over mainstream AI applications in the hands of a few powerful companies raises significant ethical concerns. Exacerbating this problem is the inherent nature of AI systems, where no single individual can exercise complete oversight over the vast and complex datasets used for computation. This lack of well-defined accountability creates the risk that AI could be manipulated for personal or corporate gain or to direct public opinion for the benefit of a specific industry. Such entities, motivated by their own interests, possess the capacity to exercise “forms of control as subtle as they are invasive, creating mechanisms for the manipulation of consciences and of the democratic process.”
55. Therefore, rather than merely pursuing economic or technological objectives, AI should serve “the common good of the entire human family,” which is “the sum total of social conditions that allow people, either as groups or as individuals, to reach their fulfillment more fully and more easily.”
59. Because “true wisdom demands an encounter with reality,” the rise of AI introduces another challenge. Since AI can effectively imitate the products of human intelligence, the ability to know when one is interacting with a human or a machine can no longer be taken for granted. Generative AI can produce text, speech, images, and other advanced outputs that are usually associated with human beings. Yet, it must be understood for what it is: a tool, not a person. This distinction is often obscured by the language used by practitioners, which tends to anthropomorphize AI and thus blurs the line between human and machine.
60. Anthropomorphizing AI also poses specific challenges for the development of children, potentially encouraging them to develop patterns of interaction that treat human relationships in a transactional manner, as one would relate to a chatbot. Such habits could lead young people to see teachers as mere dispensers of information rather than as mentors who guide and nurture their intellectual and moral growth. Genuine relationships, rooted in empathy and a steadfast commitment to the good of the other, are essential and irreplaceable in fostering the full development of the human person.
61. In this context, it is important to clarify that, despite the use of anthropomorphic language, no AI application can genuinely experience empathy. Emotions cannot be reduced to facial expressions or phrases generated in response to prompts; they reflect the way a person, as a whole, relates to the world and to his or her own life, with the body playing a central role. True empathy requires the ability to listen, recognize another’s irreducible uniqueness, welcome their otherness, and grasp the meaning behind even their silences. Unlike the realm of analytical judgment in which AI excels, true empathy belongs to the relational sphere. It involves intuiting and apprehending the lived experiences of another while maintaining the distinction between self and other. While AI can simulate empathetic responses, it cannot replicate the eminently personal and relational nature of authentic empathy.
62. In light of the above, it is clear why misrepresenting AI as a person should always be avoided; doing so for fraudulent purposes is a grave ethical violation that could erode social trust. Similarly, using AI to deceive in other contexts—such as in education or in human relationships, including the sphere of sexuality—is also to be considered immoral and requires careful oversight to prevent harm, maintain transparency, and ensure the dignity of all people.
66. Another area where AI is already having a profound impact is the world of work. As in many other fields, AI is driving fundamental transformations across many professions, with a range of effects. On the one hand, it has the potential to enhance expertise and productivity, create new jobs, enable workers to focus on more innovative tasks, and open new horizons for creativity and innovation.
67. However, while AI promises to boost productivity by taking over mundane tasks, it frequently forces workers to adapt to the speed and demands of machines rather than machines being designed to support those who work. As a result, contrary to the advertised benefits of AI, current approaches to the technology can paradoxically deskill workers, subject them to automated surveillance, and relegate them to rigid and repetitive tasks. The need to keep up with the pace of technology can erode workers’ sense of agency and stifle the innovative abilities they are expected to bring to their work.
68. AI is currently eliminating the need for some jobs that were once performed by humans. If AI is used to replace human workers rather than complement them, there is a “substantial risk of disproportionate benefit for the few at the price of the impoverishment of many.” Additionally, as AI becomes more powerful, there is an associated risk that human labor may lose its value in the economic realm. This is the logical consequence of the technocratic paradigm: a world of humanity enslaved to efficiency, where, ultimately, the cost of humanity must be cut. Yet, human lives are intrinsically valuable, independent of their economic output. Nevertheless, the “current model,” Pope Francis explains, “does not appear to favor an investment in efforts to help the slow, the weak, or the less talented to find opportunities in life.” In light of this, “we cannot allow a tool as powerful and indispensable as Artificial Intelligence to reinforce such a paradigm, but rather, we must make Artificial Intelligence a bulwark against its expansion.”
85. AI could be used as an aid to human dignity if it helps people understand complex concepts or directs them to sound resources that support their search for the truth.
86. However, AI also presents a serious risk of generating manipulated content and false information, which can easily mislead people due to its resemblance to the truth. Such misinformation might occur unintentionally, as in the case of AI “hallucination,” where a generative AI system yields results that appear real but are not. Since generating content that mimics human artifacts is central to AI’s functionality, mitigating these risks proves challenging. Yet, the consequences of such aberrations and false information can be quite grave. For this reason, all those involved in producing and using AI systems should be committed to the truthfulness and accuracy of the information processed by such systems and disseminated to the public.
87. While AI has a latent potential to generate false information, an even more troubling problem lies in the deliberate misuse of AI for manipulation. This can occur when individuals or organizations intentionally generate and spread false content with the aim to deceive or cause harm, such as “deepfake” images, videos, and audio—referring to a false depiction of a person, edited or generated by an AI algorithm. The danger of deepfakes is particularly evident when they are used to target or harm others. While the images or videos themselves may be artificial, the damage they cause is real, leaving “deep scars in the hearts of those who suffer it” and “real wounds in their human dignity.”
102. At the same time, while the theoretical risks of AI deserve attention, the more immediate and pressing concern lies in how individuals with malicious intentions might misuse this technology. Like any tool, AI is an extension of human power, and while its future capabilities are unpredictable, humanity’s past actions provide clear warnings. The atrocities committed throughout history are enough to raise deep concerns about the potential abuses of AI.
103. Saint John Paul II observed that “humanity now has instruments of unprecedented power: we can turn this world into a garden, or reduce it to a pile of rubble.” Given this fact, the Church reminds us, in the words of Pope Francis, that “we are free to apply our intelligence towards things evolving positively,” or toward “decadence and mutual destruction.” To prevent humanity from spiraling into self-destruction, there must be a clear stand against all applications of technology that inherently threaten human life and dignity. This commitment requires careful discernment about the use of AI, particularly in military defense applications, to ensure that it always respects human dignity and serves the common good. The development and deployment of AI in armaments should be subject to the highest levels of ethical scrutiny, governed by a concern for human dignity and the sanctity of life.
104. Technology offers remarkable tools to oversee and develop the world’s resources. However, in some cases, humanity is increasingly ceding control of these resources to machines. Within some circles of scientists and futurists, there is optimism about the potential of artificial general intelligence (AGI), a hypothetical form of AI that would match or surpass human intelligence and bring about unimaginable advancements. Some even speculate that AGI could achieve superhuman capabilities. At the same time, as society drifts away from a connection with the transcendent, some are tempted to turn to AI in search of meaning or fulfillment—longings that can only be truly satisfied in communion with God.
105. However, the presumption of substituting God for an artifact of human making is idolatry, a practice Scripture explicitly warns against (e.g., Ex. 20:4; 32:1-5; 34:17). Moreover, AI may prove even more seductive than traditional idols for, unlike idols that “have mouths but do not speak; eyes, but do not see; ears, but do not hear” (Ps. 115:5-6), AI can “speak,” or at least gives the illusion of doing so (cf. Rev. 13:15). Yet, it is vital to remember that AI is but a pale reflection of humanity—it is crafted by human minds, trained on human-generated material, responsive to human input, and sustained through human labor. AI cannot possess many of the capabilities specific to human life, and it is also fallible. By turning to AI as a perceived “Other” greater than itself, with which to share existence and responsibilities, humanity risks creating a substitute for God. However, it is not AI that is ultimately deified and worshipped, but humanity itself—which, in this way, becomes enslaved to its own work.
108. Considering the various challenges posed by advances in technology, Pope Francis emphasized the need for growth in “human responsibility, values, and conscience,” proportionate to the growth in the potential that this technology brings—recognizing that “with an increase in human power comes a broadening of responsibility on the part of individuals and communities.”
109. At the same time, the “essential and fundamental question” remains “whether in the context of this progress man, as man, is becoming truly better, that is to say, more mature spiritually, more aware of the dignity of his humanity, more responsible, more open to others, especially the neediest and the weakest, and readier to give and to aid all.”
110. As a result, it is crucial to know how to evaluate individual applications of AI in particular contexts to determine whether its use promotes human dignity, the vocation of the human person, and the common good. As with many technologies, the effects of the various uses of AI may not always be predictable from their inception. As these applications and their social impacts become clearer, appropriate responses should be made at all levels of society, following the principle of subsidiarity. Individual users, families, civil society, corporations, institutions, governments, and international organizations should work at their proper levels to ensure that AI is used for the good of all.
112. AI should be used only as a tool to complement human intelligence rather than replace its richness.
117. From this perspective of wisdom, believers will be able to act as moral agents capable of using this technology to promote an authentic vision of the human person and society. This should be done with the understanding that technological progress is part of God’s plan for creation—an activity that we are called to order toward the Paschal Mystery of Jesus Christ, in the continual search for the True and the Good.
Notes
93. The term “bias” in this document refers to algorithmic bias (systematic and consistent errors in computer systems that may disproportionately prejudice certain groups in unintended ways) or learning bias (which results from training on a biased dataset), and not to the “bias vector” in neural networks (a parameter used to adjust the output of “neurons” so that the model fits the data more accurately).
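As a purely illustrative aside on the note above (not from the Vatican document): the “bias vector” it sets apart is just a learned per-neuron offset added to a weighted sum of inputs, a model parameter with no social meaning. A minimal sketch, with made-up weights and inputs:

```python
# Illustrative only: the "bias vector" in a neural network layer is a
# learned offset added to each neuron's weighted sum -- a tuning
# parameter, unrelated to algorithmic or learning bias in the social sense.

def neuron_layer(x, weights, bias):
    """Return each neuron's output: dot(weights_row, x) plus its bias term."""
    return [
        sum(w * xi for w, xi in zip(row, x)) + b
        for row, b in zip(weights, bias)
    ]

x = [1.0, 2.0, 3.0]                  # input features (arbitrary example)
weights = [[0.1, 0.2, 0.3],          # weights for neuron 1
           [0.4, 0.5, 0.6]]          # weights for neuron 2
bias = [0.5, -1.0]                   # the "bias vector": one offset per neuron

print(neuron_layer(x, weights, bias))   # approximately [1.9, 2.2]
```

Shifting these offsets changes each neuron’s output uniformly, which is why training adjusts them alongside the weights.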
- The introduction to this article was written by ChatGPT, while this footnote is written directly by me, Irene. Or is it? Can you tell? ↩︎