Abstract
Technology has become one of the main channels through which people engage in most of their everyday activities. When working, learning, or socializing, the affordances created by technological tools shape the way in which users interact with one another and with their environment, favoring certain actions and behaviors while discouraging others. The ethical dimension of the use of technology has already been studied in recent works, but the question is often formulated in a protective way that focuses on shielding users from potential detrimental effects. Nevertheless, when considering the collateral ethical benefits that the use of technology could bring about, virtue ethics and the notions of “practice” and “practical wisdom” present new opportunities to harness this potential. By understanding the combination of technology, its users, and their interactions as a system, technology can be seen as the space where most of its users’ daily practice happens. Through this practice, users can get the chance to collaterally develop and enhance their ethical awareness, sensitivity, and reasoning capabilities. This work is shaped as a manifesto that provides the background, motivations, and directions needed to ask a complementary question about the ethics of technology, one aimed at the potential behind the use of technology. Instead of focusing on shielding users, the proposed ethical idealist approach to the ethics of technology aims to empower them by understanding their use of technology as the space where the development of their practical wisdom, understood in the virtue ethics sense, takes place.
Data Availability
Not applicable
Notes
Some contemporary authors use different terms that roughly point to this notion. In Garcés (2021, p. 19), the author refers to a “dignified”, or a “decent life” (translated from the Catalan term “vida digne”); in Han (2017, pp. 17–28), the author introduces a sort of negative counterpart as the “bare life” (which corresponds more to a matter of existing, rather than to experiencing any sort of flourishing and fulfillment); in Vallor (2016, pp. 21–22, 124), the author uses the term “flourishing” to refer to living a fulfilling life allowing both personal and collective growth—which, in a sense, is tied to the existence of a possibility space enabling growth that certain ethical theories, like ethics of care (Held et al., 2006), or relational ethics (Metz & Miller, 2016), aim to create. In the present work, these notions will be used interchangeably.
The term “affordance” was introduced by the American psychologist James J. Gibson in the following way: “the affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill” (Gibson, 1979, pp. 127–137). In the field of human-computer interaction, an affordance is often understood in a similar way as the relation between the object and the user: “an affordance is what the user can do with an object based on the user’s capabilities. As such, an affordance is not a ‘property’ of an object (...) An affordance is, in essence, an action possibility in the relation between user and an object” (Interaction Design Foundation, n.d.).
Different fields provide slightly different definitions of “autonomy”, but they share a common core involving the notions of agency and the capacity to make an informed choice free of any form of coercion.
This is expressed quite succinctly in Pojman & Fieser (2016, p. 142) in the following way: “Deontological systems focus on an egoistic, minimal morality whose basic principles seem more preventative than positive”.
In Han (2018, p. 44), and referring to an idea of Alain Ehrenberg, the author points out how today’s culture, centered around performance and optimization, avoids devoting effort to conflict, as conflict usually requires time—time that could be seen as “wasted”, as it could have been used in a more productive way instead. Nevertheless, the author argues that conflicts are not destructive per se, as they also have a constructive side through which the subject grows and becomes more mature by facing them. Behind this reflection, one might find another clue regarding today’s preference for “externally driven” ethical theories, as they try to evade internal (subjective) conflict, whereas virtue ethics retains subjective conflict at its core—and as a prerequisite for the subject to develop their own practical wisdom.
In fact, since the last decades of the twentieth century there has been a contemporary renewal of interest in virtue ethics theories from different authors (Anscombe, 1958; MacIntyre, 2013; Vallor, 2016, pp. 20–23), probably motivated in part by the shortcomings identified in other, more popular approaches to ethics—such as utilitarian and deontological approaches.
It should be enough to recall how the recent COVID-19 global pandemic, or some of the current environmental challenges, require an active and responsible participation from practically every member of society and how, beyond practices that can be regulated, each person’s own autonomous decisions are key in order to face such challenges with some guarantee of success.
In fact, while some authors, like Smart et al. (2017), see the Internet as a scaffolding for our own cognition and cognitive processes, other authors, such as Krueger & Osler (2019), go one step further and suggest that the Internet is a scaffolding also for our affective processes—including our moods, emotions, and our ability to regulate the way we feel.
Although some offline technologies could also be relevant for this work, the focus will be placed on online technologies, as these are the kind of technologies where interactions among users are most common. Furthermore, because some ethical theories, like ethics of care and relational ethics, which strongly resonate with the motivations behind this work, are strongly based on interactions and relations between people, technologies that allow such interactions are a natural first step. Ideally, even if it is carried out through an online medium, the development of one’s practical wisdom, together with the enhancement of one’s ethical autonomy, should ultimately be translatable to offline contexts: as one becomes more aware of and sensitive towards ethically relevant matters, one should be able to identify and react to them in an ethically desirable way both online and offline. In fact, as pointed out in Floridi (2018), it can be considered that today we live neither offline nor online, but onlife, in an almost inseparable mixture of the analogue and the digital, the offline and the online.
The term “ethically undesirable consequences” refers to those outcomes of certain decisions that are detrimental for the user because, among other reasons, they might be unfair, biased, or contrary to certain principles that would be beneficial for the user.
In this work, the term “system” is used to refer to the structure that arises from a particular way of organizing a matter, whereas the term “status quo” refers to the specific instantiation of elements that occupy a place in such structure.
It is worth noting how “transparency” is often established as a requirement for “trust”, in the sense that automated processes in AI-driven technologies should be explicitly monitorable by external users. However, as Byung-Chul Han points out (Han, 2020, p. 91), trust should not need transparency: trust is built, precisely, under circumstances of asymmetrical information, when one party believes that the other will do as promised without the need to supervise their work. Ironically, when one needs full transparency, trust is precisely what is missing.
There are already some works, like Mohamed et al. (2020), that bring theories about possible reconfigurations of power distributions in our society, such as decolonial theory, to bear on technology and AI, examining the effects that such theories could have on them.
Needless to say, this same approach can also be used as a marketing and consumerism strategy without any intention of nudging healthy habits (Wilkinson, 2013).
References
Acton, H. B., & Watkins, J. W. (1963). Symposium: Negative utilitarianism. Proceedings of the Aristotelian Society, Supplementary Volumes, 37, 83–114.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica, May 23, 2016.
Anscombe, G. E. M. (1958). Modern moral philosophy. Philosophy, 33(124), 1–19.
Association for Computing Machinery (ACM) (2018). ACM code of ethics and professional conduct. https://www.acm.org/code-of-ethics/. Accessed 27 July 2022.
Association of Nordic Engineers (2021). Addressing ethical dilemmas in AI: Listening to engineers. https://nordicengineers.org/wp-content/uploads/2021/01/addressing-ethical-dilemmas-in-ai-listening-to-the-engineers.pdf. Accessed 26 Oct 2021.
Blaschke, L. M. (2012). Heutagogy and lifelong learning: A review of heutagogical practice and self-determined learning. The International Review of Research in Open and Distributed Learning, 13(1), 56–71.
Blaschke, L. M. (2018). Self-determined learning (heutagogy) and digital media creating integrated educational environments for developing lifelong learning skills. In The digital turn in higher education (pp. 129–140). Wiesbaden: Springer.
Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334), 183–186.
Carr, A., Balasubramanian, K., Atieno, R., & Onyango, J. (2018). Lifelong learning to empowerment: beyond formal education. Distance Education, 39(1), 69–86.
Chirkov, V. I., Ryan, R., & Sheldon, K. M. (2010). Human autonomy in cross-cultural context: Perspectives on the psychology of agency, freedom, and well-being. Dordrecht: Springer.
Dancy, J. (1993). An ethic of prima facie duties. In P. Singer (Ed.), A companion to ethics (pp. 219–229). Hoboken: John Wiley & Sons.
Davis, N. (1993). Contemporary deontology. In: P. Singer (Ed.), A companion to ethics (pp. 205–218). Hoboken: John Wiley & Sons.
Deci, E. L., & Ryan, R. M. (1995). Human autonomy. In M. H. Kernis (Ed.), Efficacy, agency, and self-esteem (pp. 31–49). New York: Springer.
EU Parliament (2021). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on AI and amending certain Union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52021PC0206&from=EN. Accessed 03 Nov 2021.
Favaretto, M., De Clercq, E., & Elger, B. S. (2019). Big data and discrimination: Perils, promises and solutions. A systematic review. Journal of Big Data, 6(1), 1–27.
Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy & Technology, 31(1), 1–8.
Frischmann, B., & Selinger, E. (2018). Re-engineering humanity. Cambridge University Press.
Garcés, M. (2021). Nova il·lustració radical. Barcelona: Anagrama.
Gibson, J. J. (1979). The ecological approach to visual perception. New York: Houghton Mifflin Harcourt (HMH).
Gilbert, F. (1941). Political thought of the Renaissance and Reformation. The Huntington Library Quarterly, 443–468.
Han, B.-C. (2017). The agony of eros. Cambridge: MIT Press.
Han, B.-C. (2018). The expulsion of the other: Society, perception and communication today. Hoboken: John Wiley & Sons.
Han, B.-C. (2020). The transparency society. Stanford: Stanford University Press.
Held, V., et al. (2006). The ethics of care: Personal, political, and global. Oxford University Press on Demand.
HLEG on AI (2019). High-level expert group on artificial intelligence: Ethics guidelines for trustworthy AI. https://www.aepd.es/sites/default/files/2019-12/ai-ethics-guidelines.pdf. Accessed 03 Nov 2021.
IEEE (2016). Ethically aligned design: Prioritizing human wellbeing with autonomous and intelligent systems. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf. Accessed 15 Nov 2021.
Interaction Design Foundation (n.d.). Interaction design foundation: Affordances. https://www.interaction-design.org/literature/topics/affordances. Accessed 29 Nov 2021.
Johnston, C. (2014). No dislike button for Facebook, declares Zuckerberg (The Guardian). https://www.theguardian.com/technology/2014/dec/12/no-dislike-button-for-facebook-declares-zuckerberg. Accessed 27 July 2022.
Krueger, J., & Osler, L. (2019). Engineering affect. Philosophical Topics, 47(2), 205–232.
MacIntyre, A. (2013). After virtue (3rd ed.). London: A&C Black. (First edition published in 1981).
McInnerney, J. M., & Roberts, T. S. (2004). Online learning: Social interaction and the creation of a sense of community. Journal of Educational Technology & Society, 7(3), 73–81.
Metz, T., & Miller, S. C. (2016). Relational ethics. The International Encyclopedia of Ethics, 1–10.
Mill, J. S. (1987). Utilitarianism and other essays. New York: Penguin Classics.
Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659–684.
Pojman, L. P., & Fieser, J. (2016). Cengage advantage ethics: Discovering right and wrong. Cengage Learning.
Roshwald, M. (1971). Realism and idealism in politics. Social Science, 100–107.
Russell, S., Dewey, D., & Tegmark, M. (2015). Research priorities for robust and beneficial artificial intelligence. AI Magazine, 36(4), 105–114.
Smart, P., Heersmink, R., & Clowes, R. W. (2017). The cognitive ecology of the internet. In Cognition beyond the brain (pp. 251–282). Springer.
Square-Enix (n.d.). Nier: Automata. https://nierautomata.square-enix-games.com. Accessed 27 July 2022.
Suciu, P. (2021). YouTube removed ‘dislikes’ button – it could impact ‘how to’ and ‘crafts’ videos (Forbes). https://www.forbes.com/sites/petersuciu/2021/11/24/youtube-removed-dislikes-button--it-could-impact-how-to-and-crafts-videos/?sh=766c3f4c5a53. Accessed 27 July 2022.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. London: Yale University Press.
United Nations (1948). Universal declaration of human rights. https://www.un.org/en/about-us/universal-declaration-of-human-rights. Accessed 15 Oct 2021.
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.
Weinmann, M., Schneider, C., & Vom Brocke, J. (2016). Digital nudging. Business & Information Systems Engineering, 58(6), 433–436.
Wikipedia (n.d.-a). EarthBound. https://en.wikipedia.org/wiki/EarthBound. Accessed 27 July 2022.
Wikipedia (n.d.-b). Wii Sports. https://en.wikipedia.org/wiki/Wii_Sports. Accessed 27 July 2022.
Wilkinson, T. M. (2013). Nudging and manipulation. Political Studies, 61(2), 341–355.
Yapo, A., & Weiss, J. (2018). Ethical implications of bias in machine learning. In Proceedings of the 51st Hawaii international conference on system sciences (pp. 5365–5372).
Zuolo, F. (2016). Realism and idealism. In A companion to political philosophy: Methods, tools, topics (pp. 75–85). London: Routledge.
Acknowledgements
The author wants to thank the two anonymous reviewers, as well as Prof. Josep M. Basart, for their insightful comments on the first version of this manuscript, which were very helpful in improving and clarifying parts of it.
Funding
This work has been supported by a UOC postdoctoral stay.
Author information
Contributions
The author approved the final version of the manuscript.
Ethics declarations
Ethics Approval and Consent to Participate
Not applicable
Consent for Publication
The author consents to the publication of this manuscript.
Competing Interests
The author declares no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article is part of the Topical Collection on Information in Interactions between Humans and Machines.
Rights and permissions
Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Casas-Roma, J. Ethical Idealism, Technology and Practice: a Manifesto. Philos. Technol. 35, 86 (2022). https://doi.org/10.1007/s13347-022-00575-7