Contents
47 found
  1. AI-Based Solutions for Environmental Monitoring in Urban Spaces. Hilda Andrea - manuscript
    The rapid advancement of urbanization has necessitated the creation of "smart cities," where information and communication technologies (ICT) are used to improve the quality of urban life. Central to the smart city paradigm is data integration—connecting disparate data sources from various urban systems, such as transportation, healthcare, utilities, and public safety. This paper explores the role of Artificial Intelligence (AI) in facilitating data integration within smart cities, focusing on how AI technologies can enable effective urban governance. By examining the current (...)
  2. Artificial Leviathan: Exploring Social Evolution of LLM Agents Through the Lens of Hobbesian Social Contract Theory. Gordon Dai, Weijia Zhang, Jinhan Li, Siqi Yang, Chidera Ibe, Srihas Rao, Arthur Caetano & Misha Sra - manuscript
    The emergence of Large Language Models (LLMs) and advancements in Artificial Intelligence (AI) offer an opportunity for computational social science research at scale. Building upon prior explorations of LLM agent design, our work introduces a simulated agent society where complex social relationships dynamically form and evolve over time. Agents are imbued with psychological drives and placed in a sandbox survival environment. We conduct an evaluation of the agent society through the lens of Thomas Hobbes's seminal Social Contract Theory (SCT). We (...)
  3. What Good is Superintelligent AI? Tanya de Villiers-Botha - manuscript
    Extraordinary claims about both the imminence of superintelligent AI systems and their foreseen capabilities have gone mainstream. It is even argued that we should exacerbate known risks such as climate change in the short term in the attempt to develop superintelligence (SI), which will then purportedly solve those very problems. Here, I examine the plausibility of these claims. I first ask what SI is taken to be and then ask whether such SI could possibly hold the benefits often envisioned. I conclude (...)
  4. What is AI safety? What do we want it to be? Jacqueline Harding & Cameron Domenico Kirk-Giannini - manuscript
    The field of AI safety seeks to prevent or reduce the harms caused by AI systems. A simple and appealing account of what is distinctive of AI safety as a field holds that this feature is constitutive: a research project falls within the purview of AI safety just in case it aims to prevent or reduce the harms caused by AI systems. Call this appealingly simple account The Safety Conception of AI safety. Despite its simplicity and appeal, we argue that (...)
  5. On DeLancey’s The Passionate Engines: Affective engineering and counterfactual thinking. [REVIEW] Manh-Tung Ho - manuscript
    Craig DeLancey's The Passionate Engines presents a comprehensive account of “what basic emotions reveal about central problems of the philosophy of mind” (2001, p. vii). The book discusses four major issues: the affect program theory, intentionality, phenomenal consciousness, and artificial intelligence (AI). In this essay, I would like to briefly review the major tenets in the book and then focus on its discussion of AI, which has not been reviewed in detail. I outline some of the recent developments in cognitive (...)
  6. A taxonomy of epistemic injustice in the context of AI and the case for generative hermeneutical erasure. Warmhold Jan Thomas Mollema - manuscript
    Whether related to machine learning models’ epistemic opacity, algorithmic classification systems’ discriminatory automation of testimonial prejudice, the distortion of human beliefs via the hallucinations of generative AI, the inclusion of the global South in global AI governance, the execution of bureaucratic violence via algorithmic systems, or located in the interaction with conversational artificial agents, epistemic injustice related to AI is a growing concern. Based on a proposed general taxonomy of epistemic injustice, this paper first sketches a taxonomy of the types (...)
  7. “i am a stochastic parrot, and so r u”: Is AI-based framing of human behaviour and cognition a conceptual metaphor or conceptual engineering? Warmhold Jan Thomas Mollema & Thomas Wachter - manuscript
    Understanding human behaviour, neuroscience and psychology using the concepts of ‘computer’, ‘software and hardware’ and ‘AI’ is becoming increasingly popular. In popular media and parlance, people speak of being ‘overloaded’ like a CPU, of ‘computing an answer to a question’, or of ‘being programmed’ to do something. Now, given the massive integration of AI technologies into our daily lives, AI-related concepts are being used to metaphorically compare AI systems with human behaviour and/or cognitive abilities like language acquisition. Rightfully, the epistemic success of (...)
  8. Before the Systematicity Debate: Recovering the Rationales for Systematizing Thought. Matthieu Queloz - manuscript
    Over the course of the twentieth century, the notion of the systematicity of thought has acquired a much narrower meaning than it used to carry for much of its history. The so-called “systematicity debate” that has dominated the philosophy of language, cognitive science, and AI research over the last thirty years understands the systematicity of thought in terms of the compositionality of thought. But there is an older, broader, and more demanding notion of systematicity that is now increasingly relevant again. (...)
  9. Sideloading: Creating A Model of a Person via LLM with Very Large Prompt. Alexey Turchin & Roman Sitelew - manuscript
    Sideloading is the creation of a digital model of a person during their life via iterative improvements of this model based on the person's feedback. The progress of LLMs with large prompts allows the creation of very large, book-size prompts which describe a personality. We will call mind-models created via sideloading "sideloads"; they often look like chatbots, but they are more than that as they have other output channels, like internal thought streams and descriptions of actions. By arranging the (...)
  10. A Hybrid Approach for Intrusion Detection in IoT Using Machine Learning and Signature-Based Methods. Janet Yan - manuscript
    Internet of Things (IoT) devices have transformed various industries, enabling advanced functionalities across domains such as healthcare, smart cities, and industrial automation. However, the increasing number of connected devices has raised significant concerns regarding their security. IoT networks are highly vulnerable to a wide range of cyber threats, making Intrusion Detection Systems (IDS) critical for identifying and mitigating malicious activities. This paper proposes a hybrid approach for intrusion detection in IoT networks by combining Machine Learning (ML) techniques with Signature-Based Methods. (...)
  11. The Curious Case of Uncurious Creation. Lindsay Brainard - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper seeks to answer the question: Can contemporary forms of artificial intelligence be creative? To answer this question, I consider three conditions that are commonly taken to be necessary for creativity. These are novelty, value, and agency. I argue that while contemporary AI models may have a claim to novelty and value, they cannot satisfy the kind of agency condition required for creativity. From this discussion, a new condition for creativity emerges. Creativity requires curiosity, a motivation to pursue epistemic (...)
    6 citations
  12. The Foundations of the Mentalist Theory and the Statistical Machine Learning Challenge: Comments on Matthias Mahlmann’s Mind and Rights. Vincent Carchidi - forthcoming - Symposium on Matthias Mahlmann's Mind and Rights.
    Matthias Mahlmann’s Mind and Rights (M&R) argues that the mentalist theory of moral cognition—premised on an approach to the mind most closely associated with generative linguistics—is the appropriate lens through which to understand moral judgment’s roots in the mind. Specifically, he argues that individuals possess an inborn moral faculty responsible for the principled generation of moral intuitions. These moral intuitions, once sufficiently abstracted, generalized, and universalized by individuals, gave rise to the idea of human rights embodied in such conventions as (...)
  13. Conversations with Chatbots. P. Connolly - forthcoming - In Patrick Connolly, Sandy Goldberg & Jennifer Saul, Conversations Online. Oxford University Press.
    The problem considered in this chapter emerges from the tension we find when looking at the design and architecture of chatbots on the one hand and their conversational aptitude on the other. In the way that LLM chatbots are designed and built, we have good reason to suppose they don't possess second-order capacities such as intention, belief or knowledge. Yet theories of conversation make great use of second-order capacities of speakers and their audiences to explain how aspects of interaction succeed. (...)
    1 citation
  14. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  15. Distributional Semantics, Holism, and the Instability of Meaning. Jumbly Grindrod, J. D. Porter & Nat Hansen - forthcoming - In Herman Cappelen & Rachel Sterken, Communicating with AI: Philosophical Perspectives. Oxford: Oxford University Press.
    Large Language Models are built on the so-called distributional semantic approach to linguistic meaning that has the distributional hypothesis at its core. The distributional hypothesis involves a holistic conception of word meaning: the meaning of a word depends upon its relations to other words in the model. A standard objection to holism is the charge of instability: any change in the meaning properties of a linguistic system (a human speaker, for example) would lead to many changes or a complete change (...)
  16. Why do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence. Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning is superior to rule-based learning in model performance in training neural networks, such as large language models. I particularly focus on why education aiming at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar to exemplar-based learning (...)
    3 citations
  17. What is it for a Machine Learning Model to Have a Capability? Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
    3 citations
  18. Large Language models are stochastic measuring devices. Fintan Mallory - forthcoming - In Herman Cappelen & Rachel Sterken, Communicating with AI: Philosophical Perspectives. Oxford: Oxford University Press.
  19. Interventionist Methods for Interpreting Deep Neural Networks. Raphaël Millière & Cameron Buckner - forthcoming - In Gualtiero Piccinini, Neurocognitive Foundations of Mind. Routledge.
    Recent breakthroughs in artificial intelligence have primarily resulted from training deep neural networks (DNNs) with vast numbers of adjustable parameters on enormous datasets. Due to their complex internal structure, DNNs are frequently characterized as inscrutable “black boxes,” making it challenging to interpret the mechanisms underlying their impressive performance. This opacity creates difficulties for explanation, safety assurance, trustworthiness, and comparisons to human cognition, leading to divergent perspectives on these systems. This chapter examines recent developments in interpretability methods for DNNs, with a (...)
    2 citations
  20. Reflection, confabulation, and reasoning. Jennifer Nagel - forthcoming - In Luis Oliveira & Joshua DiPaolo, Kornblith and His Critics. Wiley-Blackwell.
    Humans have distinctive powers of reflection: no other animal seems to have anything like our capacity for self-examination. Many philosophers hold that this capacity has a uniquely important guiding role in our cognition; others, notably Hilary Kornblith, draw attention to its weaknesses. Kornblith chiefly aims to dispel the sense that there is anything ‘magical’ about second-order mental states, situating them in the same causal net as ordinary first-order mental states. But elsewhere he goes further, suggesting that there is something deeply (...)
  21. Generalization Bias in Large Language Model Summarization of Scientific Research. Uwe Peters & Benjamin Chin-Yee - forthcoming - Royal Society Open Science.
    Artificial intelligence chatbots driven by large language models (LLMs) have the potential to increase public science literacy and support scientific research, as they can quickly summarize complex scientific information in accessible terms. However, when summarizing scientific texts, LLMs may omit details that limit the scope of research conclusions, leading to generalizations of results broader than warranted by the original study. We tested 10 prominent LLMs, including ChatGPT-4o, ChatGPT-4.5, DeepSeek, LLaMA 3.3 70B, and Claude 3.7 Sonnet, comparing 4900 LLM-generated summaries to (...)
  22. Language and thought: The view from LLMs. Daniel Rothschild - forthcoming - In David Sosa & Ernie Lepore, Oxford Studies in Philosophy of Language Volume 3.
  23. From Enclosure to Foreclosure and Beyond: Opening AI’s Totalizing Logic. Katia Schwerzmann - forthcoming - AI and Society.
    This paper reframes the issue of appropriation, extraction, and dispossession through AI—an assemblage of machine learning models trained on big data—in terms of enclosure and foreclosure. While enclosures are the product of a well-studied set of operations pertaining to both the constitution of the sovereign State and the primitive accumulation of capital, here, I want to recover an older form of the enclosure operation to then contrast it with foreclosure to better understand the effects of current algorithmic rationality. I argue (...)
  24. Do Large Language Models Hallucinate Electric Fata Morganas? Kristina Šekrst - forthcoming - Journal of Consciousness Studies.
    This paper explores the intersection of AI hallucinations and the question of AI consciousness, examining whether the erroneous outputs generated by large language models (LLMs) could be mistaken for signs of emergent intelligence. AI hallucinations, which are false or unverifiable statements produced by LLMs, raise significant philosophical and ethical concerns. While these hallucinations may appear as data anomalies, they challenge our ability to discern whether LLMs are merely sophisticated simulators of intelligence or could develop genuine cognitive processes. By analyzing the (...)
  25. Security practices in AI development. Petr Spelda & Vit Stritecky - forthcoming - AI and Society.
    What makes safety claims about general purpose AI systems such as large language models trustworthy? We show that rather than the capabilities of security tools such as alignment and red teaming procedures, it is security practices based on these tools that contributed to reconfiguring the image of AI safety and made the claims acceptable. After showing what causes the gap between the capabilities of security tools and the desired safety guarantees, we critically investigate how AI security practices attempt to fill (...)
  26. I Contain Multitudes: A Typology of Digital Doppelgängers. William D’Alessandro, Trenton W. Ford & Michael Yankoski - 2025 - American Journal of Bioethics 25 (2):132-134.
    Iglesias et al. (2025) argue that “some of the aims or ostensible goods of person-span expansion could plausibly be fulfilled in part by creating a digital doppelgänger”—that is, an AI system desig...
  27. Materiality and Machinic Embodiment: A Postphenomenological Inquiry into ChatGPT’s Active User Interface. Selin Gerlek & Sebastian Weydner-Volkmann - 2025 - Journal of Human-Technology Relations 3 (1):1-15.
    The rise of ChatGPT affords a fundamental transformation of the dynamics in human-technology interaction, as Large Language Model (LLM) applications increasingly emulate our social habits in digital communication. This poses a challenge to Don Ihde’s explicit focus on material technics and their affordances: ChatGPT did not introduce new material technics. Rather, it is a new digital app that runs on the same physical devices we have used for years. This paper undertakes a re-evaluation of some postphenomenological concepts, introducing the notion (...)
  28. AI wellbeing. Simon Goldstein & Cameron Domenico Kirk-Giannini - 2025 - Asian Journal of Philosophy 4 (1):1-22.
    Under what conditions would an artificially intelligent system have wellbeing? Despite its clear bearing on the ethics of human interactions with artificial systems, this question has received little direct attention. Because all major theories of wellbeing hold that an individual’s welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing (...)
    13 citations
  29. Can AI Rely on the Systematicity of Truth? The Challenge of Modelling Normative Domains. Matthieu Queloz - 2025 - Philosophy and Technology 38 (34):1-27.
    A key assumption fuelling optimism about the progress of large language models (LLMs) in accurately and comprehensively modelling the world is that the truth is systematic: true statements about the world form a whole that is not just consistent, in that it contains no contradictions, but coherent, in that the truths are inferentially interlinked. This holds out the prospect that LLMs might in principle rely on that systematicity to fill in gaps and correct inaccuracies in the training data: consistency and (...)
    1 citation
  30. (1 other version) Artificial Intelligence (AI) and Global Justice. Siavosh Sahebi & Paul Formosa - 2025 - Minds and Machines 35 (4):1-29.
    This paper provides a philosophically informed and robust account of the global justice implications of Artificial Intelligence (AI). We first discuss some of the key theories of global justice, before justifying our focus on the Capabilities Approach as a useful framework for understanding the context-specific impacts of AI on low- to middle-income countries. We then highlight some of the harms and burdens facing low- to middle-income countries within the context of both AI use and the AI supply chain, by analyzing the (...)
  31. The AI-mediated communication dilemma: epistemic trust, social media, and the challenge of generative artificial intelligence. Siavosh Sahebi & Paul Formosa - 2025 - Synthese 205 (3):1-24.
    The rapid adoption of commercial Generative Artificial Intelligence (Gen AI) products raises important questions around the impact this technology will have on our communicative interactions. This paper provides an analysis of some of the potential implications that Artificial Intelligence-Mediated Communication (AI-MC) may have on epistemic trust in online communications, specifically on social media. We argue that AI-MC risks diminishing epistemic trust in online communications, on both normative and descriptive grounds. Descriptively, AI-MC seems to (roughly) lower levels (...)
  32. Creative Minds Like Ours? Large Language Models and the Creative Aspect of Language Use. Vincent Carchidi - 2024 - Biolinguistics 18:1-31.
    Descartes famously constructed a language test to determine the existence of other minds. The test made critical observations about how humans use language that purportedly distinguishes them from animals and machines. These observations were carried into the generative (and later biolinguistic) enterprise under what Chomsky, in his Cartesian Linguistics, terms the “creative aspect of language use” (CALU). CALU refers to the stimulus-free, unbounded, yet appropriate use of language—a tripartite depiction whose function in biolinguistics is to highlight a species-specific form of (...)
  33. Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - 2024 - Topoi 43 (3).
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life makes no exception. In this article, we analyze one way in which AI-based technologies can affect them. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive (...)
    3 citations
  34. The FHJ debate: Will artificial intelligence replace clinical decision-making within our lifetimes? Joshua Hatherley, Anne Kinderlerer, Jens Christian Bjerring, Lauritz Munch & Lynsey Threlfall - 2024 - Future Healthcare Journal 11 (3):100178.
  35. (1 other version) Taking It Not at Face Value: A New Taxonomy for the Beliefs Acquired from Conversational AIs. Shun Iizuka - 2024 - Techné: Research in Philosophy and Technology 28 (2):219-235.
    One of the central questions in the epistemology of conversational AIs is how to classify the beliefs acquired from them. Two promising candidates are instrument-based and testimony-based beliefs. However, the category of instrument-based beliefs faces an intrinsic problem, and a challenge arises in its application. On the other hand, relying solely on the category of testimony-based beliefs does not encompass the totality of our practice of using conversational AIs. To address these limitations, I propose a novel classification of beliefs that (...)
  36. Smart Route Optimization for Emergency Vehicles: Enhancing Ambulance Efficiency through Advanced Algorithms. R. Indoria - 2024 - Technosaga 1 (1):1-6.
    Emergency response times play a critical role in saving lives, especially in urban settings where traffic congestion and unpredictable events can delay ambulance arrivals. This paper explores a novel framework for smart route optimization for emergency vehicles, leveraging artificial intelligence (AI), Internet of Things (IoT) technologies, and dynamic traffic analytics. We propose a real-time adaptive routing system that integrates machine learning (ML) for predictive modeling and IoT-enabled communication with traffic infrastructure. The system is evaluated using simulated urban environments, achieving a (...)
  37. Is Alignment Unsafe? Cameron Domenico Kirk-Giannini - 2024 - Philosophy and Technology 37 (110):1–4.
    Inchul Yum (2024) argues that the widespread adoption of language agent architectures would likely increase the risk posed by AI by simplifying the process of aligning artificial systems with human values and thereby making it easier for malicious actors to use them to cause a variety of harms. Yum takes this to be an example of a broader phenomenon: progress on the alignment problem is likely to be net safety-negative because it makes artificial systems easier for malicious actors to control. (...)
  38. Imagination, Creativity, and Artificial Intelligence. Peter Langland-Hassan - 2024 - In Amy Kind & Julia Langkau, Oxford Handbook of Philosophy of Imagination and Creativity. Oxford University Press.
    This chapter considers the potential of artificial intelligence (AI) to exhibit creativity and imagination, in light of recent advances in generative AI and the use of deep neural networks (DNNs). Reasons for doubting that AI exhibits genuine creativity or imagination are considered, including the claim that the creativity of an algorithm lies in its developer, that generative AI merely reproduces patterns in its training data, and that AI is lacking in a necessary feature for creativity or imagination, such as consciousness, (...)
    1 citation
  39. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity. Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - 2024 - Computer Law and Security Review 55.
    The complexity and emergent autonomy of Generative AI systems introduce challenges in predictability and legal compliance. This paper analyses some of the legal and regulatory implications of such challenges in the European Union context, focusing on four areas: liability, privacy, intellectual property, and cybersecurity. It examines the adequacy of the existing and proposed EU legislation, including the Artificial Intelligence Act (AIA), in addressing the challenges posed by Generative AI in general and LLMs in particular. The paper identifies potential gaps and (...)
    4 citations
  40. Chinese Chat Room: AI hallucinations, epistemology and cognition. Kristina Šekrst - 2024 - Studies in Logic, Grammar and Rhetoric 69 (1):365-381.
    The purpose of this paper is to show that understanding AI hallucination requires an interdisciplinary approach that combines insights from epistemology and cognitive science to address the nature of AI-generated knowledge, with a terminological worry that concepts we often use might carry unnecessary presuppositions. Along with these terminological issues, the paper demonstrates that AI systems, comparable to human cognition, are susceptible to errors in judgement and reasoning, and proposes that epistemological frameworks, such as reliabilism, can be similarly applied to enhance the (...)
  41. Personalized Patient Preference Predictors Are Neither Technically Feasible nor Ethically Desirable. Nathaniel Sharadin - 2024 - American Journal of Bioethics 24 (7):62-65.
    Except in extraordinary circumstances, patients' clinical care should reflect their preferences. Incapacitated patients cannot report their preferences. This is a problem. Extant solutions to the problem are inadequate: surrogates are unreliable, and advance directives are uncommon. In response, some authors have suggested developing algorithmic "patient preference predictors" (PPPs) to inform care for incapacitated patients. In a recent paper, Earp et al. propose a new twist on PPPs. Earp et al. suggest we personalize PPPs using modern machine learning (ML) techniques. In (...)
  42. Reviving the Philosophical Dialogue with Large Language Models. Robert Smithson & Adam Zweber - 2024 - Teaching Philosophy 47 (2):143-171.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete paper assignments (...)
  43. Linguistic Competence and New Empiricism in Philosophy and Science. Vanja Subotić - 2023 - Dissertation, University of Belgrade
    The topic of this dissertation is the nature of linguistic competence, the capacity to understand and produce sentences of natural language. I defend the empiricist account of linguistic competence embedded in the connectionist cognitive science. This strand of cognitive science has been opposed to the traditional symbolic cognitive science, coupled with transformational-generative grammar, which was committed to nativism due to the view that human cognition, including language capacity, should be construed in terms of symbolic representations and hardwired rules. Similarly, linguistic (...)
  44. Interdisciplinary Communication by Plausible Analogies: the Case of Buddhism and Artificial Intelligence. Michael Cooper - 2022 - Dissertation, University of South Florida
    Communicating interdisciplinary information is difficult, even when two fields are ostensibly discussing the same topic. In this work, I’ll discuss the capacity for analogical reasoning to provide a framework for developing novel judgments utilizing similarities in separate domains. I argue that analogies are best modeled after Paul Bartha’s By Parallel Reasoning, and that they can be used to create a Toulmin-style warrant that expresses a generalization. I argue that these comparisons provide insights into interdisciplinary research. In order to demonstrate this (...)
  45. Large language models and the relative roles of formal and natural language in formalization. Bradley Allen - manuscript
    Formalizations serve as cognitive tools. By enabling algorithmic reasoning over sets of statements in a formal language, they provide a cognitive boost for human reasoners. We argue that the emergence of large language models (LLMs) as a technology for the analysis and generation of natural language provides a new perspective on the relative roles of formal and natural languages in formalization.
  46. Propositional interpretability in artificial intelligence. David J. Chalmers - manuscript
    Mechanistic interpretability is the program of explaining what AI systems are doing in terms of their internal mechanisms. I analyze some aspects of the program, along with setting out some concrete challenges and assessing progress to date. I argue for the importance of propositional interpretability, which involves interpreting a system’s mechanisms and behavior in terms of propositional attitudes: attitudes (such as belief, desire, or subjective probability) to propositions (e.g. the proposition that it is hot outside). Propositional attitudes are (...)
    1 citation
  47. Ethics at the Frontier of Human-AI Relationships. Henry Shevlin - manuscript
    The idea that humans might one day form persistent and dynamic relationships with artificial agents in professional, social, and even romantic contexts is a longstanding one. However, developments in machine learning and especially natural language processing over the last five years have led to this possibility becoming actualised at a previously unseen scale. Apps like Replika, Xiaoice, and CharacterAI boast many millions of active long-term users, and give rise to emotionally complex experiences. In this paper, I provide an overview of these developments, beginning (...)