Tyler L Jaynes
  • +1 (801) 362-6460

  • I am a non-traditional bioethicist who conducts research into new technologies that will impact citizens and patients.
  • Linda MacDonald Glenn
One of the most significant questions our generation must grapple with is that of the legal personality we grant (or deny) artificial intelligence (AI). From pre-Millennium inquiries pondering the notion of legally allowing AI to serve as a proxy for financial transactions to today’s discourse on robotic citizenship and the rights held by AI, the bulk of AI’s legal personality is centred upon its use as an instrument by human actors. While there are still many questions surrounding the legitimacy of Sophia the Robot’s citizenship in Saudi Arabia or the viability of granting personhood to AI vis-à-vis corporate identity, it was generally understood that, to date, no government entity had codified whether AI can be granted personhood. However, two state-level bills passed in the USA have already upturned that assumption without notice—first in 2022 and now again in 2024.
The application of modern technologies with artificial intelligence (AI) in all spheres of human life is growing exponentially alongside concern for its controllability. The lack of public, state, and international control over AI technologies creates large-scale risks of using such software and hardware that (un)intentionally harm humanity. The events of recent months and years, specifically regarding the Russian Federation's war against its democratic neighbour Ukraine and other international conflicts of note, support the thesis that the uncontrolled use of AI, especially in the military sphere, may lead to deliberate disregard for the moral standards of controlled AI or the spontaneous emergence of aggressive autonomous AI. The development of legal regulation for the use of technologies with AI lags behind the rapid development of these artefacts, which simultaneously cover all areas of public relations. Therefore, control over the creation and use of AI should be carried out not only by purely technical regulation (e.g., technical standards and conformance assessments, corporate and developer regulations, requirements enforced through industry-wide ethical codes), but also by comprehensive legislation and intergovernmental oversight bodies that codify and enforce specific changes in the rights and duties of legal persons. This article shall present the “Morality Problem” and “Intentionality Problem” of AI, and reflect upon various lacunae that arise when implementing AI for military purposes.
Much research has been conducted on how patients may be served through new advances in perioperative anaesthetic care. However, adaptations of standardised care methodologies can only provide so many novel solutions for patients and caregivers alike. Similarly, unique methods such as nanoscopic liposomal package delivery for analgesics and affective numbing agents pose a similar issue: specifically, that we are still left with the dilemma of patients for whom analgesics and numbing agents are ineffective or harmful. An examination of the potential gains that may result from the targeted development of nanorobotics for anaesthesia in perioperative care will be presented in this essay to help resolve this pending conflict for the research community. This examination should therefore serve as a "call to action" for such research and a "primer" for those whom the method's implementation would most directly impact.
Despite the reality that self-learning artificial intelligence systems (SLAIS) are gaining in sophistication, humanity's focus regarding SLAIS-human interactions is unnervingly centred upon transnational commercial sectors and, most generally, around issues of intellectual property law. But as SLAIS gain greater environmental interaction capabilities in digital spaces, or the ability to self-author code to drive their development as algorithmic models, a concern arises as to whether a system that displays a "deceptive" level of human-like engagement with users in our physical world ought to be uniquely protected. Although many voices in the legal and technology realms have continued to argue against unique protections for digital entities, the fact at hand is that SLAIS design is becoming increasingly anthropomorphic so as to make these systems more capable of interacting with a wide range of (potentially) vulnerable populations, generally as a means to enhance these populations' overall well-being. To frame this concern in a different way, the specific question at hand is whether a human's "ownership" of such an advanced SLAIS is legal, considering that it (or they) may possess intelligence on par with a human or a convincing-enough display of such behaviour. Given that "ownership" over entities with (seemingly) intelligent behaviours consistent with human populations has been effectively banned by the international community, an examination into this subject and its implications is wholly necessary in light of humanity's quest to exist solely in digital environments through whatever means possible.
What separates the unique nature of human consciousness from that of an entity that can only perceive the world via strict logic-based structures? Rather than assume that there is some potential way in which logic-only existence is infeasible, our species would be better served by assuming that such sentient existence is feasible. Under this assumption, artificial intelligence systems (AIS), which are creations that run solely upon logic to process data, even with self-learning architectures, should therefore not face the opposition they have to gaining some legal duties and protections insofar as they are sophisticated enough to display consciousness akin to humans. Should our species enable AIS to gain a digital body to inhabit (if we have not already done so), it is more pressing than ever that solid arguments be made as to how humanity can accept AIS as being cognizant to the same degree as we ourselves claim to be. By accepting the notion that AIS can and will be able to fool our senses into believing in their claim to possessing a will or ego, we may yet have a chance to address them as equals before some unforgivable travesty occurs betwixt ourselves and these super-computing beings.
Although legal personality has slowly begun to be granted to non-human entities that have a direct impact on the natural functioning of human societies (given their cultural significance), the same cannot be said for computer-based intelligence systems. While this notion has not had a significantly negative impact on humanity to this point in time, that only remains the case because advanced computerised intelligence systems (ACIS) have not been acknowledged as reaching human-like levels. With the integration of ACIS in medical assistive technologies such as companion robots and bionics, our legal treatment of ACIS must also adapt, lest society face legal challenges that may potentially lead to legally sanctioned discriminatory treatment. For this reason, this article exposes the complexity of normalizing definitions of "natural" human subjects, clarifies how current bioethical discourse has been unable to effectively guide ACIS integration into implanted and external artefacts, and argues for the establishment of legal delineations between various ACIS-human mergers in reference to legal protections and obligations internationally.
The rapid advancement of artificial (computer) intelligence systems (CIS) has generated a means whereby assistive bionic prosthetics can become both more effective and practical for the patients who rely upon the use of such machines in their daily lives. However, de lege lata remains relatively unspoken as to the legal status of patients whose devices contain self-learning CIS that can interface directly with the peripheral nervous system. As a means to reconcile this lack of legal foresight, this article approaches the topic of CIS-nervous system interaction and the impacts it may have on the legal definition of “persons” under the law. While other literature of this nature centres upon notions of transhumanism or self-enhancement, the approach herein is designed to focus solely upon the legal nature of independent CIS actions when operating alongside human subjects. To this end, it is hoped that further discussion on the topic can be garnered outside of transhumanist discourse to expedite legal consideration for how these emerging relationships ought to be received by law-generating bodies internationally.
This addendum expands upon the arguments made in the author's 2020 essay, "Legal Personhood for Artificial Intelligence: Citizenship as the Exception to the Rule", in an effort to display the significance human augmentation technologies will have on (feasibly) inadvertently providing legal protections to artificial intelligence systems (AIS), a topic only briefly addressed in that work. It will also further discuss the impacts popular media have on imprinting notions of computerised behaviour, and the subsequent consequences of those notions for the attribution of legal protections to AIS and for speculative technological advancement that would aid the sophistication of AIS.
The concept of artificial intelligence is not new, nor is the notion that it should be granted legal protections given its influence on human activity. What is new, on a relative scale, is the notion that artificial intelligence can possess citizenship—a concept reserved only for humans, as it presupposes the idea of possessing civil duties and protections. Whereas there are several decades’ worth of writing on the legal status of computational artificial artefacts in the USA and elsewhere, it is surprising that lawmakers internationally have come to a standstill in protecting our silicon brainchildren. In this essay, it will be assumed that future artificial entities, such as Sophia the Robot, will be granted citizenship on an international scale. With this assumption, an analysis of rights will be made with respect to the needs of a non-biological intelligence possessing legal and civic duties akin to those possessed by humanity today. This essay does not present a full set of rights for artificial intelligence—instead, it aims to provide international jurisprudence evidence aliunde ab extra de lege lata for any future measures made to protect non-biological intelligence.
AI is increasingly reliant upon Internet-dependent systems to handle the massive amounts of data it requires to function effectively, regardless of the availability of stable Internet connectivity in every affected community. As such, sustainable development (SD) for rural and mountain communities will require more than just equitable access to broadband Internet connection. It must also include a thorough means of ensuring that affected communities gain the education and tools necessary to engage inclusively with new technological advances, whether they be focused on machine learning algorithms or community infrastructure, as these communities will be increasingly dependent upon the automational capabilities of AI. In this essay, an exploration will be conducted into the means whereby student-engaged learning (SEL) can effectively be utilized to provide targeted, inclusive education to rural and mountain communities regarding the implications of AI for SD.
How might the continuous advancement of computational intelligence systems impact the way labour is conducted, and what glaring deficiencies might we reconcile to regain an even footing with the demands of a new form of industrial workplace environment? This chapter provides a brief examination of both topics, and broaches the question of how integrating technological artefacts into our forms might further confuse our current understandings of what is required of us in the labour market.
Smart cities can be described as a smart system comprising numerous integrated smart systems that fuse and share data, including personal and potentially sensitive private information. Such circumstances could intrude upon the rights to privacy and human dignity, with disclosures potentially harmful to individuals, families, friends, associates, and communities. This workshop will examine ways to promote the best outcomes for the residents and visitors of smart cities through the lens of human rights. Affective rights will also be discussed as requisite to formulating the optimal smart city. Moreover, this workshop will foster discussion around the still relatively nascent technology of Affective Computing, which is the application of AI (Artificial Intelligence), ML (Machine Learning), biometric measurement, sentiment analysis, and psychological factor assessment in determining and interacting with the affective states of the individual. This workshop is open to all stakeholders in smart city development and management, including computer scientists, engineers, smart city integrators, application developers, third-party vendors, ethicists, city managers, and administrators. It should be especially informative for oversight and governance organizations providing auditing and performance evaluations.
Functional neuroimaging techniques are highly limited in today's world in terms of capability and breadth of function. As such, they must necessarily exclude certain groups of patients due to potential medical risks that could arise from their use. Taking inspiration from a fictitious device found in various media formats, we can determine what qualities it may need to do more than allow the user to entertain themselves. While this device cannot effectively be made with current (or foreseeable) technologies, it is possible to make a facsimile that relies upon nanotechnology as opposed to electromagnetic radiation as envisioned by the device's creator. Other advances will need to be made to allow this device to penetrate the blood-brain barrier in a medically safe manner; however, this device will have the potential to reshape functional neuroimaging and conventional medicine as practiced today. As such, a viable concept needs to be generated to display the potential benefits, harms, and moral concerns that surround the development of such a device.
The seemingly abrupt advances made by DeepMind's AlphaFold project in 2020 appear not to be generating the wave of concern they ought to in the scientific community or extended ethical communities. Rather, the accuracy of protein-structure prediction attained by the system is receiving more praise than scepticism from researchers and journalists alike. The dialogue presented in this essay aims to re-centre bioethical focus on the need to develop productive, well-rationalised, speculative thought as a reaction to this recent development in medical technology, as its potential for abuse may not receive proper attention from the bioethical community for several more months (if not years). With the field's current propensity to decry speculative thought as being misguided or too far-reaching, such a dialogue is vital to remind scholars of the benefits found in traditional, philosophically hypothetical dialogues insofar as they are connected to feasible technological advances and real-world proofs-of-concept or ideas.