Humanoid robots should “safeguard human dignity” and “not threaten human security,” state the guidelines announced at the World Artificial Intelligence Conference (WAIC) Shanghai 2024. Gartner, in its 2019 white paper “Making Robotics a Reality,” enumerates five steps fundamental to effective governance of robotics efforts: 1. Centralize robotics governance; 2. Define key robotics-related roles; 3. Stay on top of new opportunities and evolving technologies; 4. Manage the people side of robotics; 5. Track performance and impact.
As technology progresses, the boundaries between humans and machines continue to blur.
Dr. Sindhu Bhaskar, Cambridge, MA, USA
Japan is set to begin maintaining its railway system with advanced humanoid robots. The railway worker robot is a joint effort between JR West, robotics company Jinki Ittai, and tech company Nippon Signal; JR West says the technology was developed to improve employee safety and reduce the risk of work-related accidents. Starting this month, the giant mecha-style robots will perform various maintenance tasks on the company’s railway infrastructure, such as painting overhead support structures and removing tree branches that obstruct the tracks. The toiling mechanoid is operated by a human who sits in its accompanying truck and controls its movements using a joystick and VR goggles linked to a camera on the robot’s head.
These technologies promise immense benefits, including enhanced mobility for the physically impaired and advanced cognitive assistance. However, they also pose significant risks, particularly when AI systems exhibit disabilities or biases. It is important to examine the intersection of these cutting-edge technologies and their integration vis-à-vis ethical, societal, and practical implications. Equally important is to delve into the potential dystopian scenarios that may arise from AI’s shortcomings and to examine how these risks can be mitigated, to ensure a future where technology serves humanity rather than undermines it.
This article explores the potential dystopian scenarios that may arise from these shortcomings and examines solutions to mitigate the risks. We will delve into the concept of cyborgs—part human, part machine—and the impact of flawed AI on their lives, including the disturbing possibility of cyborg suicide. Whatever we are, we are now creating or supplementing ourselves, and this self-creation is the root cause of the imbalances and biases we build into our machines.
In his book Human Compatible: Artificial Intelligence and the Problem of Control (Viking, 2019), Stuart Russell examined the challenges of ensuring that AI systems act in ways aligned with human values, emphasizing the need for control mechanisms to prevent unintended consequences.
Overview:
As the integration of artificial intelligence (AI) and human biology advances, the concept of cyborgs—beings with both organic and biomechatronic body parts—becomes increasingly plausible. Literature has long explored the dystopian potential of AI and the ethical implications of creating beings that blend human and machine. How disabilities within AI systems might create dystopian outcomes for cyborgs has become a critical area of concern. Generative AI, a subset of artificial intelligence that focuses on creating data rather than just analyzing it, is pivotal in developing adaptive and personalized technologies. Yet AI’s inherent vulnerabilities and limitations, such as biases in training data, susceptibility to malicious attacks, and unintended consequences of autonomous decision-making, can lead to significant issues.
South Korea’s Gumi City Council is currently investigating a “robot suicide” after a robot administrative officer apparently plunged to its death from a staircase. Work pressure, it seems, is now getting to robots, too. The council announced on June 26, 2024, that its premier administrative officer robot was found “dead” after seemingly leaping down a six-and-a-half-foot flight of stairs. The council is speculating whether the now-defunct robot’s demise was in fact an act of suicide, as an official reportedly saw the robot “circling in one spot as if something was there” before the supposed tragedy.
William Gibson, in his 1984 novel Neuromancer, narrates the story of Case, a washed-up computer hacker hired by a mysterious employer to pull off the ultimate hack. The novel, a work of the cyberpunk genre, depicts a world where AI and cybernetics are deeply intertwined with society. The dystopian elements of Gibson’s world include extreme social stratification, pervasive surveillance, and corporate dominance; AI systems and cybernetic enhancements are both tools of empowerment and instruments of oppression. Characters in the novel are often heavily augmented with cybernetic enhancements, raising questions about identity, autonomy, and mental health. The blurred line between human and machine complicates the issue of suicide among cyborgs. I shall therefore also examine whether cyborgs possess the right to disagree, protest, and commit suicide, and explore the value systems that might govern their existence.
Robotics and Generative AI: The Current Scenario
Robotics Advances: Modern robotics has significantly progressed, particularly in assistive technologies. Robotic prosthetics, for example, have evolved from simple mechanical devices to sophisticated systems that interface with the human nervous system. Exoskeletons enable paraplegics to walk, while robotic arms provide dexterity to those who have lost limbs.
Generative AI: Generative AI refers to algorithms that create data, whether text, images or even music. These systems produce outputs that mimic human creativity. Applications range from automated content creation to designing personalized learning experiences.
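To make this concrete, here is a minimal text-generation sketch using the open-source Hugging Face transformers library; the model choice (GPT-2) and the prompt are illustrative assumptions, not details from any system discussed in this article:

```python
# A minimal text-generation sketch with Hugging Face transformers.
# GPT-2 is used purely as a small, freely available illustrative model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt, creating new data rather than merely
# analyzing existing data -- the defining trait of generative AI.
result = generator("The boundary between humans and machines",
                   max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```

The same pattern, swapping in larger models or other modalities (images, audio), underlies most of the generative applications mentioned above.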
Limitations and Vulnerabilities of AI
Despite their potential, AI systems are not infallible. They can exhibit disabilities, defined here as significant limitations in functionality, and biases, defined as systematic deviations from fairness and objectivity. These issues arise from several factors. Cathy O’Neil, in her book Weapons of Math Destruction (Crown Publishing Group, 2016), discussed the dangers of biased algorithms in sectors including finance, education, and criminal justice, highlighting the potential for harm when AI systems are not carefully managed.
Data Bias: AI systems learn from data. If the training data is biased, the AI’s decisions will reflect those biases. For example, facial recognition systems have been shown to have higher error rates for people with darker skin tones, leading to misidentifications and discriminatory outcomes.
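A simple audit pattern can surface this kind of bias: evaluate the model’s error rate separately for each demographic group. The sketch below uses entirely fabricated data and a simulated model, purely to illustrate the pattern:

```python
# Toy illustration: auditing group-wise error rates in model predictions.
# The groups, labels, and error rates here are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical evaluation set: group A is over-represented in training,
# so the (simulated) model errs far more often on group B.
groups = np.array(["A"] * 800 + ["B"] * 200)
y_true = rng.integers(0, 2, size=1000)
flip_prob = np.where(groups == "A", 0.05, 0.20)  # 5% vs 20% error rate
y_pred = np.where(rng.random(1000) < flip_prob, 1 - y_true, y_true)

# Aggregate accuracy hides the disparity; per-group accuracy exposes it.
for g in ("A", "B"):
    mask = groups == g
    acc = (y_pred[mask] == y_true[mask]).mean()
    print(f"group {g}: accuracy = {acc:.2%}")
```

Overall accuracy would look acceptable here, which is exactly why disaggregated evaluation matters.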
Lack of Transparency: AI algorithms are often black boxes, especially deep learning models. Their opaque decision-making processes make it difficult to identify and correct errors.
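One common partial remedy is to probe the black box from the outside. The sketch below uses scikit-learn’s permutation importance, which shuffles each input feature in turn and measures the drop in held-out score; the dataset and model are illustrative stand-ins, not a specific deployed system:

```python
# Probing an opaque model post hoc with permutation importance.
# Dataset and model are illustrative; the technique applies to any
# black-box estimator with a score function.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy;
# large drops indicate features the black box actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in sorted(zip(X.columns, result.importances_mean),
                        key=lambda t: -t[1])[:5]:
    print(f"{name}: {imp:.4f}")
```

Such post-hoc probes reveal what a model depends on, but they do not explain why, which is why opacity remains a governance problem rather than a solved engineering detail.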
Security Vulnerabilities: AI systems can be manipulated through adversarial attacks, in which malicious actors introduce subtle changes to input data that alter the AI’s output, often harmfully.
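The canonical illustration is the Fast Gradient Sign Method (FGSM), which nudges an input in the direction that most increases the model’s loss. Below is a minimal PyTorch sketch; the tiny untrained network stands in for any differentiable classifier:

```python
# Minimal FGSM (Fast Gradient Sign Method) sketch in PyTorch.
# The tiny untrained model is a stand-in for any differentiable classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)  # a benign input
y = torch.tensor([0])                       # its true label

# Compute the gradient of the loss with respect to the input...
loss = loss_fn(model(x), y)
loss.backward()

# ...then nudge every input dimension in the direction that increases
# the loss, within a small perturbation budget epsilon.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

# On this random toy model the prediction may or may not flip; against
# trained models, FGSM-style perturbations reliably degrade accuracy.
with torch.no_grad():
    print("clean prediction:", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

The unsettling property is that the perturbation can be imperceptible to humans while completely changing the machine’s decision.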
Unintended Consequences: AI’s autonomous nature can lead to unforeseen outcomes. For instance, an AI tasked with optimizing a delivery route might disregard traffic laws or ethical considerations if not properly constrained.
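A toy objective-misspecification example makes the point: if the optimizer’s objective omits a constraint the designer cares about, the optimizer will exploit the omission. The routes and costs below are fabricated solely for illustration:

```python
# Toy illustration of a misspecified objective: an optimizer that
# minimizes travel time alone happily picks a route that violates a
# rule the designer forgot to encode. All data is fabricated.
routes = [
    {"name": "highway",                 "minutes": 30, "legal": True},
    {"name": "school-zone cut-through", "minutes": 22, "legal": False},
    {"name": "surface streets",         "minutes": 45, "legal": True},
]

# Naive objective: minimize time only.
naive = min(routes, key=lambda r: r["minutes"])

# Constrained objective: minimize time over rule-compliant routes only.
safe = min((r for r in routes if r["legal"]), key=lambda r: r["minutes"])

print("naive choice:", naive["name"])        # picks the illegal shortcut
print("constrained choice:", safe["name"])   # respects the omitted rule
```

The failure is not malice but specification: the system did exactly what it was told, which was not what was meant.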