Effects of AI

Lack of Human Development

Overreliance on AI can cause skill decay and reduced critical thinking; reduced human interaction can increase social isolation; automation brings the potential for job displacement and greater inequality; biased algorithms and AI-generated misinformation can hinder understanding and societal fairness; and data misuse and generative AI technologies pose risks to privacy and safety. These challenges highlight the need for responsible development, strong ethical oversight, and education that promotes critical engagement with AI systems.

Cognitive & Social Development

  • Reduced Critical Thinking: Overdependence on AI for information and decision-making can diminish individuals’ capacity for critical thinking, analytical skills, and independent problem-solving.
  • Skill Decay: As AI takes over tasks that previously required human cognitive effort, cognitive offloading can lead to the gradual degradation of human skills.
  • Social Isolation: Increased interaction with virtual assistants and AI-driven platforms can reduce real-world human connection, impacting empathy and social skill development.

Societal & Economic Impacts

  • Job Displacement: AI’s ability to automate complex tasks can lead to job losses in various sectors, potentially increasing economic inequality.
  • Bias and Discrimination: AI models can reflect and amplify existing societal biases present in their training data, leading to unfair or discriminatory outcomes in areas like law enforcement, hiring, and education.
  • Misinformation & Manipulation: The ability of AI to generate convincing but false content, such as deepfakes, makes it harder to discern truth from falsehood, potentially fostering manipulation.

Privacy & Security

  • Privacy Violations: The widespread collection and analysis of personal data by AI systems raise significant concerns about misuse and potential violations of individual privacy.
  • Cybersecurity Threats: As AI becomes more integrated, it can also introduce new cybersecurity vulnerabilities and risks.

Ethical & Existential Risks

  • Lack of Accountability: In some AI applications, it is difficult to assign responsibility for errors or harmful outcomes, posing challenges for accountability and transparency.
  • Existential Risks: While debated, some experts identify long-term, potentially existential risks from increasingly powerful and autonomous AI systems.

Why Is Excessive Use of AI Bad?

Lessening of Individual Autonomy and Human Interaction: Some experts fear that with increasing automation, humans will become bored and lazy. We may not be far from a world that is dominated and largely run by artificial intelligence. What would this mean for the human race? Will we become passive, bored, lazy, and out of touch? Will this disempowerment invite a takeover by artificial intelligence systems? Or will a few elites, wielding these new and powerful systems, be in total charge, with the result that individual freedom is severely limited?

An example of how AI can empower those whose agenda is to control people and limit freedom is found in Carol Roth’s New York Times bestseller, You Will Own Nothing: Your War with a New Financial World Order and How to Fight Back. The author and entrepreneur paints a picture of what could happen if a new financial world order run by global elites were to gain the kind of control made possible, for example, by eliminating all hard currency and enforcing a system of digital currency controlled by governments, international organisations, businesses, and technology elites.

She argues that a system of ‘social credits’ would accompany such an arrangement, enabling the elites to shut down dissent and control the general population. The result, she contends, would be debt, deprivation, and desperation: people would own fewer assets and have less control over their lives, reducing their ability to protect their wealth now or for future generations.

These scenarios raise important philosophical, psychological, moral, governance, legal, and ethical questions that must be addressed if society is to benefit fully from AI while managing the significant risks involved.

Mass Unemployment

As mentioned above, an AI-dominated world of work may prove dystopian for those who lose their jobs. Considerable industry disruption and job destruction will have to be carefully managed, and governments will need to plan the transition for the industries that are most severely disrupted.

Major service functions such as customer service centres are likely to be heavily hit as AI applications coupled with robotics rapidly replace many of these roles. For example, most readers have experienced a chat with a company’s automated answering service. AI chatbots will become increasingly popular and will replace many of the humans now filling those roles. A 2022 Gartner report predicted that chatbots will be the main customer service channel for roughly 25% of companies by 2027.

Another example is the fast-food industry. Today, customers order via on-screen kiosks; in China, AI-driven robotic chefs can cook the food, and other robots can wait on tables and take orders.

At the other end of the spectrum, there are likely to be significant shortages of people with the skills to fill the new positions required to support growth in AI-related fields. There will be a fierce battle for talent, and as in all battles, there will be winners and losers, the dominant and the dominated.

Education

AI presents many challenges for education. Students, faculty, and administrators will see their systems severely challenged by new AI applications. For academics, it will significantly change aspects of research and scholarship. Many of these developments will bring improvements. The downside, however, is that riding this major wave of innovation will require vision, talent, resources, training, and resilience. Many will fail. The system will be disrupted. How this all plays out remains unknown and unpredictable.

A case in point is the decades-long development of computer-assisted learning packages. They have been slow to catch on for many reasons: teacher unions fear job losses; the current model is built on assumptions about age level rather than development and skill level; educators often lack adequate technology skills; the financial models have not been viable; and people are generally highly resistant to change.

The other challenge for education is to define how it can best serve society by providing the learning, research, and training needed for society to gain the benefits of AI while managing the potential disadvantages and harms that may be caused.

Law

At the macro level, laws and regulations will be required to provide the governance framework to guide society in the Age of the Machine. This will be especially challenging given the geographic limits on the application of law and the fact that different countries will have different approaches. Underlying the formal legal regime will also be the reality that countries will have different ethical standards. Some countries will see it in their interest to press ahead, despite the risks, in order to gain a competitive advantage over others.

At the ground level, it will also be important for designers and users of AI systems to be aware of new problems that might emerge. AI systems can and do discriminate. AI tools have the potential to embed unlawful biases and discrimination and do so on a system-wide scale and in a non-transparent way.

This can affect decisions about who gets a loan, who gets hired, who receives favourable administrative decisions, who gets monitored by the police, and so on. AI systems also ingest information, images, and other intellectual property as inputs and training data, raising serious questions about potential intellectual property violations. AI also creates IP, raising further questions about whether the existing legal regime will accommodate IP creation by non-humans.
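
To make the transparency concern more concrete, the sketch below shows one simple audit a designer or regulator might run over a model’s loan decisions: the gap in approval rates between demographic groups (a demographic-parity check). It is a minimal, hypothetical example; the column names and toy data are assumptions for illustration, not a prescribed legal or technical test.

    # Hypothetical sketch: measure the gap in approval rates across groups.
    # A large gap is a prompt for further investigation, not proof of unlawful bias.
    import pandas as pd

    def approval_rate_gap(decisions: pd.DataFrame,
                          outcome_col: str = "approved",
                          group_col: str = "group") -> float:
        """Difference between the highest and lowest approval rates across groups."""
        rates = decisions.groupby(group_col)[outcome_col].mean()
        return float(rates.max() - rates.min())

    # Toy data purely for demonstration (column names are assumed).
    toy = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0],
    })
    print(f"Approval-rate gap across groups: {approval_rate_gap(toy):.2f}")  # ~0.33

A check like this captures only one narrow notion of fairness; in practice, auditors would combine several metrics and examine the data and decision process behind them.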

Ethical and Personal Issues

The growing development and application of AI across all sectors of human activity raises many ethical issues. It threatens human autonomy, challenges existing rules, laws, and standards in society, raises the prospect of a loss of control, and undermines expectations of privacy. A major question is the extent to which we can achieve agreement among nations, or even between the public and private sectors, on the central ethical issues raised by AI.

For example, what should be the degree of transparency underlying the use of AI systems and applications? How can principles of justice and fairness be promoted and protected by AI development? How can we regulate and promote fairness, non-maleficence, responsibility and privacy in the development and use of AI?

There is also a psychological dimension to AI adoption that must be considered. While AI and its applications have grown rapidly, one should not underestimate the limitations and challenges stemming from the natural human tendency to struggle with, resist, and even fear change.