


How close are we to AI that surpasses human intelligence?

By Jeremy Baum (undergraduate student, UCLA; researcher, UCLA Institute for Technology, Law, and Policy) and John Villasenor (nonresident senior fellow, Governance Studies, Center for Technology Innovation)

July 18, 2023

  • Artificial general intelligence (AGI) is difficult to precisely define but refers to a superintelligent AI recognizable from science fiction.
  • AGI may still be far off, but the growing capabilities of generative AI suggest that we could be making progress toward its development.
  • The development of AGI will have a transformative effect on society and create significant opportunities and threats, raising difficult questions about regulation.

For decades, superintelligent artificial intelligence (AI) has been a staple of science fiction, embodied in books and movies about androids, robot uprisings, and a world taken over by computers. As far-fetched as those plots often were, they played off a very real mix of fascination, curiosity, and trepidation regarding the potential to build intelligent machines.

Today, public interest in AI is at an all-time high. With the headlines in recent months about generative AI systems like ChatGPT, there is also a different phrase that has started to enter the broader dialog: artificial general intelligence, or AGI. But what exactly is AGI, and how close are today’s technologies to achieving it?

Despite the similarity in the phrases generative AI and artificial general intelligence, they have very different meanings. As a post from IBM explains, “Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.” However, the ability of an AI system to generate content does not necessarily mean that its intelligence is general.

To better understand artificial general intelligence, it helps to first understand how it differs from today’s AI, which is highly specialized. For example, an AI chess program is extraordinarily good at playing chess, but if you ask it to write an essay on the causes of World War I, it won’t be of any use. Its intelligence is limited to one specific domain. Other examples of specialized AI include the systems that provide content recommendations on the social media platform TikTok, navigation decisions in driverless cars, and purchase recommendations from Amazon.

AGI: A range of definitions

By contrast, AGI refers to a much broader form of machine intelligence. There is no single, formally recognized definition of AGI—rather, there is a range of definitions that include the following:

  • “…highly autonomous systems that outperform humans at most economically valuable work” (OpenAI)
  • “[a] hypothetical computer program that can perform intellectual tasks as well as, or better than, a human.”
  • “…any intelligence (there might be many) that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence.”
  • “…systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, and with these capabilities at or above human-level.”

While the OpenAI definition ties AGI to the ability to “outperform humans at most economically valuable work,” today’s systems are nowhere near that capable. Consider Indeed’s list of the most common jobs in the U.S. As of March 2023, the first 10 jobs on that list were: cashier, food preparation worker, stocking associate, laborer, janitor, construction worker, bookkeeper, server, medical assistant, and bartender. These jobs require not only intellectual capacity but also, in most cases, a far higher degree of manual dexterity than today’s most advanced AI robotics systems can achieve.

None of the other AGI definitions listed above specifically mentions economic value. Another contrast is that while the OpenAI definition requires outperforming humans, the other definitions only require AGI to perform at levels comparable to humans. Common to all of the definitions, either explicitly or implicitly, is the concept that an AGI system can perform tasks across many domains, adapt to changes in its environment, and solve new problems—not only the ones in its training data.

GPT-4: Sparks of AGI?

A group of industry AI researchers recently made a splash when they published a preprint of an academic paper titled, “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” GPT-4 is a large language model that has been publicly accessible to ChatGPT Plus (paid upgrade) users since March 2023. The researchers noted that “GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” exhibiting “strikingly close to human-level performance.” They concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version” of AGI.

Of course, there are also skeptics: As quoted in a May New York Times article, Carnegie Mellon professor Maarten Sap said, “The ‘Sparks of A.G.I.’ is an example of some of these big companies co-opting the research paper format into P.R. pitches.” In an interview with IEEE Spectrum, researcher and robotics entrepreneur Rodney Brooks underscored that in evaluating the capabilities of systems like ChatGPT, we often “mistake performance for competence.”

GPT-4 and beyond

While the version of GPT-4 currently available to the public is impressive, it is not the end of the road. There are groups working on additions to GPT-4 that are more goal-driven, meaning that you can give the system an instruction such as “Design and build a website on (topic).” The system will then figure out exactly what subtasks need to be completed, and in what order, to achieve that goal. Today, these systems are not particularly reliable, as they frequently fail to reach the stated goal. But they will certainly get better in the future.

In a 2020 paper, Yoshihiro Maruyama of the Australian National University identified eight attributes a system must have for it to be considered AGI: logic, autonomy, resilience, integrity, morality, emotion, embodiment, and embeddedness. The last two attributes—embodiment and embeddedness—refer to having a physical form that facilitates learning and understanding of the world and human behavior, and a deep integration with social, cultural, and environmental systems that allows adaptation to human needs and values.

It can be argued that ChatGPT displays some of these attributes, like logic. For example, GPT-4 with no additional features reportedly scored 163 on the LSAT and 1410 on the SAT. For other attributes, the determination is tied as much to philosophy as to technology. For instance, is a system that merely exhibits what appears to be morality actually moral? If asked to provide a one-word answer to the question “is murder wrong?”, GPT-4 will respond “Yes.” This is a morally correct response, but it doesn’t mean that GPT-4 itself has morality; rather, it has inferred the morally correct answer from its training data.

A key subtlety that often goes missing in the “How close is AGI?” discussion is that intelligence exists on a continuum, and assessing whether a system displays AGI will therefore require considering that continuum. On this point, research on animal intelligence offers a useful analog. We understand that animal intelligence is far too complex to be meaningfully conveyed by classifying each species as either “intelligent” or “not intelligent”: animal intelligence exists on a spectrum that spans many dimensions, and evaluating it requires considering context. Similarly, as AI systems become more capable, assessing the degree to which they display generalized intelligence will involve more than simply choosing between “yes” and “no.”

AGI: Threat or opportunity?

Whenever and in whatever form it arrives, AGI will be transformative, impacting everything from the labor market to how we understand concepts like intelligence and creativity. As with so many other technologies, it also has the potential to be harnessed in harmful ways. For instance, the need to address potential biases in today’s AI systems is well recognized, and that concern will apply to future AGI systems as well. At the same time, it is important to recognize that AGI will also offer enormous promise to amplify human innovation and creativity. In medicine, for example, new drugs that would have eluded human scientists working alone could be more easily identified by scientists working with AGI systems.

AGI can also help broaden access to services that previously were accessible only to the most economically privileged. For instance, in the context of education, AGI systems could put personalized, one-on-one tutoring within easy financial reach of everyone, resulting in improved global literacy rates. AGI could also help broaden the reach of medical care by bringing sophisticated, individualized diagnostic care to much broader populations.

Regulating emergent AGI systems

At the May 2023 G7 summit in Japan, the leaders of the world’s seven largest democratic economies issued a communiqué that included an extended discussion of AI, writing that “international governance of new digital technologies has not necessarily kept pace.” Proposals regarding increased AI regulation are now a regular feature of policy discussions in the United States, the European Union, Japan, and elsewhere.

In the future, as AGI moves from science fiction to reality, it will supercharge the already-robust debate regarding AI regulation. But preemptive regulation is always a challenge, and this will be particularly so in relation to AGI—a technology that escapes easy definition, and that will evolve in ways that are impossible to predict.

An outright ban on AGI would be bad policy. For example, AGI systems that are capable of emotional recognition could be very beneficial in a context such as education, where they could discern whether a student appears to understand a new concept, and adjust an interaction accordingly. Yet the EU Parliament’s AI Act, which passed a major legislative milestone in June, would ban emotional recognition in AI systems (and therefore also in AGI systems) in certain contexts like education.

A better approach is to first gain a clear understanding of potential misuses of specific AGI systems once those systems exist and can be analyzed, and then to examine whether those misuses are addressed by existing, non-AI-specific regulatory frameworks (e.g., the prohibition against employment discrimination provided by Title VII of the Civil Rights Act of 1964). If that analysis identifies a gap, then it does indeed make sense to examine the potential role of “soft” law (voluntary frameworks), as well as formal laws and regulations, in filling that gap. But regulating AGI based only on the fact that it will be highly capable would be a mistake.


Why Artificial Intelligence Will Not Replace Humans in the Near Future


The development and ubiquity of artificial intelligence make people worldwide wonder whether it will replace humans in the near future and lead to massive job loss. Even though “the modern project of creating human-like artificial intelligence (AI) started after World War II” (Fjelland, 2020), such reflections have gained real traction only in the 21st century. The changing nature of work now occupies global organizations: it is discussed in forums such as Davos, and the World Bank devoted its World Development Report 2019 to the topic (The World Bank, 2019). However, the major worry of the third millennium, a takeover of the globe by AI, remains a neurotic fantasy. Indeed, developing and implementing algorithms on a global scale demands a great deal of attention and presents new obstacles. The primary reasons AI will not replace humans in the near future are empathy, creative problem solving, ethics, and decision-making.

In many areas, humanity is not yet ready to bring algorithms into industrial processes, let alone management or essential decision-making. New examples of algorithms acting independently raise concerns, particularly as developers aim to empower AI to make decisions on its own. Flaws in games, social networks, and creative applications can still be hidden or remedied painlessly; the hazards of applying AI in areas related to human well-being, by contrast, demand a great deal of attention and algorithm transparency. At the heart of this is the issue of explainability: how does AI make its decisions in the first place?

Artificial intelligence, in this sense, concerns both developers and ordinary people. This was acknowledged by researchers at the AI Now Institute at New York University. Black-box systems, which keep their algorithms closed to inspection, are the primary source of concern. Explainable AI is designed to demystify the decision-making process by describing both the reasoning and the conclusion. When Facebook’s chatbots hit a roadblock and could not receive permission from human operators, they started making new requests to get around it (Reisman, Schultz, Crawford & Whittaker, 2018). The developers uncovered a bot interaction in which the phrases were meaningless to humans, yet the bots were cooperating for an unknown reason. Even though the bots were turned off, the concern persisted. Algorithms hunt for optimal solutions in situations where a person does not impose hard constraints: they act like hackers and exploit unforeseen loopholes. That is why, for the time being, AI will not be permitted to pilot aircraft in autonomous mode. Aviation is an excellent example of an industry that necessitates extreme precision in decision-making because of the high stakes involved. The biggest constraint on algorithms’ access to control systems appears to be in areas where transparency of the decision-making process and subsequent explanation of their actions are essential.

Another important reason why AI will not be able to replace humans is what is known as emotional intelligence. The human ability to respond to a situation quickly, with innovative ideas and empathy, is unparalleled, and it cannot be replicated by any computer on the planet. According to Beck and Libert’s (2017) Harvard Business Review article on the importance of emotional intelligence, “skills like persuasion, social understanding, and empathy are going to become differentiators as artificial intelligence and machine learning take over our other tasks” (p. 4). For example, AI can search a million psychology textbooks and provide information on everything one needs to know about depression symptoms and remedies in seconds; however, only a person can read the expression on another person’s face and know exactly what needs to be said in that situation. The same applies to occupations that involve emotion and empathy, such as psychology, teaching, coaching, and HR management, where motivating people is part of the job. Thus, in the future, emotional intelligence will be one of the competitive advantages of a person over technology when seeking a job.

Even if AI is eventually integrated into many aspects of human existence, it is critical to recognize that technologies alone will not solve all of humanity’s problems, and that we must consider issues such as ethics and law. Today, Microsoft, Google, Facebook, and others have ethics committees, which act as an internal check on AI’s potential threats. Ethics committees highlight that technologies are the result of human labor and are the responsibility of all participants in global interaction, including both end-users and institutional entities such as business, government, and society. Technologies will not resolve social issues on their own; instead, they will further amplify existing tensions and conflicts while also creating new ones. In fact, “the challenge posed by AI-equipped machines can be addressed by the kind of ethical choices made by human beings” (Etzioni & Etzioni, 2017). To overcome a technocratic approach to AI development, one needs to look at how different ethical dilemmas are managed in different societies. Even if developers put great effort into producing such recommendations, doing so will considerably challenge AI development in general, because social processes have many unknown, poorly understood, and unpredictable aspects. The human areas of responsibility connected with communication skills, teamwork, morals, and ethics will therefore become more valuable. This indicates that humans will remain present for a long time, even after the point of singularity is reached.

Overall, robots will not be able to completely replace humans in the near future because they lack emotional intelligence and cannot make important decisions or solve ethical problems. This indicates that human interactions remain paramount in the contemporary world, and robots can only assist. Technological advancements do not constitute a broad threat and will not result in the early replacement of human work; however, this does not diminish the need for citizens to be prepared for rapid changes in the job structure. Moreover, robots will provide many new opportunities for humans, particularly programmers and engineers. Automation is inevitable and part of progress; on the other hand, people’s goal should be to lay foundations for artificial intelligence such that future AI will not seek to replace humanity. Digital change is happening slowly and unevenly, and it is hitting many roadblocks, both technological and social. Automation and robotization are increasingly replacing manual labor and creating new jobs, but AI developments are taking place at different speeds and are already having unanticipated social consequences. As a result, the entrance of AI into everyday life does not push people to the margins of existence; rather, it forces them to grow in new areas.

Beck, M., & Libert, B. (2017). The rise of AI makes emotional intelligence more important. Harvard Business Review, 15.

Etzioni, A., & Etzioni, O. (2017). Incorporating ethics into artificial intelligence. The Journal of Ethics, 21(4), 403–418.

Fjelland, R. (2020). Why general artificial intelligence will not be realized. Humanities and Social Sciences Communications, 7(10).

Reisman, D., Schultz, J., Crawford, K., & Whittaker, M. (2018). Algorithmic impact assessments: A practical framework for public agency accountability. AI Now Institute.

The World Bank. (2019). World development report 2019: The changing nature of work.



Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will

Table of contents

  • 1. Concerns about human agency, evolution and survival
  • 2. Solutions to address AI’s anticipated negative impacts
  • 3. Improvements ahead: How humans and AI might evolve together in the next decade
  • About this canvassing of experts
  • Acknowledgments


Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.

Yet most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the sections that follow.


Specifically, participants were asked to consider the following:

“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.

Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.

A number of the thought leaders who participated in this canvassing said humans’ expanding reliance on technological systems will only go well if close attention is paid to how these tools, platforms and networks are engineered, distributed and updated. Some of the powerful, overarching answers included those from:

Sonia Katyal, co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, predicted, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”


Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and author of “Machine, Platform, Crowd: Harnessing Our Digital Future,” said, “AI and related technologies have already achieved superhuman performance in many areas, and there is little doubt that their capabilities will improve, probably very significantly, by 2030. … I think it is more likely than not that we will use this power to make the world a better place. For instance, we can virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. That said, AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons. Neither outcome is inevitable, so the right question is not ‘What will happen?’ but ‘What will we choose to do?’ We need to work aggressively to make sure technology matches our values. This can and must be done at all levels, from government, to business, to academia, and to individual choices.”

Bryan Johnson, founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”

Marina Gorbis, executive director of the Institute for the Future, said, “Without significant changes in our political economy and data governance regimes [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions. Humans have to become more standardized, removing serendipity and ambiguity from our interactions. And this ambiguity and complexity is what is the essence of being human.”

Judith Donath, author of “The Social Machine, Designs for Living Online” and faculty fellow at Harvard University’s Berkman Klein Center for Internet & Society, commented, “By 2030, most social situations will be facilitated by bots – intelligent-seeming programs that interact with us in human-like ways. At home, parents will engage skilled bots to help kids with homework and catalyze dinner conversations. At work, bots will run meetings. A bot confidant will be considered essential for psychological well-being, and we’ll increasingly turn to such companions for advice ranging from what to wear to whom to marry. We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals? Artificially intelligent companions will cultivate the impression that social goals similar to our own motivate them – to be held in good regard, whether as a beloved friend, an admired boss, etc. But their real collaboration will be with the humans and institutions that control them. Like their forebears today, these will be sellers of goods who employ them to stimulate consumption and politicians who commission them to sway opinions.”

Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, wrote, “2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”

Michael M. Roberts, first president and CEO of the Internet Corporation for Assigned Names and Numbers (ICANN) and Internet Hall of Fame member, wrote, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a tradeoff of more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly. The record to date is that convenience overwhelms privacy. I suspect that will continue.”

danah boyd, a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”

Amy Webb, founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI. The transition through AI will last the next 50 years or more. As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before. We’ll need farmers who know how to work with big data sets. Oncologists trained as roboticists. Biologists trained as electrical engineers. We won’t need to prepare our workforce just once, with a few changes to the curriculum. As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging. It’s easy to look back on history through the lens of the present – and to overlook the social unrest caused by widespread technological unemployment. We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work. Just as generations before witnessed sweeping changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort.”

Barry Chudakov, founder and principal of Sertain Research, commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations. In the past, human societies managed change through gut and intuition, but as Eric Teller, CEO of Google X, has said, ‘Our societal structures are failing to keep pace with the rate of change.’ To keep pace with that change and to manage a growing list of ‘wicked problems’ by 2030, AI – or using Joi Ito’s phrase, extended intelligence – will value and revalue virtually every area of human behavior and interaction. AI and advancing technologies will change our response framework and time frames (which, in turn, changes our sense of time). Where once social interaction happened in places – work, school, church, family environments – social interactions will increasingly happen in continuous, simultaneous time. If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise. … My greatest hope for human-machine/AI collaboration constitutes a moral and ethical renaissance – we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us. My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly.”

John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “Now, in 2018, a majority of people around the world can’t access their data, so any ‘human-AI augmentation’ discussions ignore the critical context of who actually controls people’s information and identity. Soon it will be extremely difficult to identify any autonomous or intelligent systems whose algorithms don’t interact with human data in one form or another.”


Batya Friedman, a human-computer interaction professor at the University of Washington’s Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is our ability to use wisely and humanely the knowledge and tools that we develop. … Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”

Greg Shannon, chief scientist for the CERT Division at Carnegie Mellon University, said, “Better/worse will appear 4:1 with the long-term ratio 2:1. AI will do well for repetitive work where ‘close’ will be good enough and humans dislike the work. … Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health, to warnings about impending heart/stroke events, to automated health care for the underserved (remote) and those who need extended care (elder care). As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear. Some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today. On the other hand, many will be freed from mundane, unengaging tasks/jobs. If elements of community happiness are part of AI objective functions, then AI could catalyze an explosion of happiness.”

Kostas Alexandridis, author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems,” predicted, “Many of our day-to-day decisions will be automated with minimal intervention by the end-user. Autonomy and/or independence will be sacrificed and replaced by convenience. Newer generations of citizens will become more and more dependent on networked AI structures and processes. There are challenges that need to be addressed in terms of critical thinking and heterogeneity. Networked interdependence will, more likely than not, increase our vulnerability to cyberattacks. There is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control.”

Oscar Gandy, emeritus professor of communication at the University of Pennsylvania, responded, “We already face an ungranted assumption when we are asked to imagine human-machine ‘collaboration.’ Interaction is a bit different, but still tainted by the grant of a form of identity – maybe even personhood – to machines that we will use to make our way through all sorts of opportunities and challenges. The problems we will face in the future are quite similar to the problems we currently face when we rely upon ‘others’ (including technological systems, devices and networks) to acquire things we value and avoid those other things (that we might, or might not be aware of).”

James Scofield O’Rourke, a professor of management at the University of Notre Dame, said, “Technology has, throughout recorded history, been a largely neutral concept. The question of its value has always been dependent on its application. For what purpose will AI and other technological advances be used? Everything from gunpowder to internal combustion engines to nuclear fission has been applied in both helpful and destructive ways. Assuming we can contain or control AI (and not the other way around), the answer to whether we’ll be better off depends entirely on us (or our progeny). ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’”

Simon Biggs, a professor of interdisciplinary arts at the University of Edinburgh, said, “AI will function to augment human capabilities. The problem is not with AI but with humans. As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified. Given historical precedent, one would have to assume it will be our worst qualities that are augmented. My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill. As societies we will be less affected by this than we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing. We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully. My other primary concern is to do with surveillance and control. The advent of China’s Social Credit System (SCS) is an indicator of what is likely to come. We will exist within an SCS as AI constructs hybrid instances of ourselves that may or may not resemble who we are. But our rights and affordances as individuals will be determined by the SCS. This is the Orwellian nightmare realised.”

Mark Surman, executive director of the Mozilla Foundation, responded, “AI will continue to concentrate power and wealth in the hands of a few big monopolies based in the U.S. and China. Most people – and parts of the world – will be worse off.”

William Uricchio, media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite, and development without benefit of an informed or engaged public. The public is reduced to a collective of consumers awaiting the next technology. Whose notion of ‘progress’ will prevail? We have ample evidence of AI being used to drive profits, regardless of implications for long-held values; to enhance governmental control and even score citizens’ ‘social credit’ without input from citizens themselves. Like technologies before it, AI is agnostic. Its deployment rests in the hands of society. But absent an AI-literate public, the decision of how best to deploy AI will fall to special interests. Will this mean equitable deployment, the amelioration of social injustice and AI in the public service? Because the answer to this question is social rather than technological, I’m pessimistic. The fix? We need to develop an AI-literate public, which means focused attention in the educational sector and in public-facing media. We need to assure diversity in the development of AI technologies. And until the public, its elected representatives and their legal and regulatory regimes can get up to speed with these fast-moving developments we need to exercise caution and oversight in AI’s development.”

The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education. Some responses are lightly edited for style.



Will AI ever reach human-level intelligence?


Artificial intelligence has changed form in recent years.

What started in the public eye as a burgeoning field with promising (yet largely benign) applications has snowballed into a more than US$100 billion industry where the heavy hitters – Microsoft, Google and OpenAI, to name a few – seem intent on out-competing one another.

The result has been increasingly sophisticated large language models, often released in haste and without adequate testing and oversight.

These models can do much of what a human can, and in many cases do it better. They can beat us at advanced strategy games, generate incredible art, diagnose cancers and compose music.

There’s no doubt AI systems appear to be “intelligent” to some extent. But could they ever be as intelligent as humans?

There’s a term for this: artificial general intelligence (AGI). Although it’s a broad concept, for simplicity you can think of AGI as the point at which AI acquires human-like generalised cognitive capabilities. In other words, it’s the point where AI can tackle any intellectual task a human can.

AGI isn’t here yet; current AI models are held back by a lack of certain human traits such as true creativity and emotional awareness.

We asked five experts if they think AI will ever reach AGI, and five out of five said ‘yes’.

But there are subtle differences in how they approach the question. From their responses, more questions emerge. When might we achieve AGI? Will it go on to surpass humans? And what constitutes “intelligence”, anyway?

Here are their detailed responses:

Paul Formosa

Professor in Philosophy and Co-Director of the Centre for Agency, Values and Ethics (CAVE), Macquarie University

AI has already achieved and surpassed human intelligence in many tasks. It can beat us at strategy games such as Go, chess, StarCraft and Diplomacy, outperform us on many language performance benchmarks, and write passable undergraduate university essays.

Of course, it can also make things up, or “hallucinate”, and get things wrong – but so can humans (although not in the same ways).

Given a long enough timescale, it seems likely AI will achieve AGI, or “human-level intelligence”. That is, it will have achieved proficiency across enough of the interconnected domains of intelligence humans possess. Still, some may worry that – despite AI achievements so far – AI will not really be “intelligent” because it doesn’t (or can’t) understand what it’s doing, since it isn’t conscious.

However, the rise of AI suggests we can have intelligence without consciousness, because intelligence can be understood in functional terms. An intelligent entity can do intelligent things such as learn, reason, write essays, or use tools.

The AIs we create may never have consciousness, but they are increasingly able to do intelligent things. In some cases, they already do them at a level beyond us, which is a trend that will likely continue.

Christina Maher

Computational Neuroscientist and Biomedical Engineer, University of Sydney

AI will achieve human-level intelligence, but perhaps not anytime soon. Human-level intelligence allows us to reason, solve problems and make decisions. It requires many cognitive abilities including adaptability, social intelligence and learning from experience.

AI already ticks many of these boxes. What’s left is for AI models to learn inherent human traits such as critical reasoning, and to understand what emotion is and which events might prompt it.

As humans, we learn and experience these traits from the moment we’re born. Our first experience of “happiness” is too early for us to even remember. We also learn critical reasoning and emotional regulation throughout childhood, and develop a sense of our “emotions” as we interact with and experience the world around us. Importantly, it can take many years for the human brain to develop such intelligence.

AI hasn’t acquired these capabilities yet. But if humans can learn these traits, AI probably can too – and maybe at an even faster rate. We are still discovering how AI models should be built, trained, and interacted with in order to develop such traits in them. Really, the big question is not if AI will achieve human-level intelligence, but when – and how.

Seyedali Mirjalili

Professor, Director of Centre for Artificial Intelligence Research and Optimisation, Torrens University Australia

I believe AI will surpass human intelligence. Why? The past offers insights we can’t ignore. A lot of people believed tasks such as playing computer games, image recognition and content creation (among others) could only be done by humans – but technological advancement proved otherwise.

Today the rapid advancement and adoption of AI algorithms, in conjunction with an abundance of data and computational resources, has led to a level of intelligence and automation previously unimaginable. If we follow the same trajectory, having more generalised AI is no longer a possibility, but a certainty of the future.

It is just a matter of time. AI has advanced significantly, but not yet in tasks requiring intuition, empathy and creativity, for example. But breakthroughs in algorithms will allow this.

Moreover, once AI systems achieve such human-like cognitive abilities, there will be a snowball effect and AI systems will be able to improve themselves with minimal to no human involvement. This kind of “automation of intelligence” will profoundly change the world.

Artificial general intelligence remains a significant challenge, and there are ethical and societal implications that must be addressed very carefully as we continue to advance towards it.

Dana Rezazadegan

Lecturer in AI and Data Science, Swinburne University of Technology

Yes, AI is going to get as smart as humans in many ways – but exactly how smart it gets will be decided largely by advancements in quantum computing.

Human intelligence isn’t as simple as knowing facts. It has several aspects such as creativity, emotional intelligence and intuition, which current AI models can mimic, but can’t match. That said, AI has advanced massively and this trend will continue.

Current models are limited by relatively small and biased training datasets, as well as limited computational power. The emergence of quantum computing will transform AI's capabilities. With quantum-enhanced AI, we'll be able to feed AI models multiple massive datasets comparable to the natural multi-modal data humans collect by interacting with the world – and the models will be able to analyse them quickly and accurately.

Having an advanced version of continual learning should lead to the development of highly sophisticated AI systems which, after a certain point, will be able to improve themselves without human input.

As such, AI algorithms running on stable quantum computers have a high chance of reaching something similar to generalised human intelligence – even if they don’t necessarily match every aspect of human intelligence as we know it.

Marcel Scharth

Lecturer in Business Analytics, University of Sydney

I think it’s likely AGI will one day become a reality, although the timeline remains highly uncertain. If AGI is developed, then surpassing human-level intelligence seems inevitable.

Humans themselves are proof that highly flexible and adaptable intelligence is allowed by the laws of physics. There's no fundamental reason we should believe that machines are, in principle, incapable of performing the computations necessary to achieve human-like problem solving abilities.

Furthermore, AI has distinct advantages over humans, such as better speed and memory capacity, fewer physical constraints, and the potential for more rationality and recursive self-improvement. As computational power grows, AI systems will eventually surpass the human brain's computational capacity.

Our primary challenge then is to gain a better understanding of intelligence itself, and knowledge on how to build AGI. Present-day AI systems have many limitations and are nowhere near being able to master the different domains that would characterise AGI. The path to AGI will likely require unpredictable breakthroughs and innovations.

The median predicted date for AGI on Metaculus, a well-regarded forecasting platform, is 2032. To me, this seems too optimistic. A 2022 expert survey estimated a 50% chance of us achieving human-level AI by 2059. I find this plausible.

Noor Gillani is the Technology Editor at The Conversation.

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI

[Photo: Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. Courtesy of Eliza Grinnell/Harvard SEAS]

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.


Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There's also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.


Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides if there are not enough pathologists, for example, or getting an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.




Artificial Intelligence vs. Human Intelligence

From the realm of science fiction into the realm of everyday life, artificial intelligence has made significant strides. Because AI has become so pervasive in today's industries and people's daily lives, a new debate has emerged, pitting the two competing paradigms of AI and human intelligence. 

While the goal of artificial intelligence is to build and create intelligent systems capable of doing jobs analogous to those performed by humans, we can't help but question whether AI is adequate on its own. This article covers a wide range of subjects, including the potential impact of AI on the future of work and the economy, how AI differs from human intelligence, and the ethical considerations that must be taken into account.

The term artificial intelligence applies to any computer system that displays characteristics of the human mind, such as the ability to think critically, make decisions, and increase productivity. The foundation of AI is human insight, formalized in such a way that machines can carry out tasks from the simplest to the most complex.

Synthesized insight of this kind is the product of intellectual activity: study, analysis, logic, and observation. Tasks including robotics, control systems, computer vision, scheduling, and data mining all fall under the umbrella of artificial intelligence.

The origins of human intelligence and conduct may be traced back to an individual's unique combination of genetics, upbringing, and exposure to various situations and environments. And it hinges entirely on one's freedom to shape his or her environment through the application of newly acquired knowledge.

The information it provides is correspondingly varied, ranging from insight into people with similar skills or backgrounds to knowledge a person was specifically tasked with obtaining. Ultimately, it delivers an understanding of interpersonal relationships and the arrangement of interests.

The following table compares human intelligence with artificial intelligence, aspect by aspect:

Evolution

  • Human intelligence: The cognitive abilities to think, reason, evaluate and so on are built into human beings by their very nature.
  • Artificial intelligence: Norbert Wiener, who theorized feedback mechanisms, is credited with a significant early contribution to the development of AI.

Essence

  • Human intelligence: Combines a range of cognitive activities in order to adapt to new circumstances.
  • Artificial intelligence: Aims to create computers that can behave like humans and complete the jobs humans would normally do.

Functionality

  • Human intelligence: People use the memory, processing capabilities and cognitive talents their brains provide.
  • Artificial intelligence: AI-powered devices operate by processing data and instructions.

Pace of operation

  • Human intelligence: When it comes to speed, humans are no match for AI or robots.
  • Artificial intelligence: Computers can process far more information at a higher pace than people can. Where the human mind can solve one mathematical problem in five minutes, AI can solve ten problems in one minute.

Learning ability

  • Human intelligence: Built up by learning through a wide variety of experiences and situations.
  • Artificial intelligence: Machines cannot think abstractly or draw conclusions from past experience. They acquire knowledge only through exposure to data and constant practice, and never develop the cognitive processes unique to humans.

Decision-making

  • Human intelligence: Decisions can be influenced by subjective factors that are not based on numbers alone.
  • Artificial intelligence: Exceptionally objective, since it evaluates on the basis of the entirety of the collected data.

Accuracy

  • Human intelligence: There is almost always room for "human error", meaning some details may be missed at one point or another.
  • Artificial intelligence: Because its capabilities are built on a set of rules that can be updated, it can deliver accurate results consistently.

Adaptability

  • Human intelligence: The human mind can adjust its perspective in response to changing surroundings, so people can remember information and excel at a wide variety of activities.
  • Artificial intelligence: Takes much more time to adapt to unexpected changes.

Flexibility

  • Human intelligence: Sound judgment makes multitasking possible, as shown by juggling a variety of jobs at once.
  • Artificial intelligence: Learns tasks one at a time and can only handle a fraction of tasks simultaneously.

Social skills

  • Human intelligence: As social creatures, humans are far better at absorbing abstract information, maintaining self-awareness and sensing the emotions of others.
  • Artificial intelligence: Has not yet mastered picking up on related social and emotional cues.

Creativity

  • Human intelligence: Can be inventive or creative.
  • Artificial intelligence: Improves the overall performance of a system, but cannot be creative or inventive, since machines cannot think the way people do.

Recent research found that varying the electrical characteristics of individual cells in simulated neural networks let the networks learn new information faster than simulations in which every cell was identical. The networks also needed fewer of the modified cells to achieve the same outcomes, and the approach consumed fewer resources than models built from identical cells.

These results not only shed light on how human brains excel at learning but may also help us develop more advanced artificial intelligence systems, such as speech and facial recognition software for digital assistants and autonomous vehicle navigation systems.

The researchers focused on adjusting the "time constant" – the pace at which one cell decides what to do based on the activity of the cells connected to it. Some cells make decisions rapidly, while others take longer to respond and base their choice on what nearby cells have been doing over a longer period.
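To make the mechanism concrete, here is a minimal sketch of one way to give every neuron in a rate-based recurrent network its own trainable time constant. This is an illustrative toy in PyTorch, not the researchers' model; the class name, parameterization and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch: a leaky RNN cell in which every neuron has its own
# trainable time constant (tau), loosely illustrating the idea that
# heterogeneous cell dynamics can speed up learning.
import torch
import torch.nn as nn

class HeterogeneousLeakyRNNCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, dt: float = 1.0):
        super().__init__()
        self.inp = nn.Linear(input_size, hidden_size)
        self.rec = nn.Linear(hidden_size, hidden_size, bias=False)
        # One log-tau per neuron; optimizing in log space keeps tau > 0.
        self.log_tau = nn.Parameter(torch.zeros(hidden_size))
        self.dt = dt

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        tau = torch.exp(self.log_tau)                 # per-neuron time constant
        alpha = torch.clamp(self.dt / tau, max=1.0)   # per-neuron update rate
        target = torch.tanh(self.inp(x) + self.rec(h))
        # Small tau: the neuron reacts quickly; large tau: it integrates slowly.
        return (1 - alpha) * h + alpha * target

# The time constants are ordinary parameters, so any optimizer trains them
# together with the weights.
cell = HeterogeneousLeakyRNNCell(input_size=8, hidden_size=32)
h = torch.zeros(1, 32)
for _ in range(10):                 # unroll over a toy input sequence
    h = cell(torch.randn(1, 8), h)
print(h.shape)  # torch.Size([1, 32])
```

Because each log-tau is an ordinary parameter, gradient descent can make some neurons fast and others slow rather than forcing every cell to share one fixed speed; that per-cell diversity is the heterogeneity the study found helpful.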


The capabilities of AI are constantly expanding, but developing AI systems takes a significant amount of time, and that cannot happen without human intervention. All forms of artificial intelligence, including self-driving vehicles and robotics, as well as more complex technologies like computer vision and natural language processing, depend on human intellect.

1. Automation of Tasks

The most noticeable effect of AI has been the digitalization and automation of formerly manual processes across a wide range of industries. Tasks and occupations that involve some degree of repetition, or the use and interpretation of large amounts of data, are now delegated to and administered by computers, in some cases without any need for human intervention.

2. New Opportunities

Artificial intelligence is creating new opportunities for the workforce by automating formerly human-intensive tasks. The rapid development of technology has resulted in the emergence of new fields of study and work, such as digital engineering. Therefore, although traditional manual labor jobs may go extinct, new opportunities and careers will emerge.

3. Economic Growth Model

When it's put to good use, rather than just for the sake of progress, AI has the potential to increase productivity and collaboration inside a company by opening up vast new avenues for growth. As a result, it may spur an increase in demand for goods and services, and power an economic growth model that spreads prosperity and raises standards of living.

4. Role of Work

In the era of AI, it is more important than ever to recognize the value of work beyond merely maintaining a standard of living. Work answers essential human needs for involvement, co-creation, dedication, and a sense of being needed, and these should not be overlooked. Even mundane tasks at work can be meaningful and advantageous, and if a task is eliminated or automated, it should be replaced with something that provides a comparable opportunity for human expression.

5. Growth of Creativity and Innovation

Experts now have more time to focus on analyzing, delivering new and original solutions, and other operations that are firmly in the area of the human intellect, while robotics, AI, and industrial automation handle some of the mundane and physical duties formerly performed by humans.

While AI can automate specific tasks and jobs, and is likely to replace humans in some areas, it is best suited to handling repetitive, data-driven tasks and making data-driven decisions. Human skills such as creativity, critical thinking, emotional intelligence, and complex problem-solving remain more valuable, and are not easily replicated by AI.

The future of AI is more likely to involve collaboration between humans and machines, where AI augments human capabilities and enables humans to focus on higher-level tasks that require human ingenuity and expertise. It is essential to view AI as a tool that can enhance productivity and facilitate new possibilities rather than as a complete substitute for human involvement.


Artificial intelligence is revolutionizing every sector and pushing humanity forward to a new level. However, a precise replica of human intellect is not yet feasible, and the human cognitive process remains a mystery to scientists. Because of this, the common-sense assumption in the growing debate between AI and human intelligence has been that AI will supplement human efforts rather than immediately replace them.



Human autonomy in the age of artificial intelligence

Carina Prunkl

Nature Machine Intelligence 4, 99–101 (2022)

Current AI policy recommendations differ on what the risks to human autonomy are. To systematically address risks to autonomy, we need to confront the complexity of the concept itself and adapt governance solutions accordingly.




Prunkl, C. Human autonomy in the age of artificial intelligence. Nat Mach Intell 4, 99–101 (2022). https://doi.org/10.1038/s42256-022-00449-9

AI Won’t Replace Humans — But Humans With AI Will Replace Humans Without AI

The first step business leaders must take is to experiment, create sandboxes, run internal bootcamps, and develop AI use cases not just for technology workers, but for all employees.

Karim Lakhani is a professor at Harvard Business School who specializes in workplace technology and particularly AI. He's done pioneering work in identifying how digital transformation has remade the world of business, and he's the co-author of the 2020 book Competing in the Age of AI. Customers will expect AI-enhanced experiences with companies, he says, so business leaders must experiment, create sandboxes, run internal bootcamps, and develop AI use cases not just for technology workers, but for all employees. Change and change management are skills that are no longer optional for modern organizations.

Just as the internet has drastically lowered the cost of information transmission, AI will lower the cost of cognition. That’s according to Harvard Business School professor Karim Lakhani, who has been studying AI and machine learning in the workplace for years. As the public comes to expect companies that deliver seamless, AI-enhanced experiences and transactions, leaders need to embrace the technology, learn to harness its potential, and develop use cases for their businesses. “The places where you can apply it?” he says. “Well, where do you apply thinking?”


Artificial intelligence is transforming our world — it is on all of us to make sure that it goes well

How AI gets built is currently decided by a small group of technologists. As this technology is transforming our lives, it should be in all of our interest to become informed and engaged.

Why should you care about the development of artificial intelligence?

Think about what the alternative would look like. If you and the wider public do not get informed and engaged, then we leave it to a few entrepreneurs and engineers to decide how this technology will transform our world.

That is the status quo. This small number of people at a few tech firms directly working on artificial intelligence (AI) do understand how extraordinarily powerful this technology is becoming. If the rest of society does not become engaged, then it will be this small elite who decides how this technology will change our lives.

To change this status quo, I want to answer three questions in this article: Why is it hard to take the prospect of a world transformed by AI seriously? How can we imagine such a world? And what is at stake as this technology becomes more powerful?

Why is it hard to take the prospect of a world transformed by artificial intelligence seriously?

In some way, it should be obvious how technology can fundamentally transform the world. We just have to look at how much the world has already changed. If you could invite a family of hunter-gatherers from 20,000 years ago on your next flight, they would be pretty surprised. Technology has changed our world already, so we should expect that it can happen again.

But while we have seen the world transform before, we have seen these transformations play out over the course of generations. What is different now is how very rapid these technological changes have become. In the past, the technologies that our ancestors used in their childhood were still central to their lives in their old age. This has not been the case anymore for recent generations. Instead, it has become common that technologies unimaginable in one's youth become ordinary in later life.

This is the first reason we might not take the prospect seriously: it is easy to underestimate the speed at which technology can change the world.

The second reason why it is difficult to take the possibility of transformative AI – potentially even AI as intelligent as humans – seriously is that it is an idea that we first heard in the cinema. It is not surprising that for many of us, the first reaction to a scenario in which machines have human-like capabilities is the same as if you had asked us to take seriously a future in which vampires, werewolves, or zombies roam the planet. 1

But, it is plausible that it is both the stuff of sci-fi fantasy and the central invention that could arrive in our, or our children’s, lifetimes.

The third reason why it is difficult to take this prospect seriously is by failing to see that powerful AI could lead to very large changes. This is also understandable. It is difficult to form an idea of a future that is very different from our own time. There are two concepts that I find helpful in imagining a very different future with artificial intelligence. Let’s look at both of them.

How to develop an idea of what the future of artificial intelligence might look like?

When thinking about the future of artificial intelligence, I find it helpful to consider two different concepts in particular: human-level AI, and transformative AI. 2 The first concept highlights the AI’s capabilities and anchors them to a familiar benchmark, while transformative AI emphasizes the impact that this technology would have on the world.

From where we are today, much of this may sound like science fiction. It is therefore worth keeping in mind that the majority of surveyed AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.

The advantages and disadvantages of comparing machine and human intelligence

One way to think about human-level artificial intelligence is to contrast it with the current state of AI technology. While today’s AI systems often have capabilities similar to a particular, limited part of the human mind, a human-level AI would be a machine that is capable of carrying out the same range of intellectual tasks that we humans are capable of. 3 It is a machine that would be “able to learn to do anything that a human can do,” as Norvig and Russell put it in their textbook on AI. 4

Taken together, the range of abilities that characterize intelligence gives humans the ability to solve problems and achieve a wide variety of goals. A human-level AI would therefore be a system that could solve all those problems that we humans can solve, and do the tasks that humans do today. Such a machine, or collective of machines, would be able to do the work of a translator, an accountant, an illustrator, a teacher, a therapist, a truck driver, or the work of a trader on the world’s financial markets. Like us, it would also be able to do research and science, and to develop new technologies based on that.

The concept of human-level AI has some clear advantages. Using the familiarity of our own intelligence as a reference provides us with some clear guidance on how to imagine the capabilities of this technology.

However, it also has clear disadvantages. Anchoring the imagination of future AI systems to the familiar reality of human intelligence carries the risk that it obscures the very real differences between them.

Some of these differences are obvious. For example, AI systems will have the immense memory of computer systems, against which our own capacity to store information pales. Another obvious difference is the speed at which a machine can absorb and process information. But information storage and processing speed are not the only differences. The range of domains in which machines already outperform humans is steadily increasing: in chess, after matching the level of the best human players in the late 90s, AI systems reached superhuman levels more than a decade ago. In other games like Go or complex strategy games, this has happened more recently. 5

These differences mean that an AI that is at least as good as humans in every domain would overall be much more powerful than the human mind. Even the first “human-level AI” would therefore be quite superhuman in many ways. 6

Human intelligence is also a bad metaphor for machine intelligence in other ways. The way we think is often very different from the way machines process information, and as a consequence the output of thinking machines can be very alien to us.

Most perplexing and most concerning are the strange and unexpected ways in which machine intelligence can fail. The AI-generated image of the horse below provides an example: on the one hand, AIs can do what no human can do – produce an image of anything, in any style (here photorealistic), in mere seconds – but on the other hand it can fail in ways that no human would fail. 7 No human would make the mistake of drawing a horse with five legs. 8

Imagining a powerful future AI as just another human would therefore likely be a mistake. The differences might be so large that it will be a misnomer to call such systems “human-level.”

[Figure: AI-generated image of a horse 9 – a brown horse running in a grassy field; the horse appears to have five legs.]

Transformative artificial intelligence is defined by the impact this technology would have on the world

In contrast, the concept of transformative AI is not based on a comparison with human intelligence. This has the advantage of sidestepping the problems that the comparisons with our own mind bring. But it has the disadvantage that it is harder to imagine what such a system would look like and be capable of. It requires more from us. It requires us to imagine a world with intelligent actors that are potentially very different from ourselves.

Transformative AI is not defined by any specific capabilities, but by the real-world impact that the AI would have. To qualify as transformative, researchers think of it as AI that is “powerful enough to bring us into a new, qualitatively different future.” 10

In humanity’s history, there have been two cases of such major transformations, the agricultural and the industrial revolutions.

Transformative AI becoming a reality would be an event on that scale. Like the arrival of agriculture 10,000 years ago, or the transition from hand- to machine-manufacturing, it would be an event that would change the world for billions of people around the globe and for the entire trajectory of humanity’s future .

Technologies that fundamentally change how a wide range of goods or services are produced are called ‘general-purpose technologies’. The two previous transformative events were caused by the discovery of two particularly significant general-purpose technologies: the change in food production as humanity transitioned from hunting and gathering to farming, and the rise of machine manufacturing in the industrial revolution. Based on the evidence and arguments presented in this series on AI development, I believe it is plausible that powerful AI could represent the introduction of a similarly significant general-purpose technology.

[Figure: Timeline of the three transformative events in world history]

A future of human-level or transformative AI?

The two concepts are closely related, but they are not the same. The creation of a human-level AI would certainly have a transformative impact on our world. If the work of most humans could be carried out by an AI, the lives of millions of people would change. 11

The opposite, however, is not true: we might see transformative AI without developing human-level AI. Since the human mind is in many ways a poor metaphor for the intelligence of machines, we might plausibly develop transformative AI before we develop human-level AI. Depending on how this goes, this might mean that we will never see any machine intelligence for which human intelligence is a helpful comparison.

When and if AI systems might reach either of these levels is of course difficult to predict. In my companion article on this question, I give an overview of what researchers in this field currently believe. Many AI experts believe there is a real chance that such systems will be developed within the next decades, and some believe that they will exist much sooner.

What is at stake as artificial intelligence becomes more powerful?

All major technological innovations lead to a range of positive and negative consequences. For AI, the spectrum of possible outcomes – from the most negative to the most positive – is extraordinarily wide.

That the use of AI technology can cause harm is clear, because it is already happening.

AI systems can cause harm when people use them maliciously. For example, when they are used in politically-motivated disinformation campaigns or to enable mass surveillance. 12

But AI systems can also cause unintended harm, when they act differently than intended or fail. For example, in the Netherlands the authorities used an AI system which falsely claimed that an estimated 26,000 parents made fraudulent claims for child care benefits. The false allegations led to hardship for many poor families, and also resulted in the resignation of the Dutch government in 2021. 13

As AI becomes more powerful, the possible negative impacts could become much larger. Many of these risks have rightfully received public attention: more powerful AI could lead to mass labor displacement, or extreme concentrations of power and wealth. In the hands of autocrats, it could empower totalitarianism through its suitability for mass surveillance and control.

The so-called alignment problem of AI is another extreme risk. This is the concern that nobody would be able to control a powerful AI system, even if the AI takes actions that harm us humans, or humanity as a whole. This risk is unfortunately receiving little attention from the wider public, but it is seen as an extremely large risk by many leading AI researchers. 14

How could an AI possibly escape human control and end up harming humans?

The risk is not that an AI becomes self-aware, develops bad intentions, and “chooses” to do this. The risk is that we try to instruct the AI to pursue some specific goal – even a very worthwhile one – and in the pursuit of that goal it ends up harming humans. It is about unintended consequences. The AI does what we told it to do, but not what we wanted it to do.

Can’t we just tell the AI to not do those things? It is definitely possible to build an AI that avoids any particular problem we foresee, but it is hard to foresee all the possible harmful unintended consequences. The alignment problem arises because of “the impossibility of defining true human purposes correctly and completely,” as AI researcher Stuart Russell puts it. 15

Can’t we then just switch off the AI? This might also not be possible. That is because a powerful AI would know two things: it faces a risk that humans could turn it off, and it can’t achieve its goals once it has been turned off. As a consequence, the AI will pursue a very fundamental goal of ensuring that it won’t be switched off. This is why, once we realize that an extremely intelligent AI is causing unintended harm in the pursuit of some specific goal, it might not be possible to turn it off or change what the system does. 16

This risk – that humanity might not be able to stay in control once AI becomes very powerful, and that this might lead to an extreme catastrophe – has been recognized right from the early days of AI research more than 70 years ago. 17 The very rapid development of AI in recent years has made a solution to this problem much more urgent.

I have tried to summarize some of the risks of AI, but a short article does not offer enough space to address all the relevant questions. On the very worst risks of AI systems, and what we can do now to reduce them, I especially recommend reading Brian Christian's book The Alignment Problem and Benjamin Hilton's article 'Preventing an AI-related catastrophe'.

If we manage to avoid these risks, transformative AI could also lead to very positive consequences. Advances in science and technology were crucial to the many positive developments in humanity’s history. If artificial ingenuity can augment our own, it could help us make progress on the many large problems we face: from cleaner energy, to the replacement of unpleasant work, to much better healthcare.

This extremely large contrast between the possible positives and negatives makes clear that the stakes are unusually high with this technology. Reducing the negative risks and solving the alignment problem could mean the difference between a healthy, flourishing, and wealthy future for humanity – and the destruction of the same.

How can we make sure that the development of AI goes well?

Making sure that the development of artificial intelligence goes well is not just one of the most crucial questions of our time, but likely one of the most crucial questions in human history. This needs public resources – public funding, public attention, and public engagement.

Currently, almost all resources that are dedicated to AI aim to speed up the development of this technology. Efforts that aim to increase the safety of AI systems, on the other hand, do not receive the resources they need. Researcher Toby Ord estimated that in 2020 between $10 million and $50 million was spent on work to address the alignment problem. 18 Corporate AI investment in the same year was more than 2,000 times larger, totaling $153 billion.
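As a quick back-of-the-envelope check of that comparison (a sketch using only the figures quoted above; the variable names are my own):

```python
# Back-of-the-envelope check of the funding gap described above, using the
# figures quoted in the text: Toby Ord's upper estimate of $50 million spent
# on alignment work in 2020, versus $153 billion of corporate AI investment.
alignment_spending_usd = 50e6       # upper bound of Ord's 2020 estimate
corporate_investment_usd = 153e9    # corporate AI investment in 2020

ratio = corporate_investment_usd / alignment_spending_usd
print(f"Corporate AI investment was roughly {ratio:,.0f} times larger.")
# -> roughly 3,060 times larger; "more than 2,000 times" holds even against
#    the upper end of the alignment-spending estimate.
```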

This is not only the case for the AI alignment problem. The work on the entire range of negative social consequences from AI is under-resourced compared to the large investments to increase the power and use of AI systems.

It is frustrating and concerning for society as a whole that AI safety work is extremely neglected and that little public funding is dedicated to this crucial field of research. On the other hand, for each individual person this neglect means that they have a good chance to actually make a positive difference, if they dedicate themselves to this problem now. And while the field of AI safety is small, it does provide good resources on what you can do concretely if you want to work on this problem.

I hope that more people dedicate their individual careers to this cause, but it needs more than individual efforts. A technology that is transforming our society needs to be a central interest of all of us. As a society we have to think more about the societal impact of AI, become knowledgeable about the technology, and understand what is at stake.

When our children look back at today, I imagine that they will find it difficult to understand how little attention and resources we dedicated to the development of safe AI. I hope that this changes in the coming years, and that we begin to dedicate more resources to making sure that powerful AI gets developed in a way that benefits us and the next generations.

If we fail to develop this broad-based understanding, then it will remain the small elite that finances and builds this technology who will determine how one of the most powerful technologies in human history – plausibly the most powerful – will transform our world.

If we leave the development of artificial intelligence entirely to private companies, then we are also leaving it up to these private companies to decide what our future – the future of humanity – will be.

With our work at Our World in Data we want to do our small part to enable a better informed public conversation on AI and the future we want to live in. You can find these resources on OurWorldinData.org/artificial-intelligence

Acknowledgements: I would like to thank my colleagues Daniel Bachler, Charlie Giattino, and Edouard Mathieu for their helpful comments on drafts of this essay.

This problem becomes even larger when we try to imagine how a future with a human-level AI might play out. Any particular scenario will not only involve the idea that this powerful AI exists, but a whole range of additional assumptions about the future context in which this happens. It is therefore hard to communicate a scenario of a world with human-level AI that does not sound contrived, bizarre or even silly.

Both of these concepts are widely used in the scientific literature on artificial intelligence. For example, questions about the timelines for the development of future AI are often framed using these terms. See my article on this topic.

The fact that humans are capable of a range of intellectual tasks means that you arrive at different definitions of intelligence depending on which aspect within that range you focus on (the Wikipedia entry on intelligence, for example, lists a number of definitions from various researchers and different disciplines). As a consequence, there are also various definitions of 'human-level AI'.

There are also several closely related terms: Artificial General Intelligence, High-Level Machine Intelligence, Strong AI, and Full AI are sometimes used synonymously and sometimes defined in similar yet different ways. In specific discussions it is necessary to define the concept more narrowly; for example, in studies on AI timelines researchers offer more precise definitions of what human-level AI refers to in their particular study.

Stuart Russell and Peter Norvig (2021) – Artificial Intelligence: A Modern Approach. Fourth edition. Published by Pearson.

The AI system AlphaGo, and its various successors, won against Go masters. The AI system Pluribus beat humans at no-limit Texas hold 'em poker. The AI system Cicero can strategize and use human language to win the strategy game Diplomacy. See: Meta Fundamental AI Research Diplomacy Team (FAIR), Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, et al. (2022) – 'Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning'. In Science (22 November 2022): eade9097. https://doi.org/10.1126/science.ade9097.

This also poses a problem when we evaluate how the intelligence of a machine compares with the intelligence of humans. If intelligence were a general ability, a single capacity, then we could easily compare and evaluate it, but the fact that it is a range of skills makes it much more difficult to compare across machine and human intelligence. Tests for AI systems therefore comprise a wide range of tasks. See for example Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt (2020) – Measuring Massive Multitask Language Understanding, or the definition of what would qualify as artificial general intelligence in this Metaculus prediction.

An overview of how AI systems can fail can be found in Charles Choi – 7 Revealing Ways AIs Fail. It is also worth reading through the AIAAIC Repository, which "details recent incidents and controversies driven by or relating to AI, algorithms, and automation."

I have taken this example from AI researcher François Chollet, who published it here.

Via François Chollet, who published it here. Based on Chollet's comments, it seems that this image was created by the AI system 'Stable Diffusion'.

This quote is from Holden Karnofsky (2021) – AI Timelines: Where the Arguments, and the "Experts," Stand. For Karnofsky's earlier thinking on this conceptualization of AI, see his 2016 article 'Some Background on Our Views Regarding Advanced Artificial Intelligence'.

Ajeya Cotra, whose research on AI timelines I discuss in other articles of this series, attempts to give a quantitative definition of what would qualify as transformative AI. In her widely cited report on AI timelines she defines it as a change in software technology that brings the growth rate of gross world product "to 20%-30% per year". Several other researchers define TAI in similar terms.
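To get a feel for the scale of that definition (my own back-of-the-envelope illustration, not part of Cotra's report): under compound growth at an annual rate $g$, the economy doubles every

\[
t_{\text{double}} = \frac{\ln 2}{\ln(1+g)} \approx
\begin{cases}
23 \text{ years} & g = 0.03 \text{ (roughly today's growth rate)}\\
3.8 \text{ years} & g = 0.20\\
2.6 \text{ years} & g = 0.30
\end{cases}
\]

so at Cotra's 20%-30% threshold the world economy would double several times per decade, rather than roughly once a generation as it does today.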

Human-level AI is typically defined as a software system that can carry out at least 90% or 99% of all economically relevant tasks that humans carry out. A lower-bar definition would be an AI system that can carry out all those tasks that can currently be done by another human who is working remotely on a computer.

On the use of AI in politically-motivated disinformation campaigns see for example John Villasenor (November 2020) – How to deal with AI-enabled disinformation . More generally on this topic see Brundage and Avin et al. (2018) – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, published at maliciousaireport.com . A starting point for literature and reporting on mass surveillance by governments is the relevant Wikipedia entry .

See for example the Wikipedia entry on the ‘Dutch childcare benefits scandal’ and Melissa Heikkilä (2022) – ‘Dutch scandal serves as a warning for Europe over risks of using algorithms’ , in Politico. The technology can also reinforce discrimination in terms of race and gender. See Brian Christian’s book The Alignment Problem and the reports of the AI Now Institute .

Overviews are provided in Stuart Russell (2019) – Human Compatible (especially chapter 5) and Brian Christian’s 2020 book The Alignment Problem . Christian presents the thinking of many leading AI researchers from the earliest days up to now and presents an excellent overview of this problem. It is also seen as a large risk by some of the leading private firms who work towards powerful AI – see OpenAI's article " Our approach to alignment research " from August 2022.

Stuart Russell (2019) – Human Compatible

A question that follows from this is, why build such a powerful AI in the first place?

The incentives are very high. As I emphasize below, this innovation has the potential to lead to very positive developments. In addition to the large social benefits there are also large incentives for those who develop it – the governments that can use it for their goals, and the individuals who can use it to become more powerful and wealthy. Additionally, it is of scientific interest and might help us to understand our own mind and intelligence better. And lastly, even if we wanted to stop building powerful AIs, it would likely be very hard to actually achieve: countries around the world would have to agree to stop building more advanced AI and then find ways to actually implement that agreement.

In 1950 the computer science pioneer Alan Turing put it like this: “If a machine can think, it might think more intelligently than we do, and then where should we be? … [T]his new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety. It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. … I cannot offer any such comfort, for I believe that no such bounds can be set.” Alan. M. Turing (1950) – Computing Machinery and Intelligence , In Mind, Volume LIX, Issue 236, October 1950, Pages 433–460.

Norbert Wiener is another pioneer who saw the alignment problem very early. One way he put it was “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” quoted from Norbert Wiener (1960) – Some Moral and Technical Consequences of Automation: As machines learn they may develop unforeseen strategies at rates that baffle their programmers. In Science.

In 1950 – the same year in which Turing published the cited article – Wiener published his book The Human Use of Human Beings, whose front-cover blurb reads: “The ‘mechanical brain’ and similar machines can destroy human values or enable us to realize them as never before.”

Toby Ord – The Precipice . He makes this projection in footnote 55 of chapter 2. It is based on the 2017 estimate by Farquhar.


Elon Musk predicts AI will overtake human intelligence next year


Will AI Take Over The World? Or Will You Take Charge Of Your World?

Forbes Books


There's been a lot of scary talk going around lately. Artificial intelligence is getting more powerful — especially the new generative AI that can write code, write stories, and generate outputs ranging from pretty pictures to product designs. The greatest concern is not so much that computers will become smarter than humans; it's that they will be unpredictably smart, or unpredictably foolish, due to quirks in the AI's code. Experts worry that if we keep entrusting key tasks to them, they could trigger what Elon Musk has called "civilization destruction."

This worst-case scenario needs to be addressed, but it will not happen soon. If you own or manage a midsize company, the pressing issue is how new developments in AI will affect your business. Our view, which reflects a broad consensus, is to handle this change in the environment the way any big change should be handled. Don't ignore it, try to resist it, or get stuck on what it might do to you. Instead, look at what you can do with the change. Embrace it. Leverage it to your advantage.

Here’s a brief overview that should make clear a couple of key points. Although the recent surge in AI may seem like it came out of the blue, it’s really just the next step in a long process of evolutionary change. Not only can midsize companies participate in the evolution, they will have to in order to stay fit to survive.

How we got here … and where we can go next

Artificial intelligence—the creation of software and hardware able to simulate human smarts—isn’t new. Crucial core technologies for today’s AI were first conceived in the 1970s and ‘80s. In the 1990s, IBM’s Deep Blue chess machine played and beat the reigning world champion, setting a milestone for AI researchers. Since then, AI has continued to improve while moving into new realms, some of which we now take for granted. By the 2010s, natural language processing was refined to the point where Siri and Alexa could be your virtual assistants.

What’s new lately is that major tech-industry players have been ramping up investment at the frontiers of AI. Elon Musk is a leader in the field despite his reservations. He has launched a deep-pocketed startup, X.ai, to focus solely on cutting-edge AI. Microsoft is the lead investor in OpenAI. Amazon, Google/Alphabet, and others are placing big bets in the race as well.


This raises an oft-heard concern. Will the tech heavyweights dominate the future of AI, just as they've dominated so much else? And will that, in turn, leave midsize-to-small companies in the dust?

Do not worry. A key distinction must be recognized. The R&D efforts are being led by big players because they have the resources needed: basic research in advanced AI is expensive. Certainly the big firms will also use the fruits of that R&D in their own products and services. But the results of their work will come to market—indeed, are already coming to market—in forms that are highly affordable.

Over the past few years, our consulting firm has helped midsize companies apply AI to analyze customer data for targeted marketing. Many of the new generative AI tools, such as ChatGPT, are free or cost little. In a podcast hosted by Harvard Business Review, guest experts agreed that generative AI is actually "democratizing access to the highest levels of technology," rather than shutting out the little guys. Companies can even find cost-effective ways to tailor a general, open-source AI tool (a "foundation model") for their own specific uses. We're now seeing an expanding galaxy of possible business uses.

An in-depth report from McKinsey & Company in May 2023 put the situation bluntly: “CEOs should consider exploration of generative AI a must, not a maybe... The economics and technical requirements to start are not prohibitive, while the downside of inaction could be quickly falling behind competitors.”

Companies can begin by exploring simple, easy-to-do applications that promise tangible paybacks, and then move up the sophistication ladder as desired. Just two examples of potential uses: AIs that write code can be used in pair programming, to check, improve, and speed up the work of a human developer. And while AI is already widely used in marketing and sales, generative AI could help you raise your game. Imagine you're on a sales call. You have your laptop open and an AI is listening in. The AI might guide you through the call with real-time screen prompts attuned to what the customer is saying, as well as what's in the database.

Now is the time to start your exploration, if you haven’t yet. The sooner you embrace this technology and the faster you learn to work with it, the more likely you are to get a leg up.

A final point to keep in mind is one we mentioned earlier. The future of AI is unpredictable . Change is constant and nobody knows for sure where it will take us next. This means being ready to do more than embrace the latest new thing. It means embracing change as a fundamental part of your company’s DNA. Evolve and prosper!

Bhopi Dhall and Saurajit Kanungo


May 25, 2023

Here’s Why AI May Be Extremely Dangerous—Whether It’s Conscious or Not

Artificial intelligence algorithms will soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity

By Tamlyn Hunt


"The idea that this stuff could actually get smarter than people.... I thought it was way off…. Obviously, I no longer think that," Geoffrey Hinton, one of Google's top artificial intelligence scientists, also known as "the godfather of AI," said after he quit his job in April so that he could warn about the dangers of this technology.

He’s not the only one worried. A 2023 survey of AI experts found that 36 percent fear that AI development may result in a “nuclear-level catastrophe.” Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.

As a researcher in consciousness, I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.


Why are we all so concerned? In short: AI development is going way too fast.

The key issue is the profoundly rapid improvement in conversational ability among the new crop of advanced "chatbots," or what are technically called "large language models" (LLMs). With this coming "AI explosion," we will probably have just one chance to get this right.

If we get it wrong, we may not live to tell the tale. This is not hyperbole.

This rapid acceleration promises to soon result in “artificial general intelligence” (AGI), and when that happens, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google’s AlphaZero AI learned how to play chess better than even the very best human or other AI chess players in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times over.
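To make "playing itself millions of times over" concrete, here is a minimal sketch of learning through self-play. To be clear, this is not AlphaZero (no neural network, no Monte Carlo tree search): it is simple tabular learning on the toy game of Nim, and every name and parameter in it is my own illustration. What it shares with the systems described above is the core loop: the program improves using nothing but games played against itself.

```python
# A minimal illustration of learning by self-play: tabular Monte Carlo
# learning on Nim. Players alternately remove 1-3 stones from a pile;
# whoever takes the last stone wins.
import random
from collections import defaultdict

PILE = 10          # starting number of stones
MOVES = (1, 2, 3)  # legal amounts to remove

Q = defaultdict(float)  # Q[(stones_left, move)] -> estimated value for the mover
EPS, ALPHA = 0.1, 0.5   # exploration rate, learning rate

def choose(stones):
    """Epsilon-greedy move selection from the shared value table."""
    legal = [m for m in MOVES if m <= stones]
    if random.random() < EPS:
        return random.choice(legal)
    return max(legal, key=lambda m: Q[(stones, m)])

for _ in range(20000):
    stones, history = PILE, []
    while stones > 0:              # both "players" share the same policy
        move = choose(stones)
        history.append((stones, move))
        stones -= move
    # Whoever moved last took the final stone and won. Walk backwards,
    # nudging the winner's moves toward +1 and the loser's toward -1.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward           # perspective alternates each ply

# Print the learned policy; it rediscovers the classic strategy of leaving
# the opponent a multiple of 4 stones whenever possible.
for stones in range(1, PILE + 1):
    legal = [m for m in MOVES if m <= stones]
    best = max(legal, key=lambda m: Q[(stones, m)])
    print(f"{stones:2d} stones left -> take {best}")
```

After enough self-play games the printed policy matches the known optimal strategy for this toy game, even though no human ever told the program what good play looks like.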

A team of Microsoft researchers analyzing OpenAI's GPT-4, which I think is the best of the new advanced chatbots currently available, reported in a new preprint paper that it exhibited "sparks of artificial general intelligence."

In testing, GPT-4 performed better than 90 percent of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10 percent for the previous GPT-3.5 version, which was trained on a smaller data set. The researchers found similar improvements in dozens of other standardized tests.

Most of these tests are tests of reasoning. This is the main reason why Bubeck and his team concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

This pace of change is why Hinton told the New York Times: "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary." In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI, called regulation "crucial."

Once AI can improve itself, which may be no more than a few years away, and could in fact already be here now, we will have no way of knowing what the AI will do or how we can control it. This is because superintelligent AI (which by definition can surpass humans in a broad range of activities) will—and this is what I worry about the most—be able to run circles around programmers and any other human by manipulating humans to do its will; it will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies.

This is known as the “control problem” or the “alignment problem” (see philosopher Nick Bostrom’s book Superintelligence for a good overview ) and has been studied and argued about by philosophers and scientists, such as Bostrom, Seth Baum and Eliezer Yudkowsky , for decades now.

I think of it this way: Why would we expect a newborn baby to beat a grandmaster in chess? We wouldn’t. Similarly, why would we expect to be able to control superintelligent AI systems? (No, we won’t be able to simply hit the off switch, because superintelligent AI will have thought of every possible way that we might do that and taken actions to prevent being shut off.)

Here’s another way of looking at it: a superintelligent AI will be able to do in about one second what it would take a team of 100 human software engineers a year or more to complete. Or pick any task, like designing a new advanced airplane or weapon system, and superintelligent AI could do this in about a second.

Once AI systems are built into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with the same degree of superintelligence, and will of course be able to replicate and improve themselves at a superhuman pace.

Any defenses or protections we attempt to build into these AI “gods,” on their way toward godhood, will be anticipated and neutralized with ease by the AI once it reaches superintelligence status. This is what it means to be superintelligent.

We won’t be able to control them because anything we think of, they will have already thought of, a million times faster than us. Any defenses we’ve built in will be undone, like Gulliver throwing off the tiny strands the Lilliputians used to try and restrain him.

Some argue that these LLMs are just automation machines with zero consciousness , the implication being that if they’re not conscious they have less chance of breaking free from their programming. Even if these language models, now or in the future, aren’t at all conscious, this doesn’t matter. For the record, I agree that it’s unlikely that they have any actual consciousness at this juncture—though I remain open to new facts as they come in.

Regardless, a nuclear bomb can kill millions without any consciousness whatsoever. In the same way, AI could kill millions with zero consciousness, in a myriad ways, including potentially use of nuclear bombs either directly (much less likely) or through manipulated human intermediaries (more likely).

So, the debates about consciousness and AI really don’t figure very much into the debates about AI safety.

Yes, language models based on GPT-4 and many other models are already circulating widely . But the moratorium being called for is to stop development of any new models more powerful than 4.0—and this can be enforced, with force if required. Training these more powerful models requires massive server farms and energy. They can be shut down.

My ethical compass tells me that it is very unwise to create these systems when we know already we won’t be able to control them, even in the relatively near future. Discernment is knowing when to pull back from the edge. Now is that time.

We should not open Pandora’s box any more than it already has been opened.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of  Scientific American.


Ask An Engineer

When will AI be smart enough to outsmart people?


In some ways, it’s already happening. In other ways, it depends on your definition of “outsmart.”

In a paper published last year, titled "When Will AI Exceed Human Performance? Evidence from AI Experts," elite researchers in artificial intelligence predicted that "human level machine intelligence," or HLMI, has a 50 percent chance of occurring within 45 years and a 10 percent chance of occurring within 9 years. But anyone who has ever had a conversation with Siri or Cortana (some of the virtual assistants on the market today) might argue that HLMI is already here.

Eliza Kosoy, a researcher in MIT’s Center for Brains, Minds, and Machines, points out that machines are already surpassing humans in some domains. They can beat us at many strategy games like chess, the board game Go, and some Atari video games. Machines can even perform surgery and fly airplanes. Recently, machines have started driving cars and trucks—though some of them might have issues passing driver’s ed. Despite this, Kosoy believes, “with enough data and the correct machine learning algorithms, machines can make life more enjoyable for humans.”

Kosoy’s objective is to better understand the way in which humans learn so that it can be applied to machines. She does this by studying intuitive physics and one-shot learning.

Intuitive physics refers to the way in which humans are able to predict certain dynamic changes in their physical environment and then react to those changes. For example, being able to sense the trajectory of a falling tree and therefore knowing which direction to move in to avoid being hit.

One-shot learning is the ability to learn object categories from only a few examples. This seems to be a capability that the machines are lacking…at least for the time being. Kosoy explains that the best algorithms today need to be exposed to thousands of data sets in order to learn the difference between, say, an apple and an orange. Children, however, can tell the difference after only a few introductions. Kosoy says she is “personally very curious about how children are able to learn so quickly and how we can extract that process in order to build faster machine learning that doesn’t require as much data.”
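To illustrate the contrast Kosoy describes, here is a toy sketch of the one-shot idea (illustrative only: real systems learn their feature space from data, for instance with Siamese networks, while the "features" below are invented by hand). Instead of training on thousands of labeled images, the program stores a single example per category and classifies new inputs by their similarity to the stored examples.

```python
# Toy one-shot classification: one stored example per class, nearest
# prototype by cosine similarity. The feature vectors are hypothetical
# hand-made numbers (redness, roundness, size), purely for demonstration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# One labeled example per category.
prototypes = {
    "apple":  [0.9, 0.8, 0.30],
    "orange": [0.5, 0.9, 0.35],
}

def classify(features):
    """Assign the class whose single stored example is most similar."""
    return max(prototypes, key=lambda c: cosine(prototypes[c], features))

print(classify([0.85, 0.75, 0.28]))  # -> apple
```

The hard part, which children appear to solve effortlessly, is arriving at a feature space in which a single example per category is enough; that is exactly the question Kosoy describes studying.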

Another caveat to the machine-versus-human intelligence race is the incorporation of emotion. In 1997, when the IBM computer Deep Blue beat the Russian world champion chess player Garry Kasparov, Kasparov was so distraught that he never played quite the same again. Sure, Deep Blue was able to "outsmart" Kasparov, but did its programming have the emotional intelligence to graciously show good sportsmanship so as not to crush Kasparov's spirit? To put it another way: when you have a bad day at work, can you really count on Siri to empathize? "Human empathy and kindness are an important part of intelligence," Kosoy notes. "In this domain, I doubt AI will ever outsmart us."

And of course, there’s more. What about the relationship between creativity and intelligence? Scientists in Germany have trained computers to paint in the style of Van Gogh and Picasso, and the computers’ images aren’t all that bad. But, is teaching a machine to mimic creativity true creativity?

When it comes to raw computational power, machines are well on their way. And there’s no doubt that they will continue to make life more pleasurable and easier for humans. But will a machine ever write the next Tony Award winning play? Or break into an impromptu dance in the rain when an unexpected shower strikes? It’s clear that the human brain is a magnificent thing that is capable of enjoying the simple pleasures of being alive. Ironically, it’s also capable of creating machines that, for better or worse, become smarter and more and more lifelike every day.

Thanks to Ojas Sharma, Age 13, from Bakersfield, CA for the question.

Tzu Chi Med J, v.32(4), Oct-Dec 2020

The impact of artificial intelligence on human society and bioethics

Michael Cheng-tek Tai

Department of Medical Sociology and Social Work, College of Medicine, Chung Shan Medical University, Taichung, Taiwan

Artificial intelligence (AI), known by some as industrial revolution (IR) 4.0, is going to change not only the way we do things and how we relate to others, but also what we know about ourselves. This article will first examine what AI is, discuss its impact on the industrial, social, and economic changes facing humankind in the 21st century, and then propose a set of principles for AI bioethics. IR 1.0, the IR of the 18th century, impelled a huge social change without directly complicating human relationships. Modern AI, however, has a tremendous impact on how we do things and also on the ways we relate to one another. Facing this challenge, new principles of AI bioethics must be considered and developed to provide guidelines for AI technology to observe, so that the world can benefit from the progress of this new intelligence.

WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial intelligence (AI) has many different definitions; some see it as a created technology that allows computers and machines to function intelligently. Some see it as a machine that replaces human labor to deliver faster and more effective work. Others see it as "a system" with the ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [1].

Despite the different definitions, the common understanding of AI is that it is associated with machines and computers that help humankind solve problems and facilitate working processes. In short, it is an intelligence designed by humans and demonstrated by machines. The term AI is used to describe these functions of human-made tools that emulate the "cognitive" abilities of the natural intelligence of the human mind [2].

Along with the rapid development of cybernetic technology in recent years, AI can be seen in almost every sphere of our lives, and some of it may no longer be regarded as AI because it has become so common in daily life that we are used to it, such as optical character recognition or Siri (speech interpretation and recognition interface) on our information-searching devices [3].

DIFFERENT TYPES OF ARTIFICIAL INTELLIGENCE

From the functions and abilities provided by AI, we can distinguish two different types. The first is weak AI, also known as narrow AI, which is designed to perform a narrow task such as facial recognition, an Internet search via Siri, or driving a self-driving car. Many currently existing systems that claim to use "AI" likely operate as weak AI focused on a narrowly defined specific function. Although this weak AI seems helpful to human living, some still think it could be dangerous, because weak AI could cause disruptions in the electric grid or damage nuclear power plants if it malfunctions.

The long-term goal of many researchers is to create strong AI or artificial general intelligence (AGI): the speculative intelligence of a machine that has the capacity to understand or learn any intellectual task a human being can, thus assisting humans in unraveling the problems that confront them. While narrow AI may outperform humans at a specific task, such as playing chess or solving equations, its effect remains narrow. AGI, however, could outperform humans at nearly every cognitive task.

Strong AI is a different conception of AI: that a machine can be programmed to actually be a human mind, to be intelligent in whatever it is commanded to attempt, and even to have perception, beliefs, and other cognitive capacities that are normally only ascribed to humans [4].

In summary, we can see these different functions of AI [5, 6]:

  • Automation: What makes a system or process function automatically
  • Machine learning and vision: The science of getting a computer to act through deep learning, to predict and analyze, and to see through a camera, analog-to-digital conversion, and digital signal processing
  • Natural language processing: The processing of human language by a computer program, such as spam detection or instantly converting one language into another to help humans communicate
  • Robotics: A field of engineering focusing on the design and manufacturing of robots, the so-called machine men. They are used to perform tasks for humans' convenience, or tasks too difficult or dangerous for humans, and they can operate without stopping, for example on assembly lines
  • Self-driving cars: These use a combination of computer vision, image recognition, and deep learning to build automated control into a vehicle.

DO HUMAN BEINGS REALLY NEED ARTIFICIAL INTELLIGENCE?

Is AI really needed in human society? It depends. If humans opt for a faster and more effective way to complete their work and to work constantly without taking a break, yes, it is. However, if humankind is satisfied with a natural way of living without excessive desires to conquer the order of nature, it is not. History tells us that humans are always looking for something faster, easier, more effective, and more convenient to finish the tasks they work on; therefore, the pressure for further development motivates humankind to look for new and better ways of doing things. Humankind, as homo sapiens, discovered that the tools they invented could ease many of the hardships of daily living and help them complete their work better, faster, smarter, and more effectively. The drive to create new things became the incentive of human progress. We enjoy a much easier and more leisurely life today because of the contributions of technology. Human society has been using tools since the beginning of civilization, and human progress has depended on them. People living in the 21st century do not have to work as hard as their forefathers did, because they have new machines to work for them. This may all seem well and good, but a warning came in the early 20th century as technology kept developing: Aldous Huxley cautioned in his book Brave New World that humans might step into a world in which we create a monster, or a superhuman, through the development of genetic technology.

Besides, up-to-date AI is breaking into the healthcare industry too, by assisting doctors in diagnosis, finding the sources of diseases, suggesting various treatments, performing surgery, and predicting whether an illness is life-threatening [7]. A recent study by surgeons at the Children's National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot as it performed soft-tissue surgery, stitching together a pig's bowel, and the robot finished the job better than a human surgeon, the team claimed [8, 9]. This demonstrates that robotically assisted surgery can overcome the limitations of pre-existing minimally invasive surgical procedures and enhance the capacities of surgeons performing open surgery.

Above all, we see high-profile examples of AI, including autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art, playing games (such as chess or Go), search engines (such as Google Search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays, etc. All of these have made human life much easier and more convenient, to the point that we are used to them and take them for granted. AI has become indispensable: even if it is not absolutely needed, without it our world would be in chaos in many ways today.

THE IMPACT OF ARTIFICIAL INTELLIGENCE ON HUMAN SOCIETY

Negative impact

Questions have been asked: with the progressive development of AI, human labor may no longer be needed, as everything can be done mechanically. Will humans become lazier and eventually degrade to the stage where we return to our primitive form of being? The process of evolution takes eons, so we would not notice the backsliding of humankind. But what if AI becomes so powerful that it can program itself to be in charge and disobey the orders given by its master, humankind?

Let us look at the negative impacts AI will have on human society [10, 11]:

  • A huge social change that disrupts the way we live in the human community will occur. Humankind has always had to be industrious to make its living, but with the service of AI, we can simply program a machine to do things for us without even lifting a tool. Human closeness will gradually diminish as AI replaces the need for people to meet face to face to exchange ideas. AI will stand between people, as personal gatherings will no longer be needed for communication
  • Unemployment comes next, because many jobs will be replaced by machinery. Today, many automobile assembly lines are filled with machinery and robots, forcing traditional workers out of their jobs. Even in supermarkets, store clerks will no longer be needed, as digital devices can take over human labor
  • Wealth inequality will grow, as the investors in AI will take the major share of the earnings. The gap between rich and poor will widen. The so-called "M-shaped" wealth distribution will become more pronounced
  • New issues will surface, not only in a social sense but also within AI itself, as an AI trained to carry out a given task may eventually advance to a stage where humans have no control, thus creating unanticipated problems and consequences. This refers to AI's capacity, once loaded with all the needed algorithms, to function automatically on its own course, ignoring the commands of the human controller
  • The human masters who create AI may build in racial bias or egocentric orientations to harm certain people or things. For instance, the United Nations has voted to limit the spread of nuclear power for fear of its indiscriminate use to destroy humankind or to target certain races or regions to achieve domination. Similarly, AI could be programmed to target a certain race or certain objects to carry out the commands of its programmers, thus creating world disaster.

POSITIVE IMPACT

There are, however, many positive impacts on humans as well, especially in the field of healthcare. AI gives computers the capacity to learn, reason, and apply logic. Scientists, medical researchers, clinicians, mathematicians, and engineers, working together, can design an AI aimed at medical diagnosis and treatment, thus offering reliable and safe systems of health-care delivery. As health professionals and medical researchers endeavor to find new and efficient ways of treating diseases, not only can digital computers assist in analysis, but robotic systems can also be created to perform delicate medical procedures with precision. Here, we see the contributions of AI to health care [7, 11]:

Fast and accurate diagnostics

IBM's Watson computer has been used for diagnosis, with fascinating results. Loading data into the computer instantly produces an AI diagnosis. AI can also provide various treatment options for physicians to consider. The procedure works something like this: the digital results of a physical examination are loaded into the computer, which considers all possibilities, automatically diagnoses whether or not the patient suffers from some deficiency or illness, and even suggests the various kinds of treatment available.

Socially therapeutic robots

Pets are recommended to senior citizens to ease their tension and reduce blood pressure, anxiety, and loneliness, and to increase social interaction. Robot companions are now being suggested to accompany lonely old folks and even to help with some household chores. Therapeutic robots and socially assistive robot technology help improve the quality of life for seniors and the physically challenged [12].

Reduce errors related to human fatigue

Human error in the workforce is inevitable and often costly; the greater the level of fatigue, the higher the risk of errors occurring. AI technology, however, does not suffer from fatigue or emotional distraction. It avoids such errors and can accomplish its duties faster and more accurately.

Artificial intelligence-based surgical contribution

AI-based surgical procedures are available for people to choose. Although this AI still needs to be operated by health professionals, it can complete the work with less damage to the body. The da Vinci surgical system, a robotic technology that allows surgeons to perform minimally invasive procedures, is available in most hospitals now. These systems enable a degree of precision and accuracy far greater than procedures done manually. The less invasive the surgery, the less trauma it causes, with less blood loss and less anxiety for the patient.

Improved radiology

The first computed tomography scanners were introduced in 1971. The first magnetic resonance imaging (MRI) scan of the human body took place in 1977. By the early 2000s, cardiac MRI, body MRI, and fetal imaging became routine. The search continues for new algorithms to detect specific diseases as well as to analyze the results of scans [9]. All of these are contributions of AI technology.

Virtual presence

Virtual presence technology enables the distant diagnosis of disease. The patient does not have to leave his or her bed; using a remote-presence robot, doctors can check on patients without actually being there. Health professionals can move around and interact almost as effectively as if they were present. This allows specialists to assist patients who are unable to travel.

SOME CAUTIONS TO KEEP IN MIND

Despite all the positive promise that AI provides, human experts are still essential and necessary to design, program, and operate the AI, and to prevent unpredictable errors from occurring. Beth Kindig, a San Francisco-based technology analyst with more than a decade of experience in analyzing private and public technology companies, published a free newsletter indicating that although AI holds great promise for better medical diagnosis, human experts are still needed to avoid the misclassification of unknown diseases, because AI is not omnipotent and cannot solve all problems for humankind. There are times when AI meets an impasse, and to carry on its mission it may simply proceed indiscriminately, creating more problems. Thus, vigilant watch over AI's functioning cannot be neglected. This reminder is known as the physician-in-the-loop [13].

The question of ethical AI was consequently brought up by Elizabeth Gibney in an article published in Nature, cautioning against bias and possible societal harm [14]. The Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada in 2020 raised the ethical controversies of applying AI technology, such as in predictive policing or facial recognition, where biased algorithms can end up hurting vulnerable populations [14]. For instance, such algorithms can be programmed to target a certain race or class as probable suspects of crime or as troublemakers.

THE CHALLENGE OF ARTIFICIAL INTELLIGENCE TO BIOETHICS

Artificial intelligence ethics must be developed.

Bioethics is a discipline that focuses on the relationships among living beings. Bioethics accentuates the good and the right in the biosphere and can be categorized into at least three areas: bioethics in health settings, which concerns the relationship between physicians and patients; bioethics in social settings, which concerns relationships among humankind; and bioethics in environmental settings, which concerns the relationship between man and nature, including animal ethics, land ethics, ecological ethics, etc. All of these are concerned with relationships within and among natural existences.

As AI arises, humans face a new challenge: establishing a relationship with something that is not natural in its own right. Bioethics normally discusses relationships within natural existences, whether humankind or its environment, which are part of natural phenomena. But now we must deal with something that is human-made, artificial, and unnatural, namely AI. Humans have created many things, yet never before have humans had to think about how to relate ethically to their own creation. AI by itself is without feeling or personality. AI engineers have realized the importance of giving AI the ability to discern, so that it will avoid deviant activities that cause unintended harm. From this perspective, we understand that AI can have a negative impact on humans and society; thus, a bioethics of AI becomes important to make sure that AI will not take off on its own and deviate from its originally designated purpose.

Stephen Hawking warned as early as 2014 that the development of full AI could spell the end of the human race. He said that once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate [15]. Humans, who are limited by slow biological evolution, could not compete and would be superseded. In his book Superintelligence, Nick Bostrom argues that AI poses a threat to humankind: sufficiently intelligent AI can exhibit convergent behavior, such as acquiring resources or protecting itself from being shut down, and it might harm humanity [16].

The question is: do we have to think about bioethics for humanity's own created product, which bears no bio-vitality? Can a machine have a mind, consciousness, and mental states in exactly the same sense that human beings do? Can a machine be sentient and thus deserve certain rights? Can a machine intentionally cause harm? Regulations must be contemplated as a bioethical mandate for AI production.

Studies have shown that AI can reflect the very prejudices humans have tried to overcome. As AI becomes "truly ubiquitous," it has a tremendous potential to positively impact all manner of life, from industry to employment to health care and even security. Addressing the risks associated with the technology, Janosch Delcker, Politico Europe's AI correspondent, said: "I don't think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today. … What's crucially important, I believe, is to recognize that those biases exist and that policymakers try to mitigate them" [17]. The High-Level Expert Group on AI of the European Union presented Ethics Guidelines for Trustworthy AI in 2019, suggesting that AI systems must be accountable, explainable, and unbiased. Three emphases are given:

  • Lawful: respecting all applicable laws and regulations
  • Ethical: respecting ethical principles and values
  • Robust: being adaptive, reliable, fair, and trustworthy from a technical perspective while taking into account its social environment [18].

Seven requirements are recommended [18]:

  • AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes
  • AI should be secure and accurate. It should not be easily compromised by external attacks, and it should be reasonably reliable
  • Personal data collected by AI systems should be secure and private. It should not be accessible to just anyone, and it should not be easily stolen
  • Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make
  • Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines
  • AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change”
  • AI systems should be auditable and covered by existing protections for corporate whistleblowers. The negative impacts of systems should be acknowledged and reported in advance.

From these guidelines, we can suggest that future AI must be equipped with human sensibility, or "AI humanities." To accomplish this, AI researchers, manufacturers, and all industries must bear in mind that technology exists to serve, not to manipulate, humans and their society. Bostrom and Yudkowsky listed responsibility, transparency, auditability, incorruptibility, and predictability [19] as criteria for a computerized society to think about.

SUGGESTED PRINCIPLES FOR ARTIFICIAL INTELLIGENCE BIOETHICS

Nathan Strout, a reporter covering space and intelligence systems, recently reported that the intelligence community is developing its own AI ethics. The Pentagon announced in February 2020 that it is in the process of adopting principles for using AI as guidelines for the department to follow while developing new AI tools and AI-enabled technologies. Ben Huebner, chief of the Office of the Director of National Intelligence's Civil Liberties, Privacy, and Transparency Office, said: "We're going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient" [20]. Two themes have been suggested for the AI community to think more about: explainability and interpretability. Explainability is the concept of understanding how an analytic works, while interpretability is being able to understand a particular result produced by an analytic [20].

The principles scholars have suggested for AI bioethics are all well founded. Drawing on bioethical principles from the related fields of bioethics, I suggest four principles here to guide the future development of AI technology. We must, however, bear in mind that the main attention should still be placed on humans, because AI, after all, is designed and manufactured by humans. AI proceeds according to its algorithm; it cannot empathize, nor can it discern good from evil, and it may make mistakes in its processes. The ethical quality of AI depends entirely on its human designers; it is therefore an AI bioethics and, at the same time, a trans-bioethics that bridges the human and material worlds. Here are the principles:

  • Beneficence: Beneficence means doing good; here it refers to the requirement that the purpose and functions of AI benefit the whole of human life, society, and the universe. Any AI that would perform destructive work on the bio-universe, including any life form, must be avoided and forbidden. AI scientists must understand that the only reason to develop this technology is to benefit human society as a whole, not any individual's personal gain. It should be altruistic, not egocentric, in nature
  • Value-upholding: This refers to AI's congruence with social values; in other words, the universal values that govern the order of the natural world must be observed. AI cannot be elevated above social and moral norms and must be bias-free. Scientific and technological development must serve the enhancement of human well-being; that is the chief value AI must hold dear as it progresses
  • Lucidity: AI must be transparent, without any hidden agenda. It has to be easily comprehensible, detectable, incorruptible, and perceivable. AI technology should be made available for public auditing, testing, and review, and subject to accountability standards … In high-stakes settings like diagnosing cancer from radiologic images, an algorithm that can't "explain its work" may pose an unacceptable risk. Explainability and interpretability are therefore absolutely required
  • Accountability: AI designers and developers must bear in mind that they carry a heavy responsibility for the outcome and impact of AI on the whole of human society and the universe. They must be accountable for whatever they manufacture and create (a minimal sketch of how such accountability might be recorded in software follows this list).
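As flagged above, here is one way the lucidity and accountability principles could be expressed in software: an append-only audit trail that records which model version produced which decision from which inputs. This is a minimal sketch under stated assumptions; audited_predict and the stand-in model are hypothetical, not an existing API:

```python
import hashlib
import json
import time

def audited_predict(model_fn, model_version, inputs, log_path):
    """Run a prediction and append an audit record, so any decision
    can later be traced to a model version and its exact inputs
    (accountability) and reviewed by people (lucidity)."""
    output = model_fn(inputs)
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        # Digest ties the record to the exact input payload.
        "input_digest": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
    }
    with open(log_path, "a") as f:        # append-only audit trail
        f.write(json.dumps(record) + "\n")
    return output

# Hypothetical usage with a stand-in model:
decision = audited_predict(
    lambda x: x["dose"] > 1.5, "v1.0", {"dose": 2.0}, "audit.jsonl"
)
```

Such a log supports the "understood and traced by human beings" requirement quoted earlier: an auditor can reconstruct, after the fact, exactly what the system saw and decided.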

CONCLUSION

AI is here to stay in our world, and we must try to enforce an AI bioethics of beneficence, value-upholding, lucidity, and accountability. Since AI is without a soul, its bioethics must be transcendental, to bridge the shortcoming of AI's inability to empathize. AI is a reality of the world. We must take note of what Joseph Weizenbaum, a pioneer of AI, said: we must not let computers make important decisions for us, because a machine will never possess human qualities such as the compassion and wisdom needed to morally discern and judge [10]. Bioethics is not a matter of calculation but a process of conscientization. Although AI designers can upload all the information, data, and programming needed for it to function like a human being, it is still a machine and a tool. AI will always remain AI, without authentic human feelings and the capacity to commiserate. Therefore, AI technology must progress with extreme caution. As von der Leyen said in the White Paper on AI – A European Approach to Excellence and Trust: "AI must serve people, and therefore, AI must always comply with people's rights…. High-risk AI that potentially interferes with people's rights has to be tested and certified before it reaches our single market" [21].

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

REFERENCES


IELTS essay, topic: Artificial Intelligence will take over the role of teachers (agree/disagree)

by Simone Braverman

This is a model response to a Writing Task 2 topic from High Scorer’s Choice IELTS Practice Tests book series (reprinted with permission). This answer is close to IELTS Band 9.

Set 6 Academic book, Practice Test 26

Writing Task 2

You should spend about 40 minutes on this task.

Write about the following topic:

Some people feel that with the rise of artificial intelligence, computers and robots will take over the roles of teachers. To what extent do you agree or disagree with this statement?

Give reasons for your answer and include any relevant examples from your knowledge or experience.

You should write at least 250 words.


Sample Band 9 Essay

With ever-increasing technological advances, computers and robots are replacing human roles in different areas of society. This trend can also be seen in education, where interactive programs can enhance the educational experience for children and young adults. Whether this revolution can take over the role of the teacher completely, however, is debatable, and I oppose the idea, as it is unlikely to serve students well.

The roles of computers and robots can be seen in many areas of the workplace. Classic examples are car factories, where many of the repetitive precision jobs done on assembly lines have been performed by robots for years, and medicine, where diagnosis and treatment, including operations, have long been assisted by computers. According to the media, it also won't be long until we have cars that drive themselves.

It has long been debated whether robots and computers can do the same in education. Programs can now adapt to so many situations that a system could already be set up with the required knowledge of a teacher, along with the ability to predict and answer the questions students might ask. In fact, due to the nature of computers, their knowledge can far exceed a teacher's in breadth, as a computer can hold equal knowledge of all the subjects taught in school, as opposed to a single teacher's specialisation. It seems very likely, therefore, that computers and robots should be able to deliver the lessons that teachers can, including various ways of differentiating and presenting materials to suit students of varying abilities and ages.

Where I am not convinced is in the pastoral role of teachers. Part of teaching is managing behaviour and showing empathy with students, so that they feel cared for and important. Even if a robot or computer can be programmed to imitate these actions, students will likely respond in a different way when they know an interaction is part of an algorithm rather than based on human emotion.

Therefore, although I feel that computers should be able to perform a lot of the roles of teachers in the future, they should be used as educational tools to assist teachers and not to replace them. In this way, students would receive the benefits of both ways of instruction.



