Citing an Artificial Intelligence
Submitting writing or other content generated by an artificial intelligence as your own work is a violation of Columbia College Chicago’s academic integrity policy.
Check your course’s syllabus or ask your instructor for their policy on using artificial intelligence in your work.
If you do use generative AI in your work, be sure to cite its use.
Remember that citing the use of an AI is not the same as citing the source of a piece of information in its output.
Be aware of the following shortcomings of current generative AI technology:
They may not provide specific sources or citations for the information they output. Their responses are an amalgam of content pulled from across the Internet.
It has been demonstrated that when an AI does provide academic citations, they are sometimes fake or “hallucinations.”
The Internet content that AI is trained on is dated. The information that an AI is using to generate its responses is sometimes several years old.
AI can demonstrate the same social and intellectual biases of its source material.

MLA offers citation guidelines for the following scenarios; visit the link above for detailed explanations.
When you paraphrase or quote the output of a generative AI in your text, use the following format:
Works-cited-list entry format:
“The prompt you used” prompt. Title of AI, Version, Publisher of the AI, Date the content was generated, URL of AI tool.
Works-cited-list entry example:
“Describe the symbolism of the green light in the book The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
When including an AI-generated image in your work, use the following format if a caption is needed:
Caption format:
Fig. Number of the figure. “Description of the prompt you used” prompt, Title of AI, Version, Publisher of the AI, Date the work was generated, URL of AI tool.
Caption example:
Fig. 1. “Pointillist painting of a sheep in a sunny field of blue flowers” prompt, DALL-E, version 2, OpenAI, 8 Mar. 2023, labs.openai.com/.
If you have an AI generate a work and you wish to cite it, use the following format:
Citation format:
“Title or brief description of the work” prompt description. Title of AI, Version, Publisher of AI, Date work was generated, URL of AI tool.
Citation example:
“Upon the shore . . .” Shakespearean sonnet about seeing the ocean. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
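The MLA templates above are fill-in-the-blank patterns, so they can be sketched as a small string-formatting helper. This is purely an illustration, not an official MLA tool; the function and field names are our own labels for the parts of the works-cited format shown above:

```python
# Hypothetical helper illustrating the MLA works-cited pattern above.
# The parameter names are our own labels, not MLA terminology.

def mla_ai_entry(prompt: str, tool: str, version: str,
                 publisher: str, date: str, url: str) -> str:
    """Assemble an MLA-style works-cited entry for quoted or
    paraphrased generative-AI output."""
    return (f"\u201c{prompt}\u201d prompt. {tool}, {version}, "
            f"{publisher}, {date}, {url}.")

entry = mla_ai_entry(
    prompt=("Describe the symbolism of the green light in the book "
            "The Great Gatsby by F. Scott Fitzgerald"),
    tool="ChatGPT",
    version="13 Feb. version",
    publisher="OpenAI",
    date="8 Mar. 2023",
    url="chat.openai.com/chat",
)
print(entry)
```

Note that plain strings cannot convey italics; in a formatted document, the title of the AI tool would be italicized.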

APA recommends describing how you used a generative AI in a method section or the introduction of your paper. In the text of your paper, you should include the prompt used along with the relevant text from the AI’s response.
For long responses, consider including the full text of the response in an appendix or with online supplemental materials.
Example from APA:
When prompted with “Is the left brain right brain divide real or a metaphor?” the ChatGPT-generated text indicated that although the two brain hemispheres are somewhat specialized, “the notion that people can be characterized as ‘left-brained’ or ‘right-brained’ is considered to be an oversimplification and a popular myth” (OpenAI, 2023).
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
Use the following formatting for reference and in-text citations:
APA Reference format:
Author of AI. (Year of the version). Title of AI (Version number or date) [Description of AI model]. URL of AI tool
APA Reference example:
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
Parenthetical citation format:
(Author of AI, Year of version)
Parenthetical citation example:
(OpenAI, 2023)
Narrative citation format:
Author of AI (Year of version)
Narrative citation example:
OpenAI (2023)
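As a sketch, the APA reference and parenthetical patterns above can be expressed the same way. These are our own illustrative helpers, not an official APA tool, and the parameter names are our own labels:

```python
# Hypothetical helpers mirroring the APA patterns above.

def apa_ai_reference(author: str, year: int, title: str,
                     version: str, description: str, url: str) -> str:
    """Assemble an APA-style reference entry for a generative AI tool."""
    return f"{author}. ({year}). {title} ({version}) [{description}]. {url}"

def apa_parenthetical(author: str, year: int) -> str:
    """Assemble an APA parenthetical in-text citation."""
    return f"({author}, {year})"

ref = apa_ai_reference("OpenAI", 2023, "ChatGPT", "Mar 14 version",
                       "Large language model", "https://chat.openai.com/chat")
print(ref)
print(apa_parenthetical("OpenAI", 2023))
```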
The Chicago Manual of Style

The Chicago Manual of Style offers the following interim guidance for citing a generative AI in their two methods of citation (notes and author-date). See the link above for more information.
According to their current advice, you should not include a generative AI in your bibliography or reference list. As the generated output of AI chatbots is not retrievable to others, it’s considered more like a type of personal communication. However, you must cite the use of a generative AI's output in the text of your paper in either a note or a parenthetical text reference.
Numbered footnote or endnote format:
Note number. Title of the AI, Date the text was generated, Publisher of the AI, URL of AI tool.
Numbered footnote or endnote example:
1. Text generated by ChatGPT, March 7, 2023, OpenAI, https://chat.openai.com/chat.
If you did not include the prompt you used in the text of your paper, you can include it in your note like so:
1. ChatGPT, response to “Explain how to make pizza dough from common household ingredients,” March 7, 2023, OpenAI.
When using the author-date citation style, put any information not mentioned in your text in a parenthetical text citation.
Parenthetical text reference format:
(Title of the AI, Date text was generated)
Parenthetical text reference example:
(ChatGPT, March 7, 2023)
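Chicago’s note and author-date patterns follow the same template logic. A minimal sketch, illustrative only (the helper names are ours), assuming the formats above:

```python
# Hypothetical helpers for the Chicago-style patterns above.

def chicago_note(number: int, tool: str, date: str,
                 publisher: str, url: str) -> str:
    """Assemble a numbered Chicago-style footnote or endnote."""
    return f"{number}. Text generated by {tool}, {date}, {publisher}, {url}."

def chicago_parenthetical(tool: str, date: str) -> str:
    """Assemble a Chicago author-date parenthetical text reference."""
    return f"({tool}, {date})"

print(chicago_note(1, "ChatGPT", "March 7, 2023", "OpenAI",
                   "https://chat.openai.com/chat"))
print(chicago_parenthetical("ChatGPT", "March 7, 2023"))
```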
Creative Commons License

This research guide by Columbia College Chicago Library is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
- Last Updated: Nov 15, 2023 11:08 AM
- URL: https://libguides.colum.edu/citingsources
Organizing Your Social Sciences Research Paper
Generative AI and Writing
Research Writing and Generative AI Large Language Models
A new and rapidly evolving phenomenon impacting higher education is the availability of generative artificial intelligence systems [such as Chat Generative Pre-trained Transformer, or ChatGPT]. They have been developed by scanning text from millions of books, web sites, and other sources so that algorithms within the system can learn patterns in how words and sentences are constructed. This allows these systems to respond to a broad range of questions and prompts, generate stories, compose essays, create lists, and more. Generative AI systems are not actually thinking or understanding like a human, but they are good at mimicking written text based on what they have learned from the input data used to build and enhance their algorithms, protocols, and standards.
As such, generative AI systems [a.k.a. “Large Language Models,” or LLMs] have emerged, depending on one’s perspective, as either a threat or an opportunity in how faculty create or modify class assignments and how students approach the task of writing a college-level research paper. We are in the very earliest stages of understanding how LLMs may impact learning outcomes associated with information literacy, i.e., fluency in applying the skills needed to effectively identify, gather, organize, critically evaluate, interpret, and report information. Before this is well understood, however, these systems will continue to improve and become more sophisticated, as will the academic integrity detection programs used to identify AI-generated text in student papers.
When you are assigned to write a research paper, it is up to your professor whether using ChatGPT is permitted. Some professors embrace these systems as part of an in-class writing exercise to help students understand their limitations, while others warn against their use because of their current defects and biases. That said, the future of information seeking using LLMs means that the intellectual spaces associated with research and writing will likely collapse into a single online environment in which students will be able to perform in-depth searches for information connected to the Libraries' many electronic resources.
As LLMs become more sophisticated, here are some potential ways generative artificial intelligence programs could facilitate organizing and writing your social sciences research paper:
- Explore a Topic – develop a research problem related to the questions you have about a general subject of inquiry.
- Formulate Ideas – obtain background information and explore ways to place the research problem within specific contexts.
- Zero in on Specific Research Questions and Related Sub-questions – create a query-based framework for how to investigate the research problem.
- Locate Sources to Answer those Questions – begin the initial search for sources concerning your research questions.
- Obtain Summaries of Sources – build a synopsis of the sources to help determine their relevance to the research questions underpinning the problem.
- Outline and Structure an Argument – present information that assists in formulating an argument or an explanation for a stated position.
- Draft and Iterate on a Final Essay – create a final essay based on a process of repeating the action of text generation on the results of each prior action [i.e., ask follow up questions to build on or clarify initial results].
Despite their power to create text, generative AI systems are far from perfect, and their ability to “answer” questions can be misleading, deceptive, or outright false. Described below are some current problems adapted from an essay written by Bernard Marr at Forbes and reiterated by researchers studying LLMs and writing. These issues focus on problems with using ChatGPT, but they are applicable to any current Large Language Model program.
- Not Connected to the Internet. Although generative AI systems may appear to possess a significant amount of information, most LLMs are currently not mining the Internet for that information [note that this is changing quickly; for example, an AI chatbot feature is now embedded in Microsoft's Bing search engine, though you may need to pay for this feature in the future]. Without a connection to the Internet, LLMs cannot provide real-time information about a topic. As a result, the scope of research is limited, and any new developments in a particular field of study will not be included in the responses. In addition, these LLMs can only accept input in text format; other forms of knowledge, such as videos, web sites, audio recordings, or images, are excluded from the inquiry prompts.
- The Time-Consuming Consequences of AI-Generated Hallucinations. If proofreading AI-generated text reveals nonsensical information or an invalid list of scholarly sources [e.g., the title of a book is not in the library catalog or found anywhere online], you obviously must correct these errors before handing in your paper. The challenge is that you must replace nonsensical or false statements with accurate information, and you must support any AI-generated declarative statements [e.g., "Integrated reading strategies are widely beneficial for children in middle school"] with citations to valid academic research that supports the argument. This requires reviewing the literature to locate real sources and real information, which is time consuming and challenging if you didn't actually compose the text. And, of course, if your professor asks you to show what page in a book or journal article you got the information from to support a generated statement of fact, that's a problem. Given this, ChatGPT and other systems should be viewed as a help tool and never a shortcut to actually doing the work of investigating a research problem.
- Trouble Generating Long-Form, Structured Content. ChatGPT and other systems are inadequate at producing long-form content that follows a particular structure, format, or narrative flow. The models are capable of creating coherent and grammatically correct text and, as a result, they are currently best suited for generating shorter pieces of content like summaries of topics, bullet-point lists, or brief explanations. However, they are poor at creating a comprehensive, coherent, and well-structured college-level research paper.
- Limitations in Handling Multiple Tasks. Generative AI systems perform best when given a single task or objective to focus on. If you ask LLMs to perform multiple tasks at the same time [e.g., a question that includes multiple sub-questions], the models struggle to prioritize them, which will lead to a decrease in the accuracy and reliability of the results.
- Biased Responses. This is important to understand. While ChatGPT and other systems are trained on a large set of text data, that data has not been widely shared so that it can be reviewed and critically analyzed. You can ask the systems what sources they are using, but any responses cannot be independently verified. Therefore, it is not possible to identify any hidden biases or prejudices that exist within the data [i.e., the system doesn't cite its sources]. This means the LLM may generate responses that are biased, discriminatory, or inappropriate in certain contexts.
- Accuracy Problems or Grammatical Issues. The sensitivity to typographical errors, grammatical errors, and misspellings is currently very limited in LLMs. The models may produce responses that are technically correct but not entirely accurate in terms of context or relevance. This limitation can be particularly challenging when processing complex or specialized information where accuracy and precision are essential. Given this, never take the information that is generated at face value; always proofread and verify the results!
As they currently exist, ChatGPT and other Large Language Models truly are artificial in their intelligence. They cannot express thoughts, feelings, or other affective constructs that help a reader intimately engage with the author's written words; the output contains text, but the systems are incapable of producing creative expressions or thoughts, such as conveying the idea of willful deception and other narrative devices that you might find in a poem or song lyric. Although these devices are rarely used in academic writing, this does illustrate that personalizing your research [e.g., sharing a personal story relating to the significance of the topic or being asked to write a reflective paper] cannot be generated artificially.
Ethical Considerations
In the end, the ethical choice of whether to use ChatGPT or similar platforms to help write your research paper is up to you; it’s an introspective negotiation between you and your conscience. As noted by Bjork (2023) and others, though, it is important to keep in mind the overarching ethical problems related to the use of LLMs. These include:
- LLMs Do Not Understand the Meaning of Words. Without meaning as a guide, these systems use algorithms that rely on formulating context clues, stylistic structures, writing forms, linguistic patterns, and word frequency in determining how to respond to queries. This functionality means that, by default, LLMs perpetuate dominant modes of writing and language use while minimizing or hiding less common ones. As a result,...
- LLMs Prioritize Standard American English. White English-speaking men have dominated most writing-intensive sectors of the knowledge economy, such as journalism, law, politics, medicine, academia, and, perhaps most importantly, computer programming. As a result, writers and speakers of African American English, Indigenous Englishes, and other sociolinguistic dialects that use forms of language with their own grammar, lexicon, slang, and history of resistance within the dominant culture are penalized and shamed for writing as they speak. The default functionality and outputs of LLMs, therefore, can privilege forms of English writing developed primarily by the dominant culture.
- LLMs Do Not Protect User Privacy. ChatGPT and other platforms record and retain the entire content of your conversations with the systems. This means any information you enter, including personal information or, for example, any documents you ask the systems to revise, is retained and cannot be removed. Although the American Data Privacy and Protection Act was being considered within the 117th Congress, there is no federal privacy law that regulates how these for-profit companies can store, use, or possibly sell information entered into their platforms. Given this, it is highly recommended that personal information never be included in any queries.
NOTE: If your professor allows you to use generative AI programs or you decide on your own to use an LLM for a writing assignment, then this fact should be cited in your research paper, just as any other source of information used to write your paper should be acknowledged. Why? Because, unlike grammar or citation tools such as Grammarly or Citation Machine that correct text you've already written, generative AI programs create new content that is not in your own words. Currently, the American Psychological Association (APA), the Modern Language Association (MLA), and the Chicago Manual of Style provide recommendations on how to cite generated text.
ANOTHER NOTE: LLMs have significant deficiencies, so using them still requires thorough proofreading and source verification, an ability to discern quality information from misleading, false, irrelevant, or even made-up information, a capacity to interpret and critically analyze what you have found, and the skills required to extrapolate meaning from the research you have conducted. For help with any or all of these elements of college-level research and writing, you should still contact a librarian.
YET ANOTHER NOTE: Researchers are finding early evidence suggesting that over-reliance on ChatGPT and other LLM platforms for even the simplest writing task may, over time, undermine confidence in a student's own writing ability. Just like getting better at giving a class presentation or working on a group project, good writing is an acquired skill that can only be improved upon through the act of doing; the more you write, the more comfortable and confident you become expressing your own ideas, opinions, and judgments applied to the problem you have researched. Substituting LLMs for your own voice can inhibit your growth as a writer, so give yourself room to write creatively and with confidence by accepting LLMs as a tool rather than a definitive source of text.
For more information about Generative AI platforms and guidance on their ethical use in an academic setting, review the USC Libraries' Using Generative AI in Research guide for students and faculty.
Introduction to ChatGPT for Library Professionals. Mike Jones and Curtis Fletcher. USC Libraries, Library Forum, May 18, 2023; Aikins, Ross and Albert Kuo. “What Students Said About the Spring of ChatGPT.” Inside Higher Education, September 3, 2023; Baugh, John. “Linguistic Profiling across International Geopolitical Landscapes.” Dædalus 152 (Summer 2023): 167-177; ChatGPT. Library, Wesleyan University; Bjork, Collin. "ChatGPT Threatens Language Diversity." The Conversation, February 9, 2023; Understanding AI Writing Tools and their Uses for Teaching and Learning at UC Berkeley. Center for Teaching & Learning, University of California, Berkeley; Ellis, Amanda R., and Emily Slade. "A New Era of Learning: Considerations for ChatGPT as a Tool to Enhance Statistics and Data Science Education." Journal of Statistics and Data Science Education 31 (2023): 1-10; Ray, Partha Pratim. “ChatGPT: A Comprehensive Review on Background, Applications, Key Challenges, Bias, Ethics, Limitations and Future Scope.” Internet of Things and Cyber-Physical Systems (2023); Uzun, Levent. "ChatGPT and Academic Integrity Concerns: Detecting Artificial Intelligence Generated Content." Language Education and Technology 3, no. 1 (2023); Lund, Brady D., et al. “ChatGPT and a New Academic Reality: Artificial Intelligence Written Research Papers and the Ethics of the Large Language Models in Scholarly Publishing.” Journal of the Association for Information Science and Technology 74 (February 2023): 570-581; Rasul, Tareq, et al. "The Role of ChatGPT in Higher Education: Benefits, Challenges, and Future Research Directions.” Journal of Applied Learning and Teaching 6 (2023); Rudolph, Jürgen, Samson Tan, and Shannon Tan. "ChatGPT: Bullshit Spewer or the End of Traditional Assessments in Higher Education?" Journal of Applied Learning and Teaching 6, no. 1 (2023): 342-362; Marr, Bernard. “The Top 10 Limitations Of ChatGPT.” Forbes (March 3, 2023): https://www.forbes.com/sites/bernardmarr/2023/03/03/the-top-10-limitations-of-chatgpt/?sh=41ae78e8f355; Thinking about ChatGPT? Academic Integrity at UBC, Office of the Provost and Vice-President Academic, University of British Columbia.
- Last Updated: Oct 10, 2023 1:30 PM
- URL: https://libguides.usc.edu/writingguide

Artificial Intelligence in Cardiothoracic Imaging, pp. 567–574
How to Write and Review an Artificial Intelligence Paper
- Thomas Weikert
- Tim Leiner
- First Online: 22 April 2022
Part of the Contemporary Medical Imaging book series (CMI)
The purpose of this chapter is to provide medical imaging professionals with the tools to write a research article in the field of artificial intelligence. At the same time, it can help readers to assess the quality of a publication. To this end, the chapter discusses 12 key considerations in detail, ranging from defining a research objective to public sharing of software code. Furthermore, a checklist of 25 items based on the standard structure of a research article is derived from these considerations to provide writers and readers with an easily applicable tool.
- Cardiovascular
- Artificial intelligence
- Machine learning
- Quality standards
Author information
Authors and Affiliations
Department of Radiology, University Hospital Basel, University of Basel, Basel, Switzerland
Thomas Weikert
Department of Radiology, Mayo Clinic, Rochester, MN, USA
Tim Leiner
Corresponding author
Correspondence to Thomas Weikert.
Editor information
Editors and Affiliations
Division of Cardiothoracic Imaging, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
Carlo N. De Cecco, MD
Marly van Assen, PhD
Tim Leiner, MD
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter.
Weikert, T., Leiner, T. (2022). How to Write and Review an Artificial Intelligence Paper. In: De Cecco, C.N., van Assen, M., Leiner, T. (eds) Artificial Intelligence in Cardiothoracic Imaging. Contemporary Medical Imaging. Humana, Cham. https://doi.org/10.1007/978-3-030-92087-6_53
Download citation
DOI: https://doi.org/10.1007/978-3-030-92087-6_53
Published: 22 April 2022
Publisher Name: Humana, Cham
Print ISBN: 978-3-030-92086-9
Online ISBN: 978-3-030-92087-6
eBook Packages: Medicine, Medicine (R0)
Evolutions and Trends of Artificial Intelligence (AI): Research, Output, Influence and Competition
Library Hi Tech
ISSN : 0737-8831
Article publication date: 22 July 2021
Issue publication date: 27 May 2022
Purpose
This paper sheds light on the nature of artificial intelligence (AI) development, serving as a starting point for helping to advance that development.
Design/methodology/approach
This work reveals the evolutions and trends of AI from four dimensions: research, output, influence and competition through leveraging academic knowledge graph with 130,750 AI scholars and 43,746 scholarly articles.
Findings
The authors find that the “research convergence” phenomenon is becoming more evident in current AI research, as scholars in different regions show highly similar research interests. They notice that Pareto's principle applies to AI scholars' outputs, and that output has been increasing at an explosive rate over the past two decades. They discover that top works dominate AI academia, attracting considerable attention. Finally, the authors delve into AI competition, which accelerates technology development, talent flow, and collaboration.
Originality/value
The work sheds light on the nature of AI development and serves as a starting point for helping to advance it. It offers a more comprehensive and profound understanding of the evolutions and trends of AI, bridging the gap between literature research and AI development and informing how AI development and its strategy formulation can be promoted.
- Science of science
- Artificial intelligence
- Knowledge graph
Acknowledgements
This article was supported by the National Natural Science Foundation of China (61941113, 82074580, 61806111), the Fundamental Research Fund for the Central Universities (30918015103, 30918012204), the Nanjing Science and Technology Development Plan Project (201805036), the China Academy of Engineering Consulting Research Project (2019-ZD-1-02-02), the National Social Science Foundation (18BTQ073), NSFC for Distinguished Young Scholars under Grant No. 61825602, and the National Key R&D Program of China under Grant No. 2020AAA010520002.
Shao, Z. , Yuan, S. , Wang, Y. and Xu, J. (2022), "Evolutions and trends of artificial intelligence (AI): research, output, influence and competition", Library Hi Tech , Vol. 40 No. 3, pp. 704-724. https://doi.org/10.1108/LHT-01-2021-0018
Emerald Publishing Limited
Copyright © 2021, Emerald Publishing Limited

Artificial Intelligence — Template for authors
Related Journals

Natural Language Engineering
Cambridge University Press
Categories: Language and Linguistics, Linguistics and Language, Artificial Intelligence and Software +2 more

Artificial Intelligence Review
Categories: Language and Linguistics, Linguistics and Language, and Artificial Intelligence

Journal of Pragmatics

Journal of Memory and Language
Categories: Language and Linguistics, Linguistics and Language, Experimental and Cognitive Psychology, Neuropsychology and Physiological Psychology, and Artificial Intelligence
Journal Performance & Insights

- CiteRatio of this journal has increased by 8% over recent years.
- This journal's CiteRatio is in the top 10 percentile category.
- SJR of this journal has decreased by 23% over recent years.
- This journal's SJR is in the top 10 percentile category.
- SNIP of this journal has increased by 1% over recent years.
- This journal's SNIP is in the top 10 percentile category.
Artificial Intelligence

All company, product and service names used in this website are for identification purposes only. All product names, trademarks and registered trademarks are property of their respective owners.
Use of these names, trademarks and brands does not imply endorsement or affiliation. Disclaimer Notice
Artificial Intelligence, which commenced publication in 1970, is now the generally accepted international forum for the publication of results of current research in this field. The journal welcomes basic and applied papers describing mature work involving computational accounts of aspects of intelligence. Specifically, it welcomes papers on:
- automated reasoning
- computational theories of learning
- heuristic search
- knowledge representation
- qualitative physics
- signal, image and speech understanding
- robotics
- natural language understanding
- software and hardware architectures for AI
The journal reports results achieved; proposals for new ways of looking at AI problems must include demonstrations of effectiveness. From time to time, the journal publishes survey articles.
Language and Linguistics
Linguistics and Language
Arts and Humanities
Top papers written in this journal

SciSpace is a very innovative solution to the formatting problem, and existing providers such as Mendeley or Word have not really evolved in recent years.
- Andreas Frutiger, Researcher, ETH Zurich, Institute for Biomedical Engineering
What to expect from SciSpace?
Speed and accuracy over MS Word.
With SciSpace, you do not need a word template for Artificial Intelligence.
It automatically formats your research paper to Elsevier formatting guidelines and citation style.
You can download a submission ready research paper in pdf, LaTeX and docx formats.

Time taken to format a paper and Compliance with guidelines
Plagiarism Reports via Turnitin
SciSpace has partnered with Turnitin, the leading provider of Plagiarism Check software.
Using this service, researchers can compare submissions against more than 170 million scholarly articles and a database of 70+ billion current and archived web pages. How does the Turnitin integration work?

Freedom from formatting guidelines
One editor, 100K journal formats – world's largest collection of journal templates
With such a huge verified library, what you need is already there.

Easy support from all your favorite tools
Artificial Intelligence format uses elsarticle-num citation style.
Automatically format and order your citations and bibliography in a click.
SciSpace allows imports from all reference managers like Mendeley, Zotero, Endnote, Google Scholar etc.
Frequently asked questions
1. Can I write Artificial Intelligence in LaTeX?
You don't have to. Our tool has been designed to help you focus on writing. You can write your entire paper as per the Artificial Intelligence guidelines and auto-format it.
2. Do you follow the Artificial Intelligence guidelines?
Yes, the template is compliant with the Artificial Intelligence guidelines. Our experts at SciSpace ensure that. If there are any changes to the journal's guidelines, we'll change our algorithm accordingly.
3. Can I cite my article in multiple styles in Artificial Intelligence?
Of course! We support all the top citation styles, such as APA style, MLA style, Vancouver style, Harvard style, and Chicago style. For example, when you write your paper and hit autoformat, our system will automatically update your article as per the Artificial Intelligence citation style.
4. Can I use the Artificial Intelligence templates for free?
Sign up for our free trial, and you'll be able to use all our features for seven days. You'll see how helpful they are and how inexpensive they are compared to other options, especially for Artificial Intelligence.
5. Can I use a manuscript in Artificial Intelligence that I have written in MS Word?
Yes. You can choose the right template, copy-paste the contents from the Word document, and click on auto-format. Once you're done, you'll have a publish-ready Artificial Intelligence paper that you can download at the end.
6. How long does it usually take you to format my papers in Artificial Intelligence?
It only takes a matter of seconds to edit your manuscript. Besides that, our intuitive editor saves you from writing and formatting it in Artificial Intelligence.
7. Where can I find the template for the Artificial Intelligence?
It is possible to find the Word template for any journal on Google. However, why use a template when you can write your entire manuscript on SciSpace, auto-format it as per Artificial Intelligence's guidelines, and download it in Word, PDF, and LaTeX formats? Give us a try!
8. Can I reformat my paper to fit the Artificial Intelligence's guidelines?
Of course! You can do this using our intuitive editor. It's very easy. If you need help, our support team is always ready to assist you.
9. Is Artificial Intelligence an online tool, or is there a desktop version?
SciSpace's Artificial Intelligence is currently available as an online tool. We're developing a desktop version, too. You can request (or upvote) any features that you think would be helpful for you and other researchers in the "feature request" section of your account once you've signed up with us.
10. I cannot find my template in your gallery. Can you create it for me like Artificial Intelligence?
Sure. You can request any template and we'll have it set up within a few days. You can find the request box in the Journal Gallery on the right side bar under the heading, "Couldn't find the format you were looking for like Artificial Intelligence?"
11. What is the output that I would get after using Artificial Intelligence?
After writing your paper and auto-formatting it in Artificial Intelligence, you can download it in multiple formats, viz., PDF, Docx, and LaTeX.
12. Is Artificial Intelligence's impact factor high enough that I should try publishing my article there?
To be honest, the answer is no. The impact factor is just one of the many elements that determine the quality of a journal. A few of these factors include the review board, rejection rates, frequency of inclusion in indexes, and the Eigenfactor. You need to assess all these factors before you make your final call.
13. What is the Sherpa RoMEO archiving policy for Artificial Intelligence?
Sherpa RoMEO describes journal archiving policies in terms of:
- Pre-prints, the version of the paper before peer review, and
- Post-prints, the version of the paper after peer review, with revisions having been made.
14. What are the most common citation types in Artificial Intelligence?
15. How do I submit my article to Artificial Intelligence?
16. Can I download Artificial Intelligence in EndNote format?
Yes, SciSpace provides this functionality. After signing up, you would need to import your existing references from Word or Bib file to SciSpace. Then SciSpace would allow you to download your references in Artificial Intelligence Endnote style according to Elsevier guidelines.
with Artificial Intelligence format applied
Fast and reliable, built for compliance.
Instant formatting to 100% publisher guidelines on SciSpace.

No word template required
SciSpace automatically formats your research paper to Artificial Intelligence formatting guidelines and citation style.

Verified journal formats
One editor, 100K journal formats. With the largest collection of verified journal formats, what you need is already there.

Trusted by academicians

I spent hours with MS Word on reformatting. It was frustrating, plain and simple. With SciSpace, I can draft my manuscripts, and once a manuscript is finished I can just submit. In case I have to submit to another journal, it is really just a button click instead of an afternoon of reformatting.


APA Citations (7th ed.)
- General Formatting
- Student Paper Elements - Title Page
- Professional Paper Elements - Title Page
- In-text Citation Basics
- In-text Citation Author Rules
- Citing Multiple Works
- Personal Communications
- Classroom or Intranet Resources
- Secondary Sources
- Periodicals
- Books & Reference Works
- Edited Book Chapters & Entries in Reference Works
- Reports & Gray Literature
- Conference Sessions & Presentations
- Dissertations & Theses
- Data Sets & Software
- Tests, Scales, & Inventories
- Audiovisual Works
- Audio Works
- Visual Works
- Social Media
- Webpages & Websites
Artificial Intelligence
- Basics & Formatting
- Avoiding Plagiarism
AI tools such as ChatGPT may be used to create text and to facilitate research, but not to write the full text of a paper or other research project.
Instructors have differing opinions about how or even whether students should use ChatGPT and other AI. As always, check with your instructor before utilizing AI for your academic projects.
Citing Artificial Intelligence
The official APA Style Blog uses an adapted version of the reference template for software to cite AI. Because these guidelines are based on the software template, they can be adapted to note the use of other large language models (e.g., Bard), algorithms, and similar software.
Common software and mobile apps mentioned in the text, but not paraphrased or quoted, do not need citations, nor do programming languages. "Common" is relative to your field and audience.
Include reference list entries and in-text citations if you have paraphrased or quoted from any software or app, or if you are mentioning software, apps, and apparatuses or equipment with limited distribution.
Artificial Intelligence/Software Template

Artificial Intelligence Example
- Describe how you used the tool in your Method section or in a comparable section of your paper
- For literature reviews or other types of essays or response or reaction papers, you might describe how you used the tool in your introduction
- In your text, provide the prompt you used and then any portion of the relevant text that was generated in response
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
Parenthetical citations: (OpenAI, 2023)
Narrative citations: OpenAI (2023)
- Last Updated: Sep 6, 2023 11:21 AM
- URL: https://bvu.libguides.com/apa
Artificial Intelligence Research Paper
This sample artificial intelligence research paper features 8,200 words (approx. 28 pages), an outline, and a bibliography with 31 sources. Browse other research paper examples for more inspiration.
I. Introduction

II. History of Artificial Intelligence
III. Knowledge Representation
a. Symbolic Artificial Intelligence (Top-Down Approach)
i. Predicate Logic or First-Order Logic
ii. Rule-Based Systems (Production Systems)
iii. Fuzzy Logic
iv. Semantic Networks, Frames, and Scripts
v. Bayesian Networks
b. Subsymbolic Artificial Intelligence (Bottom-Up Approach)
i. Neural Networks
ii. Evolutionary Computation
iii. Particle Swarm Optimization
iv. Behavior-Based Artificial Intelligence
IV. Artificial Intelligence in Psychology
a. SOAR (State Operator and Result)
b. PDP (Parallel Distributed Processing)
c. ACT-R (Adaptive Control of Thought-Rational)
V. Criticisms of Artificial Intelligence
a. Weak AI Versus Strong AI
i. Criticisms of the Strong AI View
ii. Machines Cannot Have Understanding—The Chinese Room Argument
VI. Summary
VII. Bibliography
Introduction
Artificial intelligence (AI) comprises a vast interdisciplinary field, which has benefited from its beginning from disciplines such as computer science, psychology, philosophy, neuroscience, mathematics, engineering, linguistics, economics, education, biology, control theory, and cybernetics. Although the goals of AI are as wide as the field is interdisciplinary, AI’s main goal is the design and construction of automated systems (computer programs and machines) that perform tasks considered to require intelligent behavior (i.e., tasks that require adaptation to complex and changing situations).
The role of psychology in artificial intelligence is twofold. On the one hand, psychologists can help in the development and construction of AI systems because knowledge of cognitive and reasoning processes such as perception, language acquisition, and social interaction is crucial to AI. AI has much to learn from humans because we are the best model of intelligent behavior we know, and because many AI machines will have to interact with us. On the other hand, psychologists could benefit from the AI techniques and tools to develop further their own discipline using AI tools such as modeling and simulation of theories, expert systems in diagnosis and organization, and interactive techniques in education, just to mention a few.
History of Artificial Intelligence
It seems that the desire to build machines that behave intelligently has always been a part of human history. For example, around 2500 BCE in Egypt, citizens and peregrines turned to oracles (statues with priests hidden inside) for advice. Homer’s Iliad, a remarkable work of literature from ancient Greece, narrates how the Greek god Hephaestos creates Talos, a man of bronze whose duty is to patrol and protect the beaches of Crete. The idea of building humans and machines with intelligence transferred from mythology into modern literature. For example, Karel Čapek’s play R.U.R. (Rossum’s Universal Robots), which opened in London in 1923, coined the word “robot.” Shortly after, the very popular science fiction movie Metropolis, by Fritz Lang (1927), had a robot character (Maria) that played a decisive role in the plot of the movie. And, in the 1940s, Isaac Asimov started publishing his famous collection of books about robotics.
However, people not only wrote about the possibility of creating intelligent machines; they actually built them. For example, the ancient Greeks were fascinated with automata of all kinds, which they used mostly in theater productions and religious ceremonies for amusement. In the 4th century BCE, the Greek mathematician Archytas of Tarentum built a mechanical bird (a wooden pigeon) that, when propelled by a jet of steam or compressed air, could flap its wings. Supposedly, in one test, it flew a distance of 200 meters (however, once it fell to the ground, it could not take off again). Toward the end of the Middle Ages, clockmakers helped build devices that tried to mimic human and animal behavior. For example, Leonardo da Vinci built a humanoid automaton (an armored knight) around the end of the 15th century for the amusement of royalty. This armored knight was apparently able to make several humanlike motions, such as sitting up and moving its arms, legs, and neck. Reportedly, da Vinci also built a mechanical lion that could walk a programmable distance. In the early 16th century, Hans Bullmann created androids that could play musical instruments for the delight of paying customers. In the 18th century, Jacques de Vaucanson created a mechanical life-size figure (The Flute Player) capable of playing a flute with a repertoire of 12 different tunes. He also created an automatic duck (The Digesting Duck) that could drink, eat, paddle in water, and digest and excrete eaten grain.
In modern scientific artificial intelligence, the first recognized work was Warren McCulloch and Walter Pitts’s 1943 article A Logical Calculus of the Ideas Immanent in Nervous Activity, which laid the foundations for the development of neural networks. McCulloch and Pitts proposed a model of artificial neurons, suggesting that any computable function could be achieved by a network of connected neurons and that all logical connectives (and, or, not, etc.) could be implemented by simple network structures. In 1948, Norbert Wiener’s popular book Cybernetics popularized the term cybernetics and defined the principle of feedback theory. Wiener suggested that all intelligent behavior was the result of feedback mechanisms, or conditioned responses, and that it was possible to simulate these responses using a computer. One year later, Donald Hebb (1949) proposed a simple rule for modifying and updating the strength of the connections between neurons, which is now known as Hebbian learning. In 1950, Alan M. Turing published Computing Machinery and Intelligence, which was based on the idea that both machines and humans compute symbols and that this commonality should be the basis for building intelligent machines. Turing also introduced an operational strategy to test for intelligent behavior in machines based upon an imitation game known as the Turing test. (A brief description of the test and its impact on AI is discussed later.) Because of the impact of his ideas on the field of AI, Turing is considered by many to be the father of AI.
The term artificial intelligence was coined at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 at Dartmouth College. This two-month workshop was organized by John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester and included as participants Trenchard More from Princeton, Arthur Samuel from IBM, Ray Solomonoff and Oliver Selfridge from MIT, and Allen Newell and Herbert Simon from Carnegie Tech, all of whom played fundamental roles in the development of AI. The Dartmouth workshop is considered the official birthplace of AI as a field, and it provided significant advances from previous work. For example, Allen Newell and Herbert Simon demonstrated a reasoning program, the Logic Theorist, which was capable of working with symbols and not just numbers.
The early years of artificial intelligence were promising and full of successes. Both a symbolic approach (i.e., an approach that uses symbols and rules) and a subsymbolic approach (i.e., an approach that does not use rules but learns by itself) to AI coexisted with many successes. In the symbolic approach, some of the early successes include the presentation of the General Problem Solver by Newell, Shaw, and Simon (1963), a program designed to imitate human problem-solving protocols, and John McCarthy’s LISP (1958), which became one of the predominant languages in AI. Some of the early successes in subsymbolic AI include the development of the Adalines by Widrow and Hoff (1960), which enhanced Hebb’s learning methods, and the perceptron, by Frank Rosenblatt (1962), which was the precursor of the artificial neural networks we know today.
However, by the end of the 1960s, difficulties arose as the artificial intelligence promises from the decade before fell short and started to be considered “hype.” Research in subsymbolic AI was largely abandoned after Minsky and Papert formally proved in 1969 that perceptrons (i.e., simple neural networks) were flawed in their representation mechanism because they could not represent the XOR (exclusive-OR) logical problem: a perceptron could not be trained to recognize situations in which either one or another set of inputs had to be present, but not both at the same time. The discovery that AI systems were not capable of solving simple logical problems that humans can easily solve resulted in significant reductions in research funding for artificial neural networks, and most researchers of this era decided to abandon the field.
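The XOR limitation described above is easy to demonstrate. The following minimal sketch (an illustration written for this article, not code from the original research) trains a single-layer perceptron with the classic perceptron learning rule on the four XOR cases; because XOR is not linearly separable, no setting of the weights can classify all four correctly:

```python
# Single-layer perceptron trained on XOR with the perceptron learning rule.
# No linear decision boundary separates XOR, so at most 3 of 4 cases can
# ever be classified correctly, regardless of training time.

def train_perceptron(samples, epochs=100, lr=0.1):
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = target - pred          # mismatch drives the update
            w0 += lr * err * x0
            w1 += lr * err * x1
            b += lr * err
    return w0, w1, b

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w0, w1, b = train_perceptron(xor)
correct = sum((1 if w0 * x0 + w1 * x1 + b > 0 else 0) == t
              for (x0, x1), t in xor)
print(correct)  # at most 3 of the 4 XOR cases
```

However many epochs are run, the weights keep oscillating and at most three of the four cases are ever classified correctly; adding a hidden layer, as later connectionist models did, removes the limitation.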
The 1970s focused almost exclusively on different techniques in symbolic artificial intelligence (such as production systems, semantic networks, and frames) and the application of these techniques to the development of expert systems (also known as knowledge-based systems), problem solving, and the understanding of natural language. By the mid-1980s, interest in symbolic AI began to decline because, once again, many promises remained unfulfilled. However, artificial neural networks became interesting again through what became known as the connectionism movement, largely due to two books discussing parallel distributed processing published by Rumelhart and McClelland (1986). These books demonstrated that complex networks could solve the logical problems (e.g., XOR) that early perceptrons could not, and allowed networks to solve many new problems. This new impulse of AI research resulted in the development of new approaches to AI during the late 1980s and 1990s, such as the subsymbolic approaches of evolutionary computing with evolutionary programming and genetic algorithms, behavior-based robotics, artificial life, and the development of the symbolic Bayesian networks. Today, AI is becoming successful in many different areas, especially in the areas of game playing, diagnosis, logistics planning, robotics, language understanding, problem solving, autonomous planning, scheduling, and control.
Knowledge Representation
Knowledge representation addresses the problem of how knowledge about the world can be represented and what kinds of reasoning can be done with that knowledge. Knowledge representation is arguably the most relevant topic in artificial intelligence because what artificial systems can do depends on their ability to represent and manipulate knowledge. Traditionally, the study of knowledge representation has had two different approaches: symbolic AI (also known as the top-down approach) and subsymbolic AI (also known as the bottom-up approach).
Symbolic Artificial Intelligence (Top-Down Approach)
Symbolic artificial intelligence has been referred to also as conventional AI, classical AI, logical AI, neat AI, and Good Old Fashioned AI (GOFAI). The basic assumption behind symbolic AI is that (human) knowledge can be represented explicitly in a declarative form by using facts and rules: Knowledge, either declarative (or explicit) or procedural (or implicit), can be described by using symbols and rules for their manipulation.
Symbolic AI is traditionally associated with a top-down approach because it starts with all the relevant knowledge already present for the program to use, along with a set of rules for decomposing the problem through some inferential mechanism until the goal is reached. A top-down approach can be used only when we know how to formalize and operationalize the knowledge needed to solve the problem. Because of its higher level of representation, it is well suited to relatively high-level tasks such as problem solving and language processing. However, this approach is inherently poor at solving problems that involve ill-defined knowledge, problems whose interactions are highly complex and weakly interrelated (such as commonsense knowledge), problems where we do not know how to represent the knowledge hierarchy, and problems where we do not know how to represent the mechanism needed to reach a solution. Many different methods have been used in symbolic AI.
Predicate Logic or First-Order Logic
Logic is used to describe representations of our knowledge of the world. It is a well-understood formal language, with a well-defined, precise, mathematical syntax, semantics, and rules of inference. Predicate logic allows us to represent fairly complex facts about the world, and to derive new facts in a way that guarantees that, if the initial facts are true, so then are the conclusions. There are well-defined procedures to prove the truth of the relationships and to make inferences (substitution, modus ponens, modus tollens, unification, among others).
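As a toy illustration of these inference procedures (a hypothetical sketch written for this article, not drawn from the paper), the following forward-chaining loop applies modus ponens to a single fact and rule, deriving mortal(socrates) from human(socrates) and the rule IF human(x) THEN mortal(x):

```python
# Facts are (predicate, argument) pairs; a rule maps a premise predicate to
# a conclusion predicate for the same argument (a one-variable simplification
# of full first-order unification).

facts = {("human", "socrates")}
rules = [(("human", "?x"), ("mortal", "?x"))]  # IF human(x) THEN mortal(x)

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:                      # repeat until no new facts appear
        changed = False
        for (p_pred, _), (c_pred, _) in rules:
            for (f_pred, f_arg) in list(derived):
                if f_pred == p_pred and (c_pred, f_arg) not in derived:
                    derived.add((c_pred, f_arg))   # modus ponens step
                    changed = True
    return derived

print(("mortal", "socrates") in forward_chain(facts, rules))  # True
```

The guarantee mentioned in the text is visible here: if the initial facts are true and the rule is sound, every derived fact is true as well.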
Rule-Based Systems (Production Systems)
A rule-based system consists of a set of IF-THEN rules, a set of facts, and some interpreter controlling the application of the rules, given the facts. A rule-based system represents knowledge in terms of a set of rules that guides the system’s inferences given certain facts (e.g., IF the temperature is below 65 °F AND the time of day is between 5:00 p.m. and 11:00 p.m. THEN turn on the heater). Rule-based systems are often used to develop expert systems. An expert system contains knowledge derived from an expert or experts in some domain, and it exhibits, within that specific domain, a degree of expertise in problem solving that is comparable to that of the human experts. Simply put, an expert system contains a set of IF-THEN rules derived from the knowledge of human experts. Expert systems are supposed to support inspection of their reasoning processes, both in presenting intermediate steps and in answering questions about the solution processes: At any time we can inquire why an expert system is asking certain questions, and these systems can explain their reasoning or suggested decisions.
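The heater rule from the text can be sketched as a tiny production system (an illustrative toy; the fact names are invented): rules are condition-action pairs, and an interpreter fires every rule whose condition matches the current facts:

```python
# One production rule: IF temperature < 65 °F AND 5 p.m. <= time <= 11 p.m.
# THEN turn on the heater. Facts are held in a plain dictionary.

def heater_condition(facts):
    return facts["temperature_f"] < 65 and 17 <= facts["hour"] <= 23

rules = [(heater_condition, "turn on the heater")]

def interpreter(facts, rules):
    # Fire every rule whose condition holds for the given facts.
    return [action for condition, action in rules if condition(facts)]

print(interpreter({"temperature_f": 60, "hour": 20}, rules))
# ['turn on the heater']
```

An expert system is the same loop at scale: hundreds of such rules elicited from human experts, plus bookkeeping that lets the system report which rules fired and why.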
Fuzzy Logic
Fuzzy logic is a superset of classical Boolean logic that has been extended to handle the concept of partial truth. One of the limitations of predicate (first-order) logic is that it relies on Boolean logic, in which statements are entirely true or false. However, in the real world, there are many situations in which events are not clearly stated and the truth of a statement is a matter of degree (e.g., if someone states a person is tall, the person can be taller than some people but shorter than other people; thus, the statement is true only sometimes). Fuzzy logic is a continuous form of logic that uses modifiers to describe different levels of truth. It was specifically designed to represent uncertainty and vagueness mathematically and provide formalized tools for dealing with the imprecision intrinsic to many problems. Because fuzzy logic can handle approximate information systematically, it is ideal for controlling and modeling complex systems in which an inexact model exists or systems where ambiguity or vagueness is common. Today, fuzzy logic is found in a variety of control applications such as expert systems, washing machines, video cameras (e.g., focus aperture), automobiles (e.g., operation of the antilock braking systems), refrigerators, robots, failure diagnosis, pattern classifying, traffic lights, smart weapons, and trains.
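The "tall" example above can be sketched as a fuzzy membership function (the breakpoints below are invented for illustration): instead of a Boolean True/False, the statement "this person is tall" gets a degree of truth between 0 and 1:

```python
# Fuzzy membership function for "tall": 0.0 below 160 cm, 1.0 above 190 cm,
# and a linearly increasing partial truth in between.

def tall(height_cm):
    if height_cm <= 160:
        return 0.0                      # definitely not tall
    if height_cm >= 190:
        return 1.0                      # definitely tall
    return (height_cm - 160) / 30       # partial truth in between

print(tall(175))  # 0.5 — "somewhat tall"
```

Fuzzy controllers combine such degrees with fuzzy versions of AND/OR (commonly min and max) to drive smooth control decisions from vague inputs.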
Semantic Networks, Frames, and Scripts
Semantic networks are graphical representations of information consisting of nodes, which represent an object or a class, and links connecting those nodes, representing the attributes and relations between the nodes. Semantic networks are often called conceptual graphs. Researchers originally used them to represent the meaning of words in programs that dealt with natural language processing (e.g., understanding news), but they have also been applied to other areas, such as modeling memory.
One interesting feature of semantic networks is how convenient they are to establish relations between different areas of knowledge and to perform inheritance reasoning. For example, if the system knows that entity A is human, then it knows that all human attributes can be part of the description of A (inheritance reasoning).
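Inheritance reasoning of this kind can be sketched with a toy semantic network (the node and attribute names are invented for the example): each node carries its own attributes plus an "is-a" link, and a query walks the is-a chain collecting everything the entity inherits:

```python
# A tiny semantic network: nodes with "is-a" links and local attributes.
# Entity "A" is a human, and a human is an animal, so "A" inherits the
# attributes of both classes.

network = {
    "animal": {"is_a": None, "attrs": {"alive": True}},
    "human":  {"is_a": "animal", "attrs": {"can_speak": True}},
    "A":      {"is_a": "human", "attrs": {}},
}

def inherited_attrs(node):
    attrs = {}
    while node is not None:
        # Attributes on a more specific node override inherited ones.
        attrs = {**network[node]["attrs"], **attrs}
        node = network[node]["is_a"]
    return attrs

print(inherited_attrs("A"))  # {'alive': True, 'can_speak': True}
```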
A problem with semantic networks is that as the knowledge to be represented becomes more complex, the representation grows in size, it needs to be more structured, and it becomes hard to define by graphical representations. To allow more complex and structured knowledge representation, frames were developed. A frame is a collection of attributes or slots with associated values that describe some real-world entity. Frame systems are a powerful way of encoding information to support reasoning. Each frame represents a class or an instance (an element of a class, such as height) and its slots represent an attribute with a value (e.g., seven feet).
Scripts are used to develop ideas or processes that represent recurring actions and events. They are often built on semantic networks or frames, although production systems are also common. Scripts are used to make inferences on a whole set of actions that fall into a stereotypical pattern. A script is essentially a prepackaged inference chain relating to a specific routine situation. They capture knowledge about a sequence of events, and this knowledge has been used as a way of analyzing and describing stories. Typical examples of scripts are the sequence of actions and the knowledge needed to, for example, take a flight or buy a train ticket.
Bayesian Networks
Bayesian networks are also known as belief networks, probabilistic networks, causal networks, knowledge maps, or graphical probability models. They are a probabilistic graphical model with nodes, which represent discrete or continuous variables, and links between those nodes, which represent the conditional dependencies between variables. This graphical representation with nodes and links connecting the nodes provides an intuitive graphical visualization of the knowledge, including the interactions among the various sources of uncertainty. Because a Bayesian network is a complete model for the variables and their relationships, it can be used to answer probabilistic queries about them, and it can allow us to model and reason about uncertainty in complex situations. For example, a Bayesian network can be used to calculate the probability of a patient having a specific disease, given the absence or presence of certain symptoms, if the probabilistic dependencies between symptoms and disease are assumed to be known.
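For the two-node disease/symptom case described above, the probabilistic query reduces to a direct application of Bayes' rule (all probabilities below are invented for illustration):

```python
# Two-node Bayesian network: disease -> symptom.
# Query: P(disease | symptom), given the prior and the conditionals.

p_disease = 0.01                   # prior P(D)
p_symptom_given_disease = 0.9      # P(S | D)
p_symptom_given_healthy = 0.1      # P(S | not D)

# Marginal probability of observing the symptom at all.
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# Bayes' rule: P(D | S) = P(S | D) * P(D) / P(S)
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom

print(round(p_disease_given_symptom, 3))  # 0.083
```

Note how the low prior dominates: even with a strongly indicative symptom, the posterior probability of disease stays under 10%, which is exactly the kind of uncertainty-aware reasoning the text describes.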
Bayesian networks have been used for diverse applications, such as diagnosis, expert systems, planning, learning, decision making, modeling knowledge in bioinformatics (gene regulatory networks, protein structure), medicine, engineering, document classification, image processing, data fusion, decision support systems, and e-mail spam filtering.
Subsymbolic Artificial Intelligence (Bottom-Up Approach)
After researchers became disillusioned in the mid-1980s with the symbolic attempts at modeling intelligence, they looked into other possibilities. Some prominent techniques arose as alternatives to symbolic AI, such as connectionism (neural networking and parallel distributed processing), evolutionary computing, particle swarm optimization (PSO), and behavior-based AI.
In contrast with symbolic AI, subsymbolic AI is characterized by a bottom-up approach to AI. In this approach, the problem is addressed by starting with a relatively simple abstract program that is built to learn by itself, and it builds knowledge until reaching an optimal solution. Thus, it starts with simpler elements and then, by interacting with the problem, moves upwards in complexity by finding ways to interconnect and organize the information to produce a more organized and meaningful representation of the problem.
The bottom-up approach has the advantage of being able to model lower-level human and animal functions, such as vision, motor control, and learning. It is more useful when we do not know how to formalize knowledge and we do not know how to reach the answer beforehand.
Neural Networks
Neural networks, or more correctly, artificial neural networks (to differentiate them from biological neural networks), are computing paradigms loosely modeled after the neurons in the brain and designed to model or mimic some properties of biological neural networks. They consist of interconnected processing elements, called nodes or neurons, that work together to produce an output function. The output of a neural network depends on the cooperation of the individual neurons within the network. Because a network relies on its whole collection of neurons to reach a solution, it can still perform its overall function even if some of the neurons are not functioning, which makes neural networks tolerant of error and failure. They learn by adapting their structure based on external or internal information that flows through the network. Thus, they are mostly used to model complex relationships between inputs and outputs or to find patterns in data.
As you may recall, one of the simplest instantiations of a neural network, the perceptron, was very popular in the early 1960s (Rosenblatt, 1962), but interest dwindled at the end of the 1960s because perceptrons were not able to represent some simple logical problems (Minsky & Papert, 1969). Neural networks became hugely popular again in the mid-1980s (McClelland et al., 1986; Rumelhart et al., 1986), both because the limitation associated with the perceptron was addressed and because of the decreased interest in symbolic approaches to AI. The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations. Such inferences are particularly useful in applications in which the complexity of the data or task makes the design of such a function by hand impractical.
There are three major learning paradigms in the neural network field, each corresponding to a particular abstract learning task: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, the network is given a set of examples (inputs) with the correct responses (outputs), and it finds a function that matches these examples with the responses. The network infers the mapping implied by the data by using the mismatch between its current mapping and the data to correct the weights of the connections between nodes, until the network is able to match the inputs with the outputs. With this function, new sets of data or stimuli, previously unknown to the system, can be correctly classified. In unsupervised learning, the network has to learn patterns or regularities in the inputs when no specific output values are supplied or taught to the network. Finally, in reinforcement learning, the set of examples with their answers is not given to the network but is found by interacting with the environment and integrating the instances that lead to reinforcement. The system performs an action and, based on the consequences of that action (some cost according to some dynamics), relates the stimuli (inputs) to the responses (outputs). Artificial neural networks have been applied successfully to speech recognition, image analysis, adaptive control and navigation, video games, autonomous robots, decision making, detection of credit card fraud, data mining, and pattern recognition, among other areas.
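The supervised-learning cycle described above (compute an output, measure the mismatch with the target, correct the connection weights) can be sketched with the simplest case, a single perceptron learning the logical OR function. The learning rate and number of epochs are illustrative choices.

```python
# A minimal supervised-learning sketch: one perceptron trained on logical OR.
def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # the mismatch drives the correction
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Inputs paired with the correct responses, as in supervised learning.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, `predict` reproduces OR on all four inputs; the same network cannot learn XOR, which is the representational limit Minsky and Papert pointed out.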
Evolutionary Computation
Evolutionary computation applies biologically inspired concepts from natural selection (such as populations, mutation, reproduction, and survival of the fittest) to generate increasingly better solutions to the problem. Some of the most popular methods are evolutionary programming, evolution strategies, and genetic algorithms. Evolutionary computation has been successfully applied to a wide range of problems including aircraft design, routing in communications networks, game playing, robotics, air traffic control, machine learning, pattern recognition, market forecasting, and data mining. Although evolutionary programming, evolution strategies, and genetic algorithms are similar at the highest level, each of these varieties implements evolutionary algorithms in a different manner.
Evolutionary programming, originally conceived by Lawrence J. Fogel in 1960, emphasizes the relationship between parent solutions (the solutions being analyzed) and their offspring (new solutions resulting from some modification of the parent solutions). Fogel, Owens, and Walsh’s 1966 book Artificial Intelligence Through Simulated Evolution is the landmark publication in this area of AI. In general, in evolutionary programming, the problem to be solved is represented or encoded in a string of variables that defines all the potential solutions to the problem. Each full set of variables with its specific values is known as an individual or candidate solution. To solve the problem, a population of “individuals” is created, with each individual representing a random possible solution to the problem. Each individual (i.e., each candidate solution) is evaluated and assigned a fitness value based on how effective it is at solving the problem. Based on this fitness value, some individuals (usually the most successful) are selected to be parents, and offspring are generated from these parents.
In the generation process, a mutation operator selects elements of the parents’ representation of the solution and manipulates these elements when they are transferred to the offspring. A mutation operator is a rule that selects random variables and randomly alters their values to some degree, generating new individuals or candidate solutions from the selected parents. Thus, some characteristics of the parent solutions are changed slightly and then transferred to the offspring solution. In general, the degree of mutation is greater in the first generations, and it is gradually decreased as generations evolve and get closer to an optimal solution. The offspring candidate solutions are then evaluated based on their fitness, just as their parents were, and the process of generating offspring from the parents is repeated until an individual with sufficient quality (an optimal solution to the problem) is found or a previously determined computational limit is reached (e.g., after evolving for a given number of generations).
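The evaluate-select-mutate loop just described can be sketched as follows. The fitness function, population size, and mutation-decay schedule are all illustrative assumptions, chosen only so the loop has something concrete to optimize.

```python
import random

# A sketch of the evolutionary-programming loop: the fitter half of the
# population are selected as parents, each parent yields one offspring via
# random mutation, and the mutation strength decays across generations.
random.seed(0)

def fitness(x):
    return -(x - 3.0) ** 2               # toy fitness; optimum at x = 3

population = [random.uniform(-10, 10) for _ in range(20)]
sigma = 2.0                              # initial mutation strength
for generation in range(100):
    # Select the most successful individuals to be parents.
    parents = sorted(population, key=fitness, reverse=True)[:10]
    # Generate offspring by mutating each parent's representation.
    offspring = [p + random.gauss(0.0, sigma) for p in parents]
    population = parents + offspring
    sigma *= 0.97                        # gradually decrease mutation

best = max(population, key=fitness)
```

Because parents survive alongside their offspring here, the best fitness never decreases; after 100 generations `best` sits very close to the optimum at 3.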
Evolution strategies (Bäck, Hoffmeister, & Schwefel, 1991; Rechenberg, 1973) and evolutionary programming share many similarities. The main difference is that, in evolution strategies, offspring are generated from the selected parents not only by using a mutation operator but also by recombination of the code from selected parents through a crossover operator. A crossover operator applies some rule to recombine the elements of the selected parents to generate offspring. The recombination operation simulates some reproduction mechanism to transfer elements from the parents to their offspring.
The crossover operator can take many variants (e.g., interchange the first half of the elements from one of the parents and the second half from the other one for one offspring; the reverse for another offspring). The crossover operator is inspired by the role of sexual reproduction in the evolution of living things. The mutation operator is inspired by the role of mutation in natural evolution. Generally, both mutation and reproduction are used simultaneously. Recombination and mutation create the necessary diversity and thereby facilitate novelty, while selection acts as a force increasing quality.
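The crossover variant mentioned above (the first half of the elements from one parent, the second half from the other, and the reverse for a second offspring) can be written directly:

```python
# One variant of the crossover operator: swap halves between two parents
# to produce two offspring.
def crossover(parent_a, parent_b):
    mid = len(parent_a) // 2
    child1 = parent_a[:mid] + parent_b[mid:]
    child2 = parent_b[:mid] + parent_a[mid:]
    return child1, child2

c1, c2 = crossover([1, 2, 3, 4], [5, 6, 7, 8])
# c1 == [1, 2, 7, 8]; c2 == [5, 6, 3, 4]
```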
Genetic algorithms, popularized by John Holland (1975), are similar to evolution strategies in the general steps that the algorithm follows. However, there are substantial differences in how the problem is represented. One of the main differences is that the problem to be solved is encoded in each individual of the population as arrays of bits (bit-strings), which represent chromosomes. Each bit in the bit-string is analogous to a gene (i.e., an element that represents a variable or part of a variable of the problem). In a genetic algorithm, each individual or candidate solution is encoded at a genotype level, whereas in evolutionary programming and evolution strategies, the problem is encoded at a phenotype level, in which there is a one-to-one relationship between each value encoded in the phenotype and the real value that it represents in the problem. A genetic algorithm can thus differentiate between genotype (the genes) and phenotype (the expression of a collection of genes). Manipulation at the level of genotype allows for more elaborate implementations of the crossover and mutation operators. Additionally, the focus of genetic algorithms when creating offspring in successive generations is on recombination (crossover) rather than mutation, which is often considered a background operator or secondary process.
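A compact genetic-algorithm sketch on bit-strings follows, with crossover as the primary operator and a low-rate bit-flip mutation as the background operator. The toy fitness function (count the 1-genes, the classic "OneMax" problem), the tournament selection scheme, and all parameters are illustrative assumptions.

```python
import random

# A genetic-algorithm sketch: chromosomes are bit-strings, offspring come
# mainly from crossover, and mutation is a low-rate background operator.
random.seed(1)
LENGTH, POP, GENS = 20, 30, 60

def fitness(bits):
    return sum(bits)                     # OneMax: count the 1-genes

def select(pop):                         # tournament selection of a parent
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):                     # one-point crossover (primary operator)
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.01):             # background bit-flip mutation
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
```

Note that the operators act on the raw bit-string (the genotype); only `fitness` interprets it, which is where a real application would decode the bits into problem variables (the phenotype).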
Particle Swarm Optimization
PSO applies the concept of social interaction to problem solving. It was developed in 1995 by James Kennedy and Russ Eberhart, and it is inspired by the social behavior of bird flocking and fish schooling.
PSO shares many similarities with evolutionary computation techniques. The system is initialized with a population of random potential solutions (known in this framework as particles), which search for an optimal solution to the problem by updating generations. However, unlike evolutionary computing, PSO has no evolution operators such as crossover and mutation. In PSO, the particles fly through the problem space by following the current best solution (the particle with the best fitness). Each particle (individual) records its current position (location) in the search space, the location of the best solution it has found so far, and a direction in which it will travel if undisturbed. In order to decide whether to change direction and in which direction to travel (searching for the optimal solution), the particles work with two fitness values: one for that specific particle, and another for the particle closest to the solution (the best candidate solution). Thus, particles can be seen as simple agents that fly through the search space and record (and possibly communicate) the best solution that they have discovered so far. Particles travel in the search space by adding their own vector (the direction in which they were traveling) and the vector (direction) toward the best candidate solution. Then, each particle computes its new fitness, and the process continues until the particles converge on an optimal solution.
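The position and velocity updates described above can be sketched in one dimension: each particle's new direction blends its previous direction, a pull toward its own best position, and a pull toward the swarm's best position. The inertia and attraction coefficients are conventional illustrative choices, not values from the text.

```python
import random

# A minimal one-dimensional PSO sketch.
random.seed(2)

def fitness(x):
    return -(x - 5.0) ** 2                         # toy fitness; optimum at x = 5

n = 15
xs = [random.uniform(-10, 10) for _ in range(n)]   # particle positions
vs = [0.0] * n                                     # particle velocities
pbest = xs[:]                                      # each particle's best so far
gbest = max(xs, key=fitness)                       # swarm's best so far

for _ in range(200):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vs[i] = (0.7 * vs[i]                       # keep traveling as before
                 + 1.5 * r1 * (pbest[i] - xs[i])   # pull toward own best
                 + 1.5 * r2 * (gbest - xs[i]))     # pull toward swarm's best
        xs[i] += vs[i]
        if fitness(xs[i]) > fitness(pbest[i]):
            pbest[i] = xs[i]
            if fitness(xs[i]) > fitness(gbest):
                gbest = xs[i]
```

After a few hundred updates the particles cluster around the optimum, so `gbest` ends up very close to 5, with no crossover or mutation operator anywhere in the loop.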
Behavior-Based Artificial Intelligence
Behavior-based AI is a methodology for developing AI based on a modular decomposition of intelligence. It was made famous by Rodney Brooks at MIT (see Brooks, 1999, for a compendium of his most relevant papers on the topic), and it is a popular approach to building simple robots that, surprisingly, appear to exhibit complex behavior. The complexity of their behavior lies in the perception of the observer, not in the processing mechanism of the system. This approach questions the need to model intelligence using complex levels of knowledge representation and the need for higher cognitive control. Brooks presents a series of simple robots that mimic intelligent behavior by using a set of independent, semiautonomous modules, which interact independently with the environment and do not communicate information at any higher level. For example, a spiderlike robot will navigate through a path with obstacles just by each of its legs addressing its own situation, without any mechanism relating what each leg knows about the environment to the other legs. This approach has been successful when dealing with dynamic, unpredictable environments. Although behavior-based AI has been popular in robotics, it also can be applied to more traditional AI areas.
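The spiderlike-robot example can be caricatured in a few lines: each leg module maps its own local sensor reading directly to an action, with no shared world model and no central controller combining what the legs know. The module behavior and sensor values are purely illustrative.

```python
# A toy sketch of the behavior-based idea: independent semiautonomous modules,
# each reacting only to its own local sensation.
def leg_module(obstacle_under_leg):
    # Direct sensation-to-action mapping; no representation of the world.
    return "lift" if obstacle_under_leg else "step"

# Six legs sense independently; any apparent coordination in the resulting
# gait exists only in the eye of the observer.
local_sensors = [False, True, False, False, True, False]
actions = [leg_module(s) for s in local_sensors]
# actions == ["step", "lift", "step", "step", "lift", "step"]
```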
Artificial Intelligence in Psychology
The greatest impact of AI in psychology has been through the development of what has come to be known as the information processing paradigm or the computer analogy. Once computers started to be perceived as information-processing systems able to process symbols and not just numbers, an analogy between computers and the human mind was established. Both systems receive inputs (either through sensors or as the output from other processes or devices), process those inputs through a central processing unit, and generate an output (through motor responses or as the input for other processes or devices). The idea that the mind works on the brain just as a program works on a computer is the focus of cognitive psychology, which is concerned with information-processing mechanisms focusing especially on processes such as attention, perception, learning, and memory. It is also concerned with the structures and representations involved in cognition in general.
One of the dominant paradigms in psychology, before the surge of AI at the end of the 1950s, was behaviorism, in which the focus was on the study of the responses of the organism (behavior) given particular inputs (stimuli). Its main assumption was that, because researchers can only scientifically study what they can observe and measure, behavior should be the only subject matter of study of scientific psychology. With cognitive psychology and the computer analogy, the focus started to shift toward the study of mental processes, or cognition. Cognitive psychology is interested in identifying in detail what happens between stimuli and responses. To achieve this goal, psychological experiments need to be interpretable within a theoretical framework that describes and explains mental representations and procedures. One of the best ways of developing these theoretical frameworks is by forming and testing computational models intended to be analogous to mental operations. Thus, cognitive psychology views the brain as an information-processing device that can be studied through experimentation and whose theories can be rigorously tested and discussed as computer programs.
A stronger focus on computer modeling and simulation, and on the study of cognition as a system, resulted in the development of cognitive science. Cognitive science is an interdisciplinary field concerned with how humans, animals, and machines acquire knowledge, how they represent that knowledge, and how those representations are manipulated. It embraces psychology, artificial intelligence, neuroscience, philosophy, linguistics, anthropology, biology, evolution, and education, among other disciplines.
More recently, AI philosophy and techniques have impacted a new discipline, cognitive neuroscience, which attempts to develop mathematical and computational theories and models of the structures and processes of the brain in humans and other animals. This discipline is concerned directly with the nature of the brain and tries to be more biologically accurate by modeling the behavior of neurons, simulating, among other things, the interactions among different areas of the brain and the functioning of chemical pathways. Cognitive neuroscience attempts to derive cognitive-level theories from different types of information, such as computational properties of neural circuits, patterns of behavioral damage as a result of brain injury, and measures of brain activity during the performance of cognitive tasks.
A new and interesting approach resulting from developments in computer modeling is the attempt to search and test for a so-called unified architecture (a unified theory of cognition). The three most dominant unified theories of cognition in psychology are SOAR (based on symbolic AI), PDP (based on subsymbolic AI), and ACT-R (originally based on symbolic AI, currently a hybrid, using both symbolic and subsymbolic approaches).
SOAR (State Operator and Result)
SOAR (Laird, Newell, & Rosenbloom, 1987; Newell, 1990) describes a general cognitive architecture for developing systems that exhibit intelligent behavior. It represents and uses appropriate forms of knowledge such as procedural, declarative, and episodic knowledge. It employs a full range of problem-solving methods, and it is able to interact with the outside world.
PDP (Parallel Distributed Processing)
In the 1980s and 1990s, James L. McClelland, David E. Rumelhart, and the PDP Research Group (McClelland et al., 1986; Rumelhart et al., 1986) popularized artificial neural networks and the connectionist movement, which had lain dormant since the late 1960s. In the connectionist approach, cognitive functions and behavior are perceived as emergent processes from parallel, distributed processing activity of interconnected neural populations, with learning occurring through the adaptation of connections among the participating neurons. PDP attempts to be a general architecture and explain the mechanisms of perception, memory, language, and thought.
ACT-R (Adaptive Control of Thought-Rational)
In its latest instantiation, ACT-R (Anderson et al., 2004; Anderson & Lebiere, 1998) is presented as a hybrid cognitive architecture. Its symbolic structure is a production system. Its subsymbolic structure is represented by a set of massively parallel processes that can be summarized by a number of mathematical equations. The two representations work together to explain how people organize knowledge and produce intelligent behavior. ACT-R theory tries to evolve toward a system that can perform the full range of human cognitive tasks, capturing in great detail how we perceive, think about, and act on the world. Because of its general architecture, the theory is applicable to a wide variety of research disciplines, including perception and attention, learning and memory, problem solving and decision making, and language processing.
The benefits of applying computer modeling to psychological hypotheses and theories are multiple. For example, computer programs provide unambiguous formulations of a theory as well as means for testing the sufficiency and consistency of its interconnected elements. Because computer modeling involves using a well-formulated language, it eliminates vagueness and highlights hidden or ambiguous intermediated processes that were not previously known or made explicit with the verbal description of the theory. Explicit formulations also allow researchers to falsify (i.e., test) the theory’s assumptions and conclusions. Additionally, given the same data set, alternative programs/theories can be run through the data to analyze which hypotheses are more consistent with the data and why.
Computer modeling has focused mostly on the areas of perception, learning, memory, and decision making, but it is also being applied to the modeling of mental disorders (e.g., neurosis, eating disorders, and autistic behavior), cognitive and social neuroscience, scientific discovery, creativity, linguistic processes (e.g., dyslexia, speech movements), attention, and risk assessment, among other fields. Importantly, the impact of AI on psychology has not been relegated to theoretical analyses alone. AI systems can be found in psychological applications such as expert systems for clinical diagnosis and education, human-computer interaction and user interfaces, and many other areas, although the impact in applied areas is not as extensive as might be expected; there seems to be much work still to do here.
Criticisms of Artificial Intelligence
Is true AI possible? Can an AI system display true intelligence and consciousness? Are these functions reserved only to living organisms? Can we truly model knowledge?
Weak AI Versus Strong AI
The weak AI view (also known as soft-cautious AI) supports the idea that machines can be programmed to act as if they were intelligent: Machines are capable only of simulating intelligent behavior and consciousness (or understanding), but they are not capable of true understanding. In this view, the traditional focus has been on developing machines able to perform a specific task, with no intention of building a complete system able to perform intelligently in all or most situations.
Strong AI (also known as hard AI) supports the view that machines are really intelligent and that, someday, they could have understanding and conscious minds. This view assumes that all mental activities of humans are eventually reducible to algorithms and processes that can be implemented in a machine. Thus, for example, there should be no fundamental difference between a machine that emulates all the processes in the brain and the actions of a human being, including understanding and consciousness. One of the problems with the strong AI view centers on the following two questions: (a) How do we know that an artificial system is truly intelligent? (b) What makes a system (natural or not) intelligent? Even today, there is no clear consensus on what intelligence really is. Turing (1950) was aware of this problem and, recognizing the difficulty of agreeing on a common definition of intelligence, proposed an operational test to circumvent the question. He named this test the imitation game; it later became known as the Turing test.
The Turing test is conducted with two people and a machine. One person, who sits in a separate room from the machine and the other person, plays the role of an interrogator or judge. The interrogator knows the other two participants only as A and B and has no way of knowing beforehand which is the person and which is the machine. The interrogator can ask A and B any question she wishes, and her aim is to determine which one is the person and which one is the machine. If the machine fools the interrogator into thinking that it is a person, then we can conclude that the machine can think and is truly intelligent, or at least as intelligent as its human counterpart. The Turing test has become relevant in the history of AI because some of the criticisms and philosophical debates about the possibilities of AI have focused on this test.
Criticisms of the Strong AI View
For many decades it has been claimed that Gödel’s (1931) incompleteness theorem precludes the development of a true artificial intelligence. The idea behind the incompleteness theorem is that, within any given branch of mathematics, there will always be some propositions that cannot be proved or disproved using the rules and axioms of that mathematical branch itself; such principles lie beyond what the system can deduce. Because machines do little more than follow a set of rules, the argument goes, they cannot truly emulate human behavior, which is too complex to be described by any set of rules.
Machines Cannot Have Understanding—The Chinese Room Argument
John Searle stated in his Chinese room argument that machines work with encoded data that describe other things, and that data are meaningless without a cross-reference to the things described. This point led Searle to assert that there is no meaning or understanding in a computational machine itself. As a result, Searle claimed that even a machine that passes the Turing test would not necessarily have understanding or be conscious. Consciousness seems necessary in order to show understanding.
In his thought experiment, Searle asked people to imagine a scenario in which he is locked in a room and receives some Chinese writing (e.g., a question). He does not know Chinese, but he is given a rule book with English instructions that allows him to correlate the set of Chinese symbols he receives with another set of Chinese symbols (e.g., by their shape), which he gives back as an answer. Imagine the set of rules for correlating the two sets of symbols is so advanced that it allows him to give a meaningful answer to the question in Chinese. Imagine the same thing happening with an English question and its answer. To a Chinese observer and an English observer, respectively, the answers from Searle are equally satisfying and meaningful (i.e., intelligent). In both situations Searle would pass the Turing test. However, can we say Searle understands Chinese in the same way that he understands English? Actually, he does not know any Chinese. A computer program behaves similarly: It takes a set of formal symbols (Chinese characters) as input and, following a set of rules in its programming, correlates them with another set of formal symbols (Chinese characters), which it presents as output. In this thought experiment, the computer would pass the Turing test: It converses with Chinese speakers, and they do not realize they are talking with a machine. From a human observer’s view, it seems that the computer truly understands Chinese. However, in the same way that Searle does not understand Chinese, the machine does not understand Chinese either. What the computer does is mindless manipulation of symbols, just as Searle was doing. Although it would pass the Turing test, there is no genuine understanding involved.
Several other authors have argued against the possibility of a strong AI (see, e.g., Penrose, 1989). The arguments denying the possibility of a strong AI and their counterarguments have populated the AI literature, especially the philosophical discussion of AI.
Machines able to sense and acquire experiences by interacting with the world should be able to address most of these criticisms. In fact, most AI researchers are concerned with specific issues and goals, such as how to improve existing systems or how to create new approaches to the problem of AI, rather than with passing the Turing test and proving that AI systems can be truly intelligent. Additionally, the Turing test has been seriously criticized as a valid test for machine intelligence. For example, Ford and Hayes (1998) highlighted some of its problems. The central defect of the Turing test is its species-centeredness: It assumes that human thought is the highest achievement of thinking, against which all other forms of thinking must be judged. The Turing test depends on the subjectivity of the judge, and it is culture-bound (a conversation that passes the test in the eyes of a British judge might fail it according to a Japanese or Mexican judge). More important, it does not admit as measures of intelligence weaker, different, or even stronger forms of intelligence than those deemed human. Ford and Hayes made this point clearly when they compared AI with artificial flight. In the early thinking about flight, success was defined as the imitation of a natural model: for flight, a bird; for intelligence, a human. Only when pioneers got some distance from the model of a bird did flying become successful. By the same analogy, we do not deny that an airplane can fly just because it does not fly the way a bird does; yet true artificial intelligence is denied if it does not parallel that of a human being.
Many supporters of strong AI consider that it is not necessary to try to isolate and recreate consciousness and understanding specifically. They accept that consciousness could be a by-product of any sufficiently complex intelligent system, and it will emerge automatically from complexity. They focus on the analogy of the human brain. In humans, a single neuron has nothing resembling intelligence. Only when billions of neurons combine to form a mind does intelligence emerge. It would appear, then, that the brain is more than the sum of its parts. Intelligence emerges with sufficient joint complexity of neurons.
In the final analysis, the debate of the possibility of a strong AI seems to be based on the classical mind/body problem. On the one side, we have the physicalism view (the belief that nothing exists but the physical world that can be studied scientifically; mental states are assumed to mirror brain states). In this view, strong AI could be eventually possible. On the other side, we have the view based on dualism (mind and matter are not the same thing). In this view, strong AI is not possible.
The most important influence of AI in psychology has been the computer metaphor, in which both living organisms and computers can be understood as information processors. The information-processing approach has brought psychology new paradigms such as cognitive psychology and cognitive science. The goal of these disciplines is to learn what happens within the organism, at the level of the brain. Brain or mental processes cannot be observed directly, but computer formalization of theories of how mental processes work can be tested scientifically.
Although AI and cognitive science share techniques and goals, there is at least one fundamental difference between them. Psychology has the restriction that its computer programs and simulations must achieve the same results as the simulated system (human and animal cognition), and they must do so following the same processes; a psychological theory must, for example, be able to predict empirically the errors people actually make. Only when there is a match with the real processes can psychologists assume that the relationships proposed in the program are correct. AI has no such restriction of similarity of processes: AI focuses on efficiency; psychology focuses on plausibility.
Additionally, computer modeling and the focus on brain structure and their function have given rise to the field of neuroscience. Neuroscience is one of the most rapidly emerging disciplines in psychology, largely due to computer modeling. Computer modeling allows for the formalization of models and the testing of hypotheses about how neurons work together. The new computational techniques applied to brain activity measurement (e.g., fMRI) allow researchers to observe how brain structures actually work. Thus, formal models that represent theories and hypotheses can be tested against real data.
The impact of cognitive modeling in psychology should increase as psychological models move toward a unified theory of cognition. Only through computer formalizations and implementations does it seem plausible to work on a unified view of psychology in which all theories might be integrated in a single framework. One of the underlying assumptions of computer modeling is that formal modeling of theories and hypotheses plays a role in psychology similar to that of mathematics in the physical sciences.
Although computer modeling has strongly influenced AI in psychology, the impact of AI in psychology goes far beyond computer modeling. AI techniques can be applied to many areas of psychology, especially those that focus on diagnosis, classification, and decision making.
Original Research Article
Research Landscape of Artificial Intelligence and E-Learning: A Bibliometric Research
- 1 School of Management, Zhejiang University of Technology, Hangzhou, China
- 2 School of Cultural Creativity and Management, Communication University of Zhejiang, Hangzhou, China
- 3 School of Economics, Zhejiang University of Technology, Hangzhou, China
- 4 College of Science and Technology, Ningbo University, Ningbo, China
As an increasing number of organizations have introduced artificial intelligence as an important facilitating tool for learning online, the application of artificial intelligence in e-learning has become a hot research topic in recent years. Over the past few decades, the importance of online learning has also drawn attention in many fields, such as technological education, STEAM, AR/VR apps and online learning, amongst others. To effectively explore research trends in this area, the current state of online learning should be understood. Systematic bibliometric analysis can address this problem by providing information on publishing trends and their relevance in various topics. In this study, the literature on artificial intelligence combined with online learning from 2010 to 2021 was analyzed. In total, 64 articles were collected through WOS data collection and analyzed with VOSviewer to identify the most productive countries, universities, authors, journals and publications in the field of artificial intelligence combined with online learning. In addition, co-citation and co-occurrence mappings were explored by analyzing a knowledge map. The main objective of this study is to provide an overview of the trends and pathways in artificial intelligence and online learning to help researchers understand global trends and future research directions.
Introduction
The integration of technology into teaching has become an important part of the educational environment in recent years (Martins and Kellermanns, 2004; Teo, 2016; Huang et al., 2020). The steady development of online learning not only promotes the transfer of curriculum forms from offline to online learning (Lin et al., 2021; Wang et al., 2021) but also urges universities and teachers to use various online learning technologies to assist teaching (Mulder, 2013). For instance, the integration of wireless networks, sensing and mobile technologies into the field of education brings innovative changes to education and learning (Tang et al., 2021) and forms the concept of e-learning. Compared to traditional face-to-face education, online learning has more advantages (Piccoli et al., 2001). For example, interactions between learners and instructors, and amongst learners themselves, have been liberated, and the limitations of time and space in the asynchronous and synchronous learning network model have been reduced (Trentin, 1997; Katz, 2000, 2002).
Researchers have therefore sought to enable e-learning systems to provide personalized or appropriate learning content, learning guidance, learning feedback, and learning paths or interfaces (Hwang, 2014; Rastegarmoghadam and Ziarati, 2017; Conde et al., 2020). As an emerging technology, artificial intelligence has been extensively explored worldwide in the past few decades, and the application of artificial intelligence in e-learning (AIEL) has also drawn serious attention (Tang et al., 2021), especially its application in many disciplines. For example, Hwang and Tu (2021) conducted a literature review on the application of artificial intelligence in mathematics education. In the field of medical education, artificial intelligence enables students to interact with virtual patients and obtain diagnostic information and feedback on specific patients (Khumrin et al., 2017). The neural network model of artificial intelligence can also be used in science education (Iyanda et al., 2018). In addition, researchers have studied the application of artificial intelligence in online learning from other perspectives. García et al. (2007), for example, used artificial intelligence to identify the heterogeneity of students' learning styles. Osmanbegovic and Suljic (2012) used data mining methods to predict students' performance. Some scholars have also used new algorithms (Kurilovas et al., 2015) and models (Ben Ammar et al., 2010; Caputi and Garrido, 2015) to develop learning systems.
Presently, there are also relevant literature reviews in the AIEL field, but some of these literature reviews only focus on a certain discipline (e.g., mathematics, Hwang and Tu, 2021 ) or a specific field ( Chan-Olmsted, 2019 ; George and Lal, 2019 ). Tang et al. (2021) conducted a systematic review and co-cited network analysis on the application trend of artificial intelligence in online learning. However, the COVID-19 pandemic in early 2020 accelerated the change in the performance mode of physical teaching and blended learning ( Jin et al., 2021 ; Lin et al., 2021 ; Mo et al., 2021 ). Although some studies have discussed the application of artificial intelligence in online learning, they only focused on co-citation network analysis ( Hwang and Tu, 2021 ; Tang et al., 2021 ) and did not discuss the application development of artificial intelligence in teaching at different time points and the application trend of artificial intelligence in online learning during the pandemic. Furthermore, they did not explore the possible development trend of artificial intelligence applications in online learning after the pandemic.
Therefore, considering the above discussion, previous research on AIEL application from 2010 to 2021 was analyzed and extracted in this study. Admittedly, 2010 was chosen as the starting point because Hinton's team used AlexNet, a convolutional neural network, to win the ImageNet image recognition competition in 2012, proving the potential of deep learning to the world and attracting the attention of the academic world (Hinton et al., 2012; Krizhevsky et al., 2012). It was from this moment that research on artificial intelligence entered a period of explosion, which urged many research fields, such as the field of education, to develop high-frequency knowledge associations related to artificial intelligence. Therefore, by applying bibliometric methods, the literature from the selected period was analyzed to extract key information, such as journals, authors, institutions, countries, years, keywords and references in academic publications, thus forming knowledge network maps and providing references for related researchers and practitioners. Specifically, the research questions (RQ) raised in this study are as follows:
RQ1: Who are the main authors published in the field of AIEL and what are their institutional and country affiliations?
RQ2: What kinds of major journals and keywords are used in the field of AIEL and what are the connections and differences between them?
RQ3: What is the co-citation of the AIEL literature? What were the most frequently cross-referenced research streams in this field during the selected period? What is the visualized structure of the main AIEL literature from the perspective of these papers?
This research is organized as follows: Section “Literature Review” describes the literature review of this study. Section “Research Methodology” presents the research methodology. Section “Results” presents the data analysis and results, and section “Discussion and Conclusion” highlights the discussion and conclusion.
Literature Review
Artificial Intelligence
Artificial intelligence is one of the branches of computer science and is defined as “the theory and development of computer systems capable of performing tasks that normally require human intelligence” ( Miyazawa, 2019 ). The field of artificial intelligence research is defined as the study of “intelligent agents” and any device that can sense its surrounding environment and act to maximize its chances of success at a certain goal ( Song and Wang, 2020 ). Studies have interpreted artificial intelligence as a system of rational thinking, rational behavior, or both ( Kok et al., 2009 ).
Many related technologies have been integrated into artificial intelligence to simulate human thought processes and intelligent behavior, such as neural networks, expert systems, deep learning, symbolic machine learning, speech recognition, image recognition, natural language processing and statistical analysis, or others that can be classified as artificial intelligence technologies ( Lu and Yang, 2018 ). Artificial intelligence has made considerable progress recently and has been extensively applied in various fields around the world, bringing outstanding value and possessing great potential, such as practical application in medical-related fields ( He et al., 2019 ), augmented online language learning ( Li and Wang, 2020 ), scientific research based on the neural network model ( Iyanda et al., 2018 ), personality and affective differences in the psychology sector ( Latham et al., 2012 ) and basic discipline education in mathematical science ( Kok et al., 2009 ; Colchester et al., 2017 ).
Since 2012, artificial intelligence research has been expanding (Hinton et al., 2012; Krizhevsky et al., 2012). Due to its advantages over geographic and time constraints, this technology is considered a necessary resource for big data analysis (Song et al., 2020). Particularly in the field of education, a high-frequency artificial intelligence–related knowledge association has emerged, aiming to improve the efficiency of educational communication and to provide assessment and teaching systems sensitive to individual differences (Tang et al., 2021). Therefore, large-scale applications of artificial intelligence combined with teaching have emerged, providing comprehensive learning platforms with coherent learning sequences and formative assessment (Vílchez-Román et al., 2020). Based on data mining technology (Hu et al., 2014), early-warning prediction of students' academic performance has been conducted. Further, based on the holistic multi-dimensional instructional design model, an adaptive learning environment for dynamic curriculum design is realized, and based on ontology and sequential pattern mining, the problems of cold starts and sparse ratings are solved to generate a knowledge recommendation system with final suggestions for target learners (Tarus et al., 2017). Artificial intelligence combined with online learning enables students with different knowledge levels, personalities and emotions to receive customized education courses. Simultaneously, it has proved useful in the areas of employee training, knowledge updating, professional development and skills training (Kavitha and Lohani, 2019).
Bibliometric Analysis Using Knowledge Mapping
Knowledge mapping is a new development of scientometrics and informetrics that is based on science and involves the interdisciplinary application of applied mathematics, information science and computer science (Chen, 2006, 2017). In recent years, an increasing number of researchers have been using various knowledge mapping tools to analyze the development trends and evolution processes of several disciplines. CiteSpace and VOSviewer are two major visualization tools in bibliometric research and other graphic science or scientific discipline research, both of which can import data directly from Web of Science and other bibliometric databases to generate visualizations (Song et al., 2020). Comparing the two, VOSviewer is more accurate in its clustering algorithms, while CiteSpace is better at presenting evolution and has the advantages of attractive images and easier interpretation. Therefore, this study uses the VOSviewer software to analyze citations, co-citations and the most commonly used author keywords in articles (Hwang and Tu, 2021). VOSviewer is a computer program that can construct and view bibliometric maps.
The software, developed by van Eck and Waltman (2010), is mainly used to construct literature maps and co-author relationships and to conduct co-citation analysis and literature coupling analysis. Unlike most computer programs used for bibliometrics, VOSviewer pays particular attention to the graphical representation of bibliometric data and plays a prominent role in the field of large bibliometric maps due to its ease of interpretation (van Eck and Waltman, 2010). More importantly, VOSviewer can conduct text mining and can construct a network of important terms in the literature (Flis and van Eck, 2018), playing a particularly valuable role in exploring literature reviews according to time sequence. Thus, it has been extensively used in bibliometric reviews in various fields over the past decades: research of scientific software by a link-based approach (Orduna-Malea and Costas, 2021), visual analysis of Catholic health maintenance studies (Ivanitskaya et al., 2021), the development trend of e-waste research (Corsini et al., 2012), the development of European political science (Mas-Verdu et al., 2021), trend analysis of financial inclusion (Gálvez-Sánchez et al., 2021), exploration of the decision-making process of higher education institutions using visual analysis technology (Vílchez-Román et al., 2020) and trend analysis of library and information science (Song et al., 2020). Based on the results of previous studies, it can be understood that most studies use the VOSviewer tool to explore relevant issues in co-citation and coupling analyses in the literature (see Table 1).

Table 1. Research topics on bibliometric research.
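The kind of term co-occurrence network that tools such as VOSviewer construct can be illustrated with a minimal sketch: for each article, every pair of its keywords is counted as one co-occurrence. This is not VOSviewer's actual implementation, and the sample records below are invented for illustration.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(records):
    """Count how often each pair of keywords appears in the same article."""
    pairs = Counter()
    for keywords in records:
        # Sort so ("ai", "e-learning") and ("e-learning", "ai") are one key.
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

records = [
    ["artificial intelligence", "e-learning", "deep learning"],
    ["artificial intelligence", "e-learning"],
    ["e-learning", "intelligent tutoring systems"],
]
counts = cooccurrence_counts(records)
print(counts[("artificial intelligence", "e-learning")])  # 2
```

In a real analysis, the keyword lists would come from the author-keyword field of the exported WOS records, and the resulting pair counts become the edge weights of the co-occurrence map.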
Research Methodology
All articles in this study are from the Science Citation Index (SCI) and Social Science Citation Index (SSCI) databases, obtained from the Web of Science (WOS) platform created by the Institute for Scientific Information, which provides high-quality literature data sets and is often used in scientometric research (Su et al., 2020; Vílchez-Román et al., 2020; Tang et al., 2021). Therefore, this study selected the WOS database platform as the data source for analysis. In selecting keywords, the keywords used by Zawacki-Richter et al. (2019), Hwang and Tu (2021), and Tang et al. (2021) were employed as the basis of this study's analysis. To ensure the rationality of the keyword search, further discussions were conducted with three professors who have published articles in educational technology–related journals, and the standards for the keyword search were determined following their suggestions. Subsequently, according to the determined keyword combinations, we searched publications indexed in SSCI and SCI in the Web of Science database on 31 July 2021 using two keyword groups, with the retrieval format TS = [("Artificial Intelligence" OR "Machine Intelligence" OR "Machine Learning" OR "Deep Learning") AND ("E-learning" OR "Web Learning" OR "Online Learning")] and a retrieval window mainly from 1 January 2010 to 30 July 2021. Eventually, 137 papers were obtained. However, problems remained in this initial set: some papers mentioned the topics or keywords only in passing, in a one-word description, and had no direct correlation with them. Therefore, subsequent data cleaning was necessary to improve the quality of the samples and the credibility of the bibliometric analysis results (Jia et al., 2014). Since this cleaning could not be done through keywords or other software, it had to be done manually.
Given this, we referred to the practice of Wang and Ngai (2020) and conducted a manual cleaning of the collected samples. Two professors engaged in educational technology and information systems independently reviewed the abstract of each sample and determined, through cross-confirmation, whether its content followed the topic of this study. If it did not, the sample was deleted. Thus, the final number of articles was 64, and the basic data filtering process is shown in Figure 1.

Figure 1. Article selection process for bibliometric mapping analysis.
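The boolean TS query above can also be approximated locally, for instance when screening additional records exported from another source. The sketch below is a naive substring match on lowercased text; it is only an illustration of the query's AND/OR logic, not of WOS retrieval, which also applies stemming and phrase rules.

```python
# Term lists follow the study's TS query.
AI_TERMS = ["artificial intelligence", "machine intelligence",
            "machine learning", "deep learning"]
EL_TERMS = ["e-learning", "web learning", "online learning"]

def matches_ts_query(text):
    """(any AI term) AND (any e-learning term), case-insensitive."""
    t = text.lower()
    return (any(term in t for term in AI_TERMS)
            and any(term in t for term in EL_TERMS))

papers = [
    "Deep learning approaches for adaptive e-learning platforms",
    "Machine learning in medical imaging",
    "Online learning during COVID-19: a survey",
]
print([matches_ts_query(p) for p in papers])  # [True, False, False]
```

Note that exactly this conjunctive logic is what produced the false positives described above: a paper can contain both term groups yet be off-topic, which is why the manual cleaning step was still needed.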
Co-citation Network Analysis
Co-citation is an analysis method first proposed by Small (1973) and refers to two pieces of literature being cited by a third piece of literature simultaneously. Citation network analysis is a combination of literature co-citation analysis and social network analysis. Core papers in a field can be identified by analyzing the citation relationships between the literature (Tang et al., 2021). VOSviewer can count such shared entries by comparing the reference lists of documents.
Presently, numerous bibliometric studies have used the citation network analysis method. For example, Liang and Liu (2018) used citation network analysis to identify core and important literature on business intelligence and big data analytics between 1990 and 2017. Through citation network analysis, Zhang et al. (2020) unearthed influential literature in the field of tourism demand prediction and analyzed the evolution of this field according to publication time and research methods. Wang and Ngai (2020) identified publishing trends, relevant technologies, influential publications and knowledge clusters in the field of business research methods through three stages of analysis: initial sample analysis, citation analysis and co-citation analysis. The accumulation and analysis of this large body of literature on business research methods has helped to transfer useful techniques amongst disciplines and determine the direction of future research. Therefore, through co-citation network analysis, researchers can quickly discover the knowledge structure of artificial intelligence combined with online learning and future research trends. For the results of the network analysis, descriptive tables and visual maps were used in this study, referring to the previous practices of Kocak et al. (2019), Mou et al. (2019), and Wu et al. (2021). In the tables, frequency or publication count represents the weight of each node (i.e., author, institution, country, journal, and paper), while centrality represents the connectivity of each node; for example, a node with high centrality acts as a key point linking two or more groups that display transition patterns. In the maps, the nodes show the analysis items (i.e., author, journal, reference, etc.), and the sizes of the nodes show the co-occurrence frequency of the items. The thickness and color of the nodal rings indicate the strength of an item's co-occurrence over time. The lines between nodes represent connection relationships, the thickness of a line represents the co-citation frequency, and the color represents the strength of the co-citation relationship between the nodes. Therefore, a cluster view is adopted in this study to show the evolution of knowledge over time and the collaboration between nodes.
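Small's definition of co-citation translates directly into a counting procedure: two references are co-cited once for every paper whose reference list contains both. A minimal sketch, with invented citing papers and reference keys:

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(citing_papers):
    """Count co-citations: two refs are co-cited each time one paper cites both."""
    cocited = Counter()
    for refs in citing_papers.values():
        # Sort so each unordered reference pair maps to a single key.
        for a, b in combinations(sorted(set(refs)), 2):
            cocited[(a, b)] += 1
    return cocited

citing = {
    "P1": ["Small1973", "Hinton2012"],
    "P2": ["Small1973", "Hinton2012", "Tang2021"],
    "P3": ["Hinton2012", "Tang2021"],
}
cc = cocitation_counts(citing)
print(cc[("Hinton2012", "Small1973")])  # 2
```

The resulting pair counts are exactly the edge weights ("co-citation frequency") that the map's line thicknesses encode.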
Publication Trends
As the development trend of a topic is visible from the number of papers published, it can be seen that in the early years of the 2010–2021 period, there was at most one paper published per year under this topic, indicating that this field was in an embryonic stage. By 2013, five papers had been published in this field, a large increase compared to the previous three years. Hinton proposed a solution for deep network training in the journal Science in 2006 (Hinton and Salakhutdinov, 2006); however, it was not until Hinton's team used AlexNet, a convolutional neural network, to win the ImageNet image recognition competition in 2012 that the potential of deep learning was proved to the world and attracted widespread attention from the academic community (Hinton et al., 2012). It was also from this moment that research on artificial intelligence entered a period of explosion and urged many research fields, such as the field of education, to develop high-frequency knowledge associations related to artificial intelligence. Since then, the number of articles published in this field has remained stable and, beginning in 2019, has increased significantly, given that the number of articles published before 2020 was only 51. Furthermore, in only the first half of 2021, the number of published papers reached 13, which reflects the research frontier and development trend in this field (see Figure 2).

Figure 2. Data distribution of AIEL articles from 2010–2021.
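A publication-trend chart like Figure 2 is just a per-year frequency count of the sample. A tiny sketch with invented years (not the study's actual distribution):

```python
from collections import Counter

# Toy publication years, for illustration only.
years = [2010, 2013, 2013, 2013, 2013, 2013, 2019, 2019, 2020, 2020, 2021, 2021]
per_year = Counter(years)
for year in sorted(per_year):
    # Simple text bar chart: one '#' per paper in that year.
    print(f"{year}: {'#' * per_year[year]} ({per_year[year]})")
```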
Data Collection
Table 2 shows the journals with at least two publications under this topic, amongst which Expert Systems with Applications, Computers in Human Behavior, IEEE Access and the Journal of Intelligent and Fuzzy Systems tied for first place in the number of publications; these journals focus on computer science, information systems, information education, psychology and communication technology. Amongst the literature retrieved in this study, articles published in Expert Systems with Applications mainly appeared before 2015, and most concern designs of new models or algorithms for developing new learning systems. For example, Ben Ammar et al. (2010) is one of the first papers that qualified for retrieval; it proposed an intelligent tutoring system that uses affective computing to recognize people's facial expressions, thus monitoring students' behaviors while learning. This article is also the journal's most-cited article on artificial intelligence integrated with teaching (80 citations). In 2015, Dias et al. (2015) arranged for college professors and students to use Learning Management Systems (LMSs) in a blended learning environment for an academic year; they then explored effective learning methods through fuzzy analysis and helped the professors and students improve their learning efficiency. Computers in Human Behavior also published four articles, mainly between 2014 and 2016. For example, Kurilovas et al. (2014, 2015) used a recommendation system to evaluate students' learning effects in order to improve the modules and design of online learning courses; these two studies were cited up to 81 times. Conversely, publications in IEEE Access appeared mainly in 2017, and their research mainly applies artificial intelligence to teaching situations. For example, Barlybayev et al. (2020) proposed an intelligent system to evaluate students' learning situations.
However, other literature explores the field of AIEL from other perspectives. Turchet et al. (2018) described and analyzed music e-learning education in the Internet of Things. Finally, the literature retrieved from the Journal of Intelligent and Fuzzy Systems mostly integrates online learning at the technical level. Amongst the four journals that published the most papers on combined artificial intelligence technology and teaching, the Journal of Intelligent and Fuzzy Systems published most of its papers in 2021, showing that this journal started to include a larger number of papers on artificial intelligence and teaching from 2021. For example, Xie and Mai (2021) integrated cloud computing and artificial intelligence in MOOC platforms and applied them to the cross-cultural teaching of college English. Aiyuan and Hui (2021) proposed a model for online English learning based on artificial intelligence technology, in which students' actual learning was analyzed through an improved deep network analysis method so that artificial intelligence could effectively improve students' learning efficiency and meet their demands.

Table 2. The top 10 productive journals from 2010 to 2021 (Documents ≥ 2).
Amongst the highly cited literature, Dwivedi et al.'s (2020) research (cited 123 times) was a highly cited work on the research and application of artificial intelligence and online learning during the pandemic; it collated the views of 12 scientists on the impact of COVID-19 on organizations and society, including information management and the use of artificial intelligence in online learning.
Author’s Cooperation Network
Scientific research authors are the main force of scientific research institutions, representing the research development direction of a subject area. In this study, the publications of authors in this field were counted. According to the statistical results in Table 3, the difference in the number of articles published between scholars is not obvious: the highest number of articles published by a single author is two, achieved by 12 authors and accounting for 2% of the total, while publishing only once accounts for 98%. This shows that there are few high-yield core authors in the field, and the vast majority of scholars are newcomers to "artificial intelligence and online education." Although no scholar has a high publication output in this field, the two papers published by Kurilovas et al. (2014, 2015) are both highly cited studies in the field of artificial intelligence and online education. The authors of these two papers mainly studied artificial intelligence, learning analytics and technology-enhanced learning. The academic backgrounds of the authors are mainly technology-enhanced learning, information systems and other computer-related majors, suggesting that research combining artificial intelligence and teaching is still directly related to the discipline and speciality of the researchers.

Table 3. Author’s cooperation network from 2010 to 2021 (Documents ≥ 2).
Countries and Institutions
A total of 17 countries conducted research on "artificial intelligence and e-learning" between 2010 and 2021, and 10 countries published at least three articles. China, followed by Spain, is the most productive country in this field, with 20.31% (13) of publications from Chinese authors and 209 citations in other papers, making China the most cited country; Spanish publications were cited 127 times in other literature (see Table 4).

Table 4. The top 10 productive countries from 2010 to 2021 (Documents ≥ 3).
Figure 3 comprises 12 nodes and seven links, where the nodes represent authors and the links represent cooperative relationships between authors. Almost all the authors who published two articles conducted their studies in cooperation with each other. Meanwhile, this study found that most of the research teams were small, and there was almost no cooperation between the groups.

Figure 3. Data distribution of AIEL co-authorship amongst countries from 2010–2021.
Roles of Artificial Intelligence in e-Learning
This study referred to the subject classifications of Hwang and Tu (2021) and Tang et al. (2021) in classifying AIEL, including "adaptive systems and personalization," "evaluation and assessment," "profiling and prediction," "intelligent tutoring systems" and "other." According to the summarized results, studies between 2010 and 2021 mainly discussed "evaluation and assessment" (23.44%) and "other" (23.44%), followed by "intelligent tutoring systems" (20.31%), "adaptive systems and personalization" (18.75%) and "profiling and prediction" (14.06%) (see Figure 4). In particular, "other" mainly covers analyzing students' study behavior intentions after constructing AIEL (Miao et al., 2020) or exploring the trends of application and development of AIEL using the interview method (Ally, 2019).

Figure 4. Roles of AIEL research.
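The reported proportions are consistent with integer counts over the 64 articles. The counts below are inferred from the percentages, as an assumption for illustration; they are not stated explicitly in the text:

```python
# Counts inferred from the reported percentages of 64 articles (assumption):
# 15 + 15 + 13 + 12 + 9 = 64.
counts = {
    "evaluation and assessment": 15,
    "other": 15,
    "intelligent tutoring systems": 13,
    "adaptive systems and personalization": 12,
    "profiling and prediction": 9,
}
total = sum(counts.values())
for role, n in counts.items():
    print(f"{role}: {100 * n / total:.2f}%")
# 15/64 -> 23.44%, 13/64 -> 20.31%, 12/64 -> 18.75%, 9/64 -> 14.06%
```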
Keyword Network Analysis
Keywords can intuitively reflect the core research content and theme of a discipline. In this paper, keywords are clustered according to the density of correlation between them through a modularity algorithm, and the relevant results are shown in Table 5 and Figure 5.

Table 5. Cluster terms.

Figure 5. Analysis of keyword co-occurrence from 2010–2021.
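The clustering step above can be sketched in simplified form. VOSviewer's actual algorithm optimizes a modularity-like quality function over the weighted co-occurrence network; the sketch below, with invented edge weights, instead just takes connected components of strongly co-occurring keyword pairs, which conveys the idea of grouping by association density without reproducing the real method.

```python
from collections import defaultdict

def cluster_keywords(edges, threshold=2):
    """Group keywords whose pairwise co-occurrence meets the threshold.

    Simplified stand-in for modularity clustering: keep only strong edges,
    then return the connected components of the remaining graph.
    """
    graph = defaultdict(set)
    for (a, b), weight in edges.items():
        if weight >= threshold:
            graph[a].add(b)
            graph[b].add(a)
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:  # iterative depth-first traversal of one component
            cur = stack.pop()
            if cur in component:
                continue
            component.add(cur)
            stack.extend(graph[cur] - component)
        seen |= component
        clusters.append(sorted(component))
    return clusters

# Invented co-occurrence weights between keywords.
edges = {("ai", "e-learning"): 5, ("e-learning", "mooc"): 3,
         ("fuzzy logic", "tutoring"): 4, ("ai", "tutoring"): 1}
print(cluster_keywords(edges))
```

With these weights, the weak "ai"/"tutoring" edge is dropped, leaving two clusters: one around e-learning and one around tutoring, loosely analogous to the topic clusters discussed below.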
Cluster 1 reflects the hot vocabulary in the field of learning systems—including artificial intelligence, education, online learning, blended learning, deep learning, augmented reality, expert systems, intelligent systems and ontologies—showing that the papers in Cluster 1 focus on the use of computers and artificial intelligence, the development of more sophisticated learning systems and the use of new technologies in the teaching environment. For example, augmented reality technology is used to combine online and offline teaching to facilitate access to more comprehensive learning materials or enhance the teaching and learning experience (Ally, 2019). Meanwhile, Aiyuan and Hui (2021) adopted artificial intelligence in online English learning and used an improved network analysis method to evaluate the time and mode of students' online learning, thus realizing effective application in actual intelligent teaching. Therefore, in an environment combining artificial intelligence and teaching, using new technologies to improve teaching is reasonable and a vital part of enhancing teaching efficiency.
Cluster 2 reflects the key terms in the field of performance evaluation—including system, adaptive e-learning, data mining, emotions, environment, fuzzy logic, intelligent tutoring systems, learning styles, personality and technology. This literature uses artificial intelligence to evaluate students' learning, drawing on students' emotions and other personality factors to make online learning systems more adaptive ( Alsobhi and Alyoubi, 2019 ). The conclusion drawn from Cluster 2 is that using artificial intelligence to evaluate learning, together with adaptive e-learning, can improve students' learning efficiency and make online learning more helpful.
Cluster 3 reflects the key terms in the field of artificial intelligence combined with online education—including educational data mining, social networks, systems, computer programming, educational games, learning analytics, management, online and tutoring systems. Among them, educational data mining, systems, educational games and tutoring systems reflect that the core research orientation in this field is the construction of a complete educational system. Through educational data mining ( Santos and Boticario, 2015 ; Yang et al., 2021 ), learning analytics ( Franzoni et al., 2020 ) and other data collection and analysis methods, individual students' behavioral patterns and emotional differences can be analyzed to improve the management capability of the system. Following suggestions in the literature on multi-dimensional, holistic research directions, social networks can be supported by computer programming and educational artificial intelligence games to promote the application of artificial intelligence in online education ( Kuk et al., 2017 ; Petit et al., 2018 ). The combination of educational games and artificial intelligence is therefore a vital approach for studies on using artificial intelligence in teaching.
Cluster 4 also reflects key terms in the field of artificial intelligence combined with online education—including model, students, knowledge, behavior, engagement, genetic algorithm, intelligent tutoring system, optimization and particle swarm optimization. Among them, students, knowledge, behavior and engagement place the learner at the center of the education system being constructed, covering (but not limited to) behavioral patterns, teaching engagement and knowledge. In addition, the concept of artificial intelligence is strengthened and highlighted, and the concept of personalized customization is developed and extended. Particularly at the technical level, the combined application of models, genetic algorithms, optimization, particle swarm optimization and other techniques pushes education systems in a more intelligent direction ( Chang and Ke, 2013 ; Kuk et al., 2017 ), turning the "tutoring systems" of Cluster 3 into the "intelligent tutoring systems" seen here. This indicates that improving intelligent tutoring systems through algorithmic advances in teaching and research is another important approach for the sustainable development of artificial intelligence in the future.
Cluster 5 reflects the key terms of a theme centered on intelligent learning environments—including e-learning, design, Second Life, instruction, pedagogy, SLOODLE and teaching model. The literature here focuses on developing an intelligent learning environment, proposing learning environments built in Second Life or OpenSimulator combined with the Moodle learning management system ( Crespo et al., 2013 ). An interesting issue raised by Cluster 5 is that studies in that period tended to enhance students' learning efficiency by combining simulated environments with LMSs. These were early studies on using artificial intelligence in teaching; since 2015, such studies have slowed down or disappeared, possibly because research approaches to and trends in the use of artificial intelligence have shifted.
Cluster 6 reflects the key terms of a theme centered on learning-path recommendation—including swarm intelligence, ant colony optimization, algorithm, ant colony system, collaborative learning, learner behavior, learning objects and learning paths—which mainly reflect new methods of recommending appropriate learning paths for different learner groups. Through ant colony optimization, ant colony systems and other algorithms, e-learning modules and courses can be monitored and improved according to learners' behaviors ( Kurilovas et al., 2014 ). The conclusion drawn from Cluster 6 is that algorithmic improvements enable effective recommendation of learning paths. These studies were conducted around 2015; afterward, learning recommendation systems were no longer the topic of relevant studies, and research in this area slowed down.
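As an illustration of the Cluster 6 approach, the sketch below recommends a learning path with a basic ant colony optimization loop: ants sample paths guided by pheromone and a suitability heuristic, and pheromone is reinforced along the best path found. All object names and suitability scores are hypothetical, and the studies cited above (e.g., Kurilovas et al., 2014) use more elaborate swarm-based methods.

```python
import random

# Hypothetical suitability scores (0-1) of learning objects for one learner
# group, arranged as stages; a learning path picks one object per stage.
stages = [
    {"video_intro": 0.9, "text_intro": 0.6},
    {"quiz_basic": 0.7, "game_basic": 0.8},
    {"project": 0.85, "essay": 0.5},
]

def aco_best_path(stages, n_ants=20, n_iters=30, evaporation=0.3, seed=42):
    rng = random.Random(seed)
    # One pheromone value per (stage, learning object), initialized uniformly.
    pher = [{obj: 1.0 for obj in stage} for stage in stages]
    best_path, best_score = None, -1.0
    for _ in range(n_iters):
        for _ant in range(n_ants):
            path, score = [], 0.0
            for stage, tau in zip(stages, pher):
                objs = list(stage)
                # Choice probability combines pheromone with the suitability
                # heuristic, the standard ACO selection rule.
                weights = [tau[o] * stage[o] for o in objs]
                choice = rng.choices(objs, weights=weights)[0]
                path.append(choice)
                score += stage[choice]
            if score > best_score:
                best_path, best_score = path, score
        # Evaporate everywhere, then deposit along the best path so far.
        for i, tau in enumerate(pher):
            for o in tau:
                tau[o] *= 1 - evaporation
            tau[best_path[i]] += best_score
    return best_path, best_score

path, score = aco_best_path(stages)
print(path, round(score, 2))
```

With the scores above, the loop converges on the highest-suitability object at each stage, which is the sense in which such algorithms "recommend appropriate learning paths" for a learner group.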
Cluster 7 reflects the key terms of a theme centered on learning performance—including performance, achievement, analytics and big data—which mainly discuss improving traditional algorithms for teaching applications, proposing new improved models and developing automatic methods to detect patterns in large amounts of educational data and to estimate unknown information and behaviors about students ( Kose and Arslan, 2016 ; Miao et al., 2020 ). In particular, the studies in Cluster 7 mainly analyzed students' learning results and efficiency through intelligent e-learning systems and verified the results through statistical analysis.
Keyword Evolution Analysis
The evolution of AIEL is shown in Figure 6 . From 2010 to 2015, applications of AI mainly focused on recommending suitable learning paths for learners and improving their learning benefits through intelligent learning ( Kurilovas et al., 2015 ). However, research from 2016 to 2021 focused on developing new technological applications to assess individual students' learning status and to provide them with helpful learning methods (e.g., statistical learning, data mining and decision trees) ( Samarakou et al., 2018 ; Kavitha and Lohani, 2019 ; Franzoni et al., 2020 ). In contrast to the recommendation-oriented applications before 2015, research since 2016 has also focused on applying artificial intelligence to integrate teaching, adopting optimized methods to improve online teaching, assist students in learning, and evaluate and optimize learning effects ( Barlybayev et al., 2020 ; Xie and Mai, 2021 ).

Figure 6. Keyword evolution analysis.
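The period comparison behind this evolution analysis amounts to tallying keyword frequencies within each time window. A minimal sketch with invented (year, keywords) records, not the study's actual data:

```python
from collections import Counter

# Hypothetical records; the split mirrors the 2010-2015 vs. 2016-2021
# periods discussed above.
records = [
    (2012, ["learning paths", "ant colony optimization"]),
    (2014, ["recommendation", "learning paths"]),
    (2017, ["data mining", "decision trees"]),
    (2019, ["data mining", "deep learning"]),
    (2020, ["learning analytics", "data mining"]),
]

early = Counter(k for y, kws in records if y <= 2015 for k in kws)
late = Counter(k for y, kws in records if y >= 2016 for k in kws)

print("2010-2015 top terms:", early.most_common(2))
print("2016-2021 top terms:", late.most_common(2))
```

Comparing the two frequency tables surfaces the shift the text describes, from path-recommendation vocabulary toward assessment-oriented terms such as data mining.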
Discussion and Conclusion
The study analyzed 64 articles on AIEL published in the WOS database between 2010 and 2021. Several studies have shown that artificial intelligence technology has great potential for promoting students' learning performance and higher-level thinking ( Voskoglou and Salem, 2020 ). In addition, using artificial intelligence to diagnose students' learning problems can not only provide immediate feedback to individual students but also give teachers information that helps them improve learning designs ( Hung et al., 2014 ; Bywater et al., 2019 ). From the analysis results, the conclusions and impacts are as follows:
Most AIEL studies were published in Expert Systems with Applications, Computers in Human Behavior, IEEE Access, and Journal of Intelligent and Fuzzy Systems. Before 2016, the topics mainly focused on using recommendation systems to evaluate students' learning effects and, on that basis, proposing new online learning course modules ( Kurilovas et al., 2014 , 2015 ; Stantchev et al., 2015 ). However, since 2017, the research focus has been on the influence of artificial intelligence applications on teaching ( Jalal and Mahmood, 2019 ; Kavitha and Lohani, 2019 ; Franzoni et al., 2020 ). In addition, the top three most-cited journals (co-citation analysis) are Expert Systems with Applications, Computers in Human Behavior, and Journal of Network and Computer Applications. In other words, more researchers in education and educational technology are engaged in AIEL research, especially in the application of artificial intelligence technology in the teaching environment.
From the results of cluster analysis on user keywords, the AIEL cluster analysis showed that the three most important clusters are “behavioral analysis of artificial intelligence combined with online education,” “learning effect evaluation,” and “using artificial intelligence technology to develop and build a perfect learning system.” Furthermore, intelligent systems to assess student learning ( Barlybayev et al., 2020 ; Aiyuan and Hui, 2021 ; Yang et al., 2021 ) may be a good reference direction for future research.
The most common roles of artificial intelligence in online learning are "e-learning" and "adaptive e-learning," followed by "educational data mining" and "engagement and behavior," which is consistent with the findings on research issues—that is, evaluating student performance through artificial intelligence is the main focus of AIEL research. In contrast, research on evaluating students' learning problems in order to provide immediate support, improve learning effectiveness and performance, and effectively evaluate follow-up learning is relatively rare. It should therefore be a direction for future research.
Most studies employ traditional routine learning methods or interviews with experts in this field, while modern artificial intelligence methods, such as deep learning, are rarely adopted. This may be because these AIEL studies have focused on developing new technology applications to assess the learning status of individual students and help them. This goal is associated with the characteristics of traditional routine learning methods (e.g., statistical learning, recommendation systems, data mining, and decision trees) and knowledge acquisition methods by interviewing experts in the field. That is, knowledge in the field is explicitly expressed and used to make decisions or predictions ( Samarakou et al., 2018 ; Kavitha and Lohani, 2019 ; Franzoni et al., 2020 ).
Most AIEL studies have investigated the application of artificial intelligence technologies to integrate teaching and the use of optimized methods to improve the teaching methods of online learning to assist students in learning, as well as to evaluate and optimize learning outcomes ( Barlybayev et al., 2020 ; Xie and Mai, 2021 ). However, there is limited discussion of the rationale for, and development of, artificial intelligence for accumulating skills. This, too, should be a direction for future teaching research.
Finally, regarding the research topic, studies in the AIEL field are growing. Most researchers have focused on five directions: adaptive systems and personalization, profiling and prediction, evaluation and assessment, intelligent tutoring systems and other. The results also indicate that research on AIEL is still in its growth period and thus needs more interdisciplinary work. Subsequent research could emphasize the analysis of students' behaviors, the evaluation of learning effectiveness, the application of artificial intelligence technology combined with online learning, and the integration of cross-technology applications (e.g., AR/VR-integrated teaching modes and the STEAM teaching mode).
This study has some limitations: the presented keyword nodes can only be interpreted qualitatively. From a quantitative perspective, the value represented by each node comes from the number of occurrences of the keyword in the entire literature data set. If the software could weight keyword occurrences by authorship order in each publication, the findings would be more persuasive.
Data Availability Statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
Author Contributions
KJ and YL designed the research and provided guidance throughout the entire research process. C-LL and ZC collected the references, did the literature analysis, and wrote the manuscript. XJ and PW helped with translation and offered modification suggestions. TC participated in the literature collection, analysis, and organization. All authors listed have made a substantial, direct, and intellectual contribution to the work, and approved it for publication.
Funding
This research was supported by the First Batch of Industry-University Collaborative Education Project of the Ministry of Education—"Social Practice Training Camp Plan Based on Science and Technology Innovation and Entrepreneurship Projects" (Grant No. 202002143051), the National Social Science Late Funding Project of China (Grant No. 20FXWB020), KC Wong Magna Fund in Ningbo University (Grant No. RC190015), and the China Postdoctoral Science Foundation (Grant No. 2016M60283).
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s Note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Aiyuan, L., and Hui, W. (2021). An artificial intelligence recognition model for English online teaching. J. Intell. Fuzzy Syst. 40, 3547–3558. doi: 10.3233/JIFS-189391
Ally, M. (2019). Competency profile of the digital and online teacher in future education. Int. Rev. Res. Open Distrib. Learn. 20, 302–318. doi: 10.19173/irrodl.v20i2.4206
Alsobhi, A. Y., and Alyoubi, K. H. (2019). Adaptation algorithms for selecting personalised learning experience based on learning style and dyslexia type. Data Technol. Appl. 53, 189–200. doi: 10.1108/dta-10-2018-0092
Barlybayev, A., Kaderkeyeva, Z., Bekmanova, G., Sharipbay, A., Omarbekova, A., and Altynbek, S. (2020). Intelligent system for evaluating the level of formation of professional competencies of students. IEEE Access 8, 58829–58835. doi: 10.1109/ACCESS.2020.2979277
Ben Ammar, M., Neji, M., Alimi, A. M., and Gouarderes, G. (2010). The affective tutoring system. Exp. Syst. Appl. 37, 3013–3023. doi: 10.1016/j.eswa.2009.09.031
Bywater, J. P., Chiu, J. L., Hong, J., and Sankaranarayanan, V. (2019). The teacher responding tool: scaffolding the teacher practice of responding to student ideas in mathematics classrooms. Comput. Educ. 139, 16–30.
Caputi, V., and Garrido, A. (2015). Student-oriented planning of e-learning contents for Moodle. J. Netw. Comput. Appl. 53, 115–127.
Chang, T. Y., and Ke, Y. R. (2013). A personalized e-course composition based on a genetic algorithm with forcing legality in an adaptive learning system. J. Netw. Comput. Appl. 36, 533–542. doi: 10.1016/j.jnca.2012.04.002
Chan-Olmsted, S. M. (2019). A review of artificial intelligence adoptions in the media industry. Int. J. Media Manage. 21, 193–215. doi: 10.1080/14241277.2019.1695619
Chen, C. (2017). Science mapping: a systematic review of the literature. J. Data Inf. Sci. 2, 1–40. doi: 10.1515/jdis-2017-0006
Chen, C. M. (2006). CiteSpace II: detecting and visualizing emerging trends and transient patterns in scientific literature. J. Am. Soc. Inf. Sci. Technol. 57, 359–377. doi: 10.1002/asi.20317
Colares, G. S., Dell’Osbel, N., Wiesel, P. G., Oliveira, G. A., Lemos, P. H. Z., da Silva, F. P., et al. (2020). Floating treatment wetlands: a review and bibliometric analysis. Sci. Total Environ. 714:136776. doi: 10.1016/j.scitotenv.2020.136776
Conde, A., Arruarte, A., Larrañaga, M., and Elorriaga, J. A. (2020). How can wikipedia be used to support the process of automatically building multilingual domain modules? a case study. Inf. Process. Manag. 57:102232.
Colchester, K., Hagras, H., Alghazzawi, D., and Aldabbagh, G. (2017). A survey of artificial intelligence techniques employed for adaptive educational systems within e-learning platforms. J. Artif. Intell. Soft Comput. Res. 7, 47–64. doi: 10.1515/jaiscr-2017-0004
Corsini, F., Frey, M., and Rizzi, F. (2012). “Recent trends in E-waste research A bibliometric map approach,” in World Congress on Sustainable Technologies (London: IEEE) 1, 95–98.
Crespo, R. G., Escobar, R. F., Aguilar, L. J., Velazco, S., and Sanz, A. G. C. (2013). Use of ARIMA mathematical analysis to model the implementation of expert system courses by means of free software OpenSim and Sloodle platforms in virtual university campuses. Expert Syst. Appl. 40, 7381–7390. doi: 10.1016/j.eswa.2013.06.054
Dias, S. B., Hadjileontiadou, S. J., Hadjileontiadis, L. J., and Diniz, J. A. (2015). Fuzzy cognitive mapping of LMS users’ quality of interaction within higher education blended-learning environment. Expert Syst. Appl. 42, 7399–7423. doi: 10.1016/j.eswa.2015.05.048
Donthu, N., Kumar, S., and Pattnaik, D. (2020). Forty-five years of journal of business research: a bibliometric analysis. J. Bus. Res. 109, 1–14.
Duque-Acevedo, M., Belmonte-Urena, L. J., Cortés-García, F. J., and Camacho-Ferre, F. (2020). Agricultural waste: review of the evolution, approaches and perspectives on alternative uses. Glob. Ecol. Conserv. 22:e00902.
Dwivedi, Y. K., Hughes, D. L., Coombs, C., Constantiou, I., Duan, Y., Edwards, J. S., et al. (2020). Impact of COVID-19 pandemic on information management research and practice: transforming education, work and life. Int. J. Inf. Manag. 55:102211. doi: 10.1016/j.ijinfomgt.2020.102211
Faust, O., Hagiwara, Y., Hong, T. J., Lih, O. S., and Acharya, U. R. (2018). Deep learning for healthcare applications based on physiological signals: a review. Comput. Methods Programs Biomed. 161, 1–13. doi: 10.1016/j.cmpb.2018.04.005
Ferasso, M., Beliaeva, T., Kraus, S., Clauss, T., and Ribeiro-Soriano, D. (2020). Circular economy business models: the state of research and avenues ahead. Bus. Strategy Environ. 29, 3006–3024.
Flis, I., and van Eck, N. J. (2018). Framing psychology as a discipline (1950-1999): a large-scale term co-occurrence analysis of scientific literature in psychology. Hist. Psychol. 21, 334–362. doi: 10.1037/hop0000067
Franzoni, V., Milani, A., Mengoni, P., and Piccinato, F. (2020). Artificial intelligence visual metaphors in E-learning interfaces for learning analytics. Appl. Sci. 10:7195. doi: 10.3390/app10207195
Gálvez-Sánchez, F. J., Lara-Rubio, J., Verdu-Jover, A. J., and Meseguer-Sanchez, V. (2021). Research advances on financial inclusion: a bibliometric analysis. Sustainability 13:3156. doi: 10.3390/su13063156
García, P., Amandi, A., Schiaffino, S., and Campo, M. (2007). Evaluating bayesian networks’ precision for detecting students’ learning styles. Comput. Educ. 49, 794–808. doi: 10.1016/j.compedu.2005.11.017
George, G., and Lal, A. M. (2019). Review of ontology-based recommender systems in e-learning. Comput. Educ. 142:103642. doi: 10.1016/j.compedu.2019.103642
Gonçalves, M. C. P., Kieckbusch, T. G., Perna, R. F., Fujimoto, J. T., Morales, S. A. V., and Romanelli, J. P. (2019). Trends on enzyme immobilization researches based on bibliometric analysis. Process Biochem. 76, 95–110.
Hallinger, P., and Kovačević, J. (2019). A bibliometric review of research on educational administration: science mapping the literature, 1960 to 2018. Rev. Educ. Res. 89, 335–369. doi: 10.3102/0034654319830380
He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., and Zhang, K. (2019). The practical implementation of artificial intelligence technologies in medicine. Nat. Med. 25, 30–36. doi: 10.1038/s41591-018-0307-0
Hinton, G. E., and Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science 313, 504–507. doi: 10.1126/science.1127647
Hinton, G. E., Srivastava, N., Krizhevsky, A., Sutskever, I., and Salakhutdinov, R. R. (2012). Improving neural networks by preventing co-adaptation of feature detectors. arXiv [Preprint] Available Online at: https://arxiv.org/abs/1207.0580 (accessed October 1, 2021).
Huang, F., Teo, T., and Zhou, M. (2020). Chinese students’ intentions to use the internet-based technology for learning. Educ. Tech. Rese. Deve. 68, 575–591. doi: 10.1007/s11423-019-09695-y
Hung, I. C., Yang, X. J., Fang, W. C., Hwang, G. J., and Chen, N. S. (2014). A context-aware video prompt approach to improving students’ in-field reflection levels. Comput. Educ. 70, 80–91.
Hu, N., Zheng, X., Tan, L., Li, X., and Shan, J. (2014). Development of an e-learning environment for anatomical female pelvic region. Gineco.ro 8, 90–93.
Hwang, G. J. (2014). Definition, framework and research issues of smart learning environments-a context-aware ubiquitous learning perspective. Smart Learn. Environ. 1, 1–14. doi: 10.1080/10494820.2019.1703012
Hwang, G.-J., and Tu, Y.-F. (2021). Roles and research trends of artificial intelligence in mathematics education: a bibliometric mapping analysis and systematic review. Mathematics 9:584. doi: 10.3390/math9060584
Ivanitskaya, L. V., Bjork, A. E., and Taylor, M. R. (2021). Bibliometric analysis and visualization of catholic health care research: 1973-2019. J. Religion Health. 60, 3759–3774. doi: 10.1007/s10943-021-01255-0
Iyanda, A. R., Ninan, O. D., Ajayi, A. O., and Anyabolu, O. G. (2018). Predicting student academic performance in computer science courses: a comparison of neural network models. Int. J. Mod. Educ. Comput. Sci. 10, 1–9. doi: 10.1007/s42979-021-00944-7
Jalal, A., and Mahmood, M. (2019). Students’ behavior mining in e-learning environment using cognitive processes with information technologies. Educ. Inf. Technol. 24, 2797–2821. doi: 10.1007/s10639-019-09892-5
Jia, X., Dai, T., and Guo, X. (2014). Comprehensive exploration of urban health by bibliometric analysis: 35 years and 11,299 articles. Scientometrics 99, 881–894. doi: 10.1007/s11192-013-1220-4
Jin, Y. Q., Lin, C. L., Zhao, Q., Yu, S. W., and Su, Y. S. (2021). A study on traditional teaching method transferring to e-learning under the Covid-19 pandemic: from chinese students’ perspectives. Front. Psychol. 12:632787. doi: 10.3389/fpsyg.2021.632787
Katz, Y. J. (2000). The comparative suitability of three ICT distance learning methodologies for college level instruction. Educ. Media Int. 37, 25–30. doi: 10.1080/095239800361482
Katz, Y. J. (2002). Attitudes affecting college students’ preferences for distance learning. J. Comput. Assist. Learn. 18, 2–9. doi: 10.1046/j.0266-4909.2001.00202.x
Kavitha, V., and Lohani, R. (2019). A critical study on the use of artificial intelligence, e-Learning technology and tools to enhance the learners experience. Cluster Comput. 22, S6985–S6989. doi: 10.1007/s10586-018-2017-2
Khumrin, P., Ryan, A., Judd, T., and Verspoor, K. (2017). Diagnostic machine learning models for acute abdominal pain: towards an e-learning tool for medical students. Stud. Health Technol. Inf. 245, 447–451.
Kocak, M., García-Zorita, C., Marugan-Lazaro, S., Çakır, M. P., and Sanz-Casado, E. (2019). Mapping and clustering analysis on neuroscience literature in Turkey: a bibliometric analysis from 2000 to 2017. Scientometrics 121, 1339–1366.
Kok, J. N., Boers, E. J., Kosters, W. A., Van der Putten, P., and Poel, M. (2009). Artificial intelligence: definition, trends, techniques, and cases. Artif. Intell. 1, 270–299.
Kose, U., and Arslan, A. (2016). Intelligent e-learning system for improving students’ academic achievements in computer programming courses. Int. J. Eng. Educ. 32, 185–198.
Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1097–1105.
Kuk, K., Milentijević, I. Z., Ranđelović, D., Popović, B. M., and Čisar, P. (2017). The design of the personal enemy-MIMLebot as an intelligent agent in a game-based learning environment. Acta Polytech. Hung. 14, 121–139.
Kurilovas, E., Zilinskiene, I., and Dagiene, V. (2014). Recommending suitable learning scenarios according to learners’ preferences: an improved swarm based approach. Comput. Hum. Behav. 30, 550–557. doi: 10.1016/j.chb.2013.06.036
Kurilovas, E., Zilinskiene, I., and Dagiene, V. (2015). Recommending suitable learning paths according to learners’ preferences: experimental research results. Comput. Hum. Behav. 51, 945–951. doi: 10.1016/j.chb.2014.10.027
Latham, A., Crockett, K., Mclean, D., and Edmonds, B. (2012). A conversational intelligent tutoring system to automatically predict learning styles. Comput. Educ. 59, 95–109.
Li, A., and Wang, H. (2020). An artificial intelligence recognition model for English online teaching. J. Intell. Fuzzy Syst. 40, 1–12. doi: 10.3233/jifs-219131
Liang, T. P., and Liu, Y. H. (2018). Research landscape of business intelligence and big data analytics: a bibliometrics study. Exp. Syst. Appl. 111, 2–10. doi: 10.1016/j.eswa.2018.05.018
Liao, H., Tang, M., Luo, L., Li, C., Chiclana, F., and Zeng, X. J. (2018). A bibliometric analysis and visualization of medical big data research. Sustainability 10:166. doi: 10.3390/su10010166
Lin, C. L., Jin, Y. Q., Zhao, Q., Yu, S. W., and Su, Y. S. (2021). Factors influence students’ switching behavior to online learning under COVID-19 pandemic: a push–pull–mooring model perspective. Asia Pac. Educ. Res. 30, 229–245. doi: 10.1007/s40299-021-00570-0
Lu, T., and Yang, X. (2018). Effects of the visual/verbal learning style on concentration and achievement in mobile learning. EURASIA J. Math. Sci. Technol. Educ. 14, 1719–1729. doi: 10.29333/ejmste/85110
Martins, L. L., and Kellermanns, F. W. (2004). A model of business school students’ acceptance of a web-based course management system. Acad. Manag. Learn. Educ. 3, 7–26. doi: 10.5465/amle.2004.12436815
Mas-Verdu, F., Garcia-Alvarez-Coque, J.-M., Nieto-Aleman, P. A., and Roig-Tierno, N. (2021). A systematic mapping review of European Political Science. Eur. Polit. Sci. 20, 85–104. doi: 10.1057/s41304-021-00320-2
Miao, T. C., Gu, C. H., Liu, S., and Zhou, Z. K. (2020). Internet literacy and academic achievement among Chinese adolescent: a moderated mediation model. Behav. Inf. Technol. 1–13. doi: 10.1080/0144929X.2020.1831074
Miyazawa, A. A. (2019). Artificial intelligence: the future for cardiology. Heart 105, 1214–1224.
Mo, C. Y., Hsieh, T. H., Lin, C. L., Jin, Y. Q., and Su, Y. S. (2021). Exploring the critical factors, the online learning continuance usage during COVID-19 pandemic. Sustainability 13:5471. doi: 10.3390/su13105471
Moral Muñoz, J. A., Herrera Viedma, E., Santisteban Espejo, A., and Cobo, M. J. (2020). Software tools for conducting bibliometric analysis in science: an up-to-date review. Profesional Inf. 29:e290103. doi: 10.1200/CCI.19.00042
Mou, J., Cui, Y., and Kurcz, K. (2019). Bibliometric and visualized analysis of research on major e-commerce journals using Citespace. J. Electronic Commer. Res. 20, 219–237.
Mulder, R. H. (2013). Exploring feedback incidents, their characteristics and the informal learning activities that emanate from them. Eur. J. Train. Dev. 37, 49–71. doi: 10.1108/03090591311293284
Orduna-Malea, E., and Costas, R. (2021). Link-based approach to study scientific software usage: the case of VOSviewer. Scientometrics 126, 8153–8186. doi: 10.1007/s11192-021-04082-y
Osmanbegovic, E., and Suljic, M. (2012). Data mining approach for predicting student performance. Econ. Rev. 10, 3–12.
Petit, J., Roura, S., Carmona, J., Cortadella, J., Duch, J., Gimnez, O., et al. (2018). Jutge. org: characteristics and experiences. IEEE Trans. Learn. Technol. 11, 321–333.
Piccoli, G., Ahmad, R., and Ives, B. (2001). Web-based virtual learning environments: a research framework and a preliminary assessment of effectiveness in basic IT skills training. MIS Q. 25, 401–426.
Rastegarmoghadam, M., and Ziarati, K. (2017). Improved modeling of intelligent tutoring systems using ant colony optimization. Educ. Inf. Technol. 22, 1067–1087. doi: 10.1007/s10639-016-9472-2
Santos, O. C., and Boticario, J. G. (2015). User-centred design and educational data mining support during the recommendations elicitation process in social online learning environments. Expert Syst. 32, 293–311. doi: 10.1111/exsy.12041
Samarakou, M., Tsaganou, G., and Papadakis, A. (2018). An e-learning system for extracting text comprehension and learning style characteristics. Educ. Technol. Soc. 21, 126–136.
Sarkodie, S. A., and Strezov, V. (2019). A review on environmental Kuznets curve hypothesis using bibliometric and meta-analysis. Sci. Total Environ. 649, 128–145. doi: 10.1016/j.scitotenv.2018.08.276
Small, H. (1973). Co-citation in the scientific literature: a new measure of the relationship between two documents. J. Am. Soc. Inf. Sci. 24, 265–269. doi: 10.1002/asi.4630240406
Song, P., and Wang, X. (2020). A bibliometric analysis of worldwide educational artificial intelligence research development in recent twenty years. Asia Pac. Educ. Rev. 21, 473–486. doi: 10.1007/s12564-020-09640-2
Song, Y., Wei, K., Yang, S., Shu, F., and Qiu, J. (2020). Analysis on the research progress of library and information science since the new century. Lib. Hi Tech ahead-of-print doi: 10.1108/LHT-06-2020-0126
Stantchev, V., Prieto-Gonzalez, L., and Tamm, G. (2015). Cloud computing service for knowledge assessment and studies recommendation in crowdsourcing and collaborative learning environments based on social network analysis. Comput. Hum. Behav. 51, 762–770. doi: 10.1016/j.chb.2014.11.092
Su, Y.-S., Lin, C.-L., Chen, S.-Y., and Lai, C.-F. (2020). Bibliometric study of social network analysis literature. Lib. Hi Tech 38, 420–433. doi: 10.1108/lht-01-2019-0028
Sweileh, W. M. (2017). Global research trends of World Health Organization’s top eight emerging pathogens. Glob. Health 13, 1–19. doi: 10.1186/s12992-017-0233-9
Tang, K.-Y., Chang, C.-Y., and Hwang, G. J. (2021). Trends in artificial intelligence-supported e-learning: a systematic review and co-citation network analysis (1998-2019). Interact. Learn. Environ. 1–19. doi: 10.1080/10494820.2021.1875001
Tarus, J. K., Niu, Z., and Yousif, A. (2017). A hybrid knowledge-based recommender system for e-learning based on ontology and sequential pattern mining. Future Gener. Comput. Syst. 72, 37–48. doi: 10.1016/j.future.2017.02.049
Teo, T. (2016). Modelling Facebook usage among university students in Thailand: the role of emotional attachment in an extended technology acceptance model. Interact. Learn. Environ. 24, 745–757. doi: 10.1080/10494820.2014.917110
Trentin, G. (1997). Telematics and on-line teacher training: the Polaris Project. J. Comput. Assist. Learn. 13, 261–270. doi: 10.1046/j.1365-2729.1997.00029.x
Turchet, L., Fischione, C., Essl, G., Keller, D., and Barthet, M. (2018). Internet of musical things: vision and challenges. IEEE Access 6, 61994–62017. doi: 10.1109/ACCESS.2018.2872625
van Eck, N. J., and Waltman, L. (2010). Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 84, 523–538. doi: 10.1007/s11192-009-0146-3
Van Eck, N. J., and Waltman, L. (2017). Citation-based clustering of publications using CitNetExplorer and VOSviewer. Scientometrics 111, 1053–1070. doi: 10.1007/s11192-017-2300-7
Vílchez-Román, C., Sanguinetti, S., and Mauricio-Salas, M. (2020). Applied bibliometrics and information visualization for decision-making processes in higher education institutions. Lib. Hi Tech 39, 263–283.
Voskoglou, M. G., and Salem, A.-B. M. (2020). Benefits and limitations of the artificial with respect to the traditional learning of mathematics. Mathematics 8:611. doi: 10.3390/math8040611
Wang, Q., and Ngai, E. W. (2020). Event study methodology in business research: a bibliometric analysis. Indust. Manag. Data Syst. 120, 1863–1900. doi: 10.1108/imds-12-2019-0671
Wang, T., Lin, C. L., and Su, Y. S. (2021). Continuance intention of university students and online learning during the COVID-19 pandemic: a modified expectation confirmation model perspective. Sustainability 13:4586.
Wu, J., Wang, K., He, C., Huang, X., and Dong, K. (2021). Characterizing the patterns of China’s policies against COVID-19: a bibliometric study. Inf. Process. Manag. 58:102562. doi: 10.1016/j.ipm.2021.102562
Xie, H., and Mai, Q. (2021). College English cross-cultural teaching based on cloud computing MOOC platform and artificial intelligence. J. Intell. Fuzzy Syst. 40, 7335–7345. doi: 10.3233/JIFS-189558
Yang, C. C. Y., Chen, I. Y. L., and Ogata, H. (2021). Toward precision education: educational data mining and learning analytics for identifying students’ learning patterns with ebook systems. Educ. Technol. Soc. 24, 152–163.
Zawacki-Richter, O., Marín, V. I., Bond, M., and Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–where are the educators? Int. J. Educ. Technol. High. Educ. 16, 1–27. doi: 10.5430/irhe.v6n2p1
Zhang, C., Wang, S., Sun, S., and Wei, Y. (2020). Knowledge mapping of tourism demand forecasting research. Tour. Manag. Perspect. 35:100715. doi: 10.1016/j.tmp.2020.100715
Zhao, Y., Cheng, S., Yu, X., and Xu, H. (2020). Chinese public’s attention to the COVID-19 epidemic on social media: observational descriptive study. J. Med. Int. Res. 22:e18825. doi: 10.2196/18825
Zou, X., Yue, W. L., and Le Vu, H. (2018). Visualization and analysis of mapping knowledge domain of road safety studies. Accid. Anal. Prev. 118, 131–145. doi: 10.1016/j.aap.2018.06.010
How to Write a Research Paper on Artificial Intelligence
In the modern world, the subject of artificial intelligence and the development of intelligent technologies have ceased to be the prerogative of a purely scientific community. It is hard to overstate the importance of creating an artificial intelligence system that functions at a level sufficient for it to be recognized as possessing reason.
IT developers, neuroscientists, psychologists, physicists, and other specialists, long confined to their individual disciplines, have achieved significant success now that they are united in an interdisciplinary context. Reading about this work is exciting and promising, but writing about it is exceptionally hard. To help you out, we interviewed professional academic writers from Smart Writing Service, and they shared up-to-date tips on how to write a research paper on artificial intelligence.
Talk About the Most Trending Approaches
- The scientific community sees different possible lines of development in the field of artificial intelligence. Jeff Hawkins, for example, proposes an integrative approach that combines engineering, neurobiological, cognitive, and even ethical perspectives. Within this integrative framework, there is no reason to expect a reasoning machine to look, act, feel, or think like a person: “The thoughts and behavior of a rational machine may differ significantly from those of a human being, and it will have intelligence, which is determined by the predictive ability of hierarchical memory, and not by human-like behavior.”
- The physicist and mathematician Roger Penrose, who works on general relativity and quantum theory, argues that human intelligence cannot be unfolded into algorithms. Behind his arguments lies the “obviousness” of the assumption that “a mind endowed with consciousness simply cannot work like a computer, despite the algorithmic nature of many components of our mental activity.”
- Spheres of application. On the applications of artificial intelligence, Ignacy Belda argues: “Artificial intelligence gradually entered our lives. Sooner or later, the day will come when there will be systems with the same level of creativity, sensation, and emotional intelligence as a person. On the day when this happens, we will understand that we are not alone.” The classic American textbook on the subject is “Artificial Intelligence: A Modern Approach” by the well-known computer scientists Stuart Russell and Peter Norvig, which defines artificial intelligence as the science of agents that perceive their environment and act upon it.
State What Makes an AI Project Successful
Traditionally, an indicator of overall success in the development of artificial intelligence systems has been the ability to externally model typical human functions, qualities, and properties, and thereby to surpass a person at typically human activities. The manifestations and “self-realization” of the developed systems are perceived through the prism of the human factor and the so-called “AI effect” (the perceived meaninglessness and “demythologization” of the machine’s activity), which is a latent but nonetheless global problem in this sphere.
The problem is particularly relevant because we lack criteria for interpreting and “understanding” the results of work in this field: is a given system a purely algorithmic mechanism lacking any capacity for understanding, or a psycho-machine with the potential to develop proto-mental qualities, that is, the makings of a psyche and, perhaps, intelligence? Despite the terminological peculiarities of the very concept of “artificial intelligence,” the world scientific community considers that the presence of consciousness, not of intelligence, will be the necessary and sufficient basis for recognizing a machine as reasoning.
Offer Nontrivial Solutions
In contrast to the development of logic machines, offer a more creative idea: the formation of a psycho-machine. The purpose of the psycho-machine is not to replace a person in complex or unpleasant activities, nor to compete with a person at intellectual or logical tasks. Such a machine need not demonstrate intellectual or mental indicators at all; instead, an extensive, well-structured base of the corresponding algorithms will allow it to cope quite successfully with activities that are beyond a person’s power because of the human factor. The idea of the psycho-machine is far more ambitious, and in its own way even spiritual and existential.
In essence, we are talking about creating something much larger than man himself, something super-anthropic or even meta-anthropic. This is precisely the idea behind the psycho-machine: the apotheosis and quintessence of human capabilities, and the resolution of the so-called “God complex.” Such a technology should immeasurably surpass human skills and abilities in the mental, intellectual, spiritual, and existential spheres.
World’s #1 Research Paper Generator
Over 5,000 research papers generated daily
Have an AI Research and Write Your Paper in Just 5 Words
See It for Yourself: Get Your Free Research Paper Started With Just 5 Words
How Smodin Makes Research Paper Writing Easy
Instantly Find Sources for Any Sentence

Our AI research tool in the research paper editor makes it easy to find a source for, or fact-check, any piece of text on the web. It finds the most relevant or related piece of information along with the source it came from, and you can add that reference to your document’s references with a single click. We also provide other research modes, such as “find supporting statistics,” “find supporting arguments,” and “find useful information,” to make finding the information you need a breeze. Make research paper writing and research easy with our AI research assistant.
Easily Cite References

Our research paper generator makes citing references in MLA and APA styles for web sources and references an easy task. The research paper writer works by first identifying the primary elements in each source, such as the author, title, publication date, and URL, and then organizing them in the correct format required by the chosen citation style. This ensures that the references are accurate, complete, and consistent. The product provides helpful tools to generate citations and bibliographies in the appropriate style, making it easier for you to document your sources and avoid plagiarism. Whether you’re a student or a professional writer, our research paper generator saves you time and effort in the citation process, allowing you to focus on the content of your work.
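The element-extraction step described above can be illustrated with a minimal sketch. This is not Smodin's actual implementation, and the helper name and sample fields are hypothetical; it simply shows how an MLA-style entry for a web source can be assembled from its parts (author, title, site, date, URL):

```python
def mla_web_citation(author, title, site, date, url):
    """Assemble a basic MLA-style entry for a web source.

    Simplified sketch: real MLA has many more rules for author names,
    containers, and access dates.
    """
    # MLA does not add a period after a title that already ends in ? or !
    period = "" if title and title[-1] in "?!" else "."
    return f'{author}. "{title}{period}" {site}, {date}, {url}.'

entry = mla_web_citation(
    author="Smith, Jane",
    title="What Is Artificial Intelligence?",
    site="Example Tech Blog",
    date="4 Mar. 2023",
    url="www.example.com/what-is-ai",
)
# entry: Smith, Jane. "What Is Artificial Intelligence?" Example Tech Blog,
#        4 Mar. 2023, www.example.com/what-is-ai.
```

A real citation generator layers validation and per-style templates (APA, Chicago, etc.) on top of the same idea: extract named fields, then render them through a style-specific format.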
Free AI Research Paper Generator & Writer - Say Goodbye to Writer's Block!
Are you struggling with writer's block? Even more so when it comes to your research papers. Do you want to write a paper that excels, but can't seem to find the inspiration to do so? Say goodbye to writer's block with Smodin’s Free AI Research Paper Generator & Writer!
Smodin’s AI-powered tool generates high-quality research papers by analyzing millions of papers and using advanced algorithms to create unique content. All you need to do is input your topic, and Smodin’s Research Paper generator will provide you with a well-written paper in no time.
Why Use Smodin Free AI Research Paper Generator & Writer?
Writing a research paper can be a complicated task, even more so when you have limited time and resources. A research paper generator can help you streamline the process by quickly finding and organizing relevant sources. With Smodin's research paper generator, you can produce high-quality papers in minutes, giving you more time to focus on analysis and writing.
Benefits of Smodin’s Free Research Paper Generator
- Save Time: Smodin AI-powered generator saves you time by providing you with a well-written paper that you can edit and submit.
- Quality Content: Smodin uses advanced algorithms to analyze millions of papers to ensure that the content is of the highest quality.
- Easy to Use: Smodin is easy to use, even if you're not familiar with the topic. It is perfect for students, researchers, and professionals who want to create high-quality content.
How to Write a Research Paper?
All you need is an abstract or a title, and Smodin’s AI-powered software will quickly find sources for any topic or subject you need. With Smodin, you can easily produce multiple sections, including the introduction, discussion, and conclusion, saving you valuable time and effort.
Who can write a Research Paper?
Everyone can! Smodin's research paper generator is perfect for students, researchers, and anyone else who needs to produce high-quality research papers quickly and efficiently. Whether you're struggling with writer's block or simply don't have the time to conduct extensive research, Smodin can help you achieve your goals.
Tips for Using Smodin's Research Paper Generator
With our user-friendly interface and advanced AI algorithms, you can trust Smodin's paper writer to deliver accurate and reliable results every time. While Smodin's research paper generator is designed to be easy to use, a few tips will help you get the most out of it. First, input a clear and concise abstract or title to ensure accurate results. Second, review and edit the generated paper to make sure it meets your specific requirements and style. Finally, use the generated paper as a starting point for your own research and writing, or to continue generating text.
The Future of Research Paper Writing
As technology continues to advance, the future of research paper writing is likely to become increasingly automated. With tools like Smodin's research paper generator, researchers and students can save time and effort while producing high-quality work. Whether you're looking to streamline your research process or simply need a starting point for your next paper, Smodin's paper generator is a valuable resource for anyone interested in academic writing.
So why wait? Try Smodin's free AI research paper generator and paper writer today and experience the power of cutting-edge technology for yourself. With Smodin, you can produce high-quality research papers in minutes, saving you time and effort while ensuring your work is of the highest caliber.
© 2023 Smodin LLC
Research Paper on Artificial Intelligence
August 24, 2013, UsefulResearchPapers, Research Papers
Artificial Intelligence (AI) is artificially created intelligence, and also the name of the branch of computer science that seeks to understand and develop the theory and functioning of AI and tries to build intelligent systems.
Opinions are divided on what exactly the term artificial intelligence covers. Stuart Russell and Peter Norvig presented the different perspectives within the discipline by organizing them along the two dimensions of thought processes and behavior, yielding four alternative approaches to building intelligent systems: systems that think like humans, systems that think rationally, systems that act like humans, and systems that act rationally.
We can provide you with research paper help on Artificial Intelligence!
AI has its roots in a number of different disciplines that have supplied it with ideas and methods. Philosophy has contributed theories of reasoning and learning; mathematics has given it formal theories of logic, probability, decision-making, and computability; psychology has given it tools to investigate the human brain; linguistics has presented theories about the structure and meaning of language; and finally, computer science has supplied the tools needed to create AI.
The first work that is generally regarded as AI was performed by Warren Sturgis McCulloch and Walter Pitts (1943), but artificial intelligence as a science was formally founded and given its name at a conference at Dartmouth College in 1956. At the conference, researchers from Carnegie Tech, Allen Newell and Herbert Simon, presented a reasoning program called the Logic Theorist.
The general distrust of the abilities of computers meant that early researchers had many success stories in which they managed to get a computer to do something no one had previously thought possible. The Logic Theorist was followed by the General Problem Solver and the Geometry Theorem Prover.
In computer games, AI is used to enable NPCs (non-player characters) to move realistically and to learn and change their behavior, and thus act more intelligently.
In a computer game genre such as the first-person shooter, the following techniques are used to make bots (AI-controlled players) act more realistically:
- Search – enabling NPCs to navigate the game world.
- Decision algorithms – enabling NPCs to act in an intelligent manner.
- Tactical techniques – enabling groups of NPCs to cooperate and interact in an intelligent way.
- Machine learning – enabling NPCs to learn and change their behavior, and thus act more intelligently than before.
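The first of these techniques, search-based navigation, is commonly implemented with the A* algorithm over the game map. The sketch below (a minimal illustration, not taken from any particular game engine; the `astar` function and grid format are assumptions for the example) finds a shortest path for an NPC on a 4-connected grid, where '#' marks a wall:

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on a 4-connected grid using A* with a Manhattan heuristic.

    grid:  list of equal-length strings; '#' cells are impassable walls.
    Returns a list of (row, col) steps from start to goal, or None if blocked.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    # Priority queue entries: (estimated total cost, cost so far, cell, path)
    frontier = [(h(start), 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dr, pos[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] != '#'):
                new_cost = cost + 1
                if nxt not in best or new_cost < best[nxt]:
                    best[nxt] = new_cost
                    heapq.heappush(frontier,
                                   (new_cost + h(nxt), new_cost, nxt, path + [nxt]))
    return None  # goal unreachable

level = ["....",
         ".##.",
         "...."]
path = astar(level, (0, 0), (2, 3))  # NPC routes around the wall
```

Real engines run the same idea over navigation meshes rather than raw grids, but the heuristic-guided search is identical; the other bullet points (decision algorithms, tactics, learning) are typically layered on top of this movement primitive.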
To prepare a decent research paper or proposal on artificial intelligence, it is very important to understand exactly what you are writing about. You have to process a lot of relevant data on the subject, taken from reliable, verified sources. In addition, you will need to present the results of your research on paper. If you want to do this the right way but lack the skills, we recommend that you use free sample research papers on artificial intelligence.
Are you looking for a top-notch custom research paper on Artificial Intelligence topics? Is confidentiality as important to you as the high quality of the product?
Try our writing service at EssayLib.com! We can offer you professional assistance at affordable rates. Our experienced PhD and Master’s writers are ready to take into account your smallest demands. We guarantee you 100% authenticity of your paper and assure you of on-time delivery. Proceed with the order form:

Please, feel free to visit us at EssayLib.com and learn more about our service!
Similar Posts:
- Artificial Intelligence Research Proposal
- Research Paper on Artificial Neural Network
- Research Paper on Natural Language Processing