- Published: 17 April 2021
Artificial intelligence and machine learning research: towards digital transformation at a global scale
- Akila Sarirete 1,
- Zain Balfagih 1,
- Tayeb Brahimi 1,
- Miltiadis D. Lytras 1,2 &
- Anna Visvizi 3,4
Journal of Ambient Intelligence and Humanized Computing, volume 13, pages 3319–3321 (2022)
Artificial intelligence (AI) is reshaping how we live, learn, and work. Until recently, AI was a fanciful concept, associated more closely with science fiction than with anything else. However, driven by unprecedented advances in sophisticated information and communication technology (ICT), AI today is synonymous with technological progress, both already attained and yet to come, in all spheres of our lives (Chui et al. 2018; Lytras et al. 2018, 2019).
Considering that machine learning (ML) and AI are apt to reach unforeseen levels of accuracy and efficiency, this special issue sought to promote research on AI and ML seen as functions of data-driven innovation and digital transformation. The combination of expanding ICT-driven capabilities and capacities identifiable across our socio-economic systems, along with growing consumer expectations vis-à-vis technology and its value added for our societies, requires multidisciplinary research and a research agenda on AI and ML (Lytras et al. 2021; Visvizi et al. 2020; Chui et al. 2020). Such a research agenda should revolve around the following five defining issues (Fig. 1):
Fig. 1: An AI-driven digital transformation in all aspects of human activity. Source: The Authors

Integration of diverse data warehouses into unified ecosystems of AI and ML value-based services

Deployment of robust AI and ML processing capabilities for enhanced decision making and generation of value out of data

Design of innovative, novel AI and ML applications for predictive and analytical capabilities

Design of sophisticated AI- and ML-enabled intelligence components with critical social impact

Promotion of the digital transformation in all aspects of human activity, including business, healthcare, government, commerce, and social intelligence
Such developments will also have a critical impact on government policies, regulations, and initiatives aiming to interpret the value of the AI-driven digital transformation for the sustainable economic development of our planet. Additionally, the disruptive character of AI and ML technology and research will require further research on business models and the management of innovation capabilities.
This special issue is based on submissions invited jointly from the 17th Annual Learning and Technology Conference 2019, held at Effat University, and an open call. Several very good submissions were received, all of which were subjected to the rigorous peer review process of the Journal of Ambient Intelligence and Humanized Computing.
The papers published in this special issue cover a variety of innovative topics, including:
Stock Market Prediction Using Machine Learning
Detection of Apple Diseases and Pests Based on Multi-Model LSTM-Based Convolutional Neural Networks
Machine Learning for Searching
Machine Learning for Learning Automata
Entity Recognition and Relation Extraction
Intelligent Surveillance Systems
Activity Recognition and K-Means Clustering
Distributed Mobility Management
Review Rating Prediction with Deep Learning
Cybersecurity: Botnet Detection with Deep Learning
Neuro-Fuzzy Inference Systems
Monarch Butterfly Optimized Control with Robustness Analysis
GMM Methods for Speaker Age and Gender Classification
Regression Methods for Permeability Prediction of Petroleum Reservoirs
Surface EMG Signal Classification
Human Activity Recognition in Smart Environments
Teaching–Learning-Based Optimization Algorithm
Big Data Analytics
Diagnosis Based on Event-Driven Processing and Machine Learning for Mobile Healthcare
Over a decade ago, Effat University envisioned a timely platform that brings together educators, researchers, and tech enthusiasts under one roof and functions as a fount for creativity and innovation. It was a dream that such a platform would bridge the existing gap and become a leading hub for innovators across disciplines to share their knowledge and exchange novel ideas. This dream was realized in 2003, when the first Learning & Technology Conference was held. To date, the conference has covered a variety of cutting-edge themes, such as Digital Literacy, Cyber Citizenship, Edutainment, and Massive Open Online Courses, among many others. The conference has also attracted prominent figures in the fields of science and technology, such as Farouq El Baz from NASA and Queen Rania Al-Abdullah of Jordan, who addressed large, eager-to-learn audiences and inspired many with unique stories.
While emerging innovations such as artificial intelligence technologies are seen today as promising instruments that could pave our way to the future, they have also been the focal points of fruitful discussions here at the L&T. AI was selected as the theme of this conference because of its great impact. The Saudi government realized this impact of AI and has already taken concrete steps to invest in it. As stated in the Kingdom's Vision 2030: "In technology, we will increase our investments in, and lead, the digital economy." Dr. Ahmed Al Theneyan, Deputy Minister of Technology, Industry and Digital Capabilities, stated: "The Government has invested around USD 3 billion in building the infrastructure so that the country is AI-ready and can become a leader in AI use." Vision 2030 programs also promote innovation in technologies. Another major step the country has taken is establishing NEOM, the model smart city.
Effat University realized this ambition and started working to make it a reality by offering academic programs that support the different sectors needed in such projects. For example, the master's program in Energy Engineering was launched four years ago to support the energy sector. Also, the bachelor's program in Computer Science offers tracks in Artificial Intelligence and Cyber Security, launched in the Fall 2020 semester. Additionally, the Energy & Technology and Smart Building Research Centers were established to support innovation in the technology and energy sectors. In general, Effat University works effectively to support the KSA in achieving its vision in this time of national transformation by graduating skilled citizens in different fields of technology.
The guest editors would like to take this opportunity to thank all the authors for the efforts they put into the preparation of their manuscripts and for their valuable contributions. We wish to express our deepest gratitude to the referees, who provided instrumental and constructive feedback to the authors. We also extend our sincere thanks and appreciation to the organizing team under the leadership of the Chair of the L&T 2019 Conference Steering Committee, Dr. Haifa Jamal Al-Lail, University President, for her support and dedication.
Our sincere thanks go to the Editor-in-Chief for his kind help and support.
Chui KT, Lytras MD, Visvizi A (2018) Energy sustainability in smart cities: artificial intelligence, smart monitoring, and optimization of energy consumption. Energies 11(11):2869
Chui KT, Fung DCL, Lytras MD, Lam TM (2020) Predicting at-risk university students in a virtual learning environment via a machine learning algorithm. Comput Human Behav 107:105584
Lytras MD, Visvizi A, Daniela L, Sarirete A, De Pablos PO (2018) Social networks research for sustainable smart education. Sustainability 10(9):2974
Lytras MD, Visvizi A, Sarirete A (2019) Clustering smart city services: perceptions, expectations, responses. Sustainability 11(6):1669
Lytras MD, Visvizi A, Chopdar PK, Sarirete A, Alhalabi W (2021) Information management in smart cities: turning end users’ views into multi-item scale development, validation, and policy-making recommendations. Int J Inf Manag 56:102146
Visvizi A, Jussila J, Lytras MD, Ijäs M (2020) Tweeting and mining OECD-related microcontent in the post-truth era: A cloud-based app. Comput Human Behav 107:105958
Authors and Affiliations
Effat College of Engineering, Effat Energy and Technology Research Center, Effat University, P.O. Box 34689, Jeddah, Saudi Arabia
Akila Sarirete, Zain Balfagih, Tayeb Brahimi & Miltiadis D. Lytras
King Abdulaziz University, Jeddah, 21589, Saudi Arabia
Miltiadis D. Lytras
Effat College of Business, Effat University, P.O. Box 34689, Jeddah, Saudi Arabia
Institute of International Studies (ISM), SGH Warsaw School of Economics, Aleja Niepodległości 162, 02-554, Warsaw, Poland
Correspondence to Akila Sarirete .
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Sarirete, A., Balfagih, Z., Brahimi, T. et al. Artificial intelligence and machine learning research: towards digital transformation at a global scale. J Ambient Intell Human Comput 13 , 3319–3321 (2022). https://doi.org/10.1007/s12652-021-03168-y
Published: 17 April 2021
Issue Date: July 2022
The Journal of Artificial Intelligence Research (JAIR) is dedicated to the rapid dissemination of important research results to the global artificial intelligence (AI) community. The journal’s scope encompasses all areas of AI, including agents and multi-agent systems, automated reasoning, constraint processing and search, knowledge representation, machine learning, natural language, planning and scheduling, robotics and vision, and uncertainty in AI.
Vol. 78 (2023)
Actor Prioritized Experience Replay
Maximisation of Admissible Multi-Objective Heuristics
Scalable Neural-Probabilistic Answer Set Programming
Maintenance of Plan Libraries for Case-Based Planning: Offline and Online Policies
Asymptotics of K-Fold Cross Validation
Diagnosing AI Explanation Methods with Folk Concepts of Behavior
How to Tell Easy from Hard: Complexities of Conjunctive Query Entailment in Extensions of ALC
Non-Crossing Anonymous MAPF for Tethered Robots
A Comprehensive Survey on Deep Graph Representation Learning Methods
Select and Augment: Enhanced Dense Retrieval Knowledge Graph Augmentation
Embedding Ontologies in the Description Logic ALC by Axis-Aligned Cones
Amortized Variational Inference: A Systematic Review
Clustering What Matters: Optimal Approximation for Clustering with Outliers
Sequence-Oriented Diagnosis of Discrete-Event Systems
A Benchmark Study on Knowledge Graphs Enrichment and Pruning Methods in the Presence of Noisy Relationships
Exploiting Functional Constraints in Automatic Dominance Breaking for Constraint Optimization
- Sensors (Basel)
The Impact of Artificial Intelligence on Data System Security: A Literature Review
1 ISEC Lisboa, Instituto Superior de Educação e Ciências, 1750-142 Lisbon, Portugal; [email protected]
2 Research Unit on Governance, Competitiveness and Public Policies (GOVCOPP), University of Aveiro, 3810-193 Aveiro, Portugal
Diverse forms of artificial intelligence (AI) are at the forefront of triggering digital security innovations based on the threats arising in this post-COVID world. On the one hand, companies are experiencing difficulty in dealing with security challenges on a variety of issues ranging from system openness, decision making, and quality control to web domains, to mention a few. On the other hand, in the last decade, research has focused on security capabilities based on tools such as platform complacency, intelligent trees, modeling methods, and outage management systems in an effort to understand the interplay between AI and those issues. The dependence on the emergence of AI in running industries and shaping the education, transport, and health sectors is now well known in the literature. AI is increasingly employed in managing data security across economic sectors. Thus, a literature review of AI and system security within the current digital society is opportune. This paper aims at identifying research trends in the field through a systematic bibliometric literature review (LRSB) of research on AI and system security. The review covers 77 articles published in the Scopus® database, presenting up-to-date knowledge on the topic. The LRSB findings were synthesized across current research subthemes. The originality of the paper relies on its LRSB method, together with an extant review of articles that have not been categorized so far. Implications for future research are suggested.
The assumption that the human brain may be deemed quite comparable to computers in some ways offers the spontaneous basis for artificial intelligence (AI), which is supported by psychology through the idea of humans and animals operating like machines that process information by devices of associative memory [ 1 ]. Nowadays, researchers are working on the possibilities of AI to cope with varying issues of systems security across diverse sectors. Hence, AI is commonly considered an interdisciplinary research area that attracts considerable attention both in economics and social domains as it offers a myriad of technological breakthroughs with regard to systems security [ 2 ]. There is a universal trend of investing in AI technology to face security challenges of our daily lives, such as statistical data, medicine, and transportation [ 3 ].
Some claim that specific data from key sectors have supported the development of AI, namely the availability of data from e-commerce [ 4 ], businesses [ 5 ], and government [ 6 ], which provided substantial input to ameliorate diverse machine-learning solutions and algorithms, in particular with respect to systems security [ 7 ]. Additionally, China and Russia have acknowledged the importance of AI for systems security and competitiveness in general [ 8 , 9 ]. Similarly, China has recognized the importance of AI in terms of housing security, aiming at becoming an authority in the field [ 10 ]. Such efforts are already being carried out in some leading countries in order to profit the most from AI's substantial benefits [ 9 ]. In spite of the huge development of AI in the last few years, discussion around the topic of systems security remains sparse [ 11 ]. Therefore, it is opportune to survey the latest developments regarding the theme in order to map the advancements in the field and their ensuing outcomes [ 12 ]. In view of this, we intend to find out the principal trends in issues discussed on the topic these days in order to answer the main research question: What is the impact of AI on data system security?
The article is organized as follows. In Section 2 , we put forward diverse theoretical concepts related to AI in systems security. In Section 3 , we present the methodological approach. In Section 4 , we discuss the main fields of use of AI with regard to systems security, which came out from the literature. Finally, we conclude this paper by suggesting implications and future research avenues.
2. Literature Trends: AI and Systems Security
The concept of AI was introduced following the creation of the notion of the digital computing machine, in an attempt to ascertain whether a machine is able to "think" [ 1 ] or can carry out humans' tasks [ 13 ]. AI is a vast domain of information and computer technologies (ICT) that aims at designing systems that operate autonomously, analogous to the individual's decision-making process [ 14 ]. In terms of AI, a machine may learn from experience by processing an immeasurable quantity of data while distinguishing patterns in it, as in the case of Siri [ 15 ] and image recognition [ 16 ], technologies based on machine learning, a subtheme of AI defined as intelligent systems with the capacity to think and learn [ 1 ].
Furthermore, AI entails a myriad of related technologies, such as neural networks [ 17 ] and machine learning [ 18 ], just to mention a few, and we can identify some research areas of AI:
- (I) Machine learning is a set of technologies that allow computers to carry out algorithms based on gathered data and distinct orders, giving the machine the capability to learn without instructions from humans, adjusting its own algorithm to the situation while learning and recoding itself, as Google and Siri do when performing distinct tasks ordered by voice [ 19 ], and as video surveillance does when tracking unusual behavior [ 20 ];
- (II) Deep learning constitutes the ensuing progress of machine learning, in which the machine carries out tasks directly from pictures, text, and sound, through a wide set of data architectures that entail numerous layers, in order to learn and characterize data at several levels of abstraction, imitating how the natural brain processes information [ 21 ]. This is illustrated, for example, in forming a certificate database structure of university performance key indicators in order to fix issues such as identity authentication [ 21 ];
- (III) Neural networks are composed of a pattern recognition system that machine/deep learning operates to learn from observational data, figuring out its own solutions, such as an auto-steering gear system with a fuzzy regulator, which makes it possible to select optimal neural network models of vessel paths and thereby obtain control activity [ 22 ];
- (IV) Natural language processing machines analyze language and speech as it is spoken, resorting to machine learning and natural language processing, such as developing a swarm-intelligent active system while building friendly human–computer interface software for users, to be implemented in educational and e-learning organizations [ 23 ];
- (V) Expert systems are composed of software arrangements that assist in achieving answers to distinct inquiries provided either by a customer or by another software set, in which expert knowledge is stored for a particular area of application and a reasoning component accesses answers in view of environmental information and subsequent decision making [ 24 ].
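The machine-learning subtheme above — a system that derives its own decision rule from stored examples rather than hand-coded instructions — can be illustrated with a minimal sketch. The data, labels, and security-flavored framing below are hypothetical, and a plain 1-nearest-neighbor rule stands in for the far richer methods the literature surveys:

```python
import math

def nearest_neighbor_predict(train, labels, query):
    # The "learning" is implicit: the decision rule comes entirely
    # from the stored examples, not from hand-written conditions.
    distances = [math.dist(x, query) for x in train]
    return labels[distances.index(min(distances))]

# Hypothetical toy data: two clusters of 2-D feature vectors.
train = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.8)]
labels = ["benign", "benign", "malicious", "malicious"]

print(nearest_neighbor_predict(train, labels, (0.1, 0.2)))  # → benign
print(nearest_neighbor_predict(train, labels, (4.9, 5.2)))  # → malicious
```

Adding more labeled examples changes the classifier's behavior without changing a single line of its code, which is the sense in which such a system "learns".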
Those subthemes of AI are applied to many sectors, such as health institutions, education, and management, through varying applications related to systems security. These abovementioned processes have been widely deployed to solve important security issues such as the following application trends ( Figure 1 ):
- (a) Cyber security, in terms of computer crime, behavior research, access control, and surveillance, as for example in computer vision, in which an algorithm analyzes images, and CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) techniques [ 6 , 7 , 12 , 19 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 , 38 ];
- (b) Information management, namely in supporting decision making, business strategy, and expert systems, for example, by improving the quality of the relevant strategic decisions by analyzing big data, as well as in the management of the quality of complex objects [ 2 , 4 , 5 , 11 , 14 , 24 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 , 52 , 53 , 54 , 55 , 56 , 57 , 58 , 59 , 60 ];
- (c) Societies and institutions, regarding computer networks, privacy, and digitalization, legal and clinical assistance, for example, in terms of legal support of cyber security, digital modernization, systems to support police investigations and the efficiency of technological processes in transport [ 8 , 9 , 10 , 15 , 17 , 18 , 20 , 21 , 23 , 28 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 , 69 , 70 , 71 , 72 , 73 ];
- (d) Neural networks, for example, in terms of designing a model of human personality for use in robotic systems [ 1 , 13 , 16 , 22 , 74 , 75 ].
Subthemes/network of all keywords of AI—source: own elaboration.
Through these streams of research, we will explain how the huge potential of AI can be deployed to enhance the systems security in use in both states and organizations, mitigating risks and increasing returns while identifying and averting cyber attacks and determining the best course of action [ 19 ]. AI could even prove more effective than humans in averting potential threats through various security solutions, such as redundant systems of video surveillance, VOIP voice network technology security strategies [ 36 , 76 , 77 ], and dependence upon diverse platforms for protection (platform complacency) [ 30 ].
The design of the abovementioned conceptual and technological framework was not made randomly, as we did a preliminary search on Scopus with the keywords “Artificial Intelligence” and “Security”.
3. Materials and Methods
We carried out a systematic bibliometric literature review (LRSB) of the impact of AI on data system security. The LRSB is a study concept based on a detailed, thorough process of recognizing and synthesizing information, offering an alternative to traditional literature reviews and improving (i) the validity of the review, by providing a set of steps that can be followed if the study is replicated; (ii) accuracy, by providing and demonstrating arguments strictly related to the research questions; and (iii) the generalization of the results, by allowing the synthesis and analysis of accumulated knowledge [ 78 , 79 , 80 ]. Thus, the LRSB is a guiding instrument that allows the review to be steered according to its objectives.
The study follows the suggestions of Raimundo and Rosário: (i) definition of the research question; (ii) location of the studies; (iii) selection and evaluation of studies; (iv) analysis and synthesis; (v) presentation of results; and finally (vi) discussion and conclusion of results. This methodology ensures a comprehensive, auditable, replicable review that answers the research questions.
The review was carried out in June 2021, with a bibliographic search in the Scopus database of scientific articles published until June 2021. The search was carried out in three phases: (i) using the keyword "Artificial Intelligence", 382,586 documents were obtained; (ii) adding the keyword "Security", a set of 15,916 documents was obtained, and limiting the subject area to Business, Management, and Accounting yielded 401 documents; and finally (iii) filtering for the exact keywords "Data security" and "Systems security", a total of 77 documents was obtained ( Table 1 ).
Source: own elaboration.
The search strategy resulted in 77 academic documents. This set of eligible documents was assessed for academic and scientific relevance and quality. By type, they break down as follows: conference papers (43); articles (29); reviews (3); letters (1); and retracted (1).
Peer-reviewed academic documents on the impact of artificial intelligence on data system security were selected up to June 2021. In the period under review, 2020 was the year with the highest number of peer-reviewed academic documents on the subject, with 18 publications, and 7 publications were already confirmed for 2021. Figure 2 reviews the peer-reviewed publications published until 2021.
Number of documents by year. Source: own elaboration.
The publications were sorted out as follows: 2011 2nd International Conference on Artificial Intelligence Management Science and Electronic Commerce Aimsec 2011 Proceedings (14); Proceedings of the 2020 IEEE International Conference Quality Management Transport and Information Security Information Technologies IT and Qm and Is 2020 (6); Proceedings of the 2019 IEEE International Conference Quality Management Transport and Information Security Information Technologies IT and Qm and Is 2019 (5); Computer Law and Security Review (4); Journal of Network and Systems Management (4); Decision Support Systems (3); Proceedings 2021 21st Acis International Semi Virtual Winter Conference on Software Engineering Artificial Intelligence Networking and Parallel Distributed Computing Snpd Winter 2021 (3); IEEE Transactions on Engineering Management (2); Ictc 2019 10th International Conference on ICT Convergence ICT Convergence Leading the Autonomous Future (2); Information and Computer Security (2); Knowledge Based Systems (2); with 1 publication (2013 3rd International Conference on Innovative Computing Technology Intech 2013; 2020 IEEE Technology and Engineering Management Conference Temscon 2020; 2020 International Conference on Technology and Entrepreneurship Virtual Icte V 2020; 2nd International Conference on Current Trends In Engineering and Technology Icctet 2014; ACM Transactions on Management Information Systems; AFE Facilities Engineering Journal; Electronic Design; Facct 2021 Proceedings of the 2021 ACM Conference on Fairness Accountability and Transparency; HAC; ICE B 2010 Proceedings of the International Conference on E Business; IEEE Engineering Management Review; Icaps 2008 Proceedings of the 18th International Conference on Automated Planning and Scheduling; Icaps 2009 Proceedings of the 19th International Conference on Automated Planning and Scheduling; Industrial Management and Data Systems; Information and Management; Information Management and Computer Security; Information Management Computer Security; Information Systems Research; International Journal of Networking and Virtual Organisations; International Journal of Production Economics; International Journal of Production Research; Journal of the Operational Research Society; Proceedings 2020 2nd International Conference on Machine Learning Big Data and Business Intelligence Mlbdbi 2020; Proceedings Annual Meeting of the Decision Sciences Institute; Proceedings of the 2014 Conference on IT In Business Industry and Government An International Conference By Csi on Big Data Csibig 2014; Proceedings of the European Conference on Innovation and Entrepreneurship Ecie; TQM Journal; Technology In Society; Towards the Digital World and Industry X 0 Proceedings of the 29th International Conference of the International Association for Management of Technology Iamot 2020; Wit Transactions on Information and Communication Technologies).
Overall, recent years have seen growing interest in research on the impact of artificial intelligence on data system security.
In Table 2 , we report, for each publication, the SciMago Journal Rank (SJR), the best quartile, and the H index.
Scimago journal and country rank impact factor.
Note: * data not available. Source: own elaboration.
Information Systems Research is the most cited publication, with an SJR of 3.510, quartile Q1, and H index 159.
There are in total 11 journals in Q1, 3 in Q2, 2 in Q3, and 2 in Q4. Journals in the best quartile, Q1, represent 27% of the 41 journal titles; Q2 represents 7%; Q3 represents 5%; and Q4 represents 5% of the titles. Finally, for 23 of the publications, representing 56%, the data are not available.
As evident from Table 2 , the significant majority of articles on artificial intelligence on data system security rank on the Q1 best quartile index.
The subject areas covered by the 77 scientific documents were: Business, Management and Accounting (77); Computer Science (57); Decision Sciences (36); Engineering (21); Economics, Econometrics, and Finance (15); Social Sciences (13); Arts and Humanities (3); Psychology (3); Mathematics (2); and Energy (1).
The most cited article was "CANN: An intrusion detection system based on combining cluster centers and nearest neighbors" by Lin, Ke, and Tsai, with 290 citations, published in Knowledge-Based Systems (SJR 1.590, best quartile Q1, H index 121). The article proposes a new feature representation approach combining the cluster center and nearest neighbor approaches.
In Figure 3 , we can analyze the evolution of citations of documents published between 2010 and 2021, which shows a growing number of citations with an R² of 0.45.
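The R² of a citation trend like the one in Figure 3 comes from an ordinary least-squares fit; a minimal sketch of that computation follows, where the yearly counts are hypothetical placeholders rather than the study's actual data:

```python
def r_squared_linear(xs, ys):
    # Coefficient of determination (R^2) of an OLS line fitted to ys over xs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

# Hypothetical yearly citation counts, 2010-2021 (illustrative only).
years = list(range(2010, 2022))
counts = [1, 0, 3, 2, 6, 4, 9, 7, 14, 11, 19, 16]
print(round(r_squared_linear(years, counts), 2))
```

An R² of 1.0 would mean citations grow exactly linearly with time; a value like 0.45 indicates an upward trend with substantial year-to-year noise.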
Evolution and number of citations between 2010 and 2021. Source: own elaboration.
The h index was used to verify the productivity and impact of the documents, defined as the largest number h of documents that each received at least h citations. Of the documents considered for the h index, 11 have been cited at least 11 times.
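The h index described here is easy to state in code; a minimal sketch, using a hypothetical citation list constructed so that, as in this corpus, exactly 11 documents are cited at least 11 times:

```python
def h_index(citations):
    # Largest h such that at least h documents have >= h citations each.
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical counts: 11 entries with >= 11 citations, then a tail.
print(h_index([25, 18, 12, 11, 11, 11, 11, 11, 11, 11, 11, 3, 0]))  # → 11
```

Sorting descending and scanning ranks is the standard way to evaluate the definition directly.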
In Appendix A , Table A1 , citations of all scientific articles until 2021 are analyzed; 35 documents had not been cited by 2021.
Appendix A , Table A2 , examines the self-citation of documents until 2021; a total of 16 self-citations was identified.
In Figure 4 , a bibliometric analysis was performed to analyze and identify indicators on the dynamics and evolution of scientific information using the main keywords. The analysis of the bibliometric research results using the scientific software VOSviewer aims to identify the main keywords of research on "Artificial Intelligence" and "Security".
Network of linked keywords. Source: own elaboration.
The linked keywords can be analyzed in Figure 4 , making it possible to clarify the network of keywords that appear together/linked in each scientific article, allowing us to know the topics analyzed by the research and to identify future research trends.
By examining the selected pieces of literature, we have identified four principal areas that have been underscored and deserve further investigation with regard to cyber security in general: business decision making, electronic commerce, AI social applications, and neural networks ( Figure 4 ). There is a myriad of areas where AI cyber security can be applied throughout the social, private, and public domains of our daily lives, from Internet banking to digital signatures.
First, the literature discusses possible reductions in unnecessary leakage of accounting information [ 27 ], mainly by addressing the security drawbacks of VOIP technology in IP network systems and subsequent safety measures [ 77 ], including a secure dynamic password used in Internet banking [ 29 ].
Second, researchers have studied computer users' cyber security behaviors, which include both a naïve lack of concern about the likelihood of facing security threats and dependence upon specific platforms for protection, as well as dependence on guidance from trusted social others [ 30 ]. These issues have been partly resolved through mobile agent (MA) management systems in distributed networks, operating a model of an open management framework that provides a broad range of processes to enforce security policies [ 31 ].
Third, AI cyber systems security aims at achieving stability of the programming and analysis procedures by clarifying in detail the relationship of code fault-tolerance programming with code security in order to strengthen it [ 33 ], and by offering an overview of existing cyber security tasks and a roadmap [ 32 ].
Fourth, in this vein, numerous AI tools have been developed to achieve a multi-stage security task approach for a full security life cycle [ 38 ]. New digital signature technology has been built on elliptic curve cryptography, which enjoys increasing reliance [ 28 ]; new experimental CAPTCHAs have been developed, using more interference characters and colorful backgrounds [ 8 ], to provide better protection against spambots, while allowing people with little knowledge of sign languages to recognize gestures on video relatively fast [ 70 ]; novel detection approaches beyond traditional firewall systems, such as the cluster center and nearest neighbor (CANN) approach, have been developed with higher efficiency for attack detection [ 71 ]; AI security solutions such as blockchain have been proposed for the IoT, whose centralized architecture exhibits security flaws [ 34 ]; and an integrated AI algorithm has been devised to identify malicious web domains and protect Internet users [ 19 ].
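The CANN detector cited above [ 71 ] reduces each traffic record to a single distance-based feature before classification. The following is only a rough sketch of that idea, not the authors' implementation; the sample records, cluster centers, and labels are invented for illustration:

```python
import math

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cann_features(samples, centers):
    """Reduce each sample to one number: distance to its nearest
    cluster center plus distance to its nearest other sample."""
    feats = []
    for i, s in enumerate(samples):
        d_center = min(dist(s, c) for c in centers)
        d_neighbor = min(dist(s, o) for j, o in enumerate(samples) if j != i)
        feats.append(d_center + d_neighbor)
    return feats

def classify(feat, train_feats, train_labels):
    """1-NN classification on the reduced one-dimensional feature."""
    best = min(range(len(train_feats)), key=lambda i: abs(feat - train_feats[i]))
    return train_labels[best]

samples = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]   # invented traffic records
centers = [(0.0, 0.0), (10.0, 10.0)]               # invented cluster centers
feats = cann_features(samples, centers)
print(classify(1.1, feats, ["normal", "normal", "attack"]))  # normal
```

New traffic is thus classified by its combined cluster-center and nearest-neighbor distance, which is what makes the scheme cheaper than full multi-dimensional nearest-neighbor search.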
In sum, AI has progressed lately through advances in machine learning, offering multilevel solutions to the security problems faced in both operating systems and networks, comprising algorithms, methods, and tools widely used by security experts to improve those systems [ 6 ]. Below, we present a detailed overview of the impacts of AI on each of those fields.
4.1. Business Decision Making
AI has an increasing impact on systems security aimed at supporting decision making at the management level. Increasingly, the literature discusses expert systems that, along with the evolution of computers, are able to integrate systems into corporate culture [ 24 ]. Such systems are expected to maximize benefits against costs in situations where a decision-making agent must choose among a limited set of strategies with sparse information [ 14 ], while a quality strategic decision is demanded in a relatively short period of time, for example through intelligent analysis of big data [ 39 ].
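The cost-benefit situation described above, where an agent must choose among a few strategies under sparse information, can be sketched as expected-payoff maximization. The strategy names, probabilities, and payoffs below are invented for illustration:

```python
def best_strategy(strategies):
    """Pick the strategy with the highest expected payoff.
    `strategies` maps a name to (probability, payoff) pairs over the
    outcomes the agent knows about (its sparse information)."""
    def expected(outcomes):
        return sum(p * v for p, v in outcomes)
    return max(strategies, key=lambda name: expected(strategies[name]))

choice = best_strategy({
    "harden_firewall": [(0.9, 10), (0.1, -50)],   # expected payoff 4.0
    "do_nothing":      [(0.5, 0), (0.5, -20)],    # expected payoff -10.0
})
print(choice)  # harden_firewall
```

Real decision support systems replace the hand-written probabilities with estimates mined from data, but the comparison step is the same.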
Second, distributed decision models coordinated toward an overall solution have been adopted, reliant on a decision support platform [ 40 ], ranging from mathematical/modeling support of a situational approach to complex objects [ 41 ] to a web-based multi-perspective decision support system (DSS) [ 42 ].
Third, the problem of software support for management decisions has been resolved by combining a systematic approach with heuristic methods and game-theoretic modeling [ 43 ], which, in the case of industrial security, reduces the subsequent number of incidents [ 44 ].
Fourth, in terms of industrial management and ISO information security control, a semantic decision support system increases the automation level and supports the decision-maker in identifying the most appropriate strategy against a modeled environment [ 45 ], while providing understandable technology that is based on the decisions and interacts with the machine [ 46 ].
Finally, with respect to teamwork, AI validates a theoretical model of behavioral decision theory to assist organizational leaders in deciding on strategic initiatives [ 11 ] while allowing understanding who may have information that is valuable for solving a collaborative scheduling problem [ 47 ].
4.2. Electronic Commerce Business
The third research stream focuses on e-commerce solutions to improve systems security. It centers on business, principally on security measures for electronic commerce (e-commerce), in order to avoid cyber attacks, innovate, obtain information, and ultimately win clients [ 5 ].
First, intelligent models have been built around the factors that induce Internet users to make an online purchase, in order to build effective strategies [ 48 ], while cyber security issues are addressed by diverse AI models for controlling unauthorized intrusion [ 49 ], particularly in countries such as China, to solve drawbacks in firewall technology, data encryption [ 4 ], and qualification [ 2 ].
Second, models have been developed to adapt to today's increasingly demanding environment of a world pandemic, in terms of finding new revenue sources for business [ 3 ] and restructuring digital business processes to promote new products and services with sufficient privacy and with manpower qualified accordingly and able to deal with AI [ 50 ].
Third, AI has been developed to intelligently protect business, either through a distinct model of decision trees amidst the Internet of Things (IoT) [ 51 ] or by improving network management through active networks technology, with a multi-agent architecture able to imitate the reactive behavior and logical inference of a human expert [ 52 ].
Fourth, the role of AI has been reconceptualized within the proximity's spatial and non-spatial dimensions of a new digital industry framework, aiming to connect the physical and digital production spaces in both traditional and new technology-based approaches (e.g., Industry 4.0), thus promoting innovation partnerships and efficient technology and knowledge transfer [ 53 ]. In this vein, there is an attempt to move management systems from a centralized to a distributed paradigm along the network, based on criteria such as the delegation degree [ 54 ], which also allows the transition from Industry 4.0 to Industry 5.0 through AI in the form of the Internet of Everything, multi-agent systems, emergent intelligence, and enterprise architecture [ 58 ].
Fifth, in terms of manufacturing environments and following that networking paradigm, there is also an attempt to manage agent communities in distributed and varied manufacturing environments through an AI multi-agent virtual manufacturing system (e.g., MetaMorph) that optimizes real-time planning and security [ 55 ]. In addition, smart factories have been built to mitigate security vulnerabilities of intelligent manufacturing process automation through AI security measures and devices [ 56 ], as, for example, in the design of a mine security monitoring configuration software platform with a real-time framework (e.g., the device management class diagram) [ 26 ]. Smart buildings have been adopted in manufacturing and nonmanufacturing environments alike, aiming at reducing costs and building height and minimizing the space required for users [ 57 ].
Finally, aiming to augment the cyber security of e-commerce and business in general, other projects have been put in place, such as computer-assisted audit tools (CAATs), able to carry out continuous auditing and allowing auditors to increase their productivity amidst real-time accounting and electronic data interchange [ 59 ], alongside a surge in the demand for high-tech/AI jobs [ 60 ].
4.3. AI Social Applications
As seen, AI systems security can be widely deployed across almost all domains of society, be it regulation, Internet security, computer networks, digitalization, health, or numerous other fields (see Figure 4 ).
First, there have been attempts to regulate cyber security, namely in terms of legal support for cyber security with regard to the application of artificial intelligence technology [ 61 ], in an innovative and economically/politically friendly way [ 9 ]. These attempts span fields such as infrastructure, by improving the efficiency of technological processes in transport, for example by reducing inter-train stops [ 63 ], and education, by improving the cyber security of university e-government, for example by forming a certificate database structure of university key performance indicators [ 21 ], securing e-learning organizations through swarm intelligence [ 23 ], assessing the risk a digital campus will face according to ISO series standards and risk-level criteria [ 25 ], and suggesting relevant solutions to key issues in its network information safety [ 12 ].
Second, some moral and legal issues have arisen, in particular in relation to privacy, sex, and childhood. Such is the case of the ethical/legal legitimacy of publishing open-source dual-purpose machine-learning algorithms [ 18 ], the needed legislated framework comprising regulatory agencies and representatives of all stakeholder groups gathered around AI [ 68 ], the gendering of VPAs as female (e.g., Siri), which replicates normative assumptions about the role of women as secondary to men [ 15 ], the need for inclusion of communities to uphold their own codes [ 35 ], and the need to improve the legal position of people, and children in particular, who are exposed to AI-mediated risk-profiling practices [ 7 , 69 ].
Third, traditional industry also benefits from AI, which can improve, for example, coal mine safety, by analyzing the coal mine safety scheme storage structure and building a data warehouse and analysis [ 64 ]; the security of smart cities and their intelligent devices and networks, through AI frameworks (e.g., the Unified Theory of Acceptance and Use of Technology, UTAUT) [ 65 ]; housing [ 10 ] and building [ 66 ] security systems in terms of energy balance (e.g., Direct Digital Control System), applying fuzzy logic as a non-precise programming tool that allows the systems to function well [ 66 ]; and even the detection and mitigation of data integrity attacks on outage management systems (OMSs) through AI means [ 67 ].
Fourth, citizens in general have reaped benefits from areas of AI such as police investigation, through expert systems that offer support in profiling and tracking criminals based on machine-learning and neural network techniques [ 17 ]; video surveillance systems of real-time accuracy [ 76 ], resorting to models that detect moving objects while keeping up with environment changes [ 36 ] and to dynamic sensor selection for processing the image streams of all cameras simultaneously [ 37 ]; and ambient intelligence (AmI) spaces, where devices, sensors, and wireless networks combine data from diverse sources and monitor user preferences, with the consequences for users' privacy considered under a regulatory privacy framework [ 62 ].
Finally, AI has granted society noteworthy progress in clinical assistance, with an electronic health record system integrated into existing risk management software to monitor sepsis at the intensive care unit (ICU) through a peer-to-peer VPN connection and a fast, intuitive user interface [ 72 ]. It has also offered an innovative organizational housing model that combines remote surveillance, diagnostics, and the use of sensors and video to detect anomalies in the behavior and health of the elderly [ 20 ], together with a case-based decision support system for automatic real-time surveillance and diagnosis of health care-associated infections, using diverse machine-learning techniques [ 73 ].
4.4. Neural Networks
Neural networks, or the process through which machines learn from observational data and come up with their own solutions, have lately been discussed along several streams of issues.
First, it has been argued that it is opportune to develop a software library for creating artificial neural networks for machine learning to solve non-standard tasks [ 74 ], alongside a decentralized and integrated AI environment that can accommodate video data storage and event-driven video processing gathered from varying sources, such as video surveillance systems [ 16 ], whose images can be improved through AI [ 75 ].
Second, neural network architectures have progressed toward a huge number of neurons in the network, with devices of associative memory designed with a number of neurons comparable to the human brain within supercomputers [ 1 ]. Subsequently, such neural networks can be modeled on the basis of a switch architecture to interconnect neurons and store the training results in memory, and on the basis of genetic algorithms to be exported to other robotic systems: a model of human personality for use in robotic systems in medicine and biology [ 13 ].
Finally, the neural network is quite representative of AI in the attempt to operate without human guidance once trained through human learning and self-learning, as in the case of current vessel seaway positioning systems, which involve a fuzzy logic regulator and a neural network classifier that selects optimal neural network models of the vessel paths to obtain control activity [ 22 ].
4.5. Data Security and Access Control Mechanisms
Access control can be deemed a classic security model that is pivotal to any security and privacy protection process, supporting data access from different environments and protecting against unauthorized access according to a given security policy [ 81 ]. In this vein, data security and access control mechanisms have been widely debated of late, particularly with regard to their distinct contextual conditions, for example the spatial and temporal environs that differ across diverse, decentralized networks. Those networks constitute a major challenge because they are dynamically located in "cloud" or "fog" environments rather than fixed desktop structures, thus demanding innovative approaches to access security, such as fog-based context-aware access control (FB-CAAC) [ 81 ]. Context-awareness is, therefore, an important characteristic of changing environs, where users access resources anywhere and anytime. As a result, it is paramount to highlight the interplay between the information, now based on fuzzy sets, and its situational context in order to implement context-sensitive access control policies, for example following subject- and action-specific attributes. In this way, different contextual conditions, such as user profile information and social relationship information, need to be added to the traditional spatial and temporal approaches to sustain these dynamic environments [ 81 ]. In the end, the corresponding policies should aim at defining the security and privacy requirements through a fog-based context-aware access control model to be respected in distributed cloud and fog networks.
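A context-aware access check in the spirit of FB-CAAC can be sketched in a few lines. The attributes below (role, hour, location) are illustrative stand-ins chosen for this sketch, not the model proposed in [ 81 ]:

```python
def allowed(request, policy):
    """Grant access only if every contextual condition in the policy
    matches the request: subject attribute (role), temporal context
    (hour of day), and spatial context (location)."""
    if request["role"] not in policy["roles"]:
        return False
    start, end = policy["hours"]
    if not (start <= request["hour"] < end):
        return False
    return request["location"] in policy["locations"]

policy = {"roles": {"nurse", "doctor"},      # who may access
          "hours": (8, 20),                  # when (temporal context)
          "locations": {"icu", "ward"}}      # where (spatial context)

print(allowed({"role": "nurse", "hour": 9, "location": "icu"}, policy))   # True
print(allowed({"role": "nurse", "hour": 23, "location": "icu"}, policy))  # False
```

A full context-aware model would also weigh fuzzy or social-relationship attributes, but the principle is the same: the decision depends on the request's context, not on identity alone.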
5. Conclusion and Future Research Directions
This literature review allowed us to illustrate the impacts of AI on systems security, which influence our daily digital life, business decision making, e-commerce, diverse social and legal issues, and neural networks.
First, AI will potentially impact our digital and Internet lives in the future, as the major trend is the emergence of ever-newer malicious threats from the Internet environment; accordingly, greater attention should be paid to cyber security. The progressively greater complexity of the business environment will likewise demand more and more AI-based support systems for decision making, enabling management to adapt faster and more accurately while requiring a unique digital e-manpower.
Second, with regard to e-commerce and manufacturing, principally amidst the COVID-19 world pandemic, activity tends to grow exponentially, as already observed, which demands corresponding progress in cyber security measures and strategies. The same holds for the social applications of AI which, following the increase in distance services, will also tend to adopt this model, applied to improved e-health, e-learning, and e-elderly monitoring systems.
Third, divisive issues are being brought to the academic arena, demanding progress on a legal framework able to encompass all the abovementioned issues, in order to assist political decisions and match citizens' expectations.
Lastly, further progress in neural network platforms is inevitable, as they represent the cutting edge of AI in terms of technology imitating human thinking, the main goal of AI applications.
To summarize, we have presented useful insights into the impact of AI on systems security, illustrating its influence both on the delivery of services to people, in particular in the security domains of their daily matters, health, and education, and on the business sector, through systems capable of supporting decision making. In addition, we have surveyed the state of the art in AI innovations applied to varying fields.
Future Research Issues
Given the aforementioned scenario, we also suggest further research avenues to reinforce existing theories and develop new ones, in particular on the deployment of AI technologies in small and medium-sized enterprises (SMEs), which have sparse resources, come from traditional sectors, and constitute the core of intermediate economies and of less developed and peripheral regions. In addition, the building of CAAC solutions constitutes a promising field for controlling data resources in the cloud under changing contextual conditions.
We would like to express our gratitude to the Editor and the Referees, who offered extremely valuable suggestions for improvement. The authors were supported by the GOVCOPP Research Unit of Universidade de Aveiro and ISEC Lisboa, Higher Institute of Education and Sciences.
Overview of document citations period ≤ 2010 to 2021.
Overview of document self-citation period ≤ 2010 to 2020.
Conceptualization, R.R. and A.R.; data curation, R.R. and A.R.; formal analysis, R.R. and A.R.; funding acquisition, R.R. and A.R.; investigation, R.R. and A.R.; methodology, R.R. and A.R.; project administration, R.R. and A.R.; software, R.R. and A.R.; validation, R.R. and A.R.; resources, R.R. and A.R.; writing—original draft preparation, R.R. and A.R.; writing—review and editing, R.R. and A.R.; visualization, R.R. and A.R.; supervision, R.R. and A.R. All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Since the 1950s, scientists and engineers have designed computers to "think" by making decisions and finding patterns like humans do. In recent years, artificial intelligence has become increasingly powerful, propelling discovery across scientific fields and enabling researchers to delve into problems previously too complex to solve. Outside of science, artificial intelligence is built into devices all around us, and billions of people across the globe rely on it every day. Stories of artificial intelligence—from friendly humanoid robots to SkyNet—have been incorporated into some of the most iconic movies and books.
But where is the line between what AI can do and what is make-believe? How is that line blurring, and what is the future of artificial intelligence? At Caltech, scientists and scholars are working at the leading edge of AI research, expanding the boundaries of its capabilities and exploring its impacts on society. Discover what defines artificial intelligence, how it is developed and deployed, and what the field holds for the future.
What Is AI?
Artificial intelligence is transforming scientific research as well as everyday life, from communications to transportation to health care and more. Explore what defines AI, how it has evolved since the Turing Test, and the future of artificial intelligence.
What Is the Difference Between "Artificial Intelligence" and "Machine Learning"?
The term "artificial intelligence" is older and broader than "machine learning." Learn how the terms relate to each other and to the concepts of "neural networks" and "deep learning."
How Do Computers Learn?
Machine learning applications power many features of modern life, including search engines, social media, and self-driving cars. Discover how computers learn to make decisions and predictions in this illustration of two key machine learning models.
How Is AI Applied in Everyday Life?
While scientists and engineers explore AI's potential to advance discovery and technology, smart technologies also directly influence our daily lives. Explore the sometimes surprising examples of AI applications.
What Is Big Data?
The increase in available data has fueled the rise of artificial intelligence. Find out what characterizes big data, where big data comes from, and how it is used.
Will Machines Become More Intelligent Than Humans?
Whether or not artificial intelligence will be able to outperform human intelligence—and how soon that could happen—is a common question fueled by depictions of AI in movies and other forms of popular culture. Learn the definition of "singularity" and see a timeline of advances in AI over the past 75 years.
How Does AI Drive Autonomous Systems?
Learn the difference between automation and autonomy, and hear from Caltech faculty who are pushing the limits of AI to create autonomous technology, from self-driving cars to ambulance drones to prosthetic devices.
Can We Trust AI?
As AI is further incorporated into everyday life, more scholars, industries, and ordinary users are examining its effects on society. The Caltech Science Exchange spoke with AI researchers at Caltech about what it might take to trust current and future technologies.
What is Generative AI?
Generative AI applications, such as ChatGPT, a chatbot that answers questions with detailed written responses, and DALL-E, which creates realistic images and art from text prompts, became widely popular beginning in 2022, when companies released versions of their applications that members of the public, not just experts, could easily use.
Ask a Caltech Expert
Where can you find machine learning in finance? Could AI help nature conservation efforts? How is AI transforming astronomy, biology, and other fields? What does an autonomous underwater vehicle have to do with sustainability? Find answers from Caltech researchers.
Terms to Know
A set of instructions or sequence of steps that tells a computer how to perform a task or calculation. In some AI applications, algorithms tell computers how to adapt and refine processes in response to data, without a human supplying new instructions.
Artificial intelligence describes an application or machine that mimics human intelligence.
A system in which machines execute repeated tasks based on a fixed set of human-supplied instructions.
A system in which a machine makes independent, real-time decisions based on human-supplied rules and goals.
The massive amounts of data that are coming in quickly and from a variety of sources, such as internet-connected devices, sensors, and social platforms. In some cases, using or learning from big data requires AI methods. Big data also can enhance the ability to create new AI applications.
An AI system that mimics human conversation. While some simple chatbots rely on pre-programmed text, more sophisticated systems, trained on large data sets, are able to convincingly replicate human interaction.
A subset of machine learning . Deep learning uses machine learning algorithms but structures the algorithms in layers to create "artificial neural networks." These networks are modeled after the human brain and are most likely to provide the experience of interacting with a real human.
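The layering described in this entry can be illustrated with a toy forward pass: each layer computes weighted sums of its inputs and squashes them with a nonlinearity, and layers feed into one another. The weights below are arbitrary made-up numbers, not a trained model:

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: weighted sums passed through a sigmoid."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
            for row, b in zip(weights, biases)]

def forward(x):
    hidden = layer(x, [[0.5, -0.6], [0.3, 0.8]], [0.0, -0.1])  # first layer
    return layer(hidden, [[1.2, -0.7]], [0.05])                # second layer

print(forward([1.0, 0.0]))  # a single number between 0 and 1
```

Training would adjust the weights from data; stacking many such layers is what makes the network "deep".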
An approach that includes human feedback and oversight in machine learning systems. Including humans in the loop may improve accuracy and guard against bias and unintended outcomes of AI.
A computer-generated simplification of something that exists in the real world, such as climate change , disease spread, or earthquakes . Machine learning systems develop models by analyzing patterns in large data sets. Models can be used to simulate natural processes and make predictions.
Interconnected sets of processing units, or nodes, modeled on the human brain, that are used in deep learning to identify patterns in data and, on the basis of those patterns, make predictions in response to new data. Neural networks are used in facial recognition systems, digital marketing, and other applications.
A hypothetical scenario in which an AI system develops agency and grows beyond human ability to control it.
The data used to "teach" a machine learning system to recognize patterns and features. Typically, continual training results in more accurate machine learning systems. Conversely, biased or incomplete datasets can lead to imprecise or unintended outcomes.
An interview-based method proposed by computer pioneer Alan Turing to assess whether a machine can think.
Research Papers on Artificial Intelligence
Research papers on artificial intelligence (AI) are crucial for advancing our understanding and development of intelligent systems. These papers cover a wide range of topics, including machine learning, natural language processing, computer vision, and robotics, and provide valuable insights, innovative approaches, and breakthroughs in the field. They foster collaboration, drive progress, and serve as a reference for students, professionals, and enthusiasts. By exploring algorithms, methodologies, and applications, research papers contribute to the ever-evolving landscape of AI and play a significant role in shaping the future of the technology and its impact on various industries.
Artificial intelligence, as a field of study, emerged from the desire to create intelligent systems capable of performing tasks that typically require human intelligence. Research papers on artificial intelligence have been instrumental in advancing the field and have contributed to the rapid growth and widespread adoption of AI technologies in various domains.
The early research papers on AI focused on foundational concepts and theories, such as symbolic reasoning and expert systems. However, with the advent of machine learning, particularly deep learning, the field witnessed a paradigm shift. Machine learning algorithms that can automatically learn patterns and make predictions from data revolutionized AI research and application.
Research papers on artificial intelligence serve as the foundation for further advancements and applications in the field. They provide a means for researchers to communicate their findings, share methodologies, and collaborate with peers, ultimately driving the progress and innovation in AI research and development.
Discussion of AI Technologies
Research papers on artificial intelligence encompass a wide array of AI technologies, each explored and analyzed through rigorous investigation and experimentation. These papers serve as a conduit for sharing novel advancements, methodologies, and empirical results in the field. By delving into various AI technologies, researchers contribute to the continuous evolution and refinement of AI systems. Below are some key AI technologies commonly discussed in research papers:
Machine learning algorithms form the backbone of many AI applications, and research papers extensively explore their development and optimization. These papers investigate different types of machine learning approaches, such as supervised learning, unsupervised learning, and reinforcement learning. They propose innovative algorithms, architectures, and optimization techniques to enhance the performance and efficiency of machine-learning models. Furthermore, research papers on machine learning often address specific challenges, such as dealing with high-dimensional data, handling imbalanced datasets, and improving interpretability.
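Two of the approaches named above can be shown in miniature: a least-squares line fit (supervised learning, from labeled pairs) and a single k-means assignment step (unsupervised learning, grouping unlabeled points). The data points are invented for illustration:

```python
def fit_line(points):
    """Supervised learning in miniature: least-squares fit of y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def assign(points, centers):
    """Unsupervised learning in miniature: one k-means assignment step,
    mapping each 1-D point to the index of its nearest center."""
    return [min(range(len(centers)), key=lambda i: (p - centers[i]) ** 2)
            for p in points]

print(fit_line([(0, 1), (1, 3), (2, 5)]))   # (2.0, 1.0): the exact line y = 2x + 1
print(assign([0.2, 0.9, 7.5], [0.0, 8.0]))  # [0, 0, 1]
```

Reinforcement learning, the third approach, learns from delayed rewards rather than labels or cluster structure, and is sketched separately below.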
Natural Language Processing (NLP)
Research papers on artificial intelligence also focus on NLP, a subfield that deals with enabling machines to understand, interpret, and generate human language. These papers explore techniques for tasks such as text classification, sentiment analysis, information retrieval, and language translation. NLP research papers often present state-of-the-art models and methodologies, including deep learning architectures like recurrent neural networks (RNNs) and transformer models. They delve into language representation, word embeddings, syntactic and semantic parsing, and discourse analysis, among other topics.
Computer vision research papers tackle the challenges of enabling machines to interpret and understand visual information. These papers delve into various computer vision tasks, including object detection, image recognition, image segmentation, and image generation. They propose innovative convolutional neural network (CNN) architectures, feature extraction techniques, and image processing algorithms. Computer vision research papers also explore areas such as video analysis, 3D reconstruction, visual tracking, and scene understanding, contributing to advancements in autonomous vehicles, surveillance systems, and augmented reality.
Research papers on artificial intelligence and robotics focus on developing intelligent robots capable of autonomous decision-making, perception, and interaction with the physical world. These papers cover topics like motion planning, sensor fusion, robot learning, and human-robot interaction. They investigate algorithms for robot localization and mapping, object manipulation, grasping, and navigation in complex environments. Robotics research papers often include experimental evaluations using real robots or simulations, showcasing the practical applicability and performance of the proposed approaches.
Ethical and Societal Implications
As AI technologies become more pervasive, research papers also explore the ethical and societal implications associated with their development and deployment. These papers discuss topics such as bias and fairness in AI algorithms, transparency and interpretability of AI systems, privacy concerns in data collection and usage, and the impact of AI on employment and social structures. Ethical and societal implications research papers aim to provide guidelines, regulations, and frameworks for responsible AI development and usage, ensuring that AI technologies align with societal values and benefit humanity as a whole.
Reinforcement learning is a crucial subfield of machine learning, and research papers dedicated to this topic focus on teaching agents to make optimal decisions based on trial-and-error interactions with an environment. These papers delve into algorithms such as Q-learning, policy gradients, and deep reinforcement learning. They explore various applications, including game playing, robotics, autonomous control, and recommendation systems. Reinforcement learning research papers also investigate topics like exploration-exploitation trade-offs, reward shaping, and multi-agent reinforcement learning.
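Q-learning, mentioned above, can be illustrated on a toy problem. The five-state chain below is invented for this sketch: the agent moves left or right, reaching the last state yields reward 1 and ends the episode, and the agent learns action values purely from those trial-and-error interactions:

```python
import random

def q_learning(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on a 5-state chain with actions left (0) and
    right (1); reaching state 4 gives reward 1 and ends the episode."""
    random.seed(0)
    q = [[0.0, 0.0] for _ in range(5)]          # q[state][action]
    for _ in range(episodes):
        s = random.randrange(4)                 # exploring starts
        for _ in range(100):                    # cap episode length
            if random.random() < eps:
                a = random.randrange(2)                  # explore
            else:
                a = 0 if q[s][0] >= q[s][1] else 1       # exploit
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == 4 else 0.0
            # Q-learning update: move q[s][a] toward r + gamma * max_a' q[s2][a']
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == 4:
                break
    return q

q = q_learning()
print([0 if q[s][0] >= q[s][1] else 1 for s in range(4)])  # learned greedy policy
```

After training, the greedy policy moves right in every state, which is the shortest path to the reward; the epsilon-greedy action choice is the exploration-exploitation trade-off the papers discuss.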
Privacy-Preserving Machine Learning
Privacy-preserving machine learning is an emerging domain that addresses the challenge of leveraging sensitive data while preserving individuals' privacy. Research papers in this area propose innovative techniques such as federated learning, secure multi-party computation, and differential privacy. These papers explore methods to train machine learning models on distributed data without sharing the raw data itself, ensuring data privacy and security. Privacy-preserving machine learning research papers also analyze the trade-offs between privacy guarantees and model performance.
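One of the simplest differential-privacy building blocks is the Laplace mechanism: add noise calibrated to a query's sensitivity before releasing the answer. A sketch for a counting query follows; the records are invented, and this shows only the mechanism, not a full privacy-preserving learning pipeline:

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) by inverse-CDF transform."""
    u = random.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(values, predicate, epsilon):
    """Release a count under epsilon-differential privacy. A counting
    query has sensitivity 1 (adding or removing one person changes it
    by at most 1), so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

random.seed(1)
ages = [23, 35, 41, 29, 52, 61, 38]                 # invented records
print(private_count(ages, lambda a: a >= 40, 1.0))  # noisy value near the true count of 3
```

Smaller epsilon means more noise and stronger privacy; this is the privacy-utility trade-off the research papers analyze.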
Top Research papers
Research papers play a crucial role in shaping the field of artificial intelligence (AI) by presenting groundbreaking ideas, innovative approaches, and significant advancements. Here are some of the top research papers that have made a substantial impact in the realm of AI:
"Deep Residual Learning for Image Recognition" by Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun (2016)
This research paper on artificial intelligence introduced the concept of residual networks (ResNets), which revolutionized image recognition tasks. ResNets allowed for the training of extremely deep neural networks by introducing skip connections that facilitated the flow of information across layers. This paper demonstrated that deeper networks could achieve higher accuracy, challenging the previous belief that increasing network depth leads to diminishing performance gains.
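A minimal sketch of the residual idea: with a skip connection, a block whose learned transformation F outputs zero reduces to (roughly) the identity mapping, which is what lets very deep stacks train without degrading. The toy dimensions and plain-Python linear algebra below are illustrative, not the paper's implementation:

```python
def relu(v):
    return [x if x > 0.0 else 0.0 for x in v]

def linear(v, w, b):
    # w: rows of shape (out_dim, in_dim); b: bias vector of length out_dim.
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi for row, bi in zip(w, b)]

def residual_block(x, w1, b1, w2, b2):
    """y = ReLU(x + F(x)), where F is a small two-layer transformation.

    The skip connection (the `+ x`) lets information and gradients flow
    directly across the block.
    """
    out = relu(linear(x, w1, b1))
    out = linear(out, w2, b2)
    return relu([xi + oi for xi, oi in zip(x, out)])

# If F collapses to zero (all-zero weights), the block passes its
# (non-negative) input through unchanged -- the key property that makes
# residual learning easier than learning a full mapping from scratch.
x = [1.0, 2.0, 3.0]
zeros_w = [[0.0] * 3 for _ in range(3)]
zeros_b = [0.0] * 3
y = residual_block(x, zeros_w, zeros_b, zeros_w, zeros_b)
print(y)  # -> [1.0, 2.0, 3.0]
```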
"Generative Adversarial Networks" by Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, et al. (2014)
This seminal paper introduced the concept of Generative Adversarial Networks (GANs). GANs consist of two neural networks, a generator, and a discriminator, competing against each other. The generator aims to produce synthetic data that resembles the real data distribution, while the discriminator's task is to differentiate between real and synthetic data. GANs have since become a cornerstone of generative modeling, enabling the creation of realistic images, videos, and other types of data.
"Attention Is All You Need" by Vaswani et al. (2017)
This influential research paper on artificial intelligence introduced the Transformer model, which revolutionized the field of natural language processing (NLP). Transformers employ self-attention mechanisms to capture contextual relationships between words in a sequence, eliminating the need for recurrent neural networks (RNNs) or convolutional layers. The Transformer model achieved state-of-the-art performance in various NLP tasks, including machine translation, text summarization, and language understanding.
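The self-attention at the heart of the Transformer is scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. Below is a toy single-head, plain-Python sketch; the query, key, and value vectors are made-up examples:

```python
import math

def softmax(row):
    m = max(row)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in K]
        weights = softmax(scores)  # how much each position attends to each key
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Three "tokens" with 2-d keys/values; the query matches the first key most
# strongly, so the output is dominated by the first value vector.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
V = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention(Q, K, V)
print(out)
```

In the full Transformer this runs over many heads in parallel, with learned projections producing Q, K, and V from the token embeddings.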
"ImageNet Classification with Deep Convolutional Neural Networks" by Krizhevsky, Sutskever, and Hinton (2012)
This groundbreaking paper introduced the AlexNet model, a deep convolutional neural network (CNN), which significantly advanced image classification performance. AlexNet won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012, demonstrating the power of deep learning. The paper's success paved the way for the widespread adoption of deep CNN architectures in computer vision tasks.
"Reinforcement Learning" by Richard S. Sutton and Andrew G. Barto (1998)
This influential book presents a comprehensive overview of reinforcement learning, a subfield of machine learning concerned with learning optimal behaviors through interactions with an environment. The book provides a theoretical foundation, algorithms, and practical insights into reinforcement learning, serving as a go-to resource for researchers and practitioners in the field. Reinforcement learning has been instrumental in solving complex AI problems, including game playing, robotics, and autonomous control.
"Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks" by Alec Radford, Luke Metz, and Soumith Chintala (2016)
This research paper on artificial intelligence introduced Deep Convolutional Generative Adversarial Networks (DCGANs), which extended the GAN framework specifically for image synthesis. DCGANs demonstrated the ability to generate high-quality, diverse images from random noise vectors. This work contributed to the advancement of unsupervised learning and paved the way for subsequent research in image generation, style transfer, and image editing.
"Neural Machine Translation by Jointly Learning to Align and Translate" by Bahdanau, Cho, and Bengio (2014)
This influential paper introduced the attention mechanism in sequence-to-sequence models, greatly improving the performance of neural machine translation. The attention mechanism allows the model to focus on different parts of the input sequence during the translation process, enabling better alignment and understanding. This work significantly advanced the state-of-the-art in machine translation and inspired further research on attention-based models in various other sequence-to-sequence tasks.
"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" by Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova (2018)
This paper introduced the BERT model, which achieved state-of-the-art results in various natural language processing tasks by pre-training a transformer-based neural network on a large corpus of unlabeled text data.
"DeepFace: Closing the Gap to Human-Level Performance in Face Verification" by Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf (2014)
This research paper on artificial intelligence presented the DeepFace model, which utilized deep convolutional neural networks to achieve remarkable performance in face verification tasks, narrowing the performance gap between machines and humans.
- Research papers in AI contribute to the exchange of knowledge, methodologies, and breakthroughs among researchers, fostering collaboration and innovation.
- Research papers cover a wide range of AI technologies, including machine learning, natural language processing, computer vision, and robotics.
- Several influential research papers have significantly shaped the field of AI, including those on deep learning, generative adversarial networks, attention mechanisms, and reinforcement learning.
- Research papers also explore the ethical and societal implications of AI, addressing issues such as fairness, transparency, and privacy.
- Prominent research papers in AI inspire and guide future research directions, pushing the boundaries of the field and unlocking its potential.
Applied Artificial Intelligence
Dr. Hardik A. Gohel is the founding director of the Applied Artificial Intelligence (AAI) laboratory at University of Houston-Victoria (UHV). He is also a principal investigator in a multiyear funded project from the Department of Defense (DoD) and the Department of Energy (DoE). His areas of research are Artificial Intelligence, Digital Healthcare, Cybersecurity, and Advanced Computing.
- M. Mujawar, H. Gohel, S.K. Bhardwaj, S. Srinivasan, N. Hickman, A. Kaushik, "Nano-enabled biosensing systems for intelligent healthcare: towards COVID-19 management", Elsevier Materials Today Chemistry, doi.org/10.1016/j.mtchem.2020.100306
- H. Gohel, H. Upadhyay, L. Lagos, K. Cooper, A. Sanzetenea, "Predictive Maintenance Architecture Development for Nuclear Infrastructure using Machine Learning", Elsevier Nuclear Engineering and Technology, doi.org/10.1016/j.net.2019.12.029
- S. Bhardwaj, H. Gohel and S. Namuduri, "A Multiple Input Deep Neural Network Architecture for Solution of One-Dimensional Poisson Equation", IEEE Antennas and Wireless Propagation Letters, doi: 10.1109/LAWP.2019.2933181
- H. Upadhyay, H. Gohel, A. Pons, L. Lagos, "Virtual Memory Introspection Framework for Cyber Threat Detection in Virtual Environment", Advances in Science, Technology and Engineering Systems Journal, Vol. 3, No. 1, pp. 25-29, 2018
- H. Gohel, B.K. Garsondiya, A. Kothia, H. Jani, "Operational Study of Brain Reading-Neuroimaging in Human Brain Computer Interface (H-BCI)", American Research Journals, Vol. 2, Issue 6, pp. 1-6, 2017
- H. Gohel, H. Upadhyay, L. Lagos, "Collaborative Trustworthy Security and Privacy Framework for Social Media", The Network and Distributed System Security Symposium (NDSS), 2019
- H. Upadhyay, H. Gohel, A. Pons and L. Lagos, "Windows Virtualization Architecture For Cyber Threats Detection", IEEE Conference on Data Intelligence and Security, 2018, pp. 119-122, doi: 10.1109/ICDIS.2018.00025
- D. Levy, H. Gohel, H. Upadhyay, A. Pons and L. E. Lagos, "Design of Virtualization Framework to Detect Cyber Threats in Linux Environment", IEEE Conference on Cyber Security and Cloud Computing, 2017, pp. 316-320, doi: 10.1109/CSCloud.2017.18
- H. Gohel, P. Sharma, "Intelligent Web Security Testing with Threat Assessment and Client Server Penetration", Springer Advances in Intelligent Systems and Computing, 2016, pp. 555-567, doi: 10.1007/978-981-10-0135-2_54
- M. Anouncia, H. Gohel, S. Vairamuthu, "Data Visualization - Trends and Challenges Toward Multidisciplinary Perception", ISBN 978-981-15-2282-6, Springer Publications, 2020
- H. Gohel, "Human Brain Computer Interface (H-BCI)", ISBN 978-3-659-77990-9, Lambert Academic Publishing, 2015
- H. Gohel, H. Upadhyay, "Developing Security Intelligence in Big Data", Knowledge Computing and Its Applications, Springer Publications, 2018, pp. 25-50, doi: 10.1007/978-981-10-6680-1_2
- H. Gohel, H. Upadhyay, "Cyber Threat Analysis with Memory Forensics", CSIC Communications - Knowledge Digest for IT Community, Vol. 40, Issue 11, pp. 17-19, 2017
- H. Gohel, H. Upadhyay, "Developing Solutions with Big Data Technology", CSIC Communications - Knowledge Digest for IT Community, Vol. 41, Issue 2, pp. 18-21, 2017
AI System for IoT Device Energy Management
This research presents an artificial-intelligence-based energy management system for IoT devices that reduces overall energy consumption by intelligently activating and deactivating devices while also managing quality-of-service parameters. Both hardware and software aspects are considered in devising efficient energy conservation models for IoT. Energy transparency is achieved by modelling the energy consumed during sensing, processing, memory access and communication. A multi-agent system models the IoT devices and their energy consumption, and a genetic algorithm optimizes the parameters of the multi-agent system. Finally, simulation tools such as MATLAB Simulink and OpenModelica are used to test the system. The results show an 18.65% decrease in overall energy consumption from the decentralized intelligence of the multi-agent system for IoT.
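To make the optimization step concrete, here is a toy sketch of a genetic algorithm tuning per-device duty cycles against an energy-plus-QoS objective. The device count, QoS floors, and GA hyperparameters are invented for illustration and are not the paper's actual model:

```python
import random

# Hypothetical setup: each of 6 IoT devices has a duty cycle in [0, 1];
# energy grows with total on-time, while quality of service (QoS) requires
# each duty cycle to stay above an assumed device-specific floor.
N_DEVICES = 6
QOS_MIN = [0.2, 0.5, 0.3, 0.4, 0.1, 0.6]  # assumed QoS floors per device

def fitness(duty_cycles):
    energy = sum(duty_cycles)                           # energy ~ total on-time
    penalty = sum(max(0.0, m - d) for m, d in zip(QOS_MIN, duty_cycles))
    return -(energy + 10.0 * penalty)                   # higher fitness = better

def genetic_algorithm(pop_size=40, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(N_DEVICES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)             # selection: keep top half
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_DEVICES)           # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(N_DEVICES)                # point mutation
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = genetic_algorithm()
print([round(d, 2) for d in best])  # duty cycles settle near the QoS floors
```

The optimum drives each duty cycle down toward its QoS floor, mirroring the trade-off between energy savings and service quality described above.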
Nuclear Predictive Maintenance with Machine Learning
Nuclear infrastructure systems play an important role in national security. The functions and missions of nuclear infrastructure systems are vital to government, businesses, society and citizens' lives. It is crucial to design nuclear infrastructure for scalability, reliability and robustness. To do this, we can use machine learning, a state-of-the-art technology applied in fields ranging from voice recognition to Internet of Things (IoT) device management and autonomous vehicles. In this research, we propose to design and develop a machine learning algorithm to perform predictive maintenance of nuclear infrastructure. Support vector machine and logistic regression algorithms will be used to perform the prediction.
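A minimal sketch of the logistic-regression side of such a predictor, trained by plain gradient descent on made-up, hypothetical sensor features (the project's real features and data are not shown here):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Plain gradient-descent logistic regression (no external libraries)."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                      # gradient of the log-loss
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Hypothetical normalized sensor readings: [vibration, temperature];
# label 1 = component failed. Failures cluster at high readings.
X = [[0.1, 0.2], [0.2, 0.1], [0.3, 0.3], [0.8, 0.9], [0.9, 0.7], [0.7, 0.8]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)

def predict(x):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) > 0.5 else 0

print(predict([0.15, 0.2]), predict([0.85, 0.8]))  # -> 0 1
```

A support vector machine would replace the log-loss above with a hinge loss but produce a similar linear decision boundary on data like this.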
Predicting Nuclear Machine Failure using Logistic Regression
Security and Privacy Framework for Social Media
Social media technology provides a novel platform for faster, dynamic, manageable, cost-effective and adaptable professional network service provisioning. As such, social media is ideal for online outreach and getting traction with individuals, government organizations and business enterprises. However, as social media technology continues to provide an increasing number of functionalities to expand a social network, there is a growing need to design, develop and evaluate the security and privacy of personal and professional social media services. The proposed collaborative trustworthy security and privacy framework for social media provides a new avenue to develop improved security and privacy that can both verify and validate social media content before being made available online. It also monitors, assesses and reacts when online security and privacy is compromised.
The NVIDIA® qualification guarantees that the system's CPU, Tesla T4 and overall performance under wide-temperature conditions are above par, by putting the system through stringent tests, and ensures its features integrate seamlessly with the NVIDIA® Tesla T4. UHV's AI servers proved worthy of our applied research workloads, amplifying AI inference performance once we chose Tesla-qualified GPU platforms.
As an NVIDIA® Tesla-qualified GPU computing platform, the PowerEdge T640 is one of the most compact systems. It is a unique industrial-grade edge AI platform supporting dual NVIDIA® Tesla T4 GPU cards. The system allows innovators to run multiple models simultaneously, such as advanced applications with redundant GPU configurations, or to assign the two T4s to separate tasks, setting one for video transcoding and the other for AI inference. It supports two Intel Xeon Silver 4214 2.2 GHz 12-core CPUs with expansion capabilities, and features compact dimensions and low power consumption. With T4-boosted AI inference processing power, it is ideal for medical image and video analysis, deep learning machine vision, autonomous machines and more.
UHV has two PowerEdge T640 GPU systems with the following configuration:
- CPU: 2 Intel Xeon Silver 4214 2.2 GHz 12-core CPUs
- RAM: 512 GB 3200 MT/s RDIMM memory
- STORAGE: 16 2 TB 7.2K RPM 2.5" SATA hard drives (32 TB total storage, around 28 TB usable depending on RAID configuration)
- OS DRIVE: BOSS control card with 240 GB M.2 SSD
- NETWORK: dual onboard 10 Gb Ethernet ports, plus a second card with dual 10 Gb SFP+ ports
- GPU: NVIDIA Tesla V100S 32 GB GPU card
Both systems run Windows Server 2022 Datacenter with SQL Server 2019 Machine Learning Services for in-memory analytics, enabling faster AI computing and parallel processing.
- COSC 6405 : Programming for Data Science
- COSC 6315 : Data Science using Machine Learning
- COSC 6312 : Fundamentals of Cybersecurity
- COSC 6339 : Network Design Management
- COSC 4300 : Digital Forensics
- COSC 4339 : Telecommunication and Networks
Eureka! NVIDIA Research Breakthrough Puts New Spin on Robot Learning
A new AI agent developed by NVIDIA Research that can teach robots complex skills has trained a robotic hand to perform rapid pen-spinning tricks — for the first time as well as a human can.
The stunning prestidigitation, showcased in the video above, is one of nearly 30 tasks that robots have learned to expertly accomplish thanks to Eureka, which autonomously writes reward algorithms to train bots.
Eureka has also taught robots to open drawers and cabinets, toss and catch balls, and manipulate scissors, among other tasks.
The Eureka research, published today, includes a paper and the project's AI algorithms, which developers can experiment with using NVIDIA Isaac Gym, a physics simulation reference application for reinforcement learning research. Isaac Gym is built on NVIDIA Omniverse, a development platform for building 3D tools and applications based on the OpenUSD framework. Eureka itself is powered by the GPT-4 large language model.
“Reinforcement learning has enabled impressive wins over the last decade, yet many challenges still exist, such as reward design, which remains a trial-and-error process,” said Anima Anandkumar, senior director of AI research at NVIDIA and an author of the Eureka paper. “Eureka is a first step toward developing new algorithms that integrate generative and reinforcement learning methods to solve hard tasks.”
AI Trains Robots
Eureka-generated reward programs — which enable trial-and-error learning for robots — outperform expert human-written ones on more than 80% of tasks, according to the paper. This leads to an average performance improvement of more than 50% for the bots.
Robot arm taught by Eureka to open a drawer.
The AI agent taps the GPT-4 LLM and generative AI to write software code that rewards robots for reinforcement learning. It doesn’t require task-specific prompting or predefined reward templates — and readily incorporates human feedback to modify its rewards for results more accurately aligned with a developer’s vision.
Using GPU-accelerated simulation in Isaac Gym, Eureka can quickly evaluate the quality of large batches of reward candidates for more efficient training.
Eureka then constructs a summary of the key stats from the training results and instructs the LLM to improve its generation of reward functions. In this way, the AI is self-improving. It’s taught all kinds of robots — quadruped, bipedal, quadrotor, dexterous hands, cobot arms and others — to accomplish all kinds of tasks.
The research paper provides in-depth evaluations of 20 Eureka-trained tasks, based on open-source dexterity benchmarks that require robotic hands to demonstrate a wide range of complex manipulation skills.
The results from nine Isaac Gym environments are showcased in visualizations generated using NVIDIA Omniverse.
Humanoid robot learns a running gait via Eureka.
“Eureka is a unique combination of large language models and NVIDIA GPU-accelerated simulation technologies,” said Linxi “Jim” Fan, senior research scientist at NVIDIA, who’s one of the project’s contributors. “We believe that Eureka will enable dexterous robot control and provide a new way to produce physically realistic animations for artists.”
It’s breakthrough work bound to get developers’ minds spinning with possibilities, adding to recent NVIDIA Research advancements like Voyager , an AI agent built with GPT-4 that can autonomously play Minecraft .
NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics.
Learn more about Eureka and NVIDIA Research .