The Generative AI market is reaching a point of no return. As the name suggests, Generative AI (GenAI) models generate images, music, speech, code, video, or text, even as they interpret and manipulate existing data. According to Grand View Research Inc., the global generative AI market is forecast to reach $109.37 billion by 2030, growing at a compound annual growth rate (CAGR) of 35.6% from 2023 to 2030.
According to the 2023 Gartner Hype Cycle, innovations set to transform organizations within the next decade include GenAI applications; foundation models; and AI trust, risk, and security management (AI TRiSM). In fact, foundation models sit at the very “Peak of Inflated Expectations” on the Hype Cycle, with Gartner predicting that by 2027, foundation models will underpin 60% of natural language processing (NLP) use cases — a significant jump from less than 5% in 2021.
[Figure: 2023 Gartner Hype Cycle. Source: Gartner, Inc.]
But what are foundation models? The latest machine learning (ML) approach behind GenAI is based on a neural network architecture called the transformer. Combining the transformer architecture with unsupervised (or self-supervised) learning has given rise to very large models, the so-called foundation models. By building atop a foundation model, companies can create models tailored to specific use cases or domains, also known as vertical models. Early examples such as GPT-3, BERT, T5, and DALL-E offer a glimpse of the immense potential of this technology: one can enter a short prompt, and the system generates a writeup or an image as needed. Built on this principle, the Large Language Models (LLMs) we are now all familiar with are trained on large amounts of text data for NLP tasks and contain upwards of 100 million parameters.
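To make that prompt-to-output loop concrete, here is a minimal sketch that prompts a small, openly available foundation model for a text continuation using the Hugging Face transformers library. The choice of GPT-2 and the generation settings are illustrative, not drawn from the article.

```python
# Minimal sketch: prompting a pretrained foundation model for text generation.
# Assumes the Hugging Face `transformers` library (with a backend such as
# PyTorch) is installed; GPT-2 is chosen because it runs locally, no API key.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A short natural-language prompt; the model generates a continuation.
prompt = "Generative AI will change enterprise software because"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

The same pattern scales up: swapping in a larger checkpoint or a hosted model changes the quality of the output, not the shape of the code.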
Moreover, Gartner predicts that by 2026, more than 80% of enterprises will be using GenAI application programming interfaces (APIs) or models, deploying GenAI-enabled applications in production environments, or both. This is a huge spike from less than 5% in 2023. These applications use GenAI to improve the user experience (UX) and deliver the outcomes users want. Within the workforce, such applications will permeate a wide spectrum of skill sets. For instance, the most common pattern for GenAI-embedded capabilities today is “text-to-X, which democratizes access for workers to what used to be specialized tasks via prompt engineering using natural language,” according to Arun Chandrasekaran, Distinguished VP Analyst at Gartner. He acknowledged, though, that these applications still present obstacles such as hallucinations (making convincing errors) and inaccuracies that may limit their widespread impact and adoption.
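In practice, the text-to-X pattern usually reduces to a short API call against a hosted model. The sketch below uses the OpenAI Python SDK purely as one familiar example; the model name and prompts are placeholders, and other providers’ APIs follow the same basic shape.

```python
# Hedged sketch of the "text-to-X" pattern via a hosted GenAI API
# (OpenAI Python SDK, v1.x style). Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Prompt engineering in plain natural language: a previously specialized
# task (summarization) framed as a simple text-to-text request.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice, not from the article
    messages=[
        {"role": "system", "content": "You are a concise technical summarizer."},
        {"role": "user", "content": "Summarize the key risks of deploying "
                                    "GenAI in production, in three bullet points."},
    ],
)

print(response.choices[0].message.content)
```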
GenAI, according to Chandrasekaran, has become a top priority for the C-suite, sparking innovation in new tools that go beyond foundation models. As a result, demand for GenAI has grown across sectors including healthcare, life sciences, legal, financial services, and government. In light of this demand, he added that technology leaders should start with models that score highly on performance leaderboards and that offer “superior ecosystem support” and sufficient “enterprise guardrails” to ensure security and privacy.
In this context, AI TRiSM, as defined by Gartner, ensures governance, trustworthiness, fairness, reliability, robustness, efficacy, and data protection for AI models. AI TRiSM encompasses solutions and techniques for model interpretability, data and content anomaly detection, AI data protection, model operations, and resistance against adversarial attacks. This framework is vital for responsible AI implementation, with Gartner anticipating widespread adoption within the next two to five years. According to the research firm, organizations that integrate AI TRiSM into their operations by 2026 can expect a 50% improvement in AI model adoption, achievement of business goals, and user acceptance. For further insights, read “Why CISOs Need to Factor Risks from AI, Gen AI.”
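To ground one of those TRiSM building blocks, here is a deliberately simple, hypothetical sketch of an output guardrail that screens a model’s response before it reaches users. Everything in it (the blocked-terms list, the function names, the withholding policy) is invented for illustration; real deployments would rely on dedicated moderation models or policy engines.

```python
# Hypothetical sketch of a data-protection guardrail: screen model output
# before delivery. The term list and policy are placeholders, not any real
# product's API.
BLOCKED_TERMS = {"ssn", "password", "credit card"}  # placeholder policy

def passes_guardrail(model_output: str) -> bool:
    """Return True only if the output clears a basic data-protection check."""
    lowered = model_output.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def deliver(model_output: str) -> str:
    # Output that fails the check is withheld rather than shown to the user;
    # a production system would also log it for review.
    if passes_guardrail(model_output):
        return model_output
    return "[response withheld: flagged by data-protection guardrail]"

print(deliver("Your account password is hunter2"))   # withheld
print(deliver("GenAI adoption is accelerating."))    # delivered
```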
IBM also acknowledges that Generative AI offers a compelling opportunity to enhance employee productivity and boost enterprise efficiency. However, as C-suite executives explore generative AI solutions, they encounter several questions: Which use cases will bring the most value to their business? Which AI technology is the most suitable for their requirements? Is it secure and sustainable? What are the governance protocols? And how can they guarantee the success of their AI projects? These questions are highlighted in an IBM blog post.
A survey by consultancy firm Deloitte reveals that almost 75% of respondents say their companies are experimenting with Generative AI. Among them, 65% have integrated the technology within their businesses, and 31% are applying it for external purposes. However, a significant 56% of participants don’t know, or are uncertain, whether their organizations have ethical guidelines for the use of Generative AI.
These results are featured in Deloitte’s second annual report on the “State of Ethics and Trust in Technology.” The report emphasizes that companies are grappling with various concerns related to Generative AI models, such as data privacy, widening the digital divide, the risk of plagiarism, dissemination of harmful content and misinformation, and the displacement of workers.
The study polled over 1,700 professionals from diverse business and technical backgrounds across various industries, with the aim of evaluating how ethical standards for emerging technologies are implemented within their organizations. Deloitte defines emerging technology as encompassing cognitive technologies, digital reality, ambient experiences, autonomous vehicles, quantum computing, distributed ledger technology, and robotics. Cognitive technologies, in turn, include AI, generative AI, machine learning (ML), neural networks, bots, and natural language processing.
The research highlights a growing belief in the positive impact of cognitive technologies on society, although concerns about their potential harm are escalating at an even faster pace. In this year’s survey, 39% of participants expressed that cognitive technologies hold the greatest potential for good among all emerging technologies, compared to 33% last year. However, 57% of respondents identified cognitive technologies as the most likely to present significant ethical risks, a notable increase from 41% in 2022.
Moreover, a substantial 73% of respondents stated that their organizations are reassigning tasks for some employees due to the adoption of new technologies. Of these organizations, 85% choose to retain individuals whose roles are impacted, and 67% invest in retraining or upskilling those employees for new positions. This counters the prevailing belief that emerging technology will lead to widespread job loss. More broadly, when respondents were asked to identify key ethical concerns related to the use of Generative AI in businesses, only 7% mentioned the worry of Generative AI replacing human jobs. Among other recommendations, the report notes that companies should explore proofs of concept and pilot programs: pilots that do not meet requirements or are considered too high-risk can be discontinued at this stage, before companies take on significant risk.
____________
Written By: Techquity India