The age of artificial intelligence (AI) and generative AI is changing the way we live, play, and work. Consider these figures. The global AI market was valued at $136.55 billion in 2022 and is projected to reach $1,811.75 billion by 2030, according to Grand View Research. India’s AI market reached $680.1 million in 2022, and the IMARC Group expects it to grow to $3,935.5 million by 2028 on the back of a growing pool of skilled talent in STEM fields, favourable government initiatives, rapid digitization, and increasing demand for AI-driven solutions across industrial verticals.
The generative AI market, too, is booming. According to Fortune Business Insights, the global generative AI market was valued at $29 billion in 2022 and is projected to grow from $43.87 billion in 2023 to $667.96 billion by 2030. Generative AI is a type of machine learning that can create new content such as images, audio, simulations, videos, text, and even code. That said, the seemingly unlimited capabilities of generative AI and large language models (LLMs) are evolving so quickly that it is all too easy to put risk management on the back burner, according to the “Wipro State of Cybersecurity Report 2023,” released this August.
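To make the category concrete, here is a minimal, hedged sketch of text generation using the open-source Hugging Face transformers library; the model name, prompt, and settings are illustrative choices for this example only and are not drawn from any of the reports cited here.

```python
# Minimal illustration of generative AI producing new text.
# Requires: pip install transformers torch
from transformers import pipeline

# "gpt2" is used only because it is small and freely available;
# production systems typically rely on far larger LLMs.
generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI is changing enterprise cybersecurity because"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with newly generated text.
print(outputs[0]["generated_text"])
```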
[Chart] Source: Salesforce survey of the general online population in the U.S., the UK, Australia, and India on generative AI use, showing that within each country the public is split between users and non-users (note: cultural bias may affect results).
The report underscores that AI is being embedded across the enterprise, in new and existing products and software, to create improved customer experiences, more intelligent software, and broad operational efficiencies. At the same time, it points out that managing the risk, security, and compliance of generative AI poses a formidable challenge to Chief Information Security Officers (CISOs). The enterprise threat landscape from edge to cloud is becoming more porous: it now includes millions of distributed endpoints, poorly protected remote sites and home offices, IoT/IIoT/OT devices, shadow clouds operating alongside legitimate clouds, mobile devices that are never backed up by IT, and scores of global partners with elevated access privileges. This has helped turn hacking into a well-funded, multi-billion-dollar industry, because attackers now have access to the same advanced technology tools as the businesses they target.
AI, along with its machine learning (ML) component, has the potential to sharply change the cybersecurity landscape. It can grow and learn. It can accelerate defence reactions fast enough to keep ahead of the bad actors by recognizing attacks that don’t necessarily match previously seen patterns. But like all tools, AI is only as good as the people using it.
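As a rough illustration of that idea, the hedged sketch below uses unsupervised anomaly detection (scikit-learn’s IsolationForest) to flag events that do not resemble previously seen traffic; the feature names, values, and thresholds are assumptions made purely for the example and are not taken from any report cited above.

```python
# Hypothetical sketch: flagging events that don't match previously seen patterns.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic features: [bytes_sent, session_duration_s, login_hour]
normal_traffic = rng.normal(loc=[5000, 300, 13], scale=[1500, 90, 3], size=(1000, 3))

# Train only on traffic assumed to be benign; no attack signatures are needed.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# New events, including one that looks nothing like the training data.
new_events = np.array([
    [5200, 310, 14],    # ordinary session
    [900000, 5, 3],     # huge transfer, tiny duration, 3 a.m. -> suspicious
])

# 1 = looks normal, -1 = flagged as anomalous
print(model.predict(new_events))
```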
To avoid the kinks in the AI cybersecurity armour, “a proper deployment is truly a partnership,” according to the report. It elaborates: “You need the right people to write the code, the right people to test it and, critically, the right people to oversee the AI effort on an ongoing basis. To deploy cost-effective AI governance, enterprises must design a risk-based AI framework that includes constant monitoring and oversight to prevent it from creating security holes and backdoors that could allow data leakage to cyber thieves and business competitors.”
In April, a report titled “The Mind of the CISO” by cybersecurity firm Trellix similarly underscored the key pain points that global CISOs across every major sector were encountering.
A report released this month by research and advisory firm Gartner Inc. adds that organizations that operationalize AI transparency, trust, and security will see their AI models achieve a 50% improvement in adoption, business goals, and user acceptance over the next three years. Without a robust AI TRiSM program, however, AI models can work against the business and introduce unexpected risks. TRiSM stands for trust, risk and security management, a set of capabilities that conventional controls do not provide, according to Gartner.
Mark Horvath, VP Analyst at Gartner, emphasizes that CISOs must not allow AI to take control of their organizations. Instead, they should advocate for AI TRiSM to enhance AI outcomes. This can be achieved by accelerating the transition of AI models to production, ensuring better governance, and streamlining the AI model portfolio, potentially eliminating up to 80% of inaccurate and unauthorized information.
AI introduces significant data risks because sensitive datasets are used for training, and it also faces challenges with the accuracy of model outputs and the quality of datasets, both of which can fluctuate over time, sometimes with adverse consequences. The adoption of AI TRiSM, as outlined in the report, helps organizations understand what their AI models are doing, assess whether they remain aligned with their original intent, and anticipate their performance and business value.
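As a hedged illustration of the dataset-drift point, the sketch below compares a feature’s distribution at training time with live production data using a two-sample Kolmogorov–Smirnov test (scipy.stats.ks_2samp); the feature values and the 0.05 threshold are assumptions for this example only, not a prescription from Gartner’s report.

```python
# Illustrative drift check: has a model input feature shifted since training?
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # data the model learned from
live_feature = rng.normal(loc=0.6, scale=1.2, size=5000)      # data the model sees in production

statistic, p_value = ks_2samp(training_feature, live_feature)

if p_value < 0.05:  # illustrative significance threshold
    print(f"Drift detected (KS statistic={statistic:.3f}); review or retraining may be needed.")
else:
    print("No significant drift detected.")
```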
____________
Written By: Techquity India