Why CISOs Need to Factor In Risks from AI and Gen AI

The age of artificial intelligence (AI) and generative AI is changing the way we live, play, and work. Consider these figures. The global AI market was valued at $136.55 billion in 2022 and is projected to reach $1,811.75 billion by 2030, according to Grand View Research. India’s AI market touched $680.1 million in 2022, and the IMARC Group expects it to reach $3,935.5 million by 2028 on the back of a growing pool of skilled STEM talent, favourable government initiatives, rapid digitization, and rising demand for AI-driven solutions across industrial verticals.

The generative AI market, too, is booming. According to Fortune Business Insights, the global generative AI market size was valued at $29 billion in 2022 and is projected to grow from $43.87 billion in 2023 to $667.96 billion by 2030. Generative AI is a type of machine learning that can create new content such as images, audio, simulations, videos, texts, and even code. That said, the seemingly unlimited capabilities of generative AI and large language models (LLMs) are evolving so quickly that it’s all too easy to put risk management on the back burner, according to the “Wipro State of Cybersecurity Report 2023,” released this August.
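
For readers who want to sanity-check the arithmetic, the projected jump from $43.87 billion in 2023 to $667.96 billion by 2030 implies a compound annual growth rate of about 47.5%. The short Python sketch below uses only the figures cited above; the variable names are ours, for illustration.

```python
# Back-of-the-envelope CAGR check using the market figures cited above.
start_value = 43.87    # global generative AI market, USD billions, 2023
end_value = 667.96     # projected market size, USD billions, 2030
years = 2030 - 2023    # 7-year horizon

# Compound annual growth rate: (end / start)^(1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # about 47.5% per year
```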

Figure: A Salesforce survey of generative AI use among the online populations of the U.S., the UK, Australia, and India found the public split between users and non-users in each country (the survey notes that cultural bias may impact results).

Generative AI and Cybersecurity

The report underscores that AI is being embedded across the enterprise, in new and existing products and software, to create improved customer experiences, more intelligent software, and broad operational efficiencies. At the same time, it points out that managing the risk, security, and compliance of generative AI poses a formidable challenge to Chief Information Security Officers (CISOs). The enterprise threat landscape, from edge to cloud, is becoming more porous: it now spans millions of distributed endpoints, poorly protected remote sites and home offices, IoT/IIoT/OT devices, shadow clouds sitting alongside legitimate clouds, mobile devices that are never backed up by IT, and scores of global partners with elevated access privileges. This sprawl has helped turn hacking into a well-funded, multi-billion-dollar industry, because attackers have access to the same advanced technology tools as the businesses they target.

AI, along with its machine learning (ML) component, has the potential to sharply change the cybersecurity landscape. It can grow and learn. It can accelerate defence reactions fast enough to keep ahead of the bad actors by recognizing attacks that don’t necessarily match previously seen patterns. But like all tools, AI is only as good as the people using it.
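
To make that point about previously unseen patterns concrete, the sketch below shows one generic, illustrative approach: an unsupervised anomaly detector trained only on traffic presumed benign, which can then flag events that resemble nothing it has seen before. The feature set and data are hypothetical assumptions for illustration, not a description of any product or of the report’s methodology.

```python
# Minimal sketch: unsupervised anomaly detection over network-flow features.
# The feature choices and the data here are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend telemetry: [bytes_sent, bytes_received, connection_duration_s]
normal_traffic = rng.normal(loc=[500, 800, 30], scale=[50, 80, 5], size=(1000, 3))

# Train only on traffic presumed benign; no attack signatures are required.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# Score new events: -1 flags an outlier that matches no previously seen pattern.
new_events = np.array([
    [510, 790, 29],        # looks like ordinary traffic
    [50_000, 120, 2],      # unusually large outbound transfer, very short session
])
print(detector.predict(new_events))   # e.g. [ 1 -1 ]
```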

To avoid the kinks in the AI cybersecurity armour, “a proper deployment is truly a partnership,” according to the report. It elaborates: “You need the right people to write the code, the right people to test it and, critically, the right people to oversee the AI effort on an ongoing basis. To deploy cost-effective AI governance, enterprises must design a risk-based AI framework that includes constant monitoring and oversight to prevent it from creating security holes and backdoors that could allow data leakage to cyber thieves and business competitors.”
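
The report stops at the framework level, but one small, illustrative piece of such ongoing monitoring could be an output guardrail that scans model responses for obvious leakage patterns before they leave the enterprise. The sketch below is hypothetical; the patterns and helper functions are our assumptions, not part of any cited framework, and a real deployment would rely on proper DLP tooling.

```python
# Hypothetical guardrail sketch: scan model output for obvious leakage patterns
# before it is returned to a user or an external system. Illustrative only.
import re

# Assumed, simplistic patterns; real deployments would use dedicated DLP tools.
LEAK_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_for_leakage(model_output: str) -> list[str]:
    """Return the names of any leakage patterns found in the model output."""
    return [name for name, pattern in LEAK_PATTERNS.items()
            if pattern.search(model_output)]

def release_or_block(model_output: str) -> str:
    """Block responses that trip a leakage rule; release the rest."""
    findings = check_for_leakage(model_output)
    if findings:
        # In practice this event would be logged for the oversight team.
        return f"[BLOCKED: potential data leakage - {', '.join(findings)}]"
    return model_output

print(release_or_block("Your report is attached."))
print(release_or_block("Use key-ABCD1234EFGH5678 to call the billing API."))
```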

Key Pain Points for CISOs Across the World

In April, a report titled “The Mind of the CISO” by cybersecurity firm Trellix, too, underscored the key pain points that global CISOs across every major sector were encountering. These include:

  1. Lack of Support: All CISOs in India surveyed said they struggle at some level to get support from the executive board for the resources needed to maintain cybersecurity strength. 62% think their jobs would be easier if all employees across the entire business were better aware of the challenges of cybersecurity. In addition, 30% of CISOs cite a lack of skilled talent on their team as a primary challenge.
  2. High Pressure: 84% of CISOs in India have managed a major cybersecurity incident at least once, and 44% report it has happened more than once. 84% of respondents feel fully or mostly accountable for the incidents, and 52% experienced major attrition from the Security Operations team as a direct result.
  3. Working with Too Many Wrong Solutions: With organizations reporting an average of 25 individual security solutions in use, 34% say a top hurdle is having too many pieces of technology without a single source of truth. Many CISOs find the sheer number of security solutions available to them overwhelming, partly redundant, and difficult to manage.
  4. Need Better Solutions: 98% agree having the right tools in place would save them considerable time. 50% want access to a single integrated enterprise tool to optimize security investments.

The Bottom Line

A report released this month by research and advisory firm Gartner Inc., too, underscores that organizations that operationalize AI transparency, trust, and security will see their AI models achieve a 50% improvement in adoption, business goals, and user acceptance over the next three years. It cautions, however, that without a robust AI TRiSM program, AI models can work against the business, introducing unexpected risks. TRiSM stands for trust, risk and security management, a set of capabilities that conventional controls do not provide, according to Gartner.

Mark Horvath, VP Analyst at Gartner, emphasizes that CISOs must not allow AI to take control of their organizations. Instead, they should advocate for AI TRiSM to enhance AI outcomes. This can be achieved by accelerating the transition of AI models to production, ensuring better governance, and streamlining the AI model portfolio, potentially eliminating up to 80% of inaccurate and unauthorized information.

AI not only introduces significant data risks due to the use of sensitive datasets for training but also faces challenges related to the accuracy of model outputs and the quality of datasets, which may fluctuate over time, leading to adverse consequences. The adoption of AI TRiSM, as outlined in the report, empowers organizations to comprehend their AI models’ activities, assess their alignment with original intentions, and anticipate performance and business value outcomes.
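
One routine check that follows from the point about fluctuating dataset quality is drift monitoring, which compares the data a model sees in production against the data it was trained on. The population stability index (PSI) sketch below is a generic, hypothetical illustration of such a check, not a method prescribed by Gartner.

```python
# Hypothetical drift check: population stability index (PSI) between the
# training distribution of one feature and the distribution seen in production.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI over shared bins; larger values indicate a bigger distribution shift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero and log(0) for empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(7)
training_feature = rng.normal(0.0, 1.0, 10_000)    # what the model was trained on
production_feature = rng.normal(0.4, 1.2, 10_000)  # what the model now sees

psi = population_stability_index(training_feature, production_feature)
# A common rule of thumb: PSI above roughly 0.2 suggests a shift worth investigating.
print(f"PSI: {psi:.3f}")
```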

____________

Written By: Techquity India 
