When is generative AI the wrong tool?

Enterprises are bullish on generative AI’s potential, with plans to pour more resources into initiatives and embark on companywide shifts around its adoption.

Less than two years after OpenAI’s ChatGPT release, generative AI is already the most prevalent use of artificial intelligence, according to Gartner research.

“We really have to help our teams, our business peers, our executives, who are sort of swept up by the hype, get a grip on where this is actually useful and where other things might be better,” said Rita Sallam, distinguished VP analyst and Gartner Fellow on the data and analytics team, at the firm’s IT Symposium/Xpo last week. “We really have to keep the organization grounded as to when it makes sense.”

CIOs have to clearly communicate and explain when the technology is an effective solution and when it’d be best to try other options, like knowledge graphs or reinforcement learning. After all, organizations are relying on technology leaders’ expertise to avoid costly missteps.

Failed technology projects can hurt an organization’s reputation, customer relationships and the bottom line. Organizations that deployed AI in 2023 spent between $300,000 and $2.9 million in the proof-of-concept phase, and many generative AI experiments never make it past the nascent stage, according to Gartner research.

Sallam said generative AI is generally not the best tool for enterprises to:

  • Plan and optimize
  • Predict and forecast
  • Make critical decisions
  • Run autonomous systems

Enterprises are full of potential use cases, but choosing the ones that will bring the most value and the least amount of risk is key.

Generative AI’s weaknesses — including a lack of reliability, a tendency to hallucinate and limited reasoning — can derail many use-case ideas, Sallam said. Technology leaders can turn to other forms of artificial intelligence, including predictive machine learning, rule-based systems and other optimization techniques, for better results.
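As a hypothetical illustration of the predictive machine learning Sallam points to for forecasting tasks, the sketch below fits a simple regression model with scikit-learn to project demand from historical spend. The figures and variable names are invented; the point is only that the output is a repeatable, auditable number rather than generated text.

```python
# Hypothetical sketch: a predictive model for a forecasting task, the kind
# of job Gartner suggests is better served by classical machine learning
# than by a generative model. Data and variable names are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Twelve months of (advertising spend, units sold) -- toy numbers.
ad_spend = np.array([10, 12, 15, 14, 18, 20, 22, 21, 25, 27, 30, 32]).reshape(-1, 1)
units_sold = np.array([110, 123, 150, 148, 176, 199, 214, 210, 247, 262, 290, 311])

model = LinearRegression()
model.fit(ad_spend, units_sold)

# Forecast next month's demand for a planned spend of 35 -- a repeatable
# figure, unlike a free-text answer from a large language model.
forecast = model.predict(np.array([[35]]))
print(f"Forecast units sold: {forecast[0]:.0f}")
```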

Large language models struggle to carry out exact calculations, making generative AI a poor fit for use cases like marketing allocation or route optimization, according to Sallam. Instead, CIOs can use knowledge graphs and composite AI, defined as a combination of AI techniques. The guardrails needed to ensure responsible, secure use of the technology can also hinder experiments like automated trading and agents; reinforcement learning would be a better route, Sallam said.
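To make the contrast concrete, here is a minimal sketch of a marketing-allocation problem solved exactly as a linear program with SciPy, rather than asked of a language model. The returns per dollar, budget and channel caps are invented for illustration.

```python
# Hypothetical sketch: marketing budget allocation as a linear program,
# solved deterministically with scipy.optimize.linprog. All numbers are
# invented for illustration.
from scipy.optimize import linprog

returns = [1.8, 1.4, 1.1]        # expected return per dollar for channels A, B, C
budget = 100_000                 # total spend allowed
caps = [60_000, 50_000, 40_000]  # per-channel spending limits

# linprog minimizes, so negate the returns to maximize total return.
result = linprog(
    c=[-r for r in returns],
    A_ub=[[1, 1, 1]],            # total spend <= budget
    b_ub=[budget],
    bounds=[(0, cap) for cap in caps],
    method="highs",
)

print("Optimal spend per channel:", [round(x) for x in result.x])
print("Expected return:", round(-result.fun))
```

The solver returns the provably optimal split under the stated constraints every time it runs, which is exactly the guarantee an LLM cannot make for arithmetic-heavy planning.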

Wrong place, wrong task

Generative AI thrives in content generation, knowledge discovery and conversational user interfaces. This has spurred countless solutions targeting text and coding, Q&A systems, knowledge management and virtual assistants.

Enterprises have been lured in. Just 6% of organizations have deferred generative AI investments, according to a Capgemini survey published in July.

“Don’t get me wrong, I think the potential is huge,” Sallam said. But the hype has pushed leaders to focus too heavily on, and potentially overinvest in, generative AI at the expense of the enterprise, according to Sallam.

“The hype is dangerous,” Sallam said. “Organizations that solely focus on generative AI can risk failure in their AI projects and miss out on many important opportunities, so we want to make sure that the hype around generative AI doesn’t take the oxygen out of the room.”

Vendors have recently put an emphasis on AI-powered agents with autonomous capabilities, for example. Slack and SAP announced agent capabilities in existing solutions in recent weeks. Salesforce moved its Agentforce platform to general availability this week. Microsoft plans to add agents to Copilot Studio next month.

“We hardly hear them talking about copilots now,” Sallam said. “They’ve moved on to agents, and that definitely holds promise … but the reality now is that’s still a work in progress. You still have to be careful.”

CIOs must consider how autonomous capabilities fit into governance and risk management frameworks, especially as enterprises underline the importance of human control and intervention. Sallam said techniques like reinforcement learning provide an alternative for powering autonomous systems.
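For readers unfamiliar with the approach, the sketch below shows tabular Q-learning on a tiny toy environment, one generic form of the reinforcement learning Sallam mentions. The environment, rewards and hyperparameters are invented and are nowhere near a production autonomous system.

```python
# Hypothetical sketch: tabular Q-learning on a five-position corridor, where
# the agent learns to step right to reach the goal. States, rewards and
# hyperparameters are invented for illustration only.
import random

N_STATES = 5           # positions 0..4; reaching 4 ends the episode
ACTIONS = [-1, +1]     # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action index]

for _ in range(2000):                      # training episodes
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < EPSILON else q[state].index(max(q[state]))
        next_state = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Standard Q-learning update.
        q[state][a] += ALPHA * (reward + GAMMA * max(q[next_state]) - q[state][a])
        state = next_state

# After training, the learned policy should prefer stepping right everywhere.
print([("left", "right")[row.index(max(row))] for row in q[:-1]])
```

The agent improves by trial, error and an explicit reward signal, behavior that can be bounded and audited, rather than by generating plausible text.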

Technology leaders should also urge caution around use cases that could introduce bias-based risk. Incorporating generative AI into critical decisions, such as hiring or allocating loans, could be a recipe for disaster; rule-based systems and composite AI offer a more reliable option, Sallam said.
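As a hypothetical illustration of the rule-based alternative, the sketch below screens a loan application with explicit, auditable rules and reports exactly which rules fired. The thresholds and fields are invented; the point is that every outcome can be traced to a stated rule, which a generative model cannot guarantee.

```python
# Hypothetical sketch: a transparent rule-based screen for a loan application,
# the kind of deterministic logic contrasted here with handing the decision
# to a large language model. Thresholds and fields are invented.
from dataclasses import dataclass

@dataclass
class Application:
    credit_score: int
    debt_to_income: float   # monthly debt / monthly income
    months_employed: int

def screen(app: Application) -> tuple[str, list[str]]:
    """Return a decision plus the exact rules that fired, so the outcome
    can be explained and audited."""
    reasons = []
    if app.credit_score < 620:
        reasons.append("credit score below 620")
    if app.debt_to_income > 0.43:
        reasons.append("debt-to-income ratio above 43%")
    if app.months_employed < 12:
        reasons.append("less than 12 months of employment")
    return ("refer to underwriter" if reasons else "pass screen", reasons)

decision, why = screen(Application(credit_score=600, debt_to_income=0.5, months_employed=8))
print(decision, why)
```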

“You’re not going to want to leave that up to your large language model,” Sallam said.
