
Generative AI may be creating more work than it saves

May 24, 2024 | Hi-network.com
Image: AI depicted on screens (Andriy Onufriyenko/Getty Images)

There's common agreement that generative artificial intelligence (AI) tools can help people save time and boost productivity. Yet while these technologies make it easy to run code or produce reports quickly, the backend work needed to build and sustain large language models (LLMs) may require more human labor than the effort they save up front. Plus, many tasks may not need the firepower of AI at all when standard automation will do.

That's the word from Peter Cappelli, a management professor at the University of Pennsylvania Wharton School, who spoke at a recent MIT event. On a cumulative basis, generative AI and LLMs may create more work for people than they alleviate. LLMs are complicated to implement, and "it turns out there are many things generative AI could do that we don't really need doing," said Cappelli.

Also: Rote automation is so last year: AI pushes more intelligence into software development 

While AI is hyped as a game-changing technology, "projections from the tech side are often spectacularly wrong," he pointed out. "In fact, most of the technology forecasts about work have been wrong over time." He said the imminent wave of driverless trucks and cars, predicted in 2018, is an example of rosy projections that have yet to come true.

Broad visions of technology-driven transformation often get tripped up in the gritty details. Proponents of autonomous vehicles promoted what "driverless trucks could do, rather than what needs to be done, and what is required for clearing regulations -- the insurance issues, the software issues, and all those issues." Plus, Cappelli added: "If you look at their actual work, truck drivers do lots of things other than just driving trucks, even on long-haul trucking."

A similar analogy can be drawn to using generative AI for software development and business. Programmers "spend a majority of their time doing things that don't have anything to do with computer programming," he said. "They're talking to people, they're negotiating budgets, and all that kind of stuff. Even on the programming side, not all of that is actually programming."  

Also: Agile development can unlock the power of generative AI - here's how

The technological possibilities of innovation are intriguing, but the rollout tends to be slowed by realities on the ground. In the case of generative AI, any labor-saving and productivity benefits may be outweighed by the amount of backend work needed to build and sustain LLMs and algorithms.

Both generative and operational AI "generate new work," Cappelli pointed out. "People have to manage databases, they have to organize materials, they have to resolve these problems of dueling reports, validity, and those sorts of things. It's going to generate a lot of new tasks, somebody is going to have to do those."

Also: Generative AI is the technology that IT feels most pressure to exploit

He said operational AI that's been in place for some time is still a work in progress. "Machine learning with numbers has been markedly underused. Some part of this has been database management questions. It takes a lot of effort just to put the data together so you can analyze it. Data is often in different silos in different organizations, which are politically difficult and just technically difficult to put together."

Cappelli cited several issues that must be overcome in the move toward generative AI and LLMs:

  • Addressing a problem/opportunity with generative AI/LLMs may be overkill - "There are lots of things that large language models can do that probably don't need doing," he stated. For example, business correspondence is seen as a use case, but most work is done through form letters and rote automation already. Add the fact that "a form letter has already been cleared by lawyers, and anything written by large language models has probably got to be seen by a lawyer. And that is not going to be any kind of a time saver." 
  • It will get more costly to replace rote automation with AI - "It's not so clear that large language models are going to be as cheap as they are now," Cappelli warned. "As more people use them, computer space has to go up, electricity demands alone are big. Somebody's got to pay for it."
  • People are needed to validate generative AI output - Generative AI reports or outputs may be fine for relatively simple things such as emails, but for more complex reporting or undertakings, there needs to be validation that everything is accurate. "If you're going to use it for something important, you better be sure that it's right. And how are you going to know if it's right? Well, it helps to have an expert; somebody who can independently validate and knows something about the topic. To look for hallucinations or quirky outcomes, and that it is up-to-date. Some people say you could use other large language models to assess that, but it's more a reliability issue than a validity issue. We have to check it somehow, and this is not necessarily easy or cheap to do."
  • Generative AI will drown us in too much and sometimes contradictory information - "Because it's pretty easy to generate reports and output, you're going to get more responses," Cappelli said. Also, an LLM may even deliver different responses for the same prompt. "This is a reliability issue -- what would you do with your report? You generate one that makes your division look better, and you give that to the boss." Plus, he cautioned: "Even the people who build these models can't tell you those answers in any clear-cut way. Are we going to drown people with adjudicating the differences in these outputs?"
  • People still prefer to make decisions based on gut feelings or personal preferences - This issue will be tough for machines to overcome. Organizations may invest large sums of money in building and managing LLMs for roles, such as picking job candidates, but study after study shows people tend to hire people they like, versus what the analytics conclude, said Cappelli. "Machine learning could already do that for us. If you built the model, you would find that your line managers who are already making the decisions don't want to use it. Another example of 'if you build it, they won't necessarily come.'"

Cappelli suggested the most useful generative AI application in the near term is sifting through data stores and delivering analysis to support decision-making processes. "We are awash in data right now that we haven't been able to analyze ourselves," he said. "It's going to be way better at doing that than we are." Along with database management, "somebody's got to worry about guardrails and data pollution issues."

