Does AI really boost productivity at work? Research shows gains don’t come cheap or easy

Artificial intelligence (AI) is being touted as a way to boost lagging productivity growth.

The AI productivity push has some powerful multinational backers: the tech companies that make AI products and the consulting firms that sell AI-related services. Governments are also showing interest.

Next week, the federal government will hold a roundtable on economic reform, where AI will be a key part of the agenda.

However, the evidence that AI actually enhances productivity is far from clear.

To learn more about how AI is working and being procured in real organisations, we are interviewing senior bureaucrats in the Victorian Public Service. Our research is ongoing, but results from the first 12 participants show some key shared concerns.

Our interviewees are bureaucrats who buy, use and administer AI services. They told us increasing productivity through AI requires difficult, complex, and expensive organisational groundwork. The results are hard to measure, and AI use may create new risks and problems for workers.

Introducing AI can be slow and expensive

Public service workers told us introducing AI tools to existing workflows can be slow and expensive. Finding time and resources to research products and retrain staff presents a real challenge.


Not all organisations approach AI the same way. We found well-funded organisations can afford to test different AI uses through "proof of concept" trials. Smaller ones with fewer resources struggle with the costs of implementing and maintaining AI tools.

In the words of one participant:

It’s like driving a Ferrari on a smaller budget […] Sometimes those solutions aren’t fit for purpose for those smaller operations, but they’re bloody expensive to run, they’re hard to support.

‘Data is the hard work’

Making an AI system useful may also involve a lot of groundwork.

Off-the-shelf AI tools such as Copilot and ChatGPT can make some relatively straightforward tasks easier and faster. Extracting information from large sets of documents or images is one example, and transcribing and summarising meetings is another. (Though our findings suggest staff may feel uncomfortable with AI transcription, particularly in internal and confidential situations.)

But more complex use cases, such as call centre chatbots or internal information retrieval tools, involve running an AI model over internal data describing business details and policies. Good results will depend on high-quality, well-structured data, and organisations may be liable for mistakes.

However, few organisations have invested enough in the quality of their data to make commercial AI products work as promised.

Without this foundational work, AI tools won’t perform as advertised. As one person told us, “data is the hard work”.

Privacy and cybersecurity risks are real

Using AI creates complex data flows between an organisation and servers controlled by giant multinational tech companies. Large AI providers promise these data flows comply with the relevant laws: for instance, that organisational and personal data is kept in Australia and not used to train their systems.

However, we found users were cautious about the reliability of these promises. There was also considerable concern about how products could introduce new AI functions without organisations knowing. Using those AI capabilities may create new data flows without the necessary risk assessments or compliance checking.

If organisations handle sensitive information or data that could create safety risks if leaked, vendors and products must be monitored to ensure they comply with existing rules. There are also risks if workers use publicly available AI tools such as ChatGPT, which don’t guarantee confidentiality for users.

How AI is really used

We found AI has increased productivity on "low-skill" tasks, such as taking meeting notes and handling customer service enquiries, and on work done by junior staff. Here, AI can help smooth the output of workers who may have weaker language skills or who are learning new tasks.

But maintaining quality and accountability typically requires human oversight of AI outputs. The workers with less skill and experience, who would benefit most from AI tools, are also the least able to oversee and double-check AI output.

In areas where the stakes and risks are higher, the amount of human oversight necessary may undermine whatever productivity gains are made.

What’s more, we found when jobs become primarily about overseeing an AI system, workers may feel alienated and less satisfied with their experience of work.

We found AI is often used for questionable purposes, too. Workers may use AI to take shortcuts without understanding whether their use complies with organisational guidelines.

Not only are there data security and privacy concerns, but using AI to review and extract information can introduce other ethical risks such as magnifying existing human bias.

In our research, we saw how those risks prompted organisations to adopt even more AI, in the form of enhanced workplace surveillance and control. A recent Victorian government inquiry recognised these methods may be harmful to workers.

Productivity is tricky to measure

There’s no easy way for an organisation to measure changes in productivity due to AI. We found organisations often rely on feedback from a few skilled workers who are good at using AI, or on claims from vendors.

One interviewee told us:

I’m going to use the word ‘research’ very loosely here, but Microsoft did its own research about the productivity gains organisations can achieve by using Copilot, and I was a little surprised by how high those numbers came back.

Organisations may want AI to facilitate staff cuts or increase throughput, and may measure productivity in those terms.

But these measures don't capture changes in the quality of products or services delivered to customers. Nor do they capture how the workplace experience changes for remaining workers, or the considerable costs, which flow primarily to multinational consultancies and tech firms.


The authors thank the research participants for sharing their insights, the researchers who contributed their expertise to the initial analysis of interview transcripts, and the Office of the Victorian Information Commissioner for supporting participant recruitment.

By Jake Goldenfein, Senior Lecturer, The University of Melbourne
