AI assistants are eye-wateringly expensive to build and operate, so how much will we be paying for them in the future?

  • Writer: Robert Salier
  • Dec 4, 2024
  • 7 min read

Updated: Apr 18



Introduction

This article discusses how AI assistants such as ChatGPT, Copilot and Gemini are hugely powerful productivity tools that will become almost indispensable to many people and businesses. It then delves into just how expensive these AI assistants are to design, build and operate.  The article then explores how much the providers of these AI assistants will need to charge to get an acceptable return on investment, and how they may extract this amount of money from the users of AI assistants.


AI Assistants are Hugely Powerful Productivity Tools

If you’ve been using an AI assistant such as ChatGPT from OpenAI, Gemini from Google or Copilot from Microsoft as a work productivity tool, you may, like me, find it is kind of like having a very inexperienced graduate working for you.


Whilst I have no affiliation with Microsoft, and am not wanting to promote any AI assistant over another, I’d recommend checking out the Copilot Scenario Library if you haven’t already.  This scenario library demonstrates how Microsoft’s AI assistant can be used as a productivity tool across a range of professions and industries.  For example, in Microsoft’s Use Cases for Marketing, the AI assistant helps with things like market research, trend analysis, drafting a proposal, and creating a PowerPoint presentation from that proposal.  Other AI assistants are similarly powerful tools.


Like the output of most fresh graduates, the drafts that any of these AI assistants create will have some issues, may miss considerations, and may contain errors to be spotted and corrected.  You may decide to re-write and/or create substantial portions yourself, but overall these AI assistants can substantially increase your productivity compared to starting with a completely blank page.  And remember, these assistants are getting better and more capable all the time.  If you haven't tried these tools in the last six months, you may be surprised.  That's a statement that holds whatever date you read this, at least for the foreseeable future.


We are in the early days

Like streaming services a few years ago, and the telecommunications and dot-com booms before that, AI is currently in a “gold rush” or “land-grab”, with the big players prepared to sustain big losses in the short term to grab market share.  We can expect that as the market matures, less will be given for free, and subscription prices will rise sharply once people get to a point where they cannot live without their digital assistant(s).


These AI assistants are eye-wateringly expensive to build and operate, and these companies (including OpenAI) are not charities.  They need a healthy return on this investment.  Or in other words, in the long term we, the customers, will pay one way or the other.


So, how expensive are AI assistants to design, build and operate?  That’s a question that’s easy to ask but much harder to answer unless you happen to be an AI product manager working for one of the companies that produce AI assistants.  Like most cost accounting, it’s hard to calculate: for example, how do you apportion shared costs that contribute to multiple products and technologies, such as R&D and operations staff, sales and support?  Perhaps most significantly, as commercial, competing companies, it is not in their interest to publicly share much of the detail.


Having said that, there is some publicly available information that can help get a feel for the magnitude of the costs involved.


Research and Development Costs

Let’s start with the cost of Research and Development.  Around the world, across many companies, many thousands of people are working on R&D of AI.


This graph shows R&D investment at Google, which spent over USD$45 billion (over AUD$69 Billion) on R&D in 2023 alone.  Since Google’s future depends very heavily on AI assistants and search, a very significant proportion, probably a large majority of this R&D is spent on AI and directly related areas.



Annual R&D expenditure of Google


Now let’s look at Microsoft.  According to Copilot, Microsoft have invested over USD$94 billion (AUD$144 billion) to date, and they are continuing to increase their annual investment each year.  That’s around AUD$17.50 for each and every person on our planet.  It’s around AUD$30 per adult 18 years or older, regardless of where in the world they live and whether they are interested in and can afford AI.  Only a portion of these people will be potential customers of AI assistants.  Also, Microsoft will be fighting with its competitors for a share of that addressable market, so just to break even on the R&D alone, Microsoft will need to recover way more than $30 per user per year, probably multiples of $30.
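The per-person figures above are simple division, and it can help to see the arithmetic laid out.  A minimal sketch, using the approximate figures quoted in this article (the exchange rate and adult head-count are my own round assumptions chosen to reproduce those figures, not official data):

```python
# Back-of-envelope check of the per-person R&D figures quoted above.
# Assumptions: Microsoft R&D to date ~USD$94B, USD->AUD rate ~1.53,
# world population ~8.2 billion, adults (18+) ~4.8 billion.
rnd_usd = 94e9
usd_to_aud = 1.53
rnd_aud = rnd_usd * usd_to_aud        # ~AUD$144 billion

world_population = 8.2e9
adults = 4.8e9

per_person = rnd_aud / world_population   # ~AUD$17.50 per person
per_adult = rnd_aud / adults              # ~AUD$30 per adult

print(f"Per person: AUD${per_person:.2f}")
print(f"Per adult:  AUD${per_adult:.2f}")
```

Change any of the inputs and the per-head figure moves proportionally, which is why such estimates should be read as orders of magnitude rather than precise numbers.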



Annual R&D expenditure of Microsoft


Ongoing Cost of Operation

Turning to operational costs, once the AI technology has been developed, the biggest ongoing costs to run AI assistants are in the data centres themselves, i.e. the buildings, endless rows of computing and storage infrastructure that need replacing every 3 to 5 years on average, staff labour costs, and the electricity to provide power and cool the electronics.


AI infrastructure uses a lot of power.  So much power that Microsoft has partnered with Constellation Energy to restart a nuclear reactor at the Three Mile Island nuclear power plant in Pennsylvania, for powering its data centres.  Microsoft have also partnered with TerraPower to purchase electricity from multiple small modular [nuclear] reactors (SMRs).



Three Mile Island Power Plant to be restarted to power Microsoft Datacentres


Amazon has invested in X-energy, a company developing SMRs, and has signed agreements with utility companies to build SMRs in Virginia and Washington USA. 


Google has signed a deal with Kairos Power to develop, deploy and purchase electricity from SMRs in multiple locations.



Concept image of a Data Centre campus with integrated Small Modular Reactor


The Global AI Infrastructure Investment Partnership (GAIIP), backed by tech investment firms plus Microsoft and NVIDIA aims to invest up to $100 billion in data centres and supporting power infrastructure.


Closer to home, Microsoft have said they will spend AUD$5 billion to expand their hyperscale cloud computing and AI infrastructure in Australia over the next two years alone.  Microsoft already has data centres in Australia serving huge numbers of customers, and the vast majority of this growth is being driven by the compute needs of AI.  There are around 20.5 million adults in Australia, so that works out to around $240 per adult Australian.  This is just the cost of the data centres themselves, without any profit margin, and excluding the huge investments in R&D of the AI technology discussed above.


How much will we have to pay?

The above R&D and operational costs are incomplete glimpses, just to give a feel for the magnitude of the overall costs.  Much more rigorous data and analysis would be needed to estimate per-customer costs with even moderate confidence.   Also remember these figures are just costs, with companies needing to make a healthy profit on top, paid by customers somehow or other.

The R&D and operations costs would need to be considered collectively, annualised, profit margin factored in, then divided by the number of people with the desire and the means to pay for AI assistants.

Given the magnitude of the R&D plus operations costs, it seems reasonable to conclude that to make a healthy profit, providers of AI assistants would need a per-customer return counted in hundreds of dollars per year, not tens.
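The reasoning above can be made concrete with a toy model.  Every input below is a made-up round number purely to illustrate the calculation; the point is the shape of the arithmetic, not the specific result:

```python
# Illustrative (hypothetical) per-customer cost model, following the
# steps described above: annualise costs, add a margin, divide by the
# number of paying users.  All inputs are invented round numbers.
annual_rnd = 40e9        # annualised share of R&D attributable to AI assistants
annual_ops = 30e9        # data centres, power, staff, hardware refresh
margin = 0.25            # target profit margin
paying_users = 300e6     # people with the desire and means to pay

required_revenue = (annual_rnd + annual_ops) * (1 + margin)
per_user_per_year = required_revenue / paying_users

print(f"Required revenue per user: ~${per_user_per_year:.0f} per year")
```

With these (invented) inputs the model lands at roughly $290 per user per year, i.e. in the hundreds, not the tens — and it stays in the hundreds across a wide range of plausible inputs.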


How are we being charged now?

Already there are paid versions of Gemini, ChatGPT and Copilot.  In Australia, Google’s “Gemini Advanced” is currently AUD$20 per month, “ChatGPT Plus” is $30 per month and Microsoft’s “Copilot Pro” is $33 per month.
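Annualising those monthly prices shows how close today’s subscriptions already are to the “hundreds of dollars per year” territory discussed above:

```python
# Current monthly subscription prices in AUD, as quoted above,
# converted to annual cost per user.
monthly_prices_aud = {
    "Gemini Advanced": 20,
    "ChatGPT Plus": 30,
    "Copilot Pro": 33,
}

for name, monthly in monthly_prices_aud.items():
    print(f"{name}: AUD${monthly * 12} per year")
```

That’s AUD$240–$396 per year per subscription, before any future price rises.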


Currently, Apple are not charging for their recently launched “Apple Intelligence”, but many industry analysts expect it will at some point create a subscription service for “Apple Intelligence+”, and add it to a higher price tier of “Apple One”.


Apple also appears to be making money indirectly on Apple Intelligence by driving sales of new hardware.  From an end-customer point of view, I was disappointed to learn that Apple Intelligence will only work on an iPhone 15 Pro or iPhone 16.  Here in Australia, the cheapest Apple iPhone 16 is AUD$1,399, going up to $2,699 (congratulations Apple, you’ve successfully managed to have people pay high-end laptop prices for a phone!).  I already have to upgrade my iPhone to enjoy Apple Intelligence.  It’s only a matter of time before my fairly new Mac and iPad Pro will not support the latest version of Apple Intelligence needed to keep my productivity competitive.


Apple would say that the need to upgrade end-user hardware is a consequence of their privacy architecture where many of their AI models run directly on the device rather than in the cloud.   I’m sure that was a carefully considered decision, with Apple adding pull-through iPhone sales into their business case from people upgrading their phones, tablets and computers to get AI.


Apple’s approach is an example of the “ecosystem strategy” popular with Big Tech, i.e. where interconnected sets of platforms, products and services work together to provide an integrated user experience.  Money is made indirectly by attracting and retaining users to their overall ecosystem.  It’s effectively an advanced form of cross-selling and “pull-through” sales.


Microsoft is also using the ecosystem approach to monetise AI, integrating Copilot into a range of business and consumer services, with Microsoft Office/365 being the most well-known.  We can expect to see Google leveraging its other ecosystem assets such as search and Android tablets and phones, although OpenAI doesn’t have much of an ecosystem to harness.


How will we be charged in the future?

Will Big Tech make enough money from AI Assistants by leveraging their ecosystem, driving hardware upgrades, and/or a simple “freemium” pricing model like they have today? At the current prices that seems unlikely.


It’s easy to imagine a future where we pay separately for different areas of expertise, e.g. an AI assistant specialising in market research, statistics, graphic design, social media strategy, campaigns … the list goes on and on.


One thing is for sure ... when choosing what price to charge for any one of these specialist assistants, Big Tech will be considering how much the equivalent human labour would cost, the relative value of an equivalent expert AI assistant, and just how much we are prepared to pay.


