Point72 Ventures

Clunky copilot or benevolent boss? Why I believe AI should prompt us

By Sri Chandrasekar

Consider how much time predictive typing on your phone or in email saves you. Perhaps one minute per day? If so, those savings don’t exactly reduce our work to the 15-hour work week John Maynard Keynes predicted in the 1930s. In fact, one study of 37,000 people found predictive typing actually slowed them down. But that’s the sort of AI available to the mass market these days, and there is a lesson in that.

I believe predictive typing is representative of the way many companies are applying AI—in ways that feel obvious but ultimately don’t increase productivity. I personally don’t think we need an AI to finish our sentences. And we shouldn’t need to prompt an AI to write us something. Instead, I think AIs should be prompting us.

It’s my belief that the right tool with access to the full repository of our work information can draw upon enough context to tell us what to prioritize. Rather than an AI copilot, it’d be a chief of staff that knows enough to set the agenda.

Perhaps this sounds counterintuitive or dystopian, but let’s explore what it would look like applied to four common use cases where, in my observation, AI copilots haven’t yet made a meaningful productivity difference.

Most jobs aren’t actually that easy to automate

Let’s define what we mean by AI: software that uses algorithms to simulate human intelligence by reasoning, drawing conclusions, applying judgment, and possibly, acting on the result. The current generation of models performs best where the success criteria are repetitive and clearly defined, such as making a diagnosis, producing an image, or answering a question.

When it comes to divergent, nonlinear thinking, skilled humans still outperform AI on a range of tasks, and that is the kind of thinking most jobs require. While a majority of workers do some repetitive tasks, my sense is that most jobs are actually full of heterogeneous tasks. It’s part of what makes work interesting. There may be some areas where automation can completely eliminate rote, repetitive jobs, but I observe that software as a service (SaaS) has largely done that already or made serious inroads. The big opportunities in AI, I believe, will not necessarily all come from further process automation.

At least some Wall Street analysts are reporting that only one publicly traded software company has reported revenue or profit gains as a result of using gen AI. To me, it’s worth asking whether copiloting really drives productivity gains. AI startups also appear to have lower gross margins than other software startups: the computation is expensive, especially relative to the value of the output.

Yes, there are many studies about jobs “exposed to AI” on the assumption that the gains will come from process automation. But I find most of the literature on this flawed because it confuses potential with probability, in the same way people in the 1950s felt flying cars were imminent because they were possible. Mind you, the primary “replacement” study was produced by OpenAI, which sells AI software. And Pew Research’s definition of “job exposure” is based on conjecture and thought experiments, not actual tests.

So where’s the AI opportunity? Consider what actually restricts worker productivity: I think it’s that people don’t know what action to take next. They’re blocked when their instructions are vague or abstract, or when they’d have to sift through an unrealistic volume of information to prioritize their day.

What if instead of AI saving people one minute per day writing emails, it saved them one hour in organizing their materials for a meeting? Or saved them two hours by canceling meetings that lack an agenda? Or told them the optimal way to spend their day?

AI may be able to help us get clear on the most valuable activities so we can focus on just those. I have observed that this is difficult for people because, while enterprises are awash with information, a human can only access and weigh so much. Brains have limits. Our working memory holds only about seven digits in a row, or three to five meaningful items, and the prevailing theory is that we can maintain only around 150 close relationships.

Machines do not face these precise limits. This is not to bash the brain, which is the result of 500 million years of experimentation. But AI, by comparison, has a vastly larger active memory and capacity for more concurrent connections. It doesn’t just review some Slack messages. It can access all of them, all of the time, for every decision.

So why don’t I see more AI startups taking the chief of staff approach, where the AI uses all that data to make decisions and prompt us?

Perhaps startups find the copilot model appealing because it requires less context. Or perhaps these startups are following in the well-worn groove of the software applications that came before them, which aimed to solve task management issues. But imagine the chief of staff model applied to those same use cases, where instead of you filling in the software, the software fills itself in. You might go from digitally recording a meeting you booked to the AI booking your meetings, declining requests, and even making cancellable dinner reservations.

One of the great benefits I see to the chief of staff model is that it could attune itself to you. If you correct it, I believe it would be able to learn your preferences just as a person would. I am excited about the idea of a tool that taps into AI’s potential to be a learning machine, realizing that you prefer new business calls in the morning and no meetings the day after a red-eye flight.

I can see this applying across industries; consider the following hypothetical examples:

  • Medical billing—In the copilot model, AI can help workers input medical reimbursement codes marginally faster (the predictive typing approach). But in the chief of staff model, I believe it could make people meaningfully more productive. It could scan all those files and say, “Here are the 37 billings that are most valuable; do those first. Reject that dental visit, but don’t reject the cosmetics claim,” and so on.
  • Legal issues—AI drafting or reusing clauses is an okay use case, though only slightly faster than without AI because a human must still verify the work. I believe AI applied to helping you identify legal risks could generate much more value. For example, “These 17 leases will auto-renew, do you want that? And this contract you signed three years ago has a pricing uplift charge you should contest. And this contract has a typo that substantially alters the meaning of this provision, and you should correct it.”
  • Customer success—I believe an AI chief of staff could use a wide variety of data (emails, tickets, app telemetry, phone calls, and more) to pinpoint what’s working and what’s not in a customer relationship. It could possibly observe what works across similar accounts based on the geography, product features, and champion profile to suggest a next-best action to that account manager. This approach could theoretically transform a one-to-many customer success strategy into a many-to-many approach for all accounts—something that today is very difficult at scale.
  • Product management and user research—I believe AI could simulate user behavior at scale to predict how users will interact with (or break) software. These insights might proactively guide engineers towards additional features that would increase engagement, flag bugs, or safeguard against bad behavior. It would be something of a virtual product manager or chief of staff for engineers.

How does this work practically?

I think the chief of staff play may best be tested as an enterprise use case. There, people usually aren’t fully deciding on how to spend their time, and the data is multitudinous and reasonably well-structured in record-of-work software. If organizations are setting clear goals for their people but the prioritization is nebulous, I believe an AI chief of staff could work its magic.
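
To make this concrete, here is a minimal sketch, in Python, of how such a prioritization step might work under the assumptions above. Everything in it is hypothetical: the WorkItem fields, the priority() heuristic, and the goal weights are illustrative stand-ins for whatever an AI chief of staff would actually learn from record-of-work data, not a description of any existing product.

from dataclasses import dataclass
from datetime import date


@dataclass
class WorkItem:
    title: str
    estimated_value: float  # expected dollar impact if completed (hypothetical field)
    effort_hours: float     # rough effort required
    due: date               # deadline pulled from the record-of-work system
    goal: str               # which stated organizational goal it supports


def priority(item: WorkItem, goal_weights: dict, today: date) -> float:
    """Goal-weighted value per hour, with a mild boost as the deadline approaches."""
    weight = goal_weights.get(item.goal, 0.5)    # goals the org didn't name get a neutral weight
    days_left = max((item.due - today).days, 1)  # avoid division by zero for due/overdue items
    urgency = 1.0 + 1.0 / days_left
    return weight * (item.estimated_value / max(item.effort_hours, 0.25)) * urgency


def propose_agenda(items, goal_weights, today, top_n=3):
    """Return the top_n items as a suggested agenda, highest priority first."""
    ranked = sorted(items, key=lambda i: priority(i, goal_weights, today), reverse=True)
    return [f"{rank}. {item.title} (supports: {item.goal})"
            for rank, item in enumerate(ranked[:top_n], start=1)]


if __name__ == "__main__":
    goal_weights = {"collect outstanding billings": 1.0, "renew key accounts": 0.8}
    items = [
        WorkItem("Resubmit the highest-value claims", 50_000, 6, date(2024, 6, 14), "collect outstanding billings"),
        WorkItem("Prepare renewal brief for top account", 20_000, 4, date(2024, 6, 20), "renew key accounts"),
        WorkItem("Tidy up CRM notes", 500, 3, date(2024, 7, 1), "internal hygiene"),
    ]
    for line in propose_agenda(items, goal_weights, today=date(2024, 6, 10)):
        print(line)

In practice, I suspect the hard part is not the ranking itself but assembling trustworthy inputs from record-of-work systems and learning the goal weights from how people accept or override the suggestions.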

This is something I’m thinking about a lot as an investor, and as an individual who interacts with predictive typing in my email, where I suspect it slows me down. I believe AI can do much more for us. If you’re a founder or researcher in this space who wants to talk, I’d welcome the conversation.

This is not an advertisement nor an offer to sell nor a solicitation of an offer to invest in any entity or other investment vehicle. The information herein is not intended to be used as a guide to investing or as a source of any specific investment recommendation, and it makes no implied or express recommendation concerning the suitability of an investment for any particular investor. The opinions, projections and other forward-looking statements are based on assumptions that the authors believe to be reasonable but are subject to a wide range of risks and uncertainties, and, therefore, actual outcomes and future events may differ materially from those expressed or implied by such statements. Point72 Private Investments, LLC or an affiliate may seek to invest in one or more of the companies discussed herein.