2024 Might Not Go As Planned

Happy New Year! I wish you all a happy, healthy, and prosperous 2024.

My predictions for AI in 2024 were pretty optimistic, overall, so I’m hoping that 2024 really is those things for all of us. That’s the plan, at least.

Still, there are a few worries out there, not likely to happen, but if they do they could derail the plan for the year: The Three Black Swans of AI. Though they are all very different from each other, they are all extrinsic events: something that is not AI intruding into the world of AI. Here are the three I think are the most likely, even if still not at all likely. If they happen, you were warned! (If they don’t happen, I’ll just recycle them next year…)

Copyright Lawsuits Knock LLMs Offline

Who? NY Times and lots of content creators.
Why? Billions of $ (said in your best Carl Sagan voice)

The NY Times has filed a lawsuit against OpenAI and Microsoft for “billions of dollars in statutory and actual damages.” Others are suing too.

There are a lot of interesting questions at law here, and IANAL, so I can’t offer an opinion of the merits of the case, other than to say that copyright law is tricky and vicious if you get it wrong, as Vanilla Ice can tell you.1

The best outcome entails the shuffling of some money and a happy-talk press release from the parties. But there’s always a chance that the amount of money is so large that it (a) unleashes a torrent of copycat lawsuits and/or (b) causes investment in LLMs to stop dead in its tracks. The worst outcome is that LLMs have to be re-trained from scratch with untainted data. Microsoft/OpenAI, Google, Meta may well be the only players left standing as they have the deep pockets and deep legal bench to fight their way through.

A Massive Security Breach

Who? Governmental and Freelance Hackers and/or Security Researchers
Why? Power, $, and Glory

We’ve already seen how LLMs can be goaded into leaking their training information. If a large LLM vendor gets hacked and the hackers can access fine-tuned, non-public LLMs, they might be able to extract sensitive, private data.

There’s really two ways this could happen. The first is the traditional somebody-phishes-a-login break-in to a vendor, the second is the magic-prompt-causes-chatbot-to-spill-the-beans. The first is the normal sort of InfoSec problem, but the second is something we’ve seen already happen where OpenAI’s “GPTs” will tell you the system prompt and download presumably secret file data.

You can count on state intelligence agencies already investing in hacking into and planting backdoors into LLMs.2 They tend to be covert, so news of any successes wouldn’t be broadcast. But information could eventually leak out.

A large breach would cause large enterprises to slow down investment and move away from SaaS LLM vendors, which would be a loss for those of us who can’t afford a truck-full of GPUs.3

China Invades Taiwan, shutting off NVIDIA’s pipeline

Who? China
Why? Take Wrapping up the Chinese Civil War off the To-do List

I mentioned this in my predictions as a cautionary note. NVIDIA manufactures its high-end GPUs in Taiwan, and seems content to continue to do so. An invasion of Taiwan would disrupt supplies for years, freezing the state of AI for the foreseeable future. There are lesser versions of this (a blockade, etc.) that could be quite disruptive.4

It’s beyond me to assess the likelihood of this happening in 2024, but if you want to worry there are plenty of articles to trouble you. (There are also plenty that say nonot a real worry).

Regardless, the situation may be about to change, as Taiwan holds its elections on January 13th. It appears that China has a favorite (the KMT) which is running behind in the polls. It will be interesting to see how China reacts to the results of the election.

Final Thoughts

As the saying goes, it’s difficult to make predictions, especially about the future. Probably doubly so about “surprises”. It’s easy to say that 2024 will have a bunch of surprises for us: it will. So I’m hoping for a lop-sided year of good surprises with few to no bad ones.

  1. https://www.briffa.com/blog/classic-copyright-cases-ice-ice-baby/

    Mr. Van Winkle (Vanilla Ice’s real name) wrote the song at age 16, so he can be forgiven for not understanding the nuances of copyright law. Microsoft is considerably older than 16… ↩︎
  2. To be more precise, it’s unlikely that an LLM itself can be hacked (Neural Networks are too narrow in design), but the supporting infrastructure around LLMs can be. Though I wouldn’t be completely surprised if an open-source LLM (or LLM engine) had something interesting planted into it. ↩︎
  3. There’s a white swan, if I can call it that, where this year new AI hardware comes to market at a vastly cheaper price and with greater availability. Although some of the in-the-works plans I’ve seen involved using TSMC to do the chip fab, which absolutely does not solve any problems. ↩︎
  4. Anything like this would have an upside of making 2022’s US federal CHIPS act, which provides in the ballpark of $100 billion for increased domestic semiconductor manufacturing, look like it was the work of psychics. ↩︎

Comments are closed.