March 26th, 2023

Two weeks ago, the immediate future was all about prompt engineering, fine-tuning Large Language Models and hooking them up to other apps in order to tame AI and, eventually, reach human-like abilities in specific niches. Since then, Microsoft has issued a 150-page report claiming that AGI is here: maybe not in the form that people initially expected, but here nonetheless.

Writing weekly wrap-ups is testing these days, as sentences can become irrelevant and ideas obsolete within minutes. But let's not be deterred, and instead try to identify the key long-term takeaways (you won't read about Google's Bard here).

GitHub, digital infrastructure with global scale

Let's start with something unrelated to AI, or only indirectly related. On Friday last week, GitHub announced that it had changed its RSA SSH host key because the private key had been exposed inadvertently. According to GitHub's official statement, the key did not grant access to customer data, but it could have been used to impersonate GitHub or to eavesdrop on users' operations. This may sound like a non-event in a week otherwise filled with major announcements (see below), yet it is worth a mention given how critical the Microsoft-owned company has become over the years: today, 100 million software developers use GitHub to manage their projects' code with Git (a standalone version control system that can be used independently of GitHub), but also to collaborate with other developers, to log and answer issues, to provide support...
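In practice, the rotation meant that every developer connecting to github.com over SSH would see a host-key mismatch warning until the old entry was removed from their known_hosts file; the usual fix is the ssh-keygen -R github.com one-liner. As a purely illustrative sketch, and assuming the entries are stored unhashed, the equivalent clean-up looks roughly like this:

```python
from pathlib import Path

# Illustrative only: remove stale github.com entries from ~/.ssh/known_hosts
# so the new RSA host key can be accepted on the next connection.
# Assumes plain (unhashed) known_hosts entries; hashed entries would need
# `ssh-keygen -R github.com` instead.
known_hosts = Path.home() / ".ssh" / "known_hosts"

lines = known_hosts.read_text().splitlines()
kept = [line for line in lines if not line.startswith(("github.com", "ssh.github.com"))]

known_hosts.write_text("\n".join(kept) + "\n")
print(f"Removed {len(lines) - len(kept)} stale GitHub host-key entries")
```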

This service wrapper around Git is what made GitHub dominant, and its dominance only accelerated when the AI-based coding assistant GitHub Copilot was launched in June 2021 (trained on the open-source code hosted by GitHub). As Vijih Assar put it in an article on digital infrastructure ownership published last year, GitHub "has shaped [users'] thinking so deeply that [they] could not live without it". As of February 2023, Copilot was generating 46% of the code written by developers using it, up from 27% in June 2022. Users are locked in, and last week's announcement of Copilot X is unlikely to make them switch. If anything, it should reinforce the trend.

The API-to-Everything

Up until Wednesday last week, we thought we would cover the various AI services: those that have access to the internet, those that do not but can give the impression that they do, and how to build an integrated platform to let offline models access the web.

But on Wednesday, OpenAI announced their plugin platform for ChatGPT, which includes internet access plus services from third parties onboarded via an extremely streamlined process (essentially describing the service in a plain-English manifest, and that's it). This gives ChatGPT "eyes and ears" and, given what we already know about the next iteration of the underlying model, GPT-4, plugins mean that the API-to-Everything, from and to highly powerful AI, is very close.

ChatGPT initial plugins - source: openai.com - March 2023
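To give a sense of how light that onboarding is, here is a sketch of what such a manifest can look like. The field names follow the ai-plugin.json format OpenAI has documented, as we understand it, but the to-do-list service, URLs and descriptions below are entirely made up for illustration:

```python
import json

# Illustrative plugin manifest (ai-plugin.json), loosely following the format
# OpenAI published for ChatGPT plugins. The service, URLs and wording below
# are invented; only the field names reflect the documented schema.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Acme To-Do",
    "name_for_model": "acme_todo",
    "description_for_human": "Manage your to-do list from ChatGPT.",
    # The plain-English description the model reads to decide when to call the API.
    "description_for_model": (
        "Plugin for creating, listing and deleting items on the user's to-do "
        "list. Use it whenever the user asks about tasks or reminders."
    ),
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "logo_url": "https://example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://example.com/legal",
}

print(json.dumps(manifest, indent=2))
```

The key part is the description_for_model field: a few plain-English sentences are all the model gets to decide when and how to call the third-party API described by the OpenAPI file.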

OpenAI's Masterclass

OpenAI is setting an incredible pace, and if you feel that it's going too fast, it may be because the plan is perfectly executed: by the time you understand the potential of whatever product they have shipped, they already have a solution ready to exceed that potential. This means that all the unveiled steps of their roadmap had been lined up for months. And the more third parties they involve, the further ahead they need to plan. Here, a number of companies obviously had time to work on their plugins, which made us wonder: why was ChatGPT released almost four months ago without access to the web, when they probably knew from the outset where it was going? And by the way, how long had GPT-4 been ready?

Both the Microsoft team that drafted the report issued last week and Bill Gates personally had glimpses of GPT-4's capabilities in Autumn 2022: Bill Gates "watched in awe as they asked GPT […] 60 multiple-choice questions from the AP Bio exam" in September. Microsoft could see early on that GPT-4 was capable of incredible things (see section F of the report, Additional examples for interaction with the world). It would also make sense that Microsoft's multi-billion investment announced in January 2023 was not only a leap of faith, but was supported by hard evidence thoroughly challenged by Microsoft's technical and legal teams. If that is the case, releasing ChatGPT on November 30th based on GPT-3.5 and without internet access, rather than waiting for GPT-4, reveals an elaborate release strategy. They could have waited, given the competitive environment.

Most tech experts focus on products (technical aspects, performance, UX, ...), sometimes on company culture, but the business and operational side of things is often overlooked. Releasing a new technology is not simple; it demands a lot of planning and strategy. Not thinking things through can lead to ridiculous situations, like NVIDIA's release last week of a software library whose name, cuLitho, reads rather unfortunately in Spanish. When the technology has paradigm-shift potential and will likely disrupt many industries, like the iPhone in 2007, the strategic dimension is even more critical. But Apple products never caused people to freak out about an existential threat, as GPT-4 could do in an unprecedented manner.

In that context, OpenAI did not take any shortcuts: they released attenuated versions of their technology to let people get comfortable and educate themselves about its potential, whilst they were working on the roll-out of much more powerful versions a few months later. If you take a step back, you must recognize what OpenAI has achieved: regardless of whether you believe such powerful AI is a good thing, the execution is a masterclass given all the challenges it represented.

Meanwhile, financial regulators worry about inflation

Bill Gates' blog post last week was entitled "The Age of AI has begun", and the pace is only going to accelerate despite naïve calls for a slow-down. There is little doubt that reasonably good LLMs will soon be a commodity and, if that is the end-game, you want to be the platform, not the product. When building a platform, there is a very strong first-mover advantage because of network effects. The most basic principle of Game Theory implies that the slow-down is not coming: even if every lab would be better off pausing together, each one is individually better off racing ahead, whatever the others do. And OpenAI, by dropping its non-profit status, has shown that Game Theory applies to it too.
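A toy model makes that dynamic concrete. The payoff numbers below are invented purely for illustration, but they capture the structure: whatever the other lab does, racing pays more, so the only equilibrium is that everyone races.

```python
from itertools import product

# Toy "pause vs race" game between two AI labs. Payoffs are illustrative only:
# (row player's payoff, column player's payoff), higher is better.
ACTIONS = ("pause", "race")
PAYOFFS = {
    ("pause", "pause"): (3, 3),   # coordinated slow-down: comfortable for both
    ("pause", "race"):  (0, 4),   # the lab that pauses loses the market
    ("race",  "pause"): (4, 0),
    ("race",  "race"):  (1, 1),   # everyone races: worse than a joint pause
}

def best_response(opponent_action, player):
    """Return the action maximizing this player's payoff against a fixed opponent."""
    def payoff(my_action):
        profile = (my_action, opponent_action) if player == 0 else (opponent_action, my_action)
        return PAYOFFS[profile][player]
    return max(ACTIONS, key=payoff)

# A profile is a Nash equilibrium if each action is a best response to the other.
equilibria = [
    (a, b) for a, b in product(ACTIONS, repeat=2)
    if a == best_response(b, 0) and b == best_response(a, 1)
]
print(equilibria)  # -> [('race', 'race')]
```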

Soon the pre-AI period will seem as distant as the days when using a computer meant typing at a C:> prompt rather than tapping on a screen.

Bill Gates - GatesNotes.com - March 21, 2023

With an API-to-Everything and LLMs' ability to develop "ideas" independently, the meaning of concepts like "intellectual property" or "creativity" will likely shift significantly over the next five years. Some business models will be impacted, but that is probably not the most disruptive consequence of the current revolution. Last week, we touched on the impact in the workplace, as humans can hardly compete with AI for many office and desktop-based tasks. The announcements since then point to an accelerated timeline for AI adoption by companies, and therefore a real shock in terms of processes and of labor.

This does not mean that people will not transition to other roles or other sectors, but the transition cannot be smooth, especially for those who have not exercised their ability to learn for a while. It would be reasonable to stop calling for a slow-down and instead start focusing on plans to leave no one behind. Whether they are in complete denial (like many IT/digital teams in companies) or just distracted by more pressing systemic threats (e.g. bank failures), it is unclear whether country leaders and governments grasp the need to prepare a response. For those elected recently, the shock will take place before the next election, so accompanying populations through the AI transition should probably move up the priority list. They simply cannot leave it to influencers who have already started positioning themselves with rhetoric along the lines of "AI is going to destroy 25% of the jobs, sign up here to learn how to keep yours".

Finally, the AI transition in the workplace is not only a threat from a political perspective but also a real challenge for financial stability: if AI adoption at company level triggers a major shock to the labor market, it should lead to intense deflationary pressure. How ironic in the current context…
