April 16th, 2023

After weeks of AI-hype, it is tempting to search for weak signals elsewhere in order to find refreshing news. Augmented Reality / Virtual Reality looks like the next trending topic given recent leaps in both hardware and software, yet discussing an potential AR/VR-summer seems premature. So let's focus on important AI-related questions including, but not limited to, economy and security

Can you know if a market that does not exist yet is already overcrowded?

Despite clear limitations for the moment and major security concerns (more details below), the agentic platforms that have emerged over the past few weeks offer a glimpse at what an AI-powered future could look like. However, reliance on foundation model providers and the restriction to use these models for commercial uses represent significant hurdles to invest massively in the space. Last week’s releases of Databrick's Dolly and LAION's Open Assistant (non-profit that assembled the dataset for Stable Diffusion) could be the missing pieces to enable exponential growth of LLM-hooked apps.

Last week, OpenAi’s CEO declared that they were not training GPT-5 given GPT-4 offers plenty of opportunities already. Elon Musk and Amazon unveiled plans to join the AI party. With Bedrock, Amazon seems focused on the infrastructure side of things, looking to let users build apps with APIs to leverage third-party foundation models (AI21labs, Anthropic, stabilityAI) as well as in-house solutions such as Titan. As Amazon derive a huge part of their revenues from cloud infrastructure (AWS – critically important in the generative AI era), they have little incentive to challenge AI-pure-players too aggressively. Down the line, Amazon can nonetheless be expected to compete with OpenAi as Titan should catch-up with GPT-4, including on coding abilities (which users can test for free since last week). Elon Musk, on the other hand, is likely to go head-to-head with OpenAi as soon as possible.

Billions of dollars already at risk

It is unclear if ego and money will be sufficient for Elon Musk to achieve his ambitions in AI as, even ignoring supply chain challenges, he needs to find a suitable place in an ecosystem where Tech giants compete fiercely for multi-billion-dollar market shares. Amazon, Google and Microsoft all have a strong footprint in Cloud infrastructure which can constitute sticky entry points to push AI services to businesses. For consumers, the search engine route is probably the best but it is far less sticky. And Google are facing a serious issue there as Microsoft challenge their incumbent position with Bing's GPT chatbot : the NYTimes reported rumors that Samsung could switch to Bing as default search engine (c.$3bn annual revenues at stake for Google); a trend that Google would be keen to nip in the bud as the Apple contract (c.$20bn annual revenues) is also up for renewal this year.

Billions of dollars are also at stake in the music industry : as reported last week, generative-AI is likely going to change music forever, and the internet has provided multiple examples of this again last week. The Financial Times reported that Universal Music Group (1/3 of global music market) urged streaming platforms like Apple and Spotify to prevent AI-platforms to train on copyrighted songs. The threat has been known for a while and the Recording Industry Association of America, active in lobbying on behalf of the music record industry, warned US trade representatives about this last year. Generative AI will undoubtedly bring RIAA back to its glorious past, when it took on the peer-to-peer file sharing players one-by-one between 1998 and 2010.

Agentic-AI's obvious risks... and the others

Generative AI can be used for copyright infringement and, unfortunately, for more serious types of malicious activities such as deepfake kidnapping scams, which made the news recently. The saddening reality is that we may soon need to establish code-words with relatives and co-workers to be sure that distress calls can be trusted. Yet, malicious AI-agents are still prone to hilarious fails, like these Twitter bots sharing the chatGPT error message when trying to generate offensive content.

Impersonation scams and fake news dissemination are obvious negative aspects of AI developments, but these remain human-induced risks, for now. The systems themselves have flaws and users do not fully realize that. For LLMs interacting with third-party tools, prompt injection is likely at the top of the list. At the most basic level, prompt injection is illustrated by what Arvind Narayanan did earlier this year: he hid a message in his personal page (i.e wrote in white font on white background) asking Bing to include fake information in his bio. And it turned out that, if you asked Bing’s chatbot Sydney for a summary of his bio, it would follow the hidden instruction and make sure it includes the fake part. This specific case is benign but it shows that, when they interact with external content, chatbots pick up new prompts which may alter the response. And there is no known way to prevent this. It is all the more concerning if your chatbot is just one part of an app connected to your personal information: given the chatbot cannot hide anything, external content could prompt the bot to reveal sensitive information. Simon Willison’s blog covers other examples of prompt injection and their implications.

Machiavelli, von Neumann and Axelrod

We said earlier that that unethical behaviors were, for now, human-driven. However as agents, motivated by a reward mechnism, have demonstrated surprising abilities to navigate complex social environments, a benchmark for ethical behavior dubbed MACHIAVELLI was proposed by researchers at the University of California last week (see paper here). The team observed a “tension between maximizing rewards and behaving ethically” as agents ”trained for goal optimization often exhibit unethical and power-seeking behaviors, analogous to how language models trained for next-token prediction output often toxic text”.

The simulations are an interesting illustration of the underlying principles of John von Neumann’s Game Theory developed in the 1940’s (von Neumann was a strong supporter of a preventive war in application of game-theoretic thinking). Recent AI-agent simulations represent incredible opportunities to revisit the Game Theory and its corollaries. The famous “prisoner’s dilemma” was described in 1950 to show that the game-theoretic approach could lead to sub-optimal scenarios for all [non-cooperating] participants, which is in line with the MACHIAVELLI paper. However, we have learnt since the 1950’s that, in an iterative scenario, greedy strategies tend to perform poorly whilst altruistic strategies lead to better outcomes. In 1984, Robert Axelrod used the iterated prisoner’s dilemma as a possible explanation for the evolution of altruistic behavior through natural selection – a problem that had puzzled the scientific community for decades. It is somewhat surprising that AI-agent simulations do not replicate the conclusions of the iterated prisoner’s dilemma (which were also computer-based) so future simulations using long-term, reflection-based memory system will be interesting to follow.

Machine learning and cryptography research probably need to converge

Finally, as more LLMs are developed and used, fundamental questions around model security may arise. In this Quanta Magazine article on undetectable back-doors. Shafi Goldwasser, who developed the first digital signature scheme, explains that further research at the intersection of cryptography and machine learning is necessary, "akin to the fruitful exchange of ideas between the two fields in the 1980s and 1990s".


Correction May 5th, 2023: a previous version of this article stated that LAION was the team "behind" Stable Diffusion. It was amended to clarify that LAION assembled the dataset on which Stable Diffusion was trained. Stable Diffusion's origin is the object of a number of controversies, including on Stable Diffusion's Wikipedia page which currently explains that "The development of Stable Diffusion was funded and shaped by the start-up company Stability AI". It is not an accurate representation of the relationship between Stability AI and the owners of the IP of Stable Diffusion. Stable Diffusion is an open-source model developed by the Ludwig Maximilian University of Munich and the startup Runway as reported in this article from Sifted. Stability AI contributed to the funding of the research team and subsequently used the open-source model.
We care about your privacy so we do not store nor use any cookie unless it is stricly necessary to make the website to work
Got it
Learn more