April 2nd, 2023

Last week was not as dense as the previous ones in terms of paradigm-shifting releases, but it was by no means a quiet week: the debate around AI safety went mainstream and sometimes turned really ugly.

As always, we try to filter out the noise and draw out the important themes, key questions, and critical lessons from the past week.

Balenciaga, AI and the future of Pop Culture

To start on a lighter note: since mid-March, the Internet has taken over the Balenciaga brand and shown the world what can be achieved out of pure creativity, including in video format, albeit with technology that still needs some polish.

It is too early to draw conclusions on a positive or negative effect for Balenciaga in the long run but, if the masses convert even 1% of the creative potential showcased here, Pop Culture will be noticeably affected. It’s not only about memes or digital art: the advertising industry will undoubtedly take this on board, as everything now seems possible... and for a fraction of the usual costs. Some even see ramifications for the content industry.

Yes, it also speaks to fake news and deepfakes… but let’s keep the bad news for later. For now, just take some time to browse this thread from Kris Kashtanova (@icreatelife).

Harry Potter by Balenciaga, demonflyingfox
"Balenciaga" Pope Francis
REDDIT / U/TRIPPY_ART_SPECIAL

Twitter's open-source-washing

Last week, we touched on how much code was shared via Github. We have since learnt that it included part of Twitter’s source code, leaked by an employee several months ago. After realizing the code was public, Twitter took the appropriate measures to have Github remove it.

Twitter did, however, voluntarily share the algorithm underpinning the list of tweets that appear on users’ timelines, pursuant to Elon Musk’s promise earlier this year.

Although the Twitter community got really excited at first, nothing ground-breaking came out of it: despite initial claims that Musk’s tweets were given priority, users probably see them all the time simply because 1/ he is widely followed, 2/ he generates a lot of engagement (clearly a talent of his) and 3/ he tweets a lot.

We can trust the community to come up with ways to “optimize” tweets for the algorithm but, if being seen is really what you want, paying for a “verified” status may be a more efficient avenue. As monetizing the platform was another of Musk’s promises, it’s very likely that paying up will ultimately be the only realistic option to guarantee visibility.

Twitter Algorithm - source: Github

The bewildering communication of Tech CEOs

Twitter’s recommendation algorithm revealed one thing: Elon Musk’s engagement is specifically monitored by the company. Is it because he is an influential user? Or because he is the CEO?
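To make that “monitored” claim concrete, here is a minimal Scala sketch of per-author metric bucketing, the pattern reportedly found in the released repository (which contains a label named "author_is_elon"). This is an illustrative paraphrase, not Twitter’s actual code: the object name, predicates and user id below are assumptions for the example.

```scala
// Illustrative sketch only, not Twitter's source code.
// Shows how engagement metrics can be sliced into named per-author buckets,
// which lets a company track one specific user's engagement separately.
object EngagementBuckets {
  final case class Candidate(authorId: Long, likes: Int)

  // Assumed id for illustration (Musk's widely reported public account id).
  val ElonId = 44196397L

  // Each entry names a metrics bucket and the predicate deciding membership.
  val buckets: Seq[(String, Candidate => Boolean)] = Seq(
    ("author_is_elon", c => c.authorId == ElonId),
    ("author_is_anyone_else", c => c.authorId != ElonId)
  )

  // Aggregate a metric (here: likes) per bucket.
  def sliceLikes(cs: Seq[Candidate]): Map[String, Int] =
    buckets.map { case (name, pred) =>
      name -> cs.filter(pred).map(_.likes).sum
    }.toMap

  def main(args: Array[String]): Unit =
    // Prints: Map(author_is_elon -> 100, author_is_anyone_else -> 7)
    println(sliceLikes(Seq(Candidate(ElonId, 100), Candidate(42L, 7))))
}
```

Twitter engineers reportedly described such flags as metrics tracking rather than ranking inputs, which is consistent with “monitored” rather than “boosted”.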

The question is worth asking given the line is sometimes blurred between CEO-content and personal-content. This distinction has never troubled Musk, who also tweets about his other companies and about… anything, which has sometimes led to major controversies, including a trial for fraud. Yet the communication style is common: a CEO often personifies a brand on social media, against a backdrop of general fascination for Tech founders.

At the end of last week, OpenAI CEO Sam Altman shared a presentation of Worldcoin, a tool meant to prove a user’s personhood in order to defeat AI-powered impersonation.

Sam Altman co-founded Worldcoin three years ago as a crossover between crypto and iris-scanning technology. The buzzwords were there, but the company “was not ready to tell its story in 2021”.

Without forming any views on Worldcoin’s product (is the tech appropriate? well or badly executed?), the solution now has a truly tangible use-case. And given that OpenAI is, among others, a reason this use-case exists, Altman speaking for both companies can be confusing to say the least. Even more so as the AI-safety debate is turning political and Worldcoin has a strong political dimension, reiterated in last week’s presentation.

Twitter’s upcoming “Verified Organizations” feature, which will attach an organization’s logo to its “affiliate” accounts, could clarify this type of situation, provided organizations are willing to pay $1,000 a month. In the meantime, Altman positioned Worldcoin as evidence that HE too was conscious of AI-risk, that he had a vision to address it... and even a solution.

Geopolitics and the probability of effective AI regulation

AI-risk was the theme of the week. We talked about “naïve” calls for a slowdown in last week's wrap-up (and why they were naive) but, if anything, those calls became louder last week. Over 3,000 personalities signed an open letter calling for a six-month pause on AI development, thereby taking the debate to the public scene. Fair to say that it went in all directions, and suddenly AI started resembling Crypto: it became virtually impossible to isolate legitimate claims from completely unfounded beliefs amid the echo chamber of radical opinions. And trolls, lots of trolls. Many highly-regarded experts, across the spectrum, fell for it.

Eliezer Yudkowsky sits at one extreme end of the spectrum – the Rationalist end, a.k.a. the AI-Doomers. He explained in Time Magazine that, in his view, the only way to control powerful AI development is to control the infrastructure, and therefore to be ready to use airstrikes to destroy datacenters. He called for international coordination on the matter. An intense debate followed on whether the actual A[G]I-threat justified the approach, but no one disputed that it was indeed the only way to control A[G]I-development with certainty. The irony is that the US is currently the only country where these airstrikes could take place, given that no other country can build GPT-4 equivalents (yet).

Long before Yudkowsky's piece, the geopolitical stakes of tech hardware were already on world leaders' agendas: earlier this year, Joe Biden and the Dutch government discussed restrictions on the transfer of ASML's technology, in particular to China. ASML is the sole provider of the EUV lithography machines used to produce advanced semiconductors; without technology transfer, it could take thirty years to build similar expertise, according to Professor Chris Miller, author of Chip War.

Downstream, only three companies can build the most powerful chips: TSMC (Taiwan), Intel and Samsung. TSMC notably makes Nvidia's GPUs, which are widely used in AI. Given its critical position in the global value chain, TSMC probably does more to secure US protection of Taiwan than Taiwanese diplomats themselves.

To illustrate the complexity of international relations around advanced semiconductors: TSMC has plants in the US but recently announced that it would not scale further there, as it is subject to double taxation on its US profits. To establish a double-tax treaty with Taiwan, the US would first need to recognize Taiwan as an independent country. And that alone could trigger airstrikes.

So international coordination seems a long way off… and in the meantime, US companies will keep building.

Alphabet/Google seem determined to catch up... without leveraging OpenAI tools

Last week, The Information broke the news that a top AI engineer at Google had resigned over the use of OpenAI tools to improve Google's own AI chatbot, Bard. Everyone does it (see Alpaca), but it violates OpenAI's terms of use, so that is where the line in the sand is drawn for Google. Google firmly denied the report.

The article also mentioned that Google and DeepMind – intense rivals within the Alphabet universe – would join forces to catch up with OpenAI. As noted by @tszzl on Twitter, the project name, Gemini, recalls NASA's second human spaceflight program, designed to overcome the lead in human spaceflight capability that the Soviet Union had obtained in the early years of the Space Race.


Update April 7th, 2023: this article previously mentioned Adobe Firefly as the go-to platform for commercial uses, given better visibility on the copyright status of its training data. Although the Adobe approach is more robust than that of other generative AI platforms, the question of whether image licensing covers training seems to remain open.
