Aligned: introduction

I was recently offered a chance to contribute to a project by a French group of thinkers called Technoréalisme. If you read French, they're well worth looking up; they offer a unique perspective on AI and I wrote about one of their works earlier this year (in French only). Their latest project aims to tackle thorny issues tied to the rise of AI tools – specifically, how these tools might deepen our cognitive vulnerabilities and increase our exposure to manipulation by malicious actors, whether politically or ideologically motivated.

While I felt that my own contribution might be limited, their project reminded me that I'd tried to tackle these very issues myself on at least three occasions since 2022, only to stall out each time. My first attempt, back in early November 2022, had to be parked following the release of ChatGPT: AI was going mainstream and I needed to reassess. Subsequent attempts also fizzled out, always for valid reasons. This pattern, ironically, might speak to the very cognitive vulnerabilities I intended to expose.

The invitation from Technoréalisme, however, provided the nudge I needed to revisit those scattered notes and finally try to put them into shape.

Because this series grew out of exploring AI's downsides, it might seem quite critical of the technology. That's not entirely reflective of my overall stance. After all, PITTI stands for the promotion of innovation in information techniques and technologies. I don't endorse every criticism levelled against AI, but I try to be realistic about its intrinsic limitations and risks, especially regarding its cognitive impact on society. There are undeniable downsides, but in pinpointing how AI currently falls short, it's vital to distinguish between its technical shortcomings and more fundamental conceptual challenges.

On the technical side, the machine is bound to improve. The critical question then becomes: will things necessarily get better, or potentially worse, as the technology matures?

The first part of this series explores the potential disconnect between technological and societal innovation, and what makes a tool truly transformative for both.

The second part sheds a technical light on the profound transformations in the information value chain over the past 25 years.

The third part tackles the conceptual dangers arising from the spread of sophisticated influence tools as AI continues to advance.

This series brings together various thoughts and fragments posted here since 2022, aiming for a more cohesive perspective. Given its focus on the challenges, I hope to find time to balance it out with another series focusing on the positive aspects of Tech and AI. I remain deeply convinced that the opportunities they offer individuals and society are immense.
