Picture this: you go, as you do every day, to your local newsagent to buy your favourite newspaper. This morning, he says to you: “I’m going to make your life easier. I’ve read Le Monde cover to cover. In fact, I read every single newspaper overnight, and even devoured every bit of international news. And I’m going to sum it all up for you in two minutes.” Then the newsagent proceeds to cobble together a summary of the news that he thinks you’ll be interested in. How has he chosen that information from the plethora he has just read? Is he capable of recognising fake news? What are his ideological biases? And, lastly, how much can you trust his summary? This is not science fiction: this is exactly what generative AI does by standing between us and all of the content it has “digested”.
Beyond the issues of copyright and the fight against dis- and misinformation, which are of course vital to address, there is another concern: by becoming the new intermediaries, generative AI tools could become our new “gateway” to the internet and to the online services we use. According to some, they could even replace traditional search engines.
By taking direct control over access to knowledge and the ability to share it, which lie at the very heart of the internet model, generative AI threatens our freedom of choice in accessing online content and our freedom of expression. This is a fundamental challenge to the principle of an open internet, under which every ISP is prohibited from discriminating against users’ access to the content relayed over its network.
For a "right to parameterisation"
In France, Arcep is responsible for ensuring that this principle of an open internet is upheld. That is why it is speaking out today, in its response to the European Commission’s public consultation, about the impact that generative AI is having on these issues.
What can we do?
Let’s begin by taking the courses of action set out in the report from the AI Task Force commissioned by the Government last year: let us train and educate every citizen and business in these new tools; let us support the development of open AI tools whose biases can be audited by third parties; let us demand more transparency on the data used to train them and on the results of those audits, without which they can never be considered trustworthy. Let us also have a clear-eyed view of the partnerships currently being forged. And let’s not be naïve: the same Big Tech companies that already have such a hold over our digital lives will have the same ability to determine how information circulates in future. We can be certain that the same causes will produce the same effects.
If we want to take advantage of AI’s tremendous potential in areas such as health, understanding climate change, education, the economy and society in general, let us ensure that the internet remains an open space of freedom and innovation. Let us demand our “right to parameterisation”, the right to control our own settings, so that everyone has control over the AI they use, as suggested by France’s National Consultative Commission on Human Rights. This is how we take (back) control in the name of a desirable digital future, while keeping our ability to buy Le Monde, or any other newspaper, every day and to shoot the breeze with our favourite newsagent.