Q and A with MEP Marietje Schaake
The liberal ALDE group’s spokesperson on transatlantic trade and digital trade has emerged as a leading critic of Big Tech on the European stage.
2/23/18, 10:30 AM CET
MUNICH — Marietje Schaake is not one to mince her words — and the 39-year-old leaves no doubt that she is not intimidated by the market power held by American tech giants.
Instead, the Dutch social-liberal MEP believes, it’s time for Europe to fight back and boost government oversight of algorithms used by technology firms.
“Now, I understand companies don’t like regulation,” she said. “[But] there is absolutely no willingness to have more algorithmic oversight and accountability on the part of these companies.”
Over the last couple of years, Schaake — the liberal ALDE group’s spokesperson on transatlantic trade and digital trade — has emerged as a leading critic of Big Tech on the European stage, just as companies like Facebook and Google are facing increasing scrutiny over their impact on society and are being accused of dodging their political responsibility.
During the Munich Security Conference earlier this month, Schaake met with representatives of the companies that flocked to Germany to meet with political and business leaders. POLITICO sat down with her on the sidelines of the conference.
In his address to world leaders, Eric Schmidt, the former executive chairman and now a technical adviser at Google’s parent company Alphabet, said, ‘To be clear, humans will remain in charge of [artificial intelligence] for the rest of time.’ Is he right?
I thought Eric Schmidt was talking very lightly, even frivolously, about very, very serious topics. He said that people watched too many movies; that the challenges that AI might bring are challenges that we can deal with later; that we don’t have to rush ahead.
But he failed to recognize how fundamentally impactful AI is likely to be. And if he has ways of knowing that the human being will always be sort of superior to the technology, then I would like to hear more details than what we heard from him.
You were among the first politicians to warn of biases lurking inside the algorithms at the core of AI. Have other policymakers woken up to the issue?
At long last, there is more awareness of the deeply impactful consequences of the use of technologies on our democracies, to begin with.
[But] now that we’re looking at AI, it’s fair to say that China and Russia have already identified it as the new arms race. That should tell us something — whether we like it or not, that’s how they’re looking at it.
When you look at research budgets, we Europeans are lagging behind. And at the same time, we’re not yet seeing major efforts toward global norms or global ethics discussions to prevent the development of offensive capacities or fully automated killer drones and robots.
Policymakers should jump right into this discussion and we should all think about what is sacred in a time of rapid technological development. And I would say the rule of law is a pillar that must remain robust despite new developments, whether it’s AI or other technology.
What role should the EU play in that?
The EU is increasingly acting as a norm-setter in a hyperconnected world. We’ve seen European leadership toward net neutrality, toward data protection rules.
We are building a digital single market — but we should also think about a digital single space. It has to go further: there is a deep understanding in Europe that safeguarding the rule of law against self-regulation and privatized law enforcement is necessary. But we do have to hurry up. And we have to invest in research.
Europe is moving in the right direction morally, but it lacks the ambition and the resources to match.
After meeting with tech companies, is your impression that they’re aware of their growing responsibility?
With great power comes great responsibility — and I don’t see even the beginning of that responsibility being taken, frankly.
There is a big catch-up happening on the part of some social media and tech platforms, but I am afraid it’s mainly because their reputation is at stake, not because there is a fundamental belief that they have a responsibility.
When I was warning Facebook in the years before the presidential elections and asking them, “How do you make sure that you don’t become, voluntarily or involuntarily, part of determining the outcome of the election?” — that question was ridiculed.
So when it comes to so-called “fake news” — I prefer to talk about “junk news,” the influencing through information, the lack of transparency of political ads, the embedding in political campaigns to help them best be represented on social media, the influencing through bots and propaganda from foreign states — the answer from many tech companies is, “Okay, it’s fine, we’ll just tweak our algorithm. We’ll deal with it, don’t worry, no regulation, no regulation.”
Now, I understand companies don’t like regulation. [But] without oversight of these algorithms, without checks and balances inside the very companies that are making fundamental decisions about our democracies, I don’t think we are seeing the beginning of taking these things seriously.
Everything points to a lack of checks and balances. There is absolutely no willingness to have more algorithmic oversight and accountability on the part of these companies. As [Salesforce founder] Marc Benioff said in Davos, if CEOs forsake their responsibility, of course, you have to have regulation.
And I think this is the point at which we are.