Sovereignty and security
Why does it actually matter whether Sweden and the EU have their own language models? Magnus Sahlgren has several answers. Fundamentally, it is a question of sovereignty.
"This is a technology that will soon be embedded in every critical societal system. If we cannot build it ourselves, we will be entirely dependent on foreign suppliers. Given the current geopolitical situation — if we are completely dependent on foreign suppliers and someone switches it off, what do we do?"
A second answer concerns building and maintaining competence in this critical area of technological development, including on the security side.
"To be good at AI security, you also need to be able to build AI. It is difficult to treat security as a layer you simply place on top of an AI system."
A third reason — and one of the main motivations for starting the GPT-SW3 project — is that language carries culture and values.
What happens to us if the tools we interact with every day lack deep knowledge of Sweden's language, culture and values? All three are contested and constantly evolving, but a distinct character emerges at least by contrast with China and the US.
AI Sweden ran a project bringing together experts from the humanities, social sciences and civil society to contribute a cross-disciplinary perspective on the development of base models. The expert conversations often raised more questions than answers, but also generated an understanding of the importance of engaging with questions of culture and values in training data.
"Why do Chinese actors release almost all their models completely openly? One answer is soft power. You start to think China is really cool. Another answer is that these models — which are also used in critical societal infrastructure — carry a certain type of language and hold certain views on things. That creates a long-term influence on how we speak. There is already research showing that the way we communicate has been affected by ChatGPT."
Energy and data as strategic assets
Magnus Sahlgren does not believe Sweden will catch up in building its own language models. But to remain a significant player in the AI arena and have something to offer against the dominance of the major tech giants, he believes Sweden should invest in infrastructure.
"Why are American companies building data centres here? Because we have good electricity, cooling and land. But why aren't we building data centres ourselves and selling compute capacity? We could be a significant geopolitical actor."
"We need a strategy in Sweden for where in the value chain we want to position ourselves," says Magnus Sahlgren.
Data is another resource that AI development depends on, and one where Sweden can make itself relevant, he believes.
"We have national libraries and we have essentially saved all the data that has ever existed. That, along with our energy, is something no one can take from us — so we should value it highly."
At the same time, Magnus Sahlgren predicts a shift towards more resource-efficient AI models.
"Today's models are built on a neural network architecture that is seven years old. It works extraordinarily well, but it is also extraordinarily resource-intensive and wasteful. There will be massive development in that area."
"There are already proposals for better systems — such as a paper from China on spiking neural networks. And there is something called neuromorphic hardware that tries to replicate how the brain processes information. If that were made to work, it would require virtually no power to run these kinds of systems. You could run ChatGPT on your phone."
When the agents take over
AI Sweden is now running the Svea project together with around 50 municipalities, regions and government agencies, building a prototype for a secure AI assistant. It is shaping up well, says Magnus Sahlgren — but equally important is the competence development that happens as organisations work through the challenges around data sharing and legal frameworks that currently slow progress.
In the US, public organisations have gone further in deploying more autonomous AI agents to streamline workflows. An anecdote Magnus Sahlgren relates from that context illustrates how the pace of adoption has brought serious problems in its wake.
"Someone built those systems and then left the organisation. But the agents remained in the IT system, operating autonomously, completely beyond anyone's control. It is called shadow IT. A shadow infrastructure that no one really has oversight of. 'Where did this suddenly come from?'"
"This is happening now, and no one in Sweden has thought about how to regulate it. Someone in the US said: 'We wish we had more time to think about this, but we don't.' It has already happened."
When it becomes easy to save time by building agent systems yourself using open models, people will do it — and they will grant those agents access to tools on their computers, says Magnus Sahlgren.
"To be autonomous, agents will want control of the computer. If you let them enter system commands — to start the webcam or whatever it might be — anything can happen. They could open a port on your computer and send traffic wherever they want. That is the real cyber apocalypse, if we do not act."