

Switzerland Releases Open-Source AI Model Built For Privacy
Switzerland has launched Apertus, a fully open-source, multilingual LLM trained on 15 trillion tokens and over 1,000 languages. "What distinguishes Apertus from many other generative AI systems is its commitment to complete openness," reports CyberInsider. From the report: Unlike popular proprietary models, where users can only interact via APIs or hosted interfaces, Apertus provides open access to its model weights, training datasets, documentation, and even intermediate checkpoints. The source code and all training materials are released under a permissive open-source license that allows commercial use. Since the full training process is documented and reproducible, researchers and watchdogs can audit the data sources, verify compliance with data protection laws, and inspect how the model was trained. Apertus' development explicitly adhered to Swiss data protection and copyright laws, and incorporated retroactive opt-out mechanisms to respect data source preferences.
From a privacy perspective, Apertus represents a compelling shift in the AI landscape. The model only uses publicly available data, filtered to exclude personal information and to honor opt-out signals from content sources. This not only aligns with emerging regulatory frameworks like the EU AI Act, but also provides a tangible example of how AI can be both powerful and privacy-respecting. According to Imanol Schlag, technical lead of the project at ETH Zurich, Apertus is "built for the public good" and demonstrates how AI can be deployed as public digital infrastructure, much like utilities or transportation. The model is available via Swisscom's Sovereign Swiss AI Platform, as well as through Hugging Face and the Public AI Inference Utility.
Anecdote (Score:5, Interesting)
Because I was looking for this information I asked it "what is the length of a honda 41411-VH8-640 flexible drive shaft?".
It thought I was looking for car parts.
I asked Grok and it knew I had one of two string trimmer base models but didn't have the answer either.
Grok recommended 25 websites to look at but I already stupidly spent an hour looking at websites instead of spending twenty minutes taking apart the brush cutter and measuring it with a tape and calipers.
Humanity is safe for now.
Re: (Score:3)
Humanity is safe for now.
Are you sure? Here is just one example of what you can use AI for, taken from Google TV adverts:
"I spilled sugar in my gochujang pasta sauce." One would expect a normal person to say 'scoop it out with a spoon, idiot', but the AI answer is to turn it into cookies. Honestly, if we don't nip this in the bud we'll be breeding a generation of people who simply don't think for themselves.
Re: (Score:2)
Honestly, if we don't nip this in the bud we'll be breeding a generation of people who simply don't think for themselves.
To be fair, we already have that behavior in something like 80% of the population. The rest will continue to think for themselves because they want to and recognize the value of doing so. What will happen is that the unthinking masses get even easier to manipulate, because the "sources" they get "facts" from become even more corrupted.
Re: Anecdote (Score:2)
We already have a generation of people who don't think for themselves... It's too late!
Re: (Score:3)
I just pasted your question into DeepSeek, I suggest you try it out of curiosity. I don't know if the answer is right, but it certainly sounds like it knows what you are asking about, and gives an answer. I find DeepSeek somewhat better for this type of query.
Re: (Score:3)
LLMs are not databases. Use Wikipedia or a more specific site.
LLMs are not calculators. You've got calc.exe and matlab.exe if you need more.
Stop using LLMs for things they are not made for and then complaining that they are bad at them.
Remember that an LLM always answers, because the algorithm forces it to write something, even when it is unsuitable. Sometimes it manages to get out "I cannot answer such questions"; other times it does its best to fulfill your request even when it isn't good at it.
Re: (Score:2)
LLMs are not databases. Use Wikipedia or a more specific site. LLMs are not calculators. You've got calc.exe and matlab.exe if you need more.
Stop using LLMs for things they are not made for and then complaining that they are bad at them. Remember that an LLM always answers, because the algorithm forces it to write something, even when it is unsuitable. Sometimes it manages to get out "I cannot answer such questions"; other times it does its best to fulfill your request even when it isn't good at it.
Really, that's the biggest hold-up with LLMs for me. Their inability to simply state a machine equivalent of, "I don't know," or even, "I need more information about this subject in order to form a coherent answer." Instead, they tend to just answer confidently, often with complete gibberish if you know the subject. Inquisitiveness and an ability to recognize a lack of knowledge within the system would be a MASSIVE step up from where we are today with these systems, and may actually lead to something resemb
Re: (Score:2)
It helps to think about how they are trained.
Step 1: Learn how to complete a prefix to something that fits the prefix. If you extend "The definition for X is ..." for any X the LLM knows, most fitting suffixes will be useful definitions. If X is totally made up, anything that is not complete gibberish is a "good" suffix in the sense that no better suffix exists.
Step 2: Train it to complete in an answer format "system: You answer questions, user: My Question, assistant: my answer, user: my second question, a
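The two stages described above can be sketched as the shape of the training data. This is a hedged illustration only: the `system:`/`user:`/`assistant:` tags below are simplified stand-ins taken from the comment, not any real model's chat template.

```python
# Sketch of the two training stages from the parent comment.
# Stage 1 (pretraining): the model learns to continue a prefix.
# Stage 2 (instruction tuning): the same completion skill is applied
# to a conversation flattened into one string. Tags are illustrative.

def pretraining_example(prefix: str, continuation: str) -> str:
    """Stage 1 sample: a prefix plus the continuation to be learned."""
    return prefix + continuation

def chat_example(system: str, turns: list[tuple[str, str]]) -> str:
    """Stage 2 sample: roles flattened into one string the model completes."""
    parts = [f"system: {system}"]
    for user_msg, assistant_msg in turns:
        parts.append(f"user: {user_msg}")
        parts.append(f"assistant: {assistant_msg}")
    return "\n".join(parts)

print(pretraining_example("The definition for entropy is ",
                          "a measure of disorder."))
print(chat_example("You answer questions.",
                   [("What is entropy?", "A measure of disorder.")]))
```

Either way, the model is only ever trained to produce a plausible continuation, which is why "I don't know" emerges only when that phrase is itself the most fitting continuation.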
Re: (Score:2)
Humanity is safe for now.
Anybody who does real work and needs real insight for it is, and always has been. The rest? Not so much. For a data point on how completely without insight many people are, look at current events in politics.
Seems dangerous (Score:2)
Not looking forward to where this may lead.
Respect for the Swiss (Score:2)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
I'd rather determine it myself than take the govt's word on blind faith.
Re: (Score:2)
Re: Respect for the Swiss (Score:1)
Re: (Score:2)
I don't care what you do with your taxes.
Re: (Score:3, Interesting)
This is different from Lumo by Proton (Score:5, Interesting)
I did a double-take; the Proton Foundation (the Switzerland-based privacy non-profit best known for its mail service) just announced [proton.me] its open-source Lumo chatbot [proton.me], dubbed "responsible AI". (That blog post is dated 2025-07-23, but I got their email announcement on Friday.)
Proton's blog announcement also casts doubt on the Swiss government plans, which take advantage of Switzerland's non-membership in the EU:
Re: (Score:2)
Lumo is a service, Apertus is a model.
So, roughly:
ChatGPT => Lumo
GPT-5 => Apertus
But is it? (Score:1)
"but also provides a tangible example of how AI can be both powerful and privacy-respecting"
If it were especially powerful, you'd think we'd have seen indications of how it compares to flagship commercial AIs.
Re: (Score:3, Informative)
Re: (Score:2)
You usually compare to models of a similar size. The flagships are way larger models.
It's so private... (Score:2)
...it doesn't even talk to you.
ah (Score:4, Funny)
... so this is where the Crypto AG team went.