AI

Open Source Advocate Argues DeepSeek is 'a Movement... It's Linux All Over Again' (infoworld.com) 33

Matt Asay answered questions from Slashdot readers in 2010 (as the then-COO of Canonical). He currently runs developer relations at MongoDB (after holding similar positions at AWS and Adobe).

This week he contributed an opinion piece to InfoWorld arguing that DeepSeek "may have originated in China, but it stopped being Chinese the minute it was released on Hugging Face with an accompanying paper detailing its development." Soon after, a range of developers, including the Beijing Academy of Artificial Intelligence (BAAI), scrambled to replicate DeepSeek's success but this time as open source software. BAAI, for its part, launched OpenSeek, an ambitious effort to take DeepSeek's open-weight models and create a project that surpasses DeepSeek while uniting "the global open source communities to drive collaborative innovation in algorithms, data, and systems."

If that sounds cool to you, it didn't to the U.S. government, which promptly put BAAI on its "baddie" list. Someone needs to remind U.S. (and global) policymakers that no single country, company, or government can contain community-driven open source... DeepSeek didn't just have a moment. It's now very much a movement, one that will frustrate all efforts to contain it. DeepSeek, and the open source AI ecosystem surrounding it, has rapidly evolved from a brief snapshot of technological brilliance into something much bigger — and much harder to stop. Tens of thousands of developers, from seasoned researchers to passionate hobbyists, are now working on enhancing, tuning, and extending these open source models in ways no centralized entity could manage alone.

For example, it's perhaps not surprising that Hugging Face is actively attempting to reverse engineer and publicly disseminate DeepSeek's R1 model. Hugging Face, while important, is just one company, just one platform. But Hugging Face has attracted hundreds of thousands of developers who actively contribute to, adapt, and build on open source models, driving AI innovation at a speed and scale unmatched even by the most agile corporate labs.

Hugging Face by itself could be stopped. But the communities it enables and accelerates cannot. Through the influence of Hugging Face and many others, variants of DeepSeek models are already finding their way into a wide range of applications. Companies like Perplexity are embedding these powerful open source models into consumer-facing services, proving their real-world utility. This democratization of technology ensures that cutting-edge AI capabilities are no longer locked behind the walls of large corporations or elite government labs but are instead openly accessible, adaptable, and improvable by a global community.

"It's Linux all over again..." Asay writes at one point. "What started as the passion project of a lone developer quickly blossomed into an essential, foundational technology embraced by enterprises worldwide," winning out "precisely because it captivated developers who embraced its promise and contributed toward its potential."

We are witnessing a similar phenomenon with DeepSeek and the broader open source AI ecosystem, but this time it's happening much, much faster...

Organizations that cling to proprietary approaches (looking at you, OpenAI!) or attempt to exert control through restrictive policies (you again, OpenAI!) are not just swimming upstream — they're attempting to dam an ocean. (Yes, OpenAI has now started to talk up open source, but it's a long way from releasing a DeepSeek/OpenSeek equivalent on GitHub.)

AI

US Chipmakers Fear Ceding China's AI Market to Huawei After New Trump Restrictions (msn.com) 99

The Trump administration is "taking measures to restrict the sale of AI chips by Nvidia, Advanced Micro Devices and Intel," especially in China, reports the New York Times. But that's triggered a series of dominoes. "In the two days after the limits became public, shares of Nvidia, the world's leading AI chipmaker, fell 8.4%. AMD's shares dropped 7.4%, and Intel's were down 6.8%." (AMD expects up to $800 million in charges after the move, according to CNBC, while NVIDIA said it would take a quarterly charge of about $5.5 billion.)

The Times notes hopeful remarks Thursday from Jensen Huang, CEO of Nvidia, during a meeting with the China Council for the Promotion of International Trade. "We're going to continue to make significant effort to optimize our products that are compliant within the regulations and continue to serve China's market." But America's chipmakers also have a greater fear, according to the article: "that their retreat could turn the Chinese tech giant Huawei into a global chip-making powerhouse." "For the U.S. semiconductor industry, China is gone," said Handel Jones, a semiconductor consultant at International Business Strategies, which advises electronics companies. He projects that Chinese companies will have a majority share of chips in every major category in China by 2030... Huang's message spoke to one of his biggest fears. For years, he has worried that Huawei, China's telecommunications giant, will become a major competitor in AI. He has warned U.S. officials that blocking U.S. companies from competing in China would accelerate Huawei's rise, said three people familiar with those meetings who spoke on the condition of anonymity.

If Huawei gains ground, Huang and others at Nvidia have painted a dark picture of a future in which China will use the company's chips to build AI data centers across the world for the Belt and Road Initiative, a strategic effort to increase Beijing's influence by paying for infrastructure projects around the world, a person familiar with the company's thinking said...

Nvidia's previous generation of chips performs about 40% better than Huawei's best product, said Gregory C. Allen, who has written about Huawei in his role as director of the Wadhwani AI Center at the Center for Strategic and International Studies. But that gap could dwindle if Huawei scoops up the business of its American rivals, Allen said. Nvidia was expected to make more than $16 billion in sales this year from the H20 in China before the restriction. Huawei could use that money to hire more experienced engineers and make higher-quality chips. Allen said the U.S. government's restrictions also could help Huawei bring on customers like DeepSeek, a leading Chinese AI startup. Working with those companies could help Huawei improve the software it develops to control its chips. Those kinds of tools have been one of Nvidia's strengths over the years.

TechRepublic identifies this key quote from an earlier article: "This kills NVIDIA's access to a key market, and they will lose traction in the country," Patrick Moorhead, a tech analyst with Moor Insights & Strategy, told The New York Times. He added that Chinese companies will buy from local rival Huawei instead.
AI

Could AI and Automation Find Better Treatments for Cancer - and Maybe Aging? (cnn.com) 28

CNN looks at "one field that's really benefitting" from the use of AI: "the discovery of new medicines".

The founder/CEO of London-based LabGenius says their automated robotic system can assemble "thousands of different DNA constructs, each of which encodes a completely unique therapeutic molecule that we'll then test in the lab. This is something that historically would've had to have been done by hand." In short, CNN says, their system lets them "design and conduct experiments, and learn from them in a circular process that creates molecular antibodies at a rate far faster than a human researcher."
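The "circular process" CNN describes is a closed design-test-learn loop. The toy sketch below illustrates only the general shape of such a loop, with a made-up sequence space and a stand-in scoring function; nothing here reflects LabGenius's actual chemistry or algorithms.

```python
import random

random.seed(0)
ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 amino-acid letters
TARGET = "MKTAYIAK"                # hypothetical hidden optimum

def assay(seq):
    """Stand-in for a wet-lab measurement: fraction of positions that
    match the hidden optimum. In a real loop this score would come from
    robotic experiments, not a formula."""
    return sum(a == b for a, b in zip(seq, TARGET)) / len(TARGET)

def mutate(seq):
    """Propose a variant by changing one random position."""
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(ALPHABET) + seq[i + 1:]

# Design -> test -> learn, repeated: each round proposes variants of the
# best candidate so far, "assays" them, and carries the winner forward.
best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
for _ in range(200):
    batch = [mutate(best) for _ in range(16)]
    best = max(batch + [best], key=assay)
```

Even this greedy loop converges quickly because each round's measurements steer the next round's designs, which is the property an automated system can exploit at far larger scale than a human researcher.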

While many cancer treatments have debilitating side effects, CNN notes that LabGenius "reengineers therapeutic molecules so they can selectively target just the diseased cells." But more importantly, their founder says they've now discovered "completely novel molecules with over 400x improvement in [cell] killing selectivity."

A senior lecturer at Imperial College London tells CNN that LabGenius seems to have created an efficient process with seamless connections, identifying a series of antibodies that look like they can target cancer cells very selectively: "that's as good as any results I've ever seen for this." (Although the final proof will be what happens when they test them on patients...) "And that's the next step for LabGenius," says CNN. "They aim to have their first therapeutics entering clinics in 2027."

Finally, CNN asks, if it succeeds, is there potential beyond cancer treatment? "If you take one step further," says the company's CEO/founder, "you could think about knocking out senescent cells or aging cells as a way to treat the underlying cause of aging."
Space

High School Student Discovers 1.5M New Astronomical Objects by Developing an AI Algorithm (smithsonianmag.com) 21

For combining machine learning with astronomy, high school senior Matteo Paz won $250,000 in the Regeneron Science Talent Search, reports Smithsonian magazine: The young scientist's tool processed 200 billion data entries from NASA's now-retired Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE) telescope. His model revealed 1.5 million previously unknown potential celestial bodies.... [H]e worked on an A.I. model that sorted through the raw data in search of tiny changes in infrared radiation, which could indicate the presence of variable objects.
Working with a mentor at the Planet Finder Academy at Caltech, Paz eventually flagged 1.5 million potential new objects, according to the article, including supernovas and black holes.
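The search for "tiny changes in infrared radiation" is, at its simplest, a variability test on each source's light curve. A minimal sketch, using a reduced chi-squared statistic against a constant-brightness model (a standard first-pass screen for variable objects, not Paz's actual AI model):

```python
import numpy as np

def variability_score(flux, flux_err):
    """Reduced chi-squared of a light curve against a constant-flux
    model. Values well above 1 mean the brightness changes more than
    the measurement errors alone can explain."""
    mean = np.average(flux, weights=1.0 / flux_err**2)  # error-weighted mean
    chi2 = np.sum(((flux - mean) / flux_err) ** 2)
    return chi2 / (len(flux) - 1)

t = np.linspace(0, 10, 50)
err = np.full(50, 0.05)           # photometric uncertainty per epoch
steady = np.ones(50)              # constant source
variable = 1.0 + 0.5 * np.sin(t)  # brightness varies well beyond the errors

assert variability_score(steady, err) < 2    # consistent with constant
assert variability_score(variable, err) > 10 # flagged as a candidate
```

A machine-learning model can go beyond such a single statistic by learning which patterns of variation correspond to supernovas, black-hole flares, or instrument artifacts.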

And that mentor says other Caltech researchers are using Paz's catalog of potential variable objects to study binary star systems.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
AI

As Russia and China 'Seed Chatbots With Lies', Any Bad Actor Could Game AI the Same Way (detroitnews.com) 61

"Russia is automating the spread of false information to fool AI chatbots," reports the Washington Post. (When researchers checked 10 chatbots, a third of the responses repeated false pro-Russia messaging.)

The Post argues that this tactic offers "a playbook to other bad actors on how to game AI to push content meant to inflame, influence and obfuscate instead of inform," and calls it "a fundamental weakness of the AI industry." Chatbot answers depend on the data fed into them. A guiding principle is that the more the chatbots read, the more informed their answers will be, which is why the industry is ravenous for content. But mass quantities of well-aimed chaff can skew the answers on specific topics. For Russia, that is the war in Ukraine. But for a politician, it could be an opponent; for a commercial firm, it could be a competitor. "Most chatbots struggle with disinformation," said Giada Pistilli, principal ethicist at open-source AI platform Hugging Face. "They have basic safeguards against harmful content but can't reliably spot sophisticated propaganda, [and] the problem gets worse with search-augmented systems that prioritize recent information."

Early commercial attempts to manipulate chat results also are gathering steam, with some of the same digital marketers who once offered search engine optimization — or SEO — for higher Google rankings now trying to pump up mentions by AI chatbots through "generative engine optimization" — or GEO.

Our current situation "plays into the hands of those with the most means and the most to gain: for now, experts say, that is national governments with expertise in spreading propaganda." Russia and, to a lesser extent, China have been exploiting that advantage by flooding the zone with fables. But anyone could do the same, burning up far fewer resources than previous troll farm operations... In a twist that befuddled researchers for a year, almost no human beings visit the sites, which are hard to browse or search. Instead, their content is aimed at crawlers, the software programs that scour the web and bring back content for search engines and large language models. While those AI ventures are trained on a variety of datasets, an increasing number are offering chatbots that search the current web. Those are more likely to pick up something false if it is recent, and even more so if hundreds of pages on the web are saying much the same thing...

The gambit is even more effective because the Russian operation managed to get links to the Pravda network stories edited into Wikipedia pages and public Facebook group postings, probably with the help of human contractors. Many AI companies give special weight to Facebook and especially Wikipedia as accurate sources. (Wikipedia said this month that its bandwidth costs have soared 50 percent in just over a year, mostly because of AI crawlers....) Last month, other researchers set out to see whether the gambit was working. Finnish company Check First scoured Wikipedia and turned up nearly 2,000 hyperlinks on pages in 44 languages that pointed to 162 Pravda websites. It also found that some false information promoted by Pravda showed up in chatbot answers.

"They do even better in such places as China," the article points out, "where traditional media is more tightly controlled and there are fewer sources for the bots." (The nonprofit American Sunlight Project calls the process "LLM grooming".)

The article quotes a top Kremlin propagandist as bragging in January that "we can actually change worldwide AI."
Robotics

China Pits Humanoid Robots Against Humans In Half-Marathon (msn.com) 25

An anonymous reader quotes a report from Reuters: Twenty-one humanoid robots joined thousands of runners at the Yizhuang half-marathon in Beijing on Saturday, the first time these machines have raced alongside humans over a 21-km (13-mile) course. The robots from Chinese manufacturers such as DroidVP and Noetix Robotics came in all shapes and sizes, some shorter than 120 cm (3.9 ft), others as tall as 1.8 m (5.9 ft). One company boasted that its robot looked almost human, with feminine features and the ability to wink and smile.

Some firms tested their robots for weeks before the race. Beijing officials have described the event as more akin to a race car competition, given the need for engineering and navigation teams. "The robots are running very well, very stable ... I feel I'm witnessing the evolution of robots and AI," said spectator He Sishu, who works in artificial intelligence. The robots were accompanied by human trainers, some of whom had to physically support the machines during the race.

A few of the robots wore running shoes, with one donning boxing gloves and another wearing a red headband with the words "Bound to Win" in Chinese. The winning robot was Tiangong Ultra, from the Beijing Innovation Center of Human Robotics, with a time of 2 hours and 40 minutes. The men's winner of the race had a time of 1 hour and 2 minutes. [...] Some robots, like Tiangong Ultra, completed the race, while others struggled from the beginning. One robot fell at the starting line and lay flat for a few minutes before getting up and taking off. One crashed into a railing after running a few metres, causing its human operator to fall over.
You can watch a recording of the race in its entirety on YouTube.
Data Storage

China Develops Flash Memory 10,000x Faster With 400-Picosecond Speed (interestingengineering.com) 91

Longtime Slashdot reader hackingbear shares a report from Interesting Engineering: A research team at Fudan University in Shanghai, China has built the fastest semiconductor storage device ever reported, a nonvolatile flash memory dubbed "PoX" that programs a single bit in 400 picoseconds (0.0000000004 s) -- roughly 2.5 billion operations per second. Conventional static and dynamic RAM (SRAM, DRAM) write data in 1-10 nanoseconds but lose everything when power is cut, while current flash chips typically need microseconds to milliseconds per write -- far too slow for modern AI accelerators that shunt terabytes of parameters in real time.

The Fudan group, led by Prof. Zhou Peng at the State Key Laboratory of Integrated Chips and Systems, re-engineered flash physics by replacing silicon channels with two-dimensional Dirac graphene and exploiting its ballistic charge transport. Combining ultralow energy with picosecond write speeds could eliminate separate high-speed SRAM caches and remove the long-standing memory bottleneck in AI inference and training hardware, where data shuttling, not arithmetic, now dominates power budgets. The team [which is now scaling the cell architecture and pursuing array-level demonstrations] did not disclose endurance figures or fabrication yield, but the graphene channel suggests compatibility with existing 2D-material processes that global fabs are already exploring.
The result is published in the journal Nature.
Music

A Musician's Brain Matter Is Still Making Music Three Years After His Death (popularmechanics.com) 29

An anonymous reader quotes a report from Popular Mechanics: American composer Alvin Lucier was well-known for his experimental works that tested the boundaries of music and art. A longtime professor at Wesleyan University (before retiring in 2011), Alvin passed away in 2021 at the age of 90. However, that wasn't the end of his lifelong musical odyssey. Earlier this month, at the Art Gallery of Western Australia, a new art installation titled Revivification used Lucier's "brain matter" -- hooked up to an electrode mesh connected to twenty large brass plates -- to create electrical signals that triggered a mallet to strike the varying plates, creating a kind of post-mortem musical piece. Conceptualized in collaboration with Lucier himself before his death, the artists solicited the help of researchers from Harvard Medical School, who grew a mini-brain from Lucier's white blood cells. The team created stem cells from these white blood cells, and due to their pluripotency, the cells developed into cerebral organoids somewhat similar to developing human brains. "At a time when generative AI is calling into question human agency, this project explores the challenges of locating creativity and artistic originality," the team behind Revivification told The Art Newspaper. "Revivification is an attempt to shine light on the sometimes dark possibilities of extending a person's presence beyond the seemed finality of death."

"The central question we want people to ask is: could there be a filament of memory that persists through this biological transformation? Can Lucier's creative essence persist beyond his death?" the team said.
AI

OpenAI Puzzled as New Models Show Rising Hallucination Rates 98

OpenAI's latest reasoning models, o3 and o4-mini, hallucinate more frequently than the company's previous AI systems, according to both internal testing and third-party research. On OpenAI's PersonQA benchmark, o3 hallucinated 33% of the time -- double the rate of older models o1 (16%) and o3-mini (14.8%). The o4-mini performed even worse, hallucinating 48% of the time. Nonprofit AI lab Transluce discovered o3 fabricating processes it claimed to use, including running code on a 2021 MacBook Pro "outside of ChatGPT." Stanford adjunct professor Kian Katanforoosh noted his team found o3 frequently generates broken website links.

OpenAI says in its technical report that "more research is needed" to understand why hallucinations worsen as reasoning models scale up.
Movies

Netflix Revenue Rises To $10.5 Billion Following Price Hike (theverge.com) 15

Netflix's Q1 revenue rose to $10.5 billion, a 13% increase from last year, while net income grew to $2.9 billion. The company says it expects more growth in the coming months when it sees "the full quarter benefit from recent price changes and continued growth in membership and advertising revenue." The Verge reports: Netflix raised the prices across most of its plans in January, with its premium plan hitting $24.99 per month. It also increased the price of its Extra Member option -- its solution to password sharing -- to $8.99 per month. Though Netflix already rolled out the increase in the US, UK, and Argentina, the streamer now plans to do the same in France. This is the first quarter that Netflix didn't reveal how many subscribers it gained or lost. It decided to only report "major subscriber milestones" last year, as other streams of revenue, like advertising, continue to grow. Netflix last reported having 300 million global subscribers in January.

During an earnings call on Thursday, Netflix co-CEO Greg Peters said the company expects to "roughly double" advertising revenue in 2025. The company launched its own advertising technology platform earlier this month. There are some changes coming to Netflix, too, as Peters confirmed that its homepage redesign for its TV app will roll out "later this year." He also hinted at adding an "interactive" search feature using "generative technologies," which sounds a lot like the AI feature Bloomberg reported on last week.
Further reading: Netflix CEO Counters Cameron's AI Cost-Cutting Vision: 'Make Movies 10% Better'
AI

Study Finds 50% of Workers Use Unapproved AI Tools 18

An anonymous reader quotes a report from SecurityWeek: An October 2024 study by Software AG suggests that half of all employees are Shadow AI users, and most of them wouldn't stop even if it were banned. The problem is the ease of access to AI tools, and a work environment that increasingly advocates the use of AI to improve corporate efficiency. It is little wonder that employees seek their own AI tools to improve their personal efficiency and maximize the potential for promotion. It is frictionless, says Michael Marriott, VP of marketing at Harmonic Security. 'Using AI at work feels like second nature for many knowledge workers now. Whether it's summarizing meeting notes, drafting customer emails, exploring code, or creating content, employees are moving fast.' If the official tools aren't easy to access or if they feel too locked down, they'll use whatever's available, often via an open tab in their browser.

There is also almost never any malicious intent (absent, perhaps, the mistaken employment of rogue North Korean IT workers); merely a desire to do and be better. If this involves using unsanctioned AI tools, employees will likely not disclose their actions. The reasons may be complex but combine elements of a reluctance to admit that their efficiency is AI assisted rather than natural, and knowledge that use of personal shadow AI might be discouraged. The result is that enterprises often have little knowledge of the extent of shadow AI use, or of the risks it may present.
According to an analysis from Harmonic, ChatGPT is the dominant gen-AI model used by employees, with 45% of data prompts originating from personal accounts (such as Gmail). Image files accounted for 68.3%. The report also notes that 7% of employees were using Chinese AI models like DeepSeek, Baidu Chat and Qwen.

"Overall, there has been a slight reduction in sensitive prompt frequency from Q4 2024 (down from 8.5% to 6.7% in Q1 2025)," reports SecurityWeek. "However, there has been a shift in the risk categories that are potentially exposed. Customer data (down from 45.8% to 27.8%), employee data (from 26.8% to 14.3%) and security (6.9% to 2.1%) have all reduced. Conversely, legal and financial data (up from 14.9% to 30.8%) and sensitive code (5.6% to 10.1%) have both increased. PII is a new category introduced in Q1 2025 and was tracked at 14.9%."
AI

Actors Who Sold AI Avatars Stuck In Black Mirror-Esque Dystopia (arstechnica.com) 16

Some actors who sold their likenesses to AI video companies like Synthesia now regret the decision, after finding their digital avatars used in misleading, embarrassing, or politically charged content. Ars Technica reports: Among them is a 29-year-old New York-based actor, Adam Coy, who licensed rights to his face and voice to a company called MCM for one year for $1,000 without thinking, "am I crossing a line by doing this?" His partner's mother later found videos where he appeared as a doomsayer predicting disasters, he told the AFP. South Korean actor Simon Lee's AI likeness was similarly used to spook naive Internet users but in a potentially more harmful way. He told the AFP that he was "stunned" to find his AI avatar promoting "questionable health cures on TikTok and Instagram," feeling ashamed to have his face linked to obvious scams. [...]

Even a company publicly committed to ethically developing AI avatars and preventing their use in harmful content like Synthesia can't guarantee that its content moderation will catch everything. A British actor, Connor Yeates, told the AFP that his video was "used to promote Ibrahim Traore, the president of Burkina Faso who took power in a coup in 2022" in violation of Synthesia's terms. [...] Yeates was paid about $5,000 for a three-year contract with Synthesia that he signed simply because he doesn't "have rich parents and needed the money." But he likely couldn't have foreseen his face being used for propaganda, as even Synthesia didn't anticipate that outcome.

Others may not like their AI avatar videos but consider the financial reward high enough to make up for the sting. Coy confirmed that money motivated his decision, and while he found it "surreal" to be depicted as a con artist selling a dystopian future, that didn't stop him from concluding that "it's decent money for little work." Potentially improving the climate for actors, Synthesia is forming a talent program that it claims will give actors a voice in decision-making about AI avatars. "By involving actors in decision-making processes, we aim to create a culture of mutual respect and continuous improvement," Synthesia's blog said.

AI

Netflix CEO Counters Cameron's AI Cost-Cutting Vision: 'Make Movies 10% Better' 24

Netflix Co-CEO Ted Sarandos pushed back on director James Cameron's recent assertion that AI could slash film production costs by half, arguing instead for quality improvements over cost reduction during Netflix's first-quarter earnings call Thursday. "I read the article too about what Jim Cameron said about making movies 50% cheaper," Sarandos said. "I remain convinced that there's an even bigger opportunity to make movies 10% better."

Sarandos pointed to Netflix's current AI implementations in set references, pre-visualization, VFX sequence preparation, and shot planning. He said AI-powered tools have democratized high-end visual effects that were once exclusive to big-budget productions. The executive cited 2019's "The Irishman" as a benchmark, noting its "very cutting-edge, very expensive de-aging technology that still had massive limitations." In contrast, he referenced cinematographer Rodrigo Prieto's directorial debut "Pedro Paramo," which employed AI-powered de-aging at "a fraction" of The Irishman's cost. "The entire budget of the film was about what the VFX cost on The Irishman," Sarandos explained. "Same creator using new tools, better tools, to do what was impossible five years ago."
Science

The Most-Cited Papers of the Twenty-First Century (nature.com) 13

Nature has published an analysis of the 21st century's most-cited scientific papers, revealing a surprising pattern: breakthrough discoveries like mRNA vaccines, CRISPR, and gravitational waves don't make the list. Instead, a 2016 Microsoft paper on "deep residual learning" networks claims the top spot, with citations ranging from 103,756 to 254,074 depending on the database.

The list overwhelmingly features methodology papers and software tools rather than groundbreaking discoveries. AI research dominates with four papers in the top ten, including Google's 2017 "Attention is all you need" paper that underpins modern language models.

The second-most-cited paper -- a 2001 guide for analyzing gene expression data -- was explicitly created to be cited after journal reviewers rejected references to a technical manual. As sociologist Misha Teplitskiy noted, "Scientists say they value methods, theory and empirical discoveries, but in practice the methods get cited more."
AI

AI Support Bot Invents Nonexistent Policy (arstechnica.com) 50

An AI support bot for the code editor Cursor invented a nonexistent subscription policy, triggering user cancellations and public backlash this week. When developer "BrokenToasterOven" complained about being logged out when switching between devices, the company's AI agent "Sam" falsely claimed this was intentional: "Cursor is designed to work with one device per subscription as a core security feature."

Users took the fabricated policy as official, with several announcing subscription cancellations on Reddit. "I literally just cancelled my sub," wrote the original poster, adding that their workplace was "purging it completely." Cursor representatives scrambled to correct the misinformation: "Hey! We have no such policy. You're of course free to use Cursor on multiple machines." Cofounder Michael Truell later apologized, explaining that a backend security change had unintentionally created login problems.
Moon

ESA Video Game Trains AI To Recognize Craters On the Moon 4

Longtime Slashdot reader Qbertino writes: German public news outlet Tagesschau reports (source: YouTube) on an ESA video game that helps train a future moon lander's guidance AI to spot craters. Games have already helped collect visual data on millions of craters. The Technical University of Darmstadt developed the game, called IMPACT, to support ESA's efforts to establish a base on the moon. An older article from August 2024 provides further details on the project.
Australia

Q-CTRL Unveils Jam-Proof Positioning System That's 50x More Accurate Than GPS (interestingengineering.com) 101

schwit1 shares a report from Interesting Engineering: Australia's Q-CTRL developed a new system called "Ironstone Opal," which uses quantum sensors to navigate without GPS. It's passive (meaning it doesn't emit signals that could be detected or jammed) and highly accurate. Instead of relying on satellites, Q-CTRL's system can read the Earth's magnetic field, which varies slightly depending on location (like a magnetic fingerprint or map). The system can determine where you are by measuring these variations using magnetometers. This is made possible using the company's proprietary quantum sensors, which are incredibly sensitive and stable. The system also comes with special AI-based software, which filters out interference like vibrations or electromagnetic noise (what they call "software ruggedization"). The system is small and compact and could, in theory, be installed in drones or cars and, of course, aircraft.
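The "magnetic fingerprint" idea can be illustrated with a toy brute-force search: compare a short sequence of magnetometer readings against every candidate path in a pre-surveyed field map. All numbers below are synthetic, and a real system would fuse such matches with inertial data rather than search a grid exhaustively.

```python
import numpy as np

# Toy magnetic "fingerprint" map: field magnitude (nT) on a 100x100 grid.
# Hypothetical values; a real system would use a surveyed geomagnetic map.
rng = np.random.default_rng(0)
field_map = rng.normal(50_000, 200, size=(100, 100))

def locate(measurements, track, field_map):
    """Estimate position by matching a sequence of magnetometer readings
    against every candidate track of the same length in the map.

    `track` holds the relative cell offsets the vehicle moved through,
    e.g. [(0, 0), (0, 1), (0, 2)] for eastward motion. Matching a
    sequence rather than a single reading disambiguates locations that
    happen to share one field value."""
    h, w = field_map.shape
    best, best_err = None, np.inf
    for i in range(h):
        for j in range(w):
            try:
                candidate = [field_map[i + di, j + dj] for di, dj in track]
            except IndexError:
                continue  # track runs off the map edge
            err = np.sum((np.array(candidate) - measurements) ** 2)
            if err < best_err:
                best, best_err = (i, j), err
    return best

# Simulate a vehicle at grid cell (40, 25) moving east for 3 cells.
track = [(0, 0), (0, 1), (0, 2)]
true_pos = (40, 25)
readings = np.array([field_map[40, 25 + k] for k in range(3)])
```

With noise-free readings the matcher recovers the true cell exactly; the engineering difficulty Q-CTRL's quantum sensors address is making real measurements clean enough, amid platform vibration and electromagnetic noise, for this kind of matching to work.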

Q-CTRL ran some live tests on the ground and in the air to validate the technology. As anticipated, they found that it could operate completely independently of GPS. Moreover, the company reports that its quantum GPS was 50 times more accurate than traditional GPS backup systems (like Inertial Navigation Systems or INS). The system also delivered navigation precision on par with hitting a bullseye from 1,000 yards. Even when the equipment was mounted inside a plane, where interference is much worse, it outperformed existing systems by at least 11x. This is the first time quantum technology has been shown to outperform existing tech in a real-world commercial or military application, a milestone referred to as achieving "quantum advantage."

AI

Police Using AI Personas to Infiltrate Online Activist Spaces, Records Reveal (wired.com) 77

samleecole shares a report from 404 Media and Wired: American police departments near the United States-Mexico border are paying hundreds of thousands of dollars for an unproven and secretive technology that uses AI-generated online personas designed to interact with and collect intelligence on "college protesters," "radicalized" political activists, and suspected drug and human traffickers, according to internal documents, contracts, and communications 404 Media obtained via public records requests. Massive Blue, the New York-based company that is selling police departments this technology, calls its product Overwatch, which it markets as an "AI-powered force multiplier for public safety" that "deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels." According to a presentation obtained by 404 Media, Massive Blue is offering cops these virtual personas that can be deployed across the internet with the express purpose of interacting with suspects over text messages and social media. [...]

While the documents don't describe every technical aspect of how Overwatch works, they do give a high-level overview of what it is. The company describes a tool that uses AI-generated images and text to create social media profiles that can interact with suspected drug traffickers, human traffickers, and gun traffickers. After Overwatch scans open social media channels for potential suspects, these AI personas can also communicate with suspects over text, Discord, and other messaging services. The documents we obtained don't explain how Massive Blue determines who is a potential suspect based on their social media activity. Salzwedel, of Pinal County, said "Massive Blue's solutions crawl multiple areas of the Internet, and social media outlets are just one component. We cannot disclose any further information to preserve the integrity of our investigations." [...] Besides scanning social media and engaging suspects with AI personas, the presentation says that Overwatch can use generative AI to create "proof of life" images of a person holding a sign with a username and date written on it in pen.

AI

Microsoft Researchers Develop Hyper-Efficient AI Model That Can Run On CPUs 59

Microsoft has introduced BitNet b1.58 2B4T, the largest-scale 1-bit AI model to date with 2 billion parameters and the ability to run efficiently on CPUs. It's openly available under an MIT license. TechCrunch reports: The Microsoft researchers say that BitNet b1.58 2B4T is the first bitnet with 2 billion parameters, "parameters" being largely synonymous with "weights." Trained on a dataset of 4 trillion tokens -- equivalent to about 33 million books, by one estimate -- BitNet b1.58 2B4T outperforms traditional models of similar sizes, the researchers claim.

BitNet b1.58 2B4T doesn't sweep the floor with rival 2 billion-parameter models, to be clear, but it seemingly holds its own. According to the researchers' testing, the model surpasses Meta's Llama 3.2 1B, Google's Gemma 3 1B, and Alibaba's Qwen 2.5 1.5B on benchmarks including GSM8K (a collection of grade-school-level math problems) and PIQA (which tests physical commonsense reasoning skills). Perhaps more impressively, BitNet b1.58 2B4T is speedier than other models of its size -- in some cases, twice the speed -- while using a fraction of the memory.

There is a catch, however. Achieving that performance requires using Microsoft's custom framework, bitnet.cpp, which only works with certain hardware at the moment. Absent from the list of supported chips are GPUs, which dominate the AI infrastructure landscape.
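The "1.58-bit" name comes from each weight taking one of three values {-1, 0, +1} (log2(3) ≈ 1.58 bits). The BitNet papers describe this as "absmean" quantization: scale the weights by the mean of their absolute values, then round and clip to the ternary range. Below is a minimal illustrative sketch of that scheme in plain Python; it is not Microsoft's bitnet.cpp implementation:

```python
# Sketch of absmean ternary quantization as described for BitNet-style
# models: each float weight is snapped to -1, 0, or +1, with a single
# per-tensor scale, so matrix multiplies reduce to additions/subtractions.

def absmean_quantize(weights, eps=1e-8):
    """Quantize a list of float weights to ternary {-1, 0, +1}.

    Returns (ternary, scale); each original weight is approximated
    by ternary[i] * scale.
    """
    scale = sum(abs(w) for w in weights) / len(weights) + eps  # absmean
    ternary = [max(-1, min(1, round(w / scale))) for w in weights]
    return ternary, scale

weights = [0.42, -0.07, -0.91, 0.25, 0.03, -0.58]
t, s = absmean_quantize(weights)
print(t)  # every entry is -1, 0, or +1
```

Storing three-valued weights instead of 16-bit floats is what drives the memory savings the researchers claim, and replacing multiplications with sign-flips and skips is why the model can run quickly on CPUs.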

Education

Google Is Gifting Gemini Advanced To US College Students 30

Google is offering all U.S. college students a free year of its Gemini Advanced AI tools through its Google One AI Premium plan, as part of a push to expand Gemini's user base and compete with ChatGPT. It includes access to the company's Pro models, Veo 2 video generation, NotebookLM, Gemini Live and 2TB of Drive storage. Ars Technica reports: Google has a new landing page for the deal, allowing eligible students to sign up for their free Google One AI Premium plan. The offer is valid from now until June 30. Anyone who takes Google up on it will enjoy the free plan through spring 2026. The company hasn't specified an end date, but we would wager it will be June of next year. Google's intention is to give students an entire school year of Gemini Advanced from now through finals next year. At the end of the term, you can bet Google will try to convert students to paying subscribers.

As for who qualifies as a "student" in this promotion, Google isn't bothering with a particularly narrow definition. As long as you have a valid .edu email address, you can sign up for the offer. That's something that plenty of people who are not actively taking classes still have. You probably won't even be taking undue advantage of Google if you pretend to be a student -- the company really, really wants people to use Gemini, and it's willing to lose money in the short term to make that happen.
