Google

OpenAI Says It Has No Plan To Use Google's In-house Chip (reuters.com) 2

An anonymous reader shares a report: OpenAI said it has no active plans to use Google's in-house chip to power its products, two days after Reuters and other news outlets reported that the AI lab was turning to its competitor's artificial intelligence chips to meet growing demand.

A spokesperson for OpenAI said on Sunday that while the AI lab is in early testing with some of Google's tensor processing units (TPUs), it has no plans to deploy them at scale right now.

Science

Springer Nature Book on Machine Learning is Full of Made-Up Citations (retractionwatch.com) 46

Springer Nature published a $169 machine learning textbook in April containing citations that appear to be largely fabricated, according to an investigation by Retraction Watch. The site checked 18 of the 46 citations in "Mastering Machine Learning: From Basics to Advanced" by Govindakumar Madhavan and found two-thirds either did not exist or contained substantial errors.

Three researchers contacted by Retraction Watch confirmed that works they had supposedly authored were fake or incorrectly cited. Yehuda Dar of Ben-Gurion University said a paper cited as appearing in IEEE Signal Processing Magazine was actually an unpublished arXiv preprint. Aaron Courville of Université de Montréal confirmed he was cited for sections of his "Deep Learning" book that "doesn't seem to exist."

The pattern of nonexistent citations matches known hallmarks of large language model-generated text. Madhavan did not answer whether he used AI to generate the book's content. The book contains no AI disclosure despite Springer Nature policies requiring authors to declare AI use beyond basic copy editing.
China

The Startup-Filled Coder 'Village' at the Heart of China's AI Frenzy (msn.com) 6

China "is pouring money into building an AI supply chain with as little reliance on the U.S. as possible," the Wall Street Journal noted this weekend.

But what does that look like? The New York Times visits Liangzhu, "the coder 'village' at the heart of China's AI frenzy... a quiet suburb of the southern Chinese city of Hangzhou... As China faces off with the United States over tech primacy, Hangzhou has become the centre of China's AI frenzy," with its proximity to tech companies like Alibaba and DeepSeek...

In Liangzhu, many engineers said they were killing time until they could create their own startups, waiting out noncompete agreements they had signed at bigger companies like ByteDance... But some said the government support for Hangzhou's tech scene had scared off some investors. Several company founders, who asked not to be named so they could discuss sensitive topics, said it was difficult for them to attract funds from foreign venture capital firms, frustrating their ambitions to grow outside China. The nightmare situation, they said, would be to end up like ByteDance, the Chinese parent of TikTok, whose executives have been questioned before Congress about the company's ties to the Chinese government.

Founders described choosing between two paths for their companies' growth: take government funding and tailor their product to the Chinese market, or raise enough money on their own to set up offices in a country like Singapore to pitch foreign investors. For most, the first was the only feasible option.

Another uncertainty is access to the advanced computer chips that power artificial intelligence systems. Washington has spent years trying to prevent Chinese companies from buying these chips, and Chinese companies like Huawei and Semiconductor Manufacturing International Corp. are racing to produce their own. So far, the Chinese-made chips work well enough to help companies like ByteDance provide some of their AI services in China. Many Chinese companies have created stockpiles of Nvidia chips despite Washington's controls. But it is not clear how long that supply will last, or how quickly China's chipmakers can catch up to their American counterparts...

Liangzhu villagers have been hosting film nights. They had recently gathered to watch "The Matrix." Afterward, they decided the movie should be required viewing, Lin said. Its theme — people finding their way out of a vast system controlling society — provided spot-on inspiration. Aspiring founders in Liangzhu, even those who did not go to top universities, believe they could start the next world-changing tech company, said Felix Tao [a 36-year-old former Facebook and Alibaba employee]. "Many of them are super brave to make a choice to explore their own way, because in China that is not the common way to live your life."

Science

Citizen Scientists Just Helped Discover Nearly 8,000 New Eclipsing Binary Stars (spokesman.com) 10

"Citizen scientists have successfully located thousands of previously unknown pairs of 'eclipsing binary' stars," reports the Washington Post, citing a recent announcement from NASA. The ongoing initiative helps space researchers hunt for "eclipsing binary" stars, a rare phenomenon in which two stars orbit one another, periodically blocking each other's light. These star pairs offer important data to astrophysicists, who consider the many measurable properties of eclipsing binaries — and the information they bear about the history of star formation and destruction — as a foundation of the field...

The citizen science project in question, the Eclipsing Binary Patrol, validates images from NASA's Transiting Exoplanet Survey Satellite (TESS) mission. The satellite, launched in 2018, is "exceptionally capable at detecting varying stars," the researchers write in a preprint paper describing the initiative. The researchers used machine learning to identify about 1.2 million potential eclipsing star pairs. Citizen scientists then validated a subset of about 60,000... manually inspecting hundreds of thousands of images of eclipse-like events and separating actual binaries from images that tricked the algorithm. "Thankfully," the researchers write, "to the rescue come volunteers from all walks of life that boost the capacity of bandwidth-limited professional astronomers many-fold and help tackle the ever-increasing volume of publicly available astronomical data."

Universe Today describes how they limited the dataset to only stars with a magnitude brighter than 15, then used a Python tool to generate a massive dataset of millions of light curves... All that work resulted in the identification of 10,001 eclipsing binary systems. 7,936 of them are new to science, while the other 2,065 were previously known, but the study provided updated, more accurate parameters for their periods, as TESS' dataset offered better insight. There were also some particularly interesting systems that could hold new discoveries, including several with variable eclipse timings, plenty that might have a third star, and some that show a significant dynamic between the star being orbited and the one doing the orbiting.

All of those systems await further research, but there's another, unspoken factor at play in this data — exoplanets. TESS was originally designed as an exoplanet hunter, and this kind of large scale AI/human collaboration of lightcurve analysis is exactly the kind of work that could potentially produce even more accurate exoplanet catalogues, as evidenced by some of the work already done in this paper. That seems to be the next step for this dataset, with Dr. Kostov telling an interviewer "I can't wait to search them for exoplanets!" Given the data has already been collected, and the team has already been assembled, it's very likely he'll get his chance soon.
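
Phase-folding is the basic trick behind this kind of light-curve analysis: map every timestamp onto orbital phase so that eclipses from different orbits stack on top of each other. Below is a minimal NumPy sketch on synthetic data — all numbers are invented for illustration, and this is not the team's actual pipeline:

```python
import numpy as np

def fold_light_curve(times, fluxes, period):
    """Fold a light curve at a candidate orbital period: map each
    timestamp to a phase in [0, 1) so repeated eclipses line up."""
    phases = (times % period) / period
    order = np.argsort(phases)
    return phases[order], fluxes[order]

# Synthetic light curve: flat flux plus noise, with a periodic eclipse dip.
rng = np.random.default_rng(0)
period = 2.5                                   # days (invented)
times = np.sort(rng.uniform(0, 27.4, 5000))    # roughly one TESS sector
fluxes = 1.0 + rng.normal(0, 0.001, times.size)
in_eclipse = (times % period) / period < 0.05  # eclipse covers 5% of the orbit
fluxes[in_eclipse] -= 0.02                     # 2%-deep primary eclipse

phases, folded = fold_light_curve(times, fluxes, period)
depth = folded[phases >= 0.05].mean() - folded[phases < 0.05].mean()
print(f"recovered eclipse depth ~= {depth:.3f}")
```

Folding at the right period makes the dip obvious even in noisy data; folding at a wrong period smears it out, which is essentially the signal the volunteers were vetting by eye.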

AI

Google DeepMind's Spinoff Company 'Very Close' to Human Trials for Its AI-Designed Drugs (fortune.com) 38

Google DeepMind's chief business officer says Alphabet's drug-discovery company Isomorphic Labs "is preparing to launch human trials of AI-designed drugs," according to a report in Fortune, "pairing cutting-edge AI with pharma veterans to design medicines faster, cheaper, and more accurately." "There are people sitting in our office in King's Cross, London, working, and collaborating with AI to design drugs for cancer," said Colin Murdoch [DeepMind's chief business officer and president of Isomorphic Labs]. "That's happening right now."

After years in development, Murdoch says human clinical trials for Isomorphic's AI-assisted drugs are finally in sight. "The next big milestone is actually going out to clinical trials, starting to put these things into human beings," he said. "We're staffing up now. We're getting very close."

The company, which was spun out of DeepMind in 2021, was born from one of DeepMind's most celebrated breakthroughs, AlphaFold, an AI system capable of predicting protein structures with a high level of accuracy. Iterations of AlphaFold progressed from being able to accurately predict individual protein structures to modeling how proteins interact with other molecules like DNA and drugs. These leaps made it far more useful for drug discovery, helping researchers design medicines faster and more precisely, turning the tool into a launchpad for a much larger ambition... In 2024, the same year it released AlphaFold 3, Isomorphic signed major research collaborations with pharma companies Novartis and Eli Lilly. A year later, in April 2025, Isomorphic Labs raised $600 million in its first-ever external funding round, led by Thrive Capital. The deals are part of Isomorphic's plan to build a "world-class drug design engine..."

Today, pharma companies often spend millions attempting to bring a single drug to market, sometimes with just a 10% chance of success once trials begin. Murdoch believes Isomorphic's tech could radically improve those odds. "We're trying to do all these things: speed them up, reduce the cost, but also really improve the chance that we can be successful," he says. He wants to harness AlphaFold's technology to get to a point where researchers have 100% conviction that the drugs they are developing are going to work in human trials. "One day we hope to be able to say — well, here's a disease, and then click a button and out pops the design for a drug to address that disease," Murdoch said. "All powered by these amazing AI tools."

China

Chinese Film Foundation Plans to Use AI to 'Revitalize' 100 Classic Kung Fu Films (msn.com) 57

"The China Film Foundation, a nonprofit fund under the Chinese government, plans to use AI to revitalize 100 kung fu classics including Police Story, Once Upon a Time in China and Fist of Fury, featuring Jackie Chan, Jet Li and Bruce Lee, respectively," reports the Los Angeles Times.

"The foundation said it will partner with businesses including Shanghai Canxing Culture & Media Co., which will license 100 Hong Kong films to AI companies to reintroduce those movies to younger audiences globally." The foundation said there are opportunities to use AI to tell those stories through animation, for example. There are plans to release an animated version of director John Woo's 1986 film A Better Tomorrow that uses AI to "reinterpret" Woo's "signature visual language," according to an English transcript of the announcement....

The project raised eyebrows among U.S. artists, many of whom are deeply wary of the use of AI in creative pursuits. The Directors Guild of America said AI is a creative tool that should only be used to enhance the creative storytelling process and "it should never be used retroactively to distort or destroy a filmmaker's artistic work... The DGA strongly opposes the use of AI or any other technology to mutilate a film or to alter a director's vision," the DGA said in a statement. "The Guild has a longstanding history of opposing such alterations on issues like colorization or sanitization of films to eliminate so-called 'objectionable content', or other changes that fundamentally alter a film's original style, meaning, and substance."

The project highlights widely divergent views on AI's potential to reshape entertainment as the two countries compete for dominance in the highly competitive AI space.... During the project's announcement, supporters touted the opportunity AI will bring to China to further its cultural message globally and generate new work for creatives. At the same time, they touted AI's disruption of the filmmaking process, saying the A Better Tomorrow remake was completed with just 30 people, significantly fewer than a typical animated project. China is a "more brutal society in that sense," said Eric Harwit, professor of Asian studies at the University of Hawaii at Manoa. "If somebody loses their job because artificial intelligence is taking over, well, that's just the cost of China's moving forward.... You don't have those freestanding labor organizations, so they don't have that kind of clout to protest against the Chinese using artificial intelligence in a way that might reduce their job opportunities or lead to layoffs in the sector..."

The kung fu revitalization efforts will extend into other areas, including the creation of a martial arts video game.

The article also includes an interesting statistic. "Many people in China embrace AI, with 83% feeling confident that AI systems are designed to act in the best interest of society, much higher than the U.S. where it's 37%, according to a survey from the United Nations Development Program."
Education

Recent College Graduates Face Higher Unemployment Than Other Workers - for the First Time in Decades (msn.com) 125

"A growing group of young, college-educated Americans are struggling to find work," reports the Minnesota Star Tribune, "as the unemployment rate for recent graduates outpaces overall unemployment for the first time in decades." While the national unemployment rate has hovered around 4% for months, the rate for 20-something degree holders is nearly 6%, data from the Federal Reserve Bank of New York shows. [And for young workers (ages 22 to 27) without a degree it's 6.9%.] The amount of time young workers report being unemployed is also on the rise.

Economists attribute some of the shift to the normal post-pandemic cooling of the labor market, which is making it harder for job-seekers of all ages to land a gig. But there's also widespread economic uncertainty causing employers to pull back on hiring, and signs AI could replace entry-level positions....

Business schools nationwide were among the first to see the labor market shift in early 2023 as tech industry cuts bled into other sectors, said Maggie Tomas, Business Career Center executive director at Carlson. Tariffs and stock market volatility have only added to the uncertainty, she said. In 2022, when workers had their pick of jobs, 98% of full-time Carlson MBA graduates had a job offer in a field related to their degree within three months of graduation, according to the school. That number, which Tomas said is usually 90% or higher, dropped to 89% in 2023 and 83% in 2024.

Part of the challenge, she said, is recent graduates are now competing with more experienced workers who are re-entering the market amid layoffs and hiring freezes... After doing a lot of hiring in 2021 and 2022, Securian Financial in St. Paul is prioritizing internal hires, said Human Resources Director Leah Henrikson. Many entry-level roles have gone to current employees looking for a change, she said. "We are still looking externally, it's just the folks that we are looking for externally tend ... to fulfill a specific skill gap we may have at that moment in time," Henrikson said.

AI

Is China Quickly Eroding America's Lead in the Global AI Race? (msn.com) 130

China "is pouring money into building an AI supply chain with as little reliance on the U.S. as possible," reports the Wall Street Journal.

And now Chinese AI companies "are loosening the U.S.'s global stranglehold on AI," reports the Wall Street Journal, "challenging American superiority and setting the stage for a global arms race in the technology." In Europe, the Middle East, Africa and Asia, users ranging from multinational banks to public universities are turning to large language models from Chinese companies such as startup DeepSeek and e-commerce giant Alibaba as alternatives to American offerings such as ChatGPT... Saudi Aramco, the world's largest oil company, recently installed DeepSeek in its main data center. Even major American cloud service providers such as Amazon Web Services, Microsoft and Google offer DeepSeek to customers, despite the White House banning use of the company's app on some government devices over data-security concerns.

OpenAI's ChatGPT remains the world's predominant AI consumer chatbot, with 910 million global downloads compared with DeepSeek's 125 million, figures from researcher Sensor Tower show. American AI is widely seen as the industry's gold standard, thanks to advantages in computing semiconductors, cutting-edge research and access to financial capital. But as in many other industries, Chinese companies have started to snatch customers by offering performance that is nearly as good at vastly lower prices. A study of global competitiveness in critical technologies released in early June by researchers at Harvard University found China has advantages in two key building blocks of AI, data and human capital, that are helping it keep pace...

Leading Chinese AI companies — which include Tencent and Baidu — further benefit from releasing their AI models open-source, meaning users are free to tweak them for their own purposes. That encourages developers and companies globally to adopt them. Analysts say it could also pressure U.S. rivals such as OpenAI and Anthropic to justify keeping their models private and the premiums they charge for their service... On Latenode, a Cyprus-based platform that helps global businesses build custom AI tools for tasks including creating social-media and marketing content, as many as one in five users globally now opt for DeepSeek's model, according to co-founder Oleg Zankov. "DeepSeek is overall the same quality but 17 times cheaper," Zankov said, which makes it particularly appealing for clients in places such as Chile and Brazil, where money and computing power aren't as plentiful...

The less dominant American AI companies are, the less power the U.S. will have to set global standards for how the technology should be used, industry analysts say. That opens the door for Beijing to use Chinese models as a Trojan horse for disseminating information that reflects its preferred view of the world, some warn.... The U.S. also risks losing insight into China's ambitions and AI innovations, according to Ritwik Gupta, AI policy fellow at the University of California, Berkeley. "If they are dependent on the global ecosystem, then we can govern it," said Gupta. "If not, China is going to do what it is going to do, and we won't have visibility."

The article also warns of other potential issues:
  • "Further down the line, a breakdown in U.S.-China cooperation on safety and security could cripple the world's capacity to fight future military and societal threats from unrestrained AI."
  • "The fracturing of global AI is already costing Western makers of computer chips and other hardware billions in lost sales... Adoption of Chinese models globally could also mean lost market share and earnings for AI-related U.S. firms such as Google and Meta."

GNU is Not Unix

The FSF Faces Active 'Ongoing and Increasing' DDoS Attacks (fsf.org) 34

The Free Software Foundation's services face "ongoing (and increasing) distributed denial of service (DDoS) attacks," senior systems administrator Ian Kelling wrote Wednesday. But "Even though we are under active attack, gnu.org, ftp.gnu.org, and savannah.gnu.org are up with normal response times at the moment, and have been for the majority of this week, largely thanks to hard work from the Savannah hackers Bob, Corwin, and Luke who've helped us, your sysadmins."

"We've shielded these sites for almost a full year of intense attacks now, and we'll keep on fighting these attacks for as long as they continue." Our infrastructure has been under attack since August 2024. Large Language Model (LLM) web crawlers have been a significant source of the attacks, and as for the rest, we don't expect to ever know what kind of entity is targeting our sites or why.

- In the fall Bulletin, we wrote about the August attack on gnu.org. That attack continues, but we have mitigated it. Judging from the pattern and scope, the goal was likely to take the site down and it was not an LLM crawler. We do not know who or what is behind the attack, but since then, we have had more attacks with even higher severity.

- To begin with, GNU Savannah, the FSF's collaborative software development system, was hit by a massive botnet controlling about five million IPs starting in January. As of this writing, the attack is still ongoing, but the botnet's current iteration is mitigated. The goal is likely to build an LLM training dataset. We do not know who or what is behind this.

- Furthermore, gnu.org and ftp.gnu.org were targets in a new DDoS attack starting on May 27, 2025. Its goal seems to be to take the site down. It is currently mitigated. It has had several iterations, and each has caused some hours of downtime while we figured out how to defend ourselves against it. Here again, the goal was likely to take our sites down and we do not know who or what is behind this.

- In addition, directory.fsf.org, the server behind the Free Software Directory, has been under attack since June 18. This is likely an LLM scraper designed to specifically target MediaWiki sites with a botnet. This attack is very active and now partially mitigated...

The full-time FSF tech staff is just two systems administrators, "and we currently lack the funds to hire more tech staff any time soon," Kelling points out. Kelling titled his post "our small team vs millions of bots," suggesting that supporters purchase FSF memberships "to improve our staffing situation... Can you join us in our crucial work to guard user freedom and defy dystopia?"

Kelling also points out they're facing "run-of-the-mill standard crawlers, SEO crawlers, crawlers pretending to be normal users, crawlers pretending to be other crawlers, uptime systems, vulnerability scanners, carrier-grade network address translation, VPNs, and normal browsers hitting our sites..."

"Some of the abuse is not unique to us, and it seems that the health of the web has some serious problems right now."
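
Defenses against that kind of traffic typically begin with per-IP rate limiting. Here is a toy sliding-window limiter in Python — purely an illustration of the general technique, not the FSF's actual setup (all names and numbers are invented):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Toy sliding-window limiter: allow at most `limit` requests per
    `window` seconds from each client IP, a common first line of
    defense against abusive crawlers."""

    def __init__(self, limit=10, window=1.0):
        self.limit, self.window = limit, window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        while q and now - q[0] > self.window:  # drop requests outside window
            q.popleft()
        if len(q) >= self.limit:
            return False                       # over the limit: reject
        q.append(now)
        return True

# Simulate one IP sending 12 requests at 100 ms intervals.
rl = RateLimiter(limit=5, window=1.0)
results = [rl.allow("203.0.113.7", now=0.1 * i) for i in range(12)]
print(results.count(True), "allowed of", len(results))
```

Real deployments layer this with reverse-proxy rules, user-agent filtering, and botnet fingerprinting, since a five-million-IP botnet defeats naive per-IP limits.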
AI

Police Department Apologizes for Sharing AI-Doctored Evidence Photo on Social Media (boston.com) 83

A Maine police department has now acknowledged "it inadvertently shared an AI-altered photo of drug evidence on social media," reports Boston.com: The image from the Westbrook Police Department showed a collection of drug paraphernalia purportedly seized during a recent drug bust on Brackett Street, including a scale and white powder in plastic bags. According to Westbrook police, an officer involved in the arrests snapped the evidence photo and used a photo editing app to insert the department's patch. "The patch was added, and the photograph with the patch was sent to one of our Facebook administrators, who posted it," the department explained in a post. "Unbeknownst to anyone, when the app added the patch, it altered the packaging and some of the other attributes on the photograph. None of us caught it or realized it."

It wasn't long before the edited image's gibberish text and hazy edges drew criticism from social media users. According to the Portland Press Herald, Westbrook police initially denied AI had been used to generate the photo before eventually confirming its use of the AI chatbot ChatGPT. The department issued a public apology Tuesday, sharing a side-by-side comparison of the original and edited images.

"It was never our intent to alter the image of the evidence," the department's post read. "We never realized that using a photoshop app to add our logo would alter a photograph so substantially."

Programming

Diffusion + Coding = DiffuCode. How Apple Released a Weirdly Interesting Coding Language Model (9to5mac.com) 7

"Apple quietly dropped a new AI model on Hugging Face with an interesting twist," writes 9to5Mac. "Instead of writing code like traditional LLMs generate text (left to right, top to bottom), it can also write out of order, and improve multiple chunks at once."

"The result is faster code generation, at a performance that rivals top open-source coding models." Traditionally, most LLMs have been autoregressive. This means that when you ask them something, they process your entire question, predict the first token of the answer, reprocess the entire question with the first token, predict the second token, and so on. This makes them generate text like most of us read: left to right, top to bottom... An alternative to autoregressive models is diffusion models, which have been more often used by image models like Stable Diffusion. In a nutshell, the model starts with a fuzzy, noisy image, and it iteratively removes the noise while keeping the user request in mind, steering it towards something that looks more and more like what the user requested...

Lately, some large language models have looked to the diffusion architecture to generate text, and the results have been pretty promising... This behavior is especially useful for programming, where global structure matters more than linear token prediction... [Apple] released an open-source model called DiffuCoder-7B-cpGRPO, which builds on top of a paper called DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation, released just last month... [W]ith an extra training step called coupled-GRPO, it learned to generate higher-quality code with fewer passes. The result? Code that's faster to generate, globally coherent, and competitive with some of the best open-source programming models out there.
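
The contrast between the two decoding styles can be sketched with a toy example. A fixed "correct" completion stands in for a trained network's predictions — this illustrates decoding order only, and is not Apple's actual model:

```python
import random

# Stand-in for model output; a real system would predict these tokens.
TARGET = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "a", "+", "b"]
MASK = "_"

def autoregressive_decode():
    """Left to right: one model pass per token, each conditioned on the prefix."""
    out = []
    for i in range(len(TARGET)):
        out.append(TARGET[i])  # "predict" the next token from the prefix
    return out, len(TARGET)   # 12 tokens -> 12 passes

def diffusion_decode(tokens_per_pass=4, seed=0):
    """Start fully masked; each pass fills several positions, in any order."""
    rng = random.Random(seed)
    out = [MASK] * len(TARGET)
    passes = 0
    while MASK in out:
        masked = [i for i, tok in enumerate(out) if tok == MASK]
        for i in rng.sample(masked, min(tokens_per_pass, len(masked))):
            out[i] = TARGET[i]  # "denoise" this position
        passes += 1
    return out, passes

ar_out, ar_passes = autoregressive_decode()
df_out, df_passes = diffusion_decode()
print(ar_passes, "passes vs", df_passes)  # same code, far fewer rounds
```

The out-of-order refinement is why diffusion decoding suits code: a function signature and its return statement can be settled in the same pass, before the body is filled in.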

Even more interestingly, Apple's model is built on top of Qwen2.5-7B, an open-source foundation model from Alibaba. Alibaba first fine-tuned that model for better code generation (as Qwen2.5-Coder-7B), then Apple took it and made its own adjustments. They turned it into a new model with a diffusion-based decoder, as described in the DiffuCoder paper, and then adjusted it again to better follow instructions. Once that was done, they trained yet another version of it using more than 20,000 carefully picked coding examples.

"Although DiffuCoder did better than many diffusion-based coding models (and that was before the 4.4% bump from DiffuCoder-7B-cpGRPO), it still doesn't quite reach the level of GPT-4 or Gemini Diffusion..." the article points out.

But "the bigger point is this: little by little, Apple has been laying the groundwork for its generative AI efforts with some pretty interesting and novel ideas."
AI

'Vibe Coder' Who Doesn't Know How to Code Keeps Winning Hackathons in San Francisco (sfstandard.com) 166

An anonymous reader shared this report from the San Francisco Standard: About an hour into my meeting with the undisputed hackathon king of San Francisco, Rene Turcios asked if I wanted to smoke a joint with him. I politely declined, but his offer hardly surprised me. Turcios has built a reputation as a cannabis-loving former professional Yu-Gi-Oh! player who resells Labubus out of his Tenderloin apartment when he's not busy attending nearly every hackathon happening in the city. Since 2023, Turcios, 29, has attended more than 200 events, where he's won cash, software credits, and clout. "I'm always hustling," he said.

The craziest part: he doesn't even know how to code.

"Rene is the original vibe coder," said RJ Moscardon, a friend and fellow hacker who watched Turcios win second place at his first-ever hackathon at the AGI House mansion in Hillsborough. "All the engineers with prestigious degrees scoffed at him at first. But now they're all doing exactly the same thing...." Turcios was vibe coding long before the technique had a name — and was looked down upon by longtime hackers for using AI. But as Tiger Woods once said, "Winning takes care of everything...."

Instead of vigorously coding until the deadline, he finished his projects hours early by getting AI to do the technical work for him. "I didn't write a single line of code," Turcios said of his first hackathon where he prompted ChatGPT using plain English to generate a program that can convert any song into a lo-fi version. When the organizers announced Turcios had won second place, he screamed in celebration.... "I realized that I could compete with people who have degrees and fancy jobs...."

Turcios is now known for being able to build anything quickly. Businesses reach out to him to contract out projects that would take software engineering teams weeks — and he delivers in hours. He's even started running workshops to teach non-technical groups and experienced software engineers how to get the most out of AI for coding.

"He grew up in Missouri to parents who worked in an international circus, taming bears and lions..."
Programming

How Do You Teach Computer Science in the Age of AI? (thestar.com.my) 173

"A computer science degree used to be a golden ticket to the promised land of jobs," a college senior tells the New York Times. But "That's no longer the case."

The article notes that in the last three years there's been a 65% drop in listings from companies seeking workers with two years of experience or less (according to an analysis by the technology research and education organization CompTIA), with tech companies "relying more on AI for some aspects of coding, eliminating some entry-level work."

So what do college professors teach when AI "is coming fastest and most forcefully to computer science"? Computer science programs at universities across the country are now scrambling to understand the implications of the technological transformation, grappling with what to keep teaching in the AI era. Ideas range from less emphasis on mastering programming languages to focusing on hybrid courses designed to inject computing into every profession, as educators ponder what the tech jobs of the future will look like in an AI economy... Some educators now believe the discipline could broaden to become more like a liberal arts degree, with a greater emphasis on critical thinking and communication skills.

The National Science Foundation is funding a program, Level Up AI, to bring together university and community college educators and researchers to move toward a shared vision of the essentials of AI education. The 18-month project, run by the Computing Research Association, a research and education nonprofit, in partnership with New Mexico State University, is organising conferences and roundtables and producing white papers to share resources and best practices. The NSF-backed initiative was created because of "a sense of urgency that we need a lot more computing students — and more people — who know about AI in the workforce," said Mary Lou Maher, a computer scientist and a director of the Computing Research Association.

The future of computer science education, Maher said, is likely to focus less on coding and more on computational thinking and AI literacy. Computational thinking involves breaking down problems into smaller tasks, developing step-by-step solutions and using data to reach evidence-based conclusions. AI literacy is an understanding — at varying depths for students at different levels — of how AI works, how to use it responsibly and how it is affecting society. Nurturing informed skepticism, she said, should be a goal.
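
Computational thinking in that sense is language-agnostic. A tiny Python example of decomposing a problem into small, testable steps (the task and names are invented for illustration):

```python
# Break "find the most common word in a text" into three small steps
# rather than one opaque blob -- each step can be tested on its own.

def normalize(text):
    """Step 1: reduce raw text to comparable tokens."""
    return text.lower().split()

def count_words(words):
    """Step 2: tally occurrences of each token."""
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts

def most_common(counts):
    """Step 3: reduce the tallies to a single answer."""
    return max(counts, key=counts.get)

words = normalize("the cat saw the dog and the cat")
print(most_common(count_words(words)))  # → the
```

The decomposition, not the syntax, is the transferable skill: the same three steps could be expressed in a spreadsheet, a shell pipeline, or a prompt to an AI assistant.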

The article raises other possibilities. Experts also suggest the possibility of "a burst of technology democratization as chatbot-style tools are used by people in fields from medicine to marketing to create their own programs, tailored for their industry, fed by industry-specific data sets." Stanford CS professor Alex Aiken even argues that "The growth in software engineering jobs may decline, but the total number of people involved in programming will increase."

Last year, Carnegie Mellon actually endorsed using AI for its introductory CS courses. The dean of the school's undergraduate programs believes that coursework "should include instruction in the traditional basics of computing and AI principles, followed by plenty of hands-on experience designing software using the new tools."
Programming

Microsoft Open Sources Copilot Chat for VS Code on GitHub (nerds.xyz) 18

"Microsoft has released the source code for the GitHub Copilot Chat extension for VS Code under the MIT license," reports BleepingComputer. This provides the community access to the full implementation of the chat-based coding assistant, including the implementation of "agent mode," what contextual data is sent to large language models (LLMs), and the design of system prompts. The GitHub repository hosting the code also details telemetry collection mechanisms, addressing long-standing questions about data transparency in AI-assisted coding tools...

As the VS Code team explained previously, shifts in the AI tooling landscape, like the rapid growth of the open-source AI ecosystem and a more level playing field for all, have reduced the need for secrecy around prompt engineering and UI design. At the same time, increased targeting of development tools by malicious actors has increased the need for crowdsourcing contributions to rapidly pinpoint problems and develop effective fixes. Essentially, openness is now considered superior from a security perspective.

"If you've been hesitant to adopt AI tools because you don't trust the black box behind them, this move offers something rare these days: transparency," writes Slashdot reader BrianFagioli. "Now that the extension is open source, developers can audit how agent mode actually works. You can also dig into how it manages your data, customize its behavior, or build entirely new tools on top of it. This could be especially useful in enterprise environments where compliance and control are non-negotiable.

It is worth pointing out that the backend models powering Copilot remain closed source. So no, you won't be able to self-host the whole experience or train your own Copilot. But everything running locally in VS Code is now fair game. Microsoft says it is planning to eventually merge inline code completions into the same open source package too, which would make Copilot Chat the new hub for both chat and suggestions."

AI

XBOW's AI-Powered Pentester Grabs Top Rank on HackerOne, Raises $75M to Grow Platform (csoonline.com) 10

We're living in a new world now — one where it's an AI-powered penetration tester that "now tops an eminent US security industry leaderboard that ranks red teamers based on reputation." CSO Online reports: On HackerOne, which connects organizations with ethical hackers to participate in their bug bounty programs, "Xbow" scored notably higher than 99 other hackers in identifying and reporting enterprise software vulnerabilities. It's a first in bug bounty history, according to the company that operates the eponymous bot...

Xbow is a fully autonomous AI-driven penetration tester (pentester) that requires no human input, but, its creators said, "operates much like a human pentester" that can scale rapidly and complete comprehensive penetration tests in just a few hours. According to its website, it passes 75% of web security benchmarks, accurately finding and exploiting vulnerabilities.

Xbow submitted nearly 1,060 vulnerabilities to HackerOne, including remote code execution, information disclosures, cache poisoning, SQL injection, XML external entities, path traversal, server-side request forgery (SSRF), cross-site scripting, and secret exposure. The company said it also identified a previously unknown vulnerability in Palo Alto's GlobalProtect VPN platform that impacted more than 2,000 hosts. Of the vulnerabilities Xbow submitted over the last 90 days, 54 were classified as critical, 242 as high and 524 as medium in severity. The company's bug bounty programs have resolved 130 vulnerabilities, and 303 are classified as triaged.
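For readers unfamiliar with the vulnerability classes listed above, SQL injection is the simplest to demonstrate. This illustrative Python/sqlite3 sketch (ours, not Xbow's) shows the vulnerable string-interpolation pattern alongside the parameterized fix:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable pattern: string interpolation lets crafted input rewrite the query.
attacker_input = "nobody' OR '1'='1"
unsafe_query = f"SELECT role FROM users WHERE name = '{attacker_input}'"
leaked = conn.execute(unsafe_query).fetchall()   # matches every row

# Safe pattern: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (attacker_input,)
).fetchall()                                     # matches nothing
```

A pentester, human or automated, probes for exactly this kind of gap between what a query is meant to match and what attacker-controlled input can make it match.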

Notably, though, roughly 45% of the vulnerabilities it found are still awaiting resolution, highlighting the "volume and impact of the submissions across live targets," Nico Waisman, Xbow's head of security, wrote in a blog post this week... To further hone the technology, the company developed "validators" — automated peer reviewers that confirm each uncovered vulnerability, Waisman explained.

"As attackers adopt AI to automate and accelerate exploitation, defenders must meet them with even more capable systems," XBOW's CEO said this week, as the company raised $75 million in Series B funding to grow its platform, bringing its total funding to $117 million. Help Net Security reports: With the new funding, XBOW plans to grow its engineering team and expand its go-to-market efforts. The product is now generally available, and the company says it is working with large banks, tech firms, and other organizations that helped shape the platform during its early testing phase. XBOW's long-term goal is to help security teams stay ahead of adversaries using advanced automation. As attackers increasingly turn to AI, the company argues that defenders will need equally capable systems to match their speed and sophistication.
HP

HPE Acquires Juniper Networks for $14B After Settling Antitrust Case (telecoms.com) 29

This week Hewlett-Packard Enterprise settled its antitrust case with America's Justice Department, "paving the way for its acquisition of rival kit maker Juniper Networks," reported Telecoms.com: Under the agreement, HPE has agreed to divest its Instant On unit, which sells a range of enterprise-grade Wi-Fi networking equipment for campus and branch deployments. It has also agreed to license Juniper's Mist AIOps source code — a software suite that enables AI-based network automation and management. HPE can live with that, since its primary motivation for buying Juniper is to improve its prospects in an IT networking market dominated by Cisco, where others like Arista and increasingly Nokia and Nvidia are also trying to make inroads.
And after receiving regulatory clearance, HPE "very quickly closed the deal," reports The Motley Fool: "In the press release heralding the news, the buyer wrote that it 'doubles the size of HPE's networking business and provides customers with a comprehensive portfolio of networking solutions.' Investors were obviously happy about this, as according to data compiled by S&P Global Market Intelligence the company's stock price ballooned by nearly 16% across the week, largely on the news.... The Justice Department had alleged, in a lawsuit filed in January, that an HPE/Juniper tie-up would essentially result in a duopoly in networking equipment. It claimed that a beefed-up HPE and networking incumbent Cisco would hold more than 70% combined of the domestic market."
Thanks to long-time Slashdot reader AmiMoJo for sharing the news.
AI

AI Coding Agents Are Already Commoditized (seangoedecke.com) 62

Software engineer Sean Goedecke argues that AI coding agents have already been commoditized because they require no special technical advantages, just better base models. He writes: All of a sudden, it's the year of AI coding agents. Anthropic released Claude Code, OpenAI released their Codex agent, GitHub released its own autonomous coding agent, and so on. I've done my fair share of writing about whether AI coding agents will replace developers, and in the meantime how best to use them in your work. Instead, I want to make what I think is now a pretty firm observation: AI coding agents have no secret sauce.

[...] The reason everyone's doing agents now is the same reason everyone's doing reinforcement learning now -- from one day to the next, the models got good enough. Claude 3.7 Sonnet is the clear frontrunner here. It's not the smartest model (in my opinion), but it is the most agentic: it can stick with a task and make good decisions over time better than other models with more raw brainpower. But other AI labs have more agentic models now as well. There is no moat.

There's also no moat to the actual agent code. It turns out that "put the model in a loop with a 'read file' and 'write file' tool" is good enough to do basically anything you want. I don't know for sure that the closed-source options operate like this, but it's an educated guess. In other words, the agent hackers in 2023 were correct, and the only reason they couldn't build Claude Code then was that they were too early to get to use the really good models.
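The loop Goedecke describes is indeed easy to sketch. The following illustrative Python (an educated guess at the shape, with a stubbed `fake_model` standing in for a real LLM API call; none of this is any vendor's actual code) shows a model in a loop with read-file and write-file tools:

```python
# A minimal "model in a loop with read/write file tools" agent sketch.
import json
import pathlib

def read_file(path):
    return pathlib.Path(path).read_text()

def write_file(path, content):
    pathlib.Path(path).write_text(content)
    return "ok"

TOOLS = {"read_file": read_file, "write_file": write_file}

def fake_model(history):
    # A real agent would send `history` to an LLM and parse its tool call.
    # This stub writes one file and then declares the task finished.
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "write_file",
                "args": {"path": "hello.txt", "content": "hello from the agent"}}
    return {"done": True}

def run_agent(task):
    history = [{"role": "user", "content": task}]
    while True:  # the loop: call model, execute its tool choice, feed result back
        action = fake_model(history)
        if action.get("done"):
            return history
        result = TOOLS[action["tool"]](**action["args"])
        history.append({"role": "tool", "content": json.dumps({"result": result})})

history = run_agent("Create hello.txt")
```

Swap `fake_model` for a call to a frontier model and this skeleton is plausibly most of an agent, which is Goedecke's point: the value lives in the model, not the loop.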

EU

EU Sticks With Timeline For AI Rules (reuters.com) 24

Reuters: The European Union's landmark rules on AI will be rolled out according to the legal timeline in the legislation, the European Commission said on Friday, dismissing calls from some companies and countries for a pause.

Google owner Alphabet, Facebook owner Meta and other U.S. companies as well as European businesses such as Mistral and ASML have in recent days urged the Commission to delay the AI Act by years.
Financial Times adds: In an open letter, seen by the Financial Times, the heads of 44 major firms on the continent called on European Commission President Ursula von der Leyen to introduce a two-year pause, warning that unclear and overlapping regulations are threatening the bloc's competitiveness in the global AI race.

[...] The current debate surrounds the drafting of a "code of practice," which will provide guidance to AI companies on how to implement the act that applies to powerful AI models such as Google's Gemini, Meta's Llama and OpenAI's GPT-4. Brussels has already delayed publishing the code, which was due in May, and is now expected to water down the rules.

AI

US Plans AI Chip Curbs on Malaysia, Thailand Over China Concerns (yahoo.com) 17

President Donald Trump's administration plans to restrict shipments of AI chips from the likes of Nvidia to Malaysia and Thailand, part of an effort to crack down on suspected semiconductor smuggling into China. Bloomberg: A draft rule from the Commerce Department seeks to prevent China -- to which the US has effectively banned sales of Nvidia's advanced AI processors -- from obtaining those components through intermediaries in the two Southeast Asian nations, according to people familiar with the matter. The rule is not yet finalized and could still change, said the people, who requested anonymity to discuss private conversations.

Officials plan to pair the Malaysia and Thailand controls with a formal rescission of global curbs from the so-called AI diffusion rule, the people said.

Slashdot Top Deals