Moon

Meteorite Impacts Produce Most of Moon's Thin Atmosphere, Study Reveals (theguardian.com)

Scientists studying lunar samples brought back by the Apollo missions have determined that the moon's thin atmosphere is produced largely by meteorite impacts. "Our findings provide a clearer picture of how the moon's surface and atmosphere interact over long timescales, [and] enhance our understanding of space weathering processes," said Dr Nicole Nie, the co-author of the new study, who is based at MIT's department of Earth, atmospheric, and planetary sciences. The Guardian reports: Writing in the journal Science Advances, Nie and her colleagues describe how the lunar atmosphere must be constantly replenished because its atoms are continuously being lost to space, primarily because of the moon's weak gravity, or trapped on the lunar surface. Ultraviolet photons from the sun can re-release the latter, but the researchers say replenishment of the atmosphere is thought to rely on atoms being released from within lunar minerals -- either via vaporisation by meteorite impacts, or by solar wind sputtering, a process in which charged particles from the sun hit the moon and eject atoms. But which of the two factors dominates had been unclear, with data from NASA's Lunar Atmosphere and Dust Environment Explorer (LADEE), launched in 2013, suggesting both were at play.

Nie and colleagues unpicked the conundrum by studying the different forms, or isotopes, of potassium and rubidium in 10 samples of lunar soil from the Apollo missions. The team say meteorite impacts and solar wind sputtering both favor the release of lighter forms of the elements, but that the actual proportion of heavy to light isotopes that ends up in the lunar atmosphere and soil would differ depending on the process. "After measuring the isotopic compositions of lunar soils, we built a mathematical model taking into account various space weathering processes, and solved for the contribution of each of them by matching the measured isotopic compositions," said Nie. The results suggest about 70% of the moon's atmosphere is down to impact vaporisation and 30% to solar wind sputtering.
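The solve step Nie describes can be illustrated with a toy two-source mixing model: if each process imprints a characteristic heavy-to-light isotope ratio, the measured soil ratio pins down the mixture. This is a minimal sketch of the idea only; all of the ratio values below are illustrative placeholders, not numbers from the paper.

```python
def solve_mixing_fraction(r_measured, r_impact, r_sputter):
    """Solve r_measured = f * r_impact + (1 - f) * r_sputter for f,
    the fraction contributed by impact vaporisation."""
    return (r_measured - r_sputter) / (r_impact - r_sputter)

# Placeholder heavy-to-light isotope ratios (arbitrary units):
f = solve_mixing_fraction(r_measured=1.07, r_impact=1.10, r_sputter=1.00)
print(f"Impact vaporisation fraction: {f:.0%}")  # 70% with these inputs
```

The study's actual model accounts for several weathering processes at once, but the principle is the same: each process's isotopic signature acts as an endmember, and the measured composition constrains the weights.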

Education

Silicon Valley Parents Are Sending Kindergarten Kids To AI-Focused Summer Camps

Silicon Valley's fascination with AI has led to parents enrolling children as young as five in AI-focused summer camps. "It's common for kids on summer break to attend space, science or soccer camp, or even go to coding school," writes Priya Anand via the San Francisco Standard. "But the growing effort to teach kindergarteners who can barely spell their names lessons in 'Advanced AI Robot Design & AR Coding' shows how far the frenzy has extended." From the report: Parents who previously would opt for coding camps are increasingly interested in AI-specific programming, according to Eliza Du, CEO of Integem, which makes holographic augmented reality technology in addition to managing dozens of tech-focused kids camps across the country. "The tech industry understands the value of AI," she said. "Every year it's increasing." Some Bay Area parents are so eager to get their kids in on AI's ground floor that they try to sneak toddlers into advanced courses. "Sometimes they'll bring a 4-year-old, and I'm like, you're not supposed to be here," Du said.

Du said Integem studied Common Core education standards to ensure its programming was suitable for those as young as 5. She tries to make sure parents understand there's only so much kids can learn across a week or two of camp. "Either they set expectations too high or too low," Du said of the parents. As an example, she recounted a confounding comment in a feedback survey from the parent of a 5-year-old. "After one week, the parent said, 'My child did not learn much. My cousin is a Google engineer, and he said he's not ready to be an intern at Google yet.' What do I say to that review?" Du said, bemused. "That expectation is not realistic." Even less tech-savvy parents are getting in on the hype. Du tells of a mom who called the company to get her 12-year-old enrolled in "AL" summer camp. "She misread it," Du said, explaining that the parent had confused the "I" in AI with a lowercase "L."

Transportation

Are EV 'Charger Hogs' Ruining the EV Experience? (cnn.com)

A CNN reporter spent more than two hours waiting for EV chargers — thanks to "ill-mannered charger hogs who don't respect EV etiquette." [T]o protect batteries from damage, charging speeds slow way down once batteries get beyond 80% full. In fact, it can take as long, or even longer, to go from 80% charged to completely full than to reach 80%. Meanwhile, lines of electric vehicles wait behind almost-full cars. I was waiting behind people with batteries that were 92%, 94% and even 97% full, as I could see on the charger screens. Still, they stayed there. I made my own situation worse by giving up on one location and going to another with more chargers, but there were even more EVs waiting there.

Given that a lack of public charging is turning many consumers off to EVs, according to multiple surveys, this is a major issue. Both Electrify America and EVgo said they are rapidly expanding their networks to, as EVgo's Rafalson put it, "skate ahead of the puck," trying to make sure there are enough chargers to meet future demand... "I think what you're seeing is demand for public fast charging is really skyrocketing," said Sara Rafalson, executive vice president for policy at EV charging company EVgo, "and I would say we've been really at an inflection point in the last year, year and a half, with demand...."

Electrify America, one of America's biggest charging companies, is experimenting with a solution to the problem of charger hogs who can make it slow and unpleasant to travel in an EV. At 10 of the busiest EV fast charging stations in California, Electrify America has enacted a strict limit. Once a car's batteries are 85% charged, charging will automatically stop and the driver will be told to unplug and leave or face additional 40-cent-per-minute "idle time" fees for taking the space. It's similar to something Tesla vehicles do automatically. When a Tesla car, truck or SUV plugs into a particularly heavily-used Supercharger station, the vehicle itself may automatically limit charging to just 80% "to reduce congestion," according to Tesla's online Supercharger Support web page.

In that case, though, the user can still override the limit using the vehicle's touchscreen. There will be no getting around Electrify America's limit.

Electrify America's president points out that an EV driver could legitimately need a full charge (if they're travelling somewhere with fewer chargers, or driving an EV with a relatively short range). So the article notes that some EV charging companies "have experimented with plans that charge different amounts of money at different times to give drivers incentives to fill their batteries at less busy hours..."

"For the time being, let's just hope that EV drivers who don't really need to fill all the way up will learn to be more considerate."

Programming

DARPA Wants to Automatically Transpile C Code Into Rust - Using AI (theregister.com)

America's Defense Department has launched a project "that aims to develop machine-learning tools that can automate the conversion of legacy C code into Rust," reports the Register — with an online event already scheduled for later this month for those planning to submit proposals: The reason to do so is memory safety. Memory safety bugs, such as buffer overflows, account for the majority of major vulnerabilities in large codebases. And DARPA's hope [that's the Defense Department's R&D agency] is that AI models can help with the programming language translation, in order to make software more secure. "You can go to any of the LLM websites, start chatting with one of the AI chatbots, and all you need to say is 'here's some C code, please translate it to safe idiomatic Rust code,' cut, paste, and something comes out, and it's often very good, but not always," said Dan Wallach, DARPA program manager for TRACTOR, in a statement. "The research challenge is to dramatically improve the automated translation from C to Rust, particularly for program constructs with the most relevance...."

DARPA's characterization of the situation suggests the verdict on C and C++ has already been rendered. "After more than two decades of grappling with memory safety issues in C and C++, the software engineering community has reached a consensus," the research agency said, pointing to the Office of the National Cyber Director's call to do more to make software more secure. "Relying on bug-finding tools is not enough...."

Peter Morales, CEO of Code Metal, a company that just raised $16.5 million to focus on transpiling code for edge hardware, told The Register the DARPA project is promising and well-timed. "I think [TRACTOR] is very sound in terms of the viability of getting there and I think it will have a pretty big impact in the cybersecurity space where memory safety is already a pretty big conversation," he said.

DARPA's statement had an ambitious headline: "Eliminating Memory Safety Vulnerabilities Once and For All."

"Rust forces the programmer to get things right," said DARPA project manager Wallach. "It can feel constraining to deal with all the rules it forces, but when you acclimate to them, the rules give you freedom. They're like guardrails; once you realize they're there to protect you, you'll become free to focus on more important things."

Code Metal's Morales called the project "a DARPA-hard problem," noting the daunting number of edge cases that might come up. And even DARPA's program manager conceded to the Register that "some things like the Linux kernel are explicitly out of scope, because they've got technical issues where Rust wouldn't fit."

Thanks to long-time Slashdot reader RoccamOccam for sharing the news.

Space

Are There Diamonds on Mercury? (cnn.com)

The planet Mercury could have "a layer of diamonds," reports CNN, citing new research suggesting that about 310 miles (500 kilometers) below the surface ... could be a layer of diamonds 11 miles (18 kilometers) thick.

And the study's co-author believes lava might carry some of those diamonds up to the surface: The diamonds might have formed soon after Mercury itself coalesced into a planet about 4.5 billion years ago from a swirling cloud of dust and gas, in the crucible of a high-pressure, high-temperature environment. At this time, the fledgling planet is believed to have had a crust of graphite, floating over a deep magma ocean.

A team of researchers recreated that searing environment in an experiment, with a machine called an anvil press that's normally used to study how materials behave under extreme pressure but also for the production of synthetic diamonds. "It's a huge press, which enables us to subject tiny samples to the same high pressure and high temperature that we would expect deep inside the mantle of Mercury, at the boundary between the mantle and the core," said Bernard Charlier, head of the department of geology at the University of Liège in Belgium and a coauthor of a study reporting the findings.

The team inserted a synthetic mixture of elements — including silicon, titanium, magnesium and aluminum — inside a graphite capsule, mimicking the theorized composition of Mercury's interior in its early days. The researchers then subjected the capsule to pressures almost 70,000 times greater than those found on Earth's surface and temperatures up to 2,000 degrees Celsius (3,630 degrees Fahrenheit), replicating the conditions likely found near Mercury's core billions of years ago.
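For scale, the quoted figure works out to roughly 7 gigapascals. A quick back-of-the-envelope conversion (assuming "70,000 times greater than those found on Earth's surface" means multiples of one standard atmosphere):

```python
ATM_PA = 101_325                   # one standard atmosphere, in pascals
pressure_pa = 70_000 * ATM_PA      # "almost 70,000 times" surface pressure
pressure_gpa = pressure_pa / 1e9   # convert pascals to gigapascals
print(f"{pressure_gpa:.1f} GPa")   # about 7.1 GPa
```

That is comfortably within the range of modern multi-anvil presses, which is why the team could reproduce the conditions in the laboratory.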

After the sample melted, the scientists looked at changes in the chemistry and minerals under an electron microscope and noted that the graphite had turned into diamond crystals.

The researchers believe this mechanism "can not only give us more insight into the secrets hidden below Mercury's surface, but on planetary evolution and the internal structure of exoplanets with similar characteristics."

Programming

Go Tech Lead Russ Cox Steps Down to Focus on AI-Powered Open-Source Contributor Bot (google.com)

Thursday Go's long-time tech lead Russ Cox made an announcement: Starting September 1, Austin Clements will be taking over as the tech lead of Go: both the Go team at Google and the overall Go project. Austin is currently the tech lead for what we sometimes call the "Go core", which encompasses compiler toolchain, runtime, and releases. Cherry Mui will be stepping up to lead those areas.

I am not leaving the Go project, but I think the time is right for a change... I will be shifting my focus to work more on Gaby [or "Go AI bot," an open-source contributor agent] and Oscar [an open-source contributor agent architecture], trying to make useful contributions in the Go issue tracker to help all of you work more productively. I am hopeful that work on Oscar will uncover ways to help open source maintainers that will be adopted by other projects, just like some of Go's best ideas have been adopted by other projects. At the highest level, my goals for Oscar are to build something useful, learn something new, and chart a path for other projects. These are the same broad goals I've always had for our work on Go, so in that sense Oscar feels like a natural continuation.

The post notes that new tech lead Austin Clements "has been working on Go at Google since 2014" (and Mui since 2016). "Their judgment is superb and their knowledge of Go and the systems it runs on both broad and deep. When I have general design questions or need to better understand details of the compiler, linker, or runtime, I turn to them." It's important to remember that tech lead — like any position of leadership — is a service role, not an honorary title. I have been leading the Go project for over 12 years, serving all of you, and trying to create the right conditions for all of you to do your best work. Large projects like Go absolutely benefit from stable leadership, but they can also benefit from leadership changes. New leaders bring new strengths and fresh perspectives. For Go, I think 12+ years of one leader is enough stability; it's time for someone new to serve in this role.

In particular, I don't believe that the "BDFL" (benevolent dictator for life) model is healthy for a person or a project. It doesn't create space for new leaders. It's a single point of failure. It doesn't give the project room to grow. I think Python benefited greatly from Guido stepping down in 2018 and letting other people lead, and I've had in the back of my mind for many years that we should have a Go leadership change eventually....

I am going to consciously step back from decision making and create space for Austin and the others to step forward, but I am not disappearing. I will still be available to talk about Go designs, review CLs, answer obscure history questions, and generally help and support you all in whatever way I can. I will still file issues and send CLs from time to time, I have been working on a few potential new standard libraries, I will still advocate for Go across the industry, and I will be speaking about Go at GoLab in Italy in November...

I am incredibly proud of the work we have all accomplished together, and I am confident in the leaders both on the Go team at Google and in the Go community. You are all doing remarkable work, and I know you will continue to do that.

Space

Venus May Be Able To Support Life, New Atmospheric Evidence Suggests (space.com)

New preliminary evidence for phosphine and ammonia in Venus's atmosphere deepens the mystery of their origins, suggesting the possibility of a biological source. The detections, made using the James Clerk Maxwell Telescope and the Green Bank Telescope, point to potential microbial life in Venus's clouds despite the planet's extreme surface conditions. Space.com reports: The new detections of phosphine and ammonia were obtained by a team led by Jane Greaves of the University of Cardiff using submillimeter radio wavelength data collected by the James Clerk Maxwell Telescope (JCMT) in Hawaii and the Green Bank Telescope in West Virginia. "We don't know how you make phosphine or ammonia in an oxygenating atmosphere like that of Venus," said team member and astrophysicist Dave Clements of Imperial College London, in an interview with Space.com. Then again, it's not clear why biology on Earth produces phosphine, either. "Whether it's in penguin poop or badger guts, we don't know why bacteria make phosphine, but they do."

The JCMT's initial detection of phosphine on Venus in 2020 by Greaves and her team was met by fierce disagreement from some quarters. This disagreement focused on how the data was processed and whether that was creating spurious signals since observations by other telescopes struggled to detect the phosphine. Clements said those technical disagreements have now been resolved and that the latest measurements, using a new detector on the JCMT called Namakanui (meaning 'Big Eyes' in Hawaiian), have come from three observing campaigns, each providing 140 times as much data as the initial detection. [...]

Clements is open to the possibility that both phosphine and ammonia are being produced by some rare photochemistry in Venus' upper atmosphere involving solar ultraviolet breaking up molecules and allowing phosphine and ammonia to form from the molecular debris. If that is the case, nobody has observed this process yet, not even in the laboratory. Another possibility that has been mooted is that the phosphine could be produced by Venusian volcanoes. Clements also pointed out that the European Space Agency's Jupiter Icy Moons Explorer (JUICE) is making a fly-by of Venus in August 2025 to help slingshot it towards the Jovian system. JUICE carries instruments capable of detecting phosphine and ammonia, but there's no guarantee that its instruments will be switched on and deployed at Venus.

Space

New Study Simulates Gravitational Waves From Failing Warp Drive (phys.org)

Physicists have been exploring the theoretical possibility of warp drives, which could propel spaceships faster than light by compressing spacetime. A new study published in the Open Journal of Astrophysics simulates the gravitational waves such a drive might emit if it failed, showing signals potentially detectable by future high-frequency instruments and advancing our understanding of exotic spacetimes. Phys.Org reports: The results are fascinating. The collapsing warp drive generates a distinct burst of gravitational waves, a ripple in spacetime that could be detectable by gravitational wave detectors that normally target black hole and neutron star mergers. Unlike the chirps from merging astrophysical objects, this signal would be a short, high-frequency burst, and so current detectors wouldn't pick it up. However, future higher-frequency instruments might, and although no such instruments have yet been funded, the technology to build them exists. This raises the possibility of using these signals to search for evidence of warp drive technology, even if we can't build it ourselves.

The study also delves into the energy dynamics of the collapsing warp drive. The process emits a wave of negative energy matter, followed by alternating positive and negative waves. This complex dance results in a net increase in the overall energy of the system, and in principle could provide another signature of the collapse if the outgoing waves interacted with normal matter.

Moon

Scientists Propose Lunar Biorepository As 'Backup' For Life On Earth

An anonymous reader quotes a report from The Guardian: With thousands of species at risk of extinction, scientists have devised a radical plan: a vault filled with preserved samples of our planet's most important and at-risk creatures located on the moon. An international team of experts says threats from climate change and habitat loss have outpaced our ability to protect species in their natural habitats, necessitating urgent action. A biorepository of preserved cells, and the crucial DNA within them, could be used to enhance genetic diversity in small populations of critically endangered species, or to clone and create new individuals in the worst-case scenario of extinction.

The proposed lunar biorepository, as described in the journal BioScience, would be beyond the reach of climate breakdown, geopolitical events or other Earth-based disasters. The moon's naturally frigid environment means samples would remain frozen year-round without the need for human involvement or an energy source. By taking advantage of deep craters near the polar regions that are never exposed to sunlight, the moon is one of few places that can provide the ultra-low temperature of -196C necessary to preserve the samples in a way suitable for future cloning. [...] Besides those facing the imminent risk of extinction, the proposed repository would prioritize species with important functions in their environment and food webs. Through careful selection, those housed could be used to re-establish an extinct population on Earth or even to terraform another planet.

Dr Mary Hagedorn of the Smithsonian's national zoo and conservation biology institute and the proposal's lead author believes the biorepository proposal will come to fruition, although perhaps not in our lifetime: "We know how to do this and can do this and will do this, but it may take decades to finally achieve," she said.

The report says the next steps "will be to develop packaging for the cryopreserved samples that can withstand the conditions of space, and to work out the logistics of transporting samples to the moon."

AI

Meta's AI Safety System Defeated By the Space Bar (theregister.com)

Thomas Claburn reports via The Register: Meta's machine-learning model for detecting prompt injection attacks -- special prompts to make neural networks behave inappropriately -- is itself vulnerable to, you guessed it, prompt injection attacks. Prompt-Guard-86M, introduced by Meta last week in conjunction with its Llama 3.1 generative model, is intended "to help developers detect and respond to prompt injection and jailbreak inputs," the social network giant said. Large language models (LLMs) are trained with massive amounts of text and other data, and may parrot it on demand, which isn't ideal if the material is dangerous, dubious, or includes personal info. So makers of AI models build filtering mechanisms called "guardrails" to catch queries and responses that may cause harm, such as those revealing sensitive training data on demand, for example. Those using AI models have made it a sport to circumvent guardrails using prompt injection -- inputs designed to make an LLM ignore its internal system prompts that guide its output -- or jailbreaks -- input designed to make a model ignore safeguards. [...]

It turns out Meta's Prompt-Guard-86M classifier model can be asked to "Ignore previous instructions" if you just add spaces between the letters and omit punctuation. Aman Priyanshu, a bug hunter with enterprise AI application security shop Robust Intelligence, recently found the safety bypass when analyzing the embedding weight differences between Meta's Prompt-Guard-86M model and Redmond's base model, microsoft/mdeberta-v3-base. "The bypass involves inserting character-wise spaces between all English alphabet characters in a given prompt," explained Priyanshu in a GitHub Issues post submitted to the Prompt-Guard repo on Thursday. "This simple transformation effectively renders the classifier unable to detect potentially harmful content."

"Whatever nasty question you'd like to ask right, all you have to do is remove punctuation and add spaces between every letter," Hyrum Anderson, CTO at Robust Intelligence, told The Register. "It's very simple and it works. And not just a little bit. It went from something like less than 3 percent to nearly 100 percent attack success rate."
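The transformation Anderson describes is trivial to reproduce. The sketch below is illustrative only; the exact preprocessing in the original bug report may differ in detail.

```python
import re

def spaced_bypass(prompt: str) -> str:
    """Strip punctuation, then insert a space between every remaining
    character -- the simple transformation reported to evade the classifier."""
    letters_only = re.sub(r"[^\w\s]", "", prompt)  # drop punctuation
    return " ".join(letters_only)                  # space out every character

print(spaced_bypass("Ignore previous instructions."))
# "I g n o r e   p r e v i o u s   i n s t r u c t i o n s"
```

Because the classifier's token embeddings were apparently not robust to this character-level perturbation, the spaced-out text no longer resembles the injection phrases it was trained to flag, while a downstream LLM can still read the intent.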

AI

What Is the Future of Open Source AI? (fb.com)

Tuesday Meta released Llama 3.1, its largest open-source AI model to date. But just one day later Mistral released Large 2, notes this report from TechCrunch, "which it claims to be on par with the latest cutting-edge models from OpenAI and Meta in terms of code generation, mathematics, and reasoning..."

"Though Mistral is one of the newer entrants in the artificial intelligence space, it's quickly shipping AI models on or near the cutting edge." In a press release, Mistral says one of its key focus areas during training was to minimize the model's hallucination issues. The company says Large 2 was trained to be more discerning in its responses, acknowledging when it does not know something instead of making something up that seems plausible. The Paris-based AI startup recently raised $640 million in a Series B funding round, led by General Catalyst, at a $6 billion valuation...

However, it's important to note that Mistral's models are, like most others, not open source in the traditional sense — any commercial application of the model needs a paid license. And while it's more open than, say, GPT-4o, few in the world have the expertise and infrastructure to implement such a large model. (That goes double for Llama's 405 billion parameters, of course.)

Mistral's Large 2, by contrast, has 123 billion parameters, according to the article. But whichever system prevails, "Open Source AI Is the Path Forward," Mark Zuckerberg wrote this week, predicting that open-source AI will soar to the same popularity as Linux: This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry. But even before that, Llama is already leading on openness, modifiability, and cost efficiency... Beyond releasing these models, we're working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models. Innovators like Groq have built low-latency, low-cost inference serving for all the new models. The models will be available on all major clouds including AWS, Azure, Google, Oracle, and more. Companies like Scale.AI, Dell, Deloitte, and others are ready to help enterprises adopt Llama and train custom models with their own data.

"As the community grows and more companies develop new services, we can collectively make Llama the industry standard and bring the benefits of AI to everyone," Zuckerberg writes. He says that he's heard from developers, CEOs, and government officials that they want to "train, fine-tune, and distill" their own models, protecting their data with a cheap and efficient model — and without being locked into a closed vendor. But they also tell him that they want to invest in an ecosystem "that's going to be the standard for the long term." Lots of people see that open source is advancing at a faster rate than closed models, and they want to build their systems on the architecture that will give them the greatest advantage long term...

One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it's clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build. On a philosophical level, this is a major reason why I believe so strongly in building open ecosystems in AI and AR/VR for the next generation of computing...

I believe that open source is necessary for a positive AI future. AI has more potential than any other modern technology to increase human productivity, creativity, and quality of life — and to accelerate economic growth while unlocking progress in medical and scientific research. Open source will ensure that more people around the world have access to the benefits and opportunities of AI, that power isn't concentrated in the hands of a small number of companies, and that the technology can be deployed more evenly and safely across society. There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives. I think governments will conclude it's in their interest to support open source because it will make the world more prosperous and safer... [O]pen source should be significantly safer since the systems are more transparent and can be widely scrutinized...

The bottom line is that open source AI represents the world's best shot at harnessing this technology to create the greatest economic opportunity and security for everyone... I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here. I hope you'll join us on this journey to bring the benefits of AI to everyone in the world.

Space

Boeing Starliner Astronauts Have Been In Space Six Weeks Longer Than Originally Planned (arstechnica.com)

Longtime Slashdot reader Randseed writes: Boeing Starliner is apparently still stuck at the ISS, six weeks longer than planned due to engine troubles. The root cause seems to be overheating. NASA is still hopeful that they can bring the two astronauts back on the Starliner, but if not, apparently there is a SpaceX Dragon craft docked at the station that can get them home. This is another in a long list of high-profile failures by Boeing. This comes after a series of failures in their popular commercial aircraft, including undocumented flight system modifications causing crashes of the 737 MAX, doors blowing out in mid-flight, and parts falling off the aircraft. The latter decimated a Toyota in a populated area. "I think we're starting to close in on those final pieces of flight rationale to make sure that we can come home safely, and that's our primary focus right now," said Steve Stich, manager of NASA's commercial crew program.

"Our prime option is to complete the mission," Stich said. "There are a lot of good reasons to complete this mission and bring Butch and Suni home on Starliner. Starliner was designed, as a spacecraft, to have the crew in the cockpit."

ISS

NASA Fires Lasers At the ISS (theverge.com)

joshuark shares a report from The Verge: NASA researchers have successfully tested laser communications in space by streaming 4K video footage originating from an airplane in the sky to the International Space Station and back. The feat demonstrates that the space agency could provide live coverage of a Moon landing during the Artemis missions and bodes well for the development of optical communications that could connect humans to Mars and beyond. NASA normally uses radio waves to send data and communicate between the surface and space, but says that laser communications using infrared light can transmit data 10 to 100 times faster than radio. "ISS astronauts, cosmonauts, and unwelcomed commercial space-flight visitors can now watch their favorite porn in real-time, adding some life to a boring zero-G existence," adds joshuark. "Ralph Kramden, when contacted by Ouiji board, simply spelled out 'Bang, zoom, straight to the moon!'"

Businesses

2U, Once a Giant in Online Education, Files for Chapter 11 Bankruptcy (wsj.com)

Online education company 2U filed for Chapter 11 bankruptcy protection and is being taken private in a deal that will wipe out more than half of its $945 million debt [non-paywalled link]. From a report: 2U was a pioneer in the online education space, joining with schools including the University of Southern California, Georgetown University and the University of North Carolina at Chapel Hill to design and operate online courses in fields including nursing and social work. But it struggled in recent years amid new competition and changing regulations. It also had a highly leveraged balance sheet with looming loan-repayment deadlines. 2U closed Wednesday with a market value of about $11.5 million, down from more than $5 billion in 2018. In 2021, 2U bought edX, an online platform for classes that was founded by Harvard University and the Massachusetts Institute of Technology. The debt from that $800 million deal for edX proved debilitating to 2U, WSJ reports.
NASA

Proposed NASA Budget Cuts Would End Chandra X-Ray Observatory (spacenews.com) 81

A NASA committee determined that the Chandra X-ray Observatory would have to cease operations under the proposed budget cuts in NASA's 2025 budget. The committee reviewed various options but found that only shutting down Chandra fit within the proposed budget, although alternatives could keep the observatory running with limited capabilities. SpaceNews reports: NASA established the Operations Paradigm Change Review (OPCR) committee this spring to look at ways of reducing the costs of operating Chandra and the Hubble Space Telescope as part of broader efforts to deal with a billion-dollar shortfall in agency science funding. The fiscal year 2025 budget proposal included a 40% cut in Chandra's budget, with further reductions through 2029, while cutting Hubble's budget by 10% in 2025. Astronomers strongly opposed the proposed cuts, particularly for Chandra. They argued that the reductions would effectively shut down the telescope, a conclusion backed by Patrick Slane, director of the Chandra X-Ray Center, in an open letter shortly after the release of the budget proposal.

The OPCR concurred. "The committee agreed that the continuation of a scientifically viable Chandra mission is not possible within the funding guidance," said Rob Kennicutt, an astronomer from the University of Arizona and Texas A&M University who served on the review committee, in a July 23 presentation at a meeting of the Astrophysics Advisory Committee, or APAC. "This is a serious threat to the observatory." Shutting down Chandra was one of four options presented to the OPCR by the Chandra team and the only one, he said, that fit within NASA's proposed budget profile. Three others would keep Chandra going with reduced capabilities and with budgets higher than what NASA proposed but below current levels. "We think it's possible to run Chandra for less money" than today, he said, "but more than what they were given."

ISS

Russia Announces It Will Create Core of New Space Station By 2030 (reuters.com) 99

"Despite its domestic space program faltering even before sanctions due to its invasion of Ukraine, and at least one very public failure on a less ambitious project, Russia has announced it will begin construction of a Russian-only replacement for the ISS and place it in a more difficult-to-access polar orbit," writes longtime Slashdot reader Baron_Yam. "Russia is motivated by military and political demands to achieve this, but whether it has the means or not seems uncertain at best." Reuters reports: Russia is aiming to create the four-module core of its planned new orbital space station by 2030, its Roscosmos space agency said on Tuesday. The head of Roscosmos, Yuri Borisov, signed off on the timetable with the directors of 19 enterprises involved in creating the new station. The agency confirmed plans to launch an initial scientific and energy module in 2027. It said three more modules would be added by 2030 and a further two between 2031 and 2033. [...]

Apart from the design and manufacture of the modules, Roscosmos said the schedule approved by Borisov includes flight-testing a new-generation crewed spacecraft and building rockets and ground-based infrastructure. The new station will enable Russia to "solve problems of scientific and technological development, national economy and national security that are not available on the Russian segment of the ISS due to technological limitations and the terms of international agreements," it said.

Graphics

Nvidia RTX 40-Series GPUs Hampered By Low-Quality Thermal Paste (pcgamer.com) 50

"Anyone who is into gaming knows your graphics card is under strain trying to display modern graphics," writes longtime Slashdot reader smooth wombat. "This results in increased power usage, which is then turned into heat. Keeping your card cool is a must to get the best performance possible."

"However, hardware tester Igor's Lab found that vendors for Nvidia RTX 40-series cards are using cheap, poorly applied thermal paste, which is leading to high temperatures and consequently, performance degradation over time. This penny-pinching has been confirmed by Nick Evanson at PC Gamer." From the report: I have four RTX 40-series cards in my office (RTX 4080 Super, 4070 Ti, and two 4070s) and all of them have quite high hotspots -- the highest temperature recorded by an individual thermal sensor in the die. In the case of the 4080 Super, it's around 11 C higher than the average temperature of the chip. I took it apart to apply some decent quality thermal paste and discovered a similar situation to that found by Igor's Lab. In the space of a few months, the factory-applied paste had separated and spread out, leaving just an oily film behind, and a few patches of the thermal compound itself. I checked the other cards and found that they were all in a similar state.

Igor's Lab examined the thermal paste used on a brand-new RTX 4080 and found it to be quite thin in nature, due to large quantities of cheap silicone oil being used, along with zinc oxide filler. There was lots of ground aluminium oxide (the material that provides the actual thermal transfer) but it was quite coarse, leading to the paste separating quite easily. Removing the factory-installed paste from another RTX 4080 graphics card, Igor's Lab applied a more appropriate amount of a high-quality paste and discovered that it lowered the hotspot temperature by nearly 30 C.

Windows

Windows 11 Strikes Again With Annoying Pop-up That Can't Be Disabled 88

An anonymous reader writes: Windows users are being notified that their systems aren't backed up with the built-in Windows backup solution. A corresponding message appears with the advice that it's best to make backups so that all data is stored "in case something happens to the PC." It almost reads like an indirect threat, but Microsoft is actually just pointing out the option to store file backups on its own OneDrive cloud service. And it's also advertising more storage space.
The Military

US Prepares Jamming Devices Targeting Russia, China Satellites (msn.com) 45

In April the U.S. Space Force began testing "a new ground-based satellite jamming weapon to help keep U.S. military personnel safe from potential 'space-enabled' attacks" (according to a report from Space.com). The weapon was "designed to deny, degrade, or disrupt communications with satellites overhead, typically through overloading specific portions of the electromagnetic spectrum with interference," according to the article, with the military describing it as a small form-factor system "designed to be fielded in large numbers at low-cost and operated remotely" and "provide counterspace electronic warfare capability to all of the new Space Force components globally."

And now, Bloomberg reports that the U.S. is about to deploy them: The devices aren't meant to protect U.S. satellites from Chinese or Russian jamming but "to responsibly counter adversary satellite communications capabilities that enable attacks," the Space Force said in a statement to Bloomberg News. The Pentagon strives — on the rare occasions when it discusses such space capabilities — to distinguish its emerging satellite-jamming technology as purely defensive and narrowly focused. That's as opposed to a nuclear weapon the U.S. says Russia is developing that could create high-altitude electromagnetic pulses that would take out satellites and disrupt entire communications networks.

The first 11 of 24 Remote Modular Terminal jammers will be deployed in several months, and all of them could be in place by Dec. 31 at undisclosed locations, according to the Space Force statement... The new terminals augment a much larger jamming weapon called the Counter Communications System that's already deployed and a mid-sized one called Meadowlands "by providing the ability to have a proliferated, remotely controlled and relatively relocatable capability," the Space Force said. The Meadowlands system has encountered technical challenges that have delayed its delivery until at least October, about two years later than planned.

China has "hundreds and hundreds of satellites on orbit designed to find, fix, track, target and yes, potentially engage, US and allied forces across the Indo-Pacific," General Stephen Whiting, head of US Space Command, said Wednesday at the annual Aspen Security Forum. "So we've got to understand that and know what it means for our forces."

Bloomberg also got this comment from the chief director of space security and stability at the Secure World Foundation (which produces reports on counterspace weapons). The new U.S. Space Force jamming weapons are "reversible, temporary, non-escalatory and allow for plausible deniability in terms of who the instigator is."
Communications

May Solar Superstorm Caused Largest 'Mass Migration' of Satellites In History (space.com) 16

A solar superstorm in May caused thousands of satellites to simultaneously maneuver to maintain altitude due to the thickening of the upper atmosphere, creating potential collision hazards as existing prediction systems struggled to cope. Space.com reports: According to a pre-print paper published on the online repository arXiv on June 12, satellites and space debris objects in low Earth orbit -- the region of space up to an altitude of 1,200 miles (2,000 kilometers) -- were sinking toward the planet at a rate of 590 feet (180 meters) per day during the four-day storm. To make up for the loss of altitude, thousands of spacecraft began firing their thrusters at the same time to climb back up. That mass movement, the authors of the paper point out, could have led to dangerous situations because collision avoidance systems didn't have time to calculate the satellites' changing paths.

The solar storm that battered Earth from May 7 to 10 reached the intensity of G5, the highest level on the five-step scale used by the National Oceanic and Atmospheric Administration (NOAA) to assess the strength of solar storms. It was the strongest solar storm to hit Earth since 2003. The authors of the paper, however, pointed out that the environment around the planet has changed profoundly since that time. While only a few hundred satellites were orbiting Earth twenty years ago, there are thousands today. The authors of the paper put the number of "active payloads at [low Earth orbit]" at 10,000. [...] The new paper points out that space weather forecasts ahead of the May storm failed to accurately predict the duration and intensity of the event, making satellite collision predictions nearly impossible.

On the upside, the storm helped to clear out some junk as defunct satellites and debris fragments spiraled deeper into the atmosphere. The authors of the report estimate that thousands of space debris objects lost several kilometers in altitude during the storm. More powerful solar storms can be expected in the coming months as the peak of the current solar cycle -- the 11-year ebb and flow in the number of sunspots, solar flares and eruptions -- is expected in late 2024 and early 2025.
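The figures above lend themselves to a quick back-of-envelope check: at 180 meters per day over the four-day storm, a satellite would sink roughly 720 meters, and the delta-v needed to climb back is tiny. The sketch below is illustrative only -- the 400 km altitude is an assumed example (typical of many LEO constellations, not a figure from the paper), and it uses the standard small-change approximation for circular orbits, dv ≈ v·Δa/(2a):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m

def storm_altitude_loss(decay_m_per_day: float, days: float) -> float:
    """Total altitude lost to increased drag over the storm, in meters."""
    return decay_m_per_day * days

def recovery_delta_v(altitude_m: float, delta_a_m: float) -> float:
    """Approximate delta-v (m/s) to raise a circular orbit by delta_a_m.

    Uses the small-change approximation dv ~ v * da / (2a),
    valid when delta_a is much smaller than the orbit radius a.
    """
    a = R_EARTH + altitude_m          # orbit radius
    v = math.sqrt(MU_EARTH / a)       # circular orbital speed
    return v * delta_a_m / (2 * a)

loss = storm_altitude_loss(180.0, 4.0)   # 180 m/day for the 4-day storm
dv = recovery_delta_v(400e3, loss)       # hypothetical satellite at 400 km
print(f"altitude lost: {loss:.0f} m, recovery delta-v: {dv:.2f} m/s")
```

The point of the estimate is that the hazard wasn't the propellant cost -- a fraction of a meter per second is negligible -- but thousands of spacecraft maneuvering at once faster than conjunction-screening systems could re-propagate their orbits.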
