Space

Scientists Hope Euclid Telescope Will Reveal Mysteries of Dark Matter (theguardian.com)

In just a few weeks, a remarkable European probe will be blasted into space in a bid to explore the dark side of the cosmos. From a report: The $1bn Euclid mission will investigate the universe's two most baffling components: dark energy and dark matter. The former is the name given to a mysterious force that was shown -- in 1998 -- to be accelerating the expansion of the universe, while the latter is a form of matter thought to pervade the cosmos, provide the universe with 80% of its mass, and act as a cosmic glue that holds galaxies together. Both dark energy and dark matter are invisible and astronomers have only been able to infer their existence by measuring their influence on the behaviour of stars and galaxies.

"We cannot say we understand the universe if the nature of these dark components remains a mystery," said astrophysicist Prof Andy Taylor of Edinburgh University. "That is why Euclid is so important." Taylor added that UK scientists had played a key role in designing and building the probe. For example, one of its two main instruments, the craft's Vis imager, was mostly built in the UK. "We thought what would be the biggest, most fundamentally important project we could do?" Taylor said. "The answer was Euclid, which has now been designed, built and is ready for launch." Euclid was intended to be launched last year on a Russian Soyuz rocket. However, after the invasion of Ukraine, the European Space Agency ended its cooperation with the Russian space agency, Roscosmos, and instead signed a deal to use a Falcon 9 rocket from Elon Musk's SpaceX company.

Communications

Indonesia, SpaceX Launch Satellite To Boost Internet Connectivity (reuters.com)

Indonesia and Elon Musk's rocket company SpaceX on Monday launched the country's largest telecommunication satellite from the United States, in a $540 million project intended to link up remote corners of the archipelago to the internet. From a report: Roughly two-thirds of Indonesia's population of 280 million already use the internet, but connectivity is limited in far-flung, underdeveloped eastern islands of the Southeast Asian country. "Satellite technology will accelerate internet access to villages in areas that cannot be reached by fiber optics in the next 10 years," Mahfud MD, senior Indonesian minister, said in a statement ahead of the launch. The 4.5-tonne Satellite of the Republic of Indonesia (SATRIA-1) was built by Thales Alenia Space and deployed into orbit from Florida by SpaceX's Falcon 9 rocket, which then returned to an offshore site in a precision landing.

Google

Google is Building a 153-Acre Neighborhood By Its Headquarters (sfgate.com)

In the heart of Silicon Valley, the city of Mountain View, California "just approved its biggest development ever," reports SFGate, "and it's for exactly the company you'd expect." Google got the go-ahead to build a 153-acre mixed-use neighborhood just south of its headquarters in north Mountain View on June 13, with unanimous city council approval.

Plans for the 30-year project, which will supplant the Google offices and parking lots currently in the area, include over 3 million square feet of office space and 7,000 residential units... Originally, the developers planned to dedicate 20% of the new housing to affordable units, but the approved plan sets aside only 15% for lower- and middle-income housing. Google lowered the target to make the project viable in an uncertain economic climate, a spokesperson told SFGATE. This past January, the firm laid off 12,000 workers.

The new development sounds an awful lot like the "company towns" of 1900-era American settlement — firms ran all the stores and housing for their workers — but a Google spokesperson said the new project's restaurants, housing and services would serve the broader Mountain View community. Along with the housing and Google office space, the plans include 26 acres of public parks and open space, up to 288,990 square feet of ground-floor commercial space, land for a school, new streets and a private utility system. The developers have 30 years to complete the project, as long as Google and Lendlease hit permit benchmarks and complete other terms within the first 15.

Space

How The JWST Could Detect Signs of Life on Exoplanets (universetoday.com)

Universe Today reports: The best hope for finding life on another world isn't listening for coded messages or traveling to distant stars, it's detecting the chemical signs of life in exoplanet atmospheres. This long hoped-for achievement is often thought to be beyond our current observatories, but a new study argues that the James Webb Space Telescope (JWST) could pull it off.

Most of the exoplanets we've discovered so far have been found by the transit method. This is where a planet passes in front of its star from our point of view. Even though we can't observe the planet directly, we can see the star's brightness dip by a fraction of a percent. As we watch stars over time, we can find a regular pattern of brightness dips, indicating the presence of a planet. The star dips in brightness because the planet blocks some of the starlight. But if the planet also has an atmosphere, there is a small amount of light that will pass through the atmosphere before reaching us. Depending on the chemical composition of the atmosphere, certain wavelengths will be absorbed, forming absorption spectra within the spectra of the starlight.
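
The dip-hunting described above is easy to sketch numerically. The toy example below (not the pipeline any real survey uses) injects a hypothetical 0.5% transit into a simulated light curve and recovers the planet's period by phase-folding; every number in it is invented for illustration:

```python
# Toy transit-method demo: a star's brightness dips by a fraction of a
# percent each time a planet crosses its face. Period, depth, and
# duration below are made-up values for the sketch.
import numpy as np

rng = np.random.default_rng(0)

period = 10.0    # days between transits (hypothetical)
depth = 0.005    # 0.5% brightness dip while the planet transits
duration = 0.3   # days each transit lasts

t = np.arange(0.0, 100.0, 0.01)          # 100 days of observations
flux = np.ones_like(t)
in_transit = (t % period) < duration     # planet in front of the star
flux[in_transit] -= depth
flux += rng.normal(0.0, 0.001, t.size)   # instrument noise

# Recover the period by folding the light curve at trial periods and
# picking the one whose folded curve shows the deepest coherent dip.
trial_periods = np.arange(5.0, 15.0, 0.05)

def dip_strength(p):
    phase = t % p
    # Mean flux inside the candidate transit window: lowest when the
    # trial period lines up with the true one.
    return flux[phase < duration].mean()

best = trial_periods[np.argmin([dip_strength(p) for p in trial_periods])]
print(f"recovered period: {best:.2f} days")  # close to the injected 10.0
```

Real surveys use more careful statistics (box least squares and the like), but the folding idea is the same: only the correct trial period stacks every dip on top of itself.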

We have long been able to identify atoms and molecules by their absorption and emission spectra, so in principle, we can determine a planet's atmospheric composition with the transit method... We have done this with a few exoplanets, such as detecting the presence of water and organic compounds, but these were done for large gas planets with thick atmospheres. We haven't been able to do this with rocky Earth-like worlds. Our telescopes just aren't sensitive enough for that.

But this new study shows that the JWST could detect certain chemical biosignatures depending on their abundance in the atmosphere.

Long-time Slashdot reader Baron_Yam writes that "The signature I like to imagine detecting is actually industrial pollution. Chemicals that aren't created by any known geological process and indicate not just life, but life smart enough to have advanced technology (but stupid enough to pollute their own air supply)."

Space

Researchers Argue Earth Formed Much Faster Than Believed, Suggest More Planets Could Have Water (msn.com)

An anonymous reader quotes this report from the Washington Post: In a new study released in Nature this week, researchers state that Earth formed within just 3 million years. That's notably faster than previous estimates placing the timeline up to 100 million years.... "We can also predict that if other planets formed ... by the same mechanism, then the ingredients required for life such as water, should be present on other planets and other systems, so there's a greater chance that we have water worlds elsewhere in the galaxy," said Isaac Onyett, lead author of the study and Ph.D. candidate at the University of Copenhagen.

The authors assert that this rapid genesis occurred through a process called pebble accretion. The general idea, according to co-author and cosmochemist Martin Bizzarro, is that planets are born in a disk of dust and gas. When they reach a certain size, they rapidly attract those pebbles like a vacuum cleaner. Some of those pebbles are icy and could have provided Earth with a water supply, in what is thought of as "pebble snow." This would have led to an early version of our planet, known as proto-Earth, that is approximately half the size of our present-day planet. (Our current rendition of Earth likely formed after a larger impact about 100 million years later, which also led to the formation of our moon....)

The team determined the time scale of Earth's formation by looking at silicon isotopes from more than 60 meteorites and planetary bodies in the vicinity of Earth, which represent the rubble leftover after planet formation... By analyzing the silicon compositions in samples of different ages, Onyett said they can piece together a time sequence of what was happening in the disk of dust before Earth formed. They found that, as the samples increased in age, the composition of the asteroids changed toward the composition of the cosmic dust that was being accumulated by Earth. "That's very strong evidence that this dust was also being swept up as it was drifting inwards towards the Sun," said Onyett. "It would have been swept up by Earth as it was growing by accretion."

Birger Schmitz, an astrogeologist at Lund University who was not involved in the research, said these results are "very compelling" and could shift how we think about our planet's formation... Most importantly, he said the results show there is nothing special about our water-carrying planet. "It is just a very ordinary planet in our galaxy. This is important in our attempts to understand how common higher forms of life are in the universe."

While scientists agree pebble accretion does explain the formation of gas-giant planets like Jupiter and Saturn, some still argue that rocky planets like Earth were instead formed through larger and larger asteroid collisions...

AI

A New Approach to Computation Reimagines Artificial Intelligence: Hyperdimensional Computing (quantamagazine.org)

Quanta magazine thinks there's a better alternative to the artificial neural networks (or ANNs) powering AI systems. (Alternate URL) For one, ANNs are "super power-hungry," said Cornelia Fermüller, a computer scientist at the University of Maryland. "And the other issue is [their] lack of transparency." Such systems are so complicated that no one truly understands what they're doing, or why they work so well. This, in turn, makes it almost impossible to get them to reason by analogy, which is what humans do — using symbols for objects, ideas and the relationships between them....

Bruno Olshausen, a neuroscientist at the University of California, Berkeley, and others argue that information in the brain is represented by the activity of numerous neurons... This is the starting point for a radically different approach to computation known as hyperdimensional computing. The key is that each piece of information, such as the notion of a car, or its make, model or color, or all of it together, is represented as a single entity: a hyperdimensional vector. A vector is simply an ordered array of numbers. A 3D vector, for example, comprises three numbers: the x, y and z coordinates of a point in 3D space. A hyperdimensional vector, or hypervector, could be an array of 10,000 numbers, say, representing a point in 10,000-dimensional space. These mathematical objects and the algebra to manipulate them are flexible and powerful enough to take modern computing beyond some of its current limitations and foster a new approach to artificial intelligence...
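
A minimal sketch of that algebra, assuming the common bipolar (+1/-1) encoding with bind-by-multiplication and bundle-by-addition (the article does not pin down a specific scheme); the role and filler names here are illustrative:

```python
# Hypervector sketch: 10,000-dimensional random bipolar vectors.
# "Binding" (elementwise multiply) associates a role with a filler;
# "bundling" (addition) superposes several bound pairs into one record.
import numpy as np

rng = np.random.default_rng(42)
D = 10_000

def hv():
    # A fresh random hypervector; independent ones are nearly orthogonal.
    return rng.choice([-1, 1], size=D)

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Roles and fillers for the "car" example from the text.
MAKE, COLOR = hv(), hv()
volvo, red, blue = hv(), hv(), hv()

# One hypervector representing the whole record {make: volvo, color: red}.
record = MAKE * volvo + COLOR * red

# Unbinding: multiplying by COLOR (its own inverse for +/-1 vectors)
# recovers a noisy copy of the stored filler, found by nearest neighbor.
probe = record * COLOR
print(cos(probe, red))   # high: red is the stored color
print(cos(probe, blue))  # near zero: blue was never stored
```

With D = 10,000 the signal term dominates: the correct filler scores a cosine near 0.7, while unrelated vectors score near zero, which is why nearest-neighbor lookup works.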

Hyperdimensional computing tolerates errors better, because even if a hypervector suffers significant numbers of random bit flips, it is still close to the original vector. This means that any reasoning using these vectors is not meaningfully degraded by such errors. The team of Xun Jiao, a computer scientist at Villanova University, has shown that these systems are at least 10 times more tolerant of hardware faults than traditional ANNs, which themselves are orders of magnitude more resilient than traditional computing architectures...
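
That robustness claim is easy to check numerically. The sketch below, assuming a bipolar (+1/-1) encoding, flips a sizable fraction of a hypervector's coordinates (the 20% rate is an arbitrary choice) and confirms it stays far closer to the original than to any unrelated vector:

```python
# Error-tolerance demo: corrupt 20% of a 10,000-dimensional bipolar
# hypervector's coordinates and compare similarities.
import numpy as np

rng = np.random.default_rng(7)
D = 10_000
x = rng.choice([-1, 1], size=D)

corrupted = x.copy()
flips = rng.random(D) < 0.2          # flip ~20% of coordinates
corrupted[flips] *= -1

other = rng.choice([-1, 1], size=D)  # an unrelated hypervector

sim_self = (corrupted @ x) / D       # about 1 - 2 * flip_rate = 0.6
sim_other = (corrupted @ other) / D  # about 0
print(sim_self, sim_other)
```

Even with a fifth of the coordinates corrupted, similarity to the original stays near 0.6 while similarity to an unrelated vector stays near zero, so a nearest-neighbor lookup still lands on the right vector.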

All of these benefits over traditional computing suggest that hyperdimensional computing is well suited for a new generation of extremely sturdy, low-power hardware. It's also compatible with "in-memory computing systems," which perform the computing on the same hardware that stores data (unlike existing von Neumann computers that inefficiently shuttle data between memory and the central processing unit). Some of these new devices can be analog, operating at very low voltages, making them energy-efficient but also prone to random noise.

Thanks to Slashdot reader ZipNada for sharing the article.

Medicine

A Startup Tries Making Medicine in Space (cnn.com)

"California startup Varda Space Industries launched its first test mission on June 12," reports CNN, "successfully sending a 200-pound (90-kilogram) capsule designed to carry drug research into Earth's orbit.

"The experiment, conducted in microgravity by simple onboard machines, aims to test whether it would be possible to manufacture pharmaceuticals in space remotely." Research has already established that protein crystals grown in a weightless environment can result in more perfect structures compared with those grown on Earth. These space-formed crystals could potentially then be used to create better-performing drugs that the human body can more easily absorb.
"Its research, company officials hope, could lead to better, more effective drugs — and hefty profits," CNN reported earlier this week: "It's not as sexy a human-interest story as tourism when it comes to commercialization of the cosmos," said Will Bruey, Varda's CEO and cofounder. "But the bet that we're making at Varda is that manufacturing is actually the next big industry that gets commercialized." Varda launched its first test mission Monday aboard a SpaceX rocket, which took off from Vandenberg Space Force Base in California just after 2:30 pm PT. The company then confirmed in a tweet that its satellite successfully separated from the rocket...

If successful, Varda hopes to scale its business rapidly, sending regular flights of satellites into orbit stuffed with experiments on behalf of pharmaceutical companies. Eventually, the firm hopes that research will yield a golden ticket drug, one that proves to be better when manufactured in space and can return royalties to Varda for years to come... Founded less than three years ago, Varda has gone from an idea to a company with more than $100 million in seed funding and grants, a 68,000-square-foot factory, and a satellite in space. Its workforce has grown to nearly 100 employees...

One day, the company hopes Varda flights will be so common that its capsules will blaze across the night sky every evening, like shooting stars to those on the ground who catch a glimpse. From there, Varda could even look to develop a research platform on a private space station, where pharma researchers could travel themselves.

Space

Saturn's Icy Moon Enceladus Harbors Essential Elements For Life (reuters.com)

Researchers have discovered high concentrations of phosphorus in ice crystals emitted from Saturn's moon Enceladus, enhancing its potential to support life. The findings, based on data from NASA's Cassini spacecraft, suggest that Enceladus may possess the necessary elements for life. Reuters reports: The discovery was based on data collected by NASA's Cassini spacecraft, the first to orbit Saturn, during its 13-year landmark exploration of the gaseous giant planet, its rings and its moons from 2004 to 2017. The same team previously confirmed that Enceladus' ice grains contain a rich assortment of minerals and complex organic compounds, including the ingredients for amino acids, associated with life as scientists know it. But phosphorus, the least abundant of six chemical elements considered necessary to all living things -- the others are carbon, oxygen, hydrogen, nitrogen and sulphur -- was still missing from the equation until now.

"It's the first time this essential element has been discovered in an ocean beyond Earth," the study's lead author, Frank Postberg, a planetary scientist at the Free University in Berlin, said in a JPL press release. [...] One notable aspect of the latest Enceladus discovery was geochemical modeling by the study's co-authors in Europe and Japan showing that phosphorus exists in concentrations at least 100 times that of Earth's oceans, bound water-soluble forms of phosphate compounds. "This key ingredient could be abundant enough to potentially support life in Enceladus' ocean," said co-investigator Christopher Glein, a planetary scientist at Southwest Research Institute in San Antonio, Texas. "This is a stunning discovery for astrobiology." "Whether life could have originated in Enceladus' ocean remains an open question," Glein said.

Youtube

Twitch, YouTube Influencers Are Becoming Video Game Publishers (bloomberg.com)

Influencers in the video-game industry are evolving from playing games to making them. From a report: Over the weekend, One True King, a media company focused on gaming content, launched Mad Mushroom, a new publishing division. "We have a unique competitive advantage in this space," said OTK co-founder Asmongold, a top streamer on Twitch, Amazon's live-streaming platform. "We can give games the push they need to actually go out to market, get eyes on the game and give [developers] insight." Moving forward, OTK's stable of gaming influencers will collaborate with lead adviser Mike Silbowitz, a gaming industry veteran who has previously worked at Square Enix, to publish, distribute, test and market games.

Currently, publishers pay top influencers tens of thousands of dollars to demo new games in front of their sizable audiences of live viewers on social media platforms, particularly Twitch and Google's YouTube. According to company executives, by reducing such marketing and user-acquisition costs, the organization can take a reduced cut of sales, say, 30% rather than the regular 40% or 50%, potentially benefiting the makers of independent games. "Twitch streamers have a large tool that is effectively a non-cost, which is their time and their audience," Asmongold said.
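
The arithmetic behind that cut comparison is simple; the sketch below uses a hypothetical $1,000,000 in sales (the dollar figure is invented, only the percentages come from the article) to show what each split leaves the developer:

```python
# Developer's share of hypothetical sales under the traditional 40-50%
# publisher cut versus the 30% figure quoted for this model.
sales = 1_000_000
for publisher_cut in (0.50, 0.40, 0.30):
    developer_take = sales * (1 - publisher_cut)
    print(f"{publisher_cut:.0%} cut -> developer keeps ${developer_take:,.0f}")
```

On these assumed numbers, the lower cut leaves the developer an extra $100,000 to $200,000 per million in sales compared with the traditional splits.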

Influencers are increasingly diversifying their income streams beyond social media networks, which can be culturally and financially volatile. Popular gamers have said they anticipate that selling products directly to their audience will eventually form a larger fraction of their revenue. Top streamers, particularly those who have carved out a niche within a specific genre, are looking to publish and advise on both top tier and indie games that might appeal to the specific tastes of their fans.

Space

Owen Gingerich, Astronomer Who Saw God in the Cosmos, Dies at 93 (nytimes.com)

Owen Gingerich, a renowned astronomer and historian of science, has passed away at the age of 93. Gingerich dedicated years to tracking down 600 copies of Nicolaus Copernicus's influential book "De Revolutionibus Orbium Coelestium Libri Sex" and was known for his passion for astronomy, often dressing up as a 16th-century scholar for lectures. He believed in the compatibility of religion and science and explored this theme in his books "God's Universe" and "God's Planet." The New York Times reports: Professor Gingerich, who lived in Cambridge, Mass., and taught at Harvard for many years, was a lively lecturer and writer. During his decades of teaching astronomy and the history of science, he would sometimes dress as a 16th-century Latin-speaking scholar for his classroom presentations, or convey a point of physics with a memorable demonstration; for instance, The Boston Globe related in 2004, he "routinely shot himself out of the room on the power of a fire extinguisher to prove one of Newton's laws." He was nothing if not enthusiastic about the sciences, especially astronomy. One year at Harvard, when his signature course, "The Astronomical Perspective," wasn't filling up as fast as he would have liked, he hired a plane to fly a banner over the campus that read: "Sci A-17. M, W, F. Try it!"

Professor Gingerich's doggedness was on full display in his long pursuit of copies of Copernicus's "De Revolutionibus Orbium Coelestium Libri Sex" ("Six Books on the Revolutions of the Heavenly Spheres"), first published in 1543, the year Copernicus died. That book laid out the thesis that Earth revolved around the sun, rather than the other way around, a profound challenge to scientific knowledge and religious belief in that era. The writer Arthur Koestler had contended in 1959 that the Copernicus book was not read in its time, and Professor Gingerich set out to determine whether that was true. In 1970 he happened on a copy of "De Revolutionibus" that was heavily annotated in the library of the Royal Observatory in Edinburgh, suggesting that at least one person had read it closely. A quest was born. Thirty years and hundreds of thousands of miles later, Professor Gingerich had examined some 600 Renaissance-era copies of "De Revolutionibus" all over the world and had developed a detailed picture not only of how thoroughly the work was read in its time, but also of how word of its theories spread and evolved. He documented all this in "The Book Nobody Read: Chasing the Revolutions of Nicolaus Copernicus" (2004). John Noble Wilford, reviewing it in The New York Times, called "The Book Nobody Read" "a fascinating story of a scholar as sleuth."

Professor Gingerich was raised a Mennonite and was a student at Goshen College, a Mennonite institution in Indiana, studying chemistry but thinking of astronomy, when, he later recalled, a professor there gave him pivotal advice: "If you feel a calling to pursue astronomy, you should go for it. We can't let the atheists take over any field." He took the counsel, and throughout his career he often wrote or spoke about his belief that religion and science need not be at odds. He explored that theme in the books "God's Universe" (2006) and "God's Planet" (2014). He was not a biblical literalist; he had no use for those who ignored science and proclaimed the Bible's creation story historical fact. Yet, as he put it in "God's Universe," he was "personally persuaded that a superintelligent Creator exists beyond and within the cosmos." [...] Professor Gingerich, who was senior astronomer emeritus at the Smithsonian Astrophysical Observatory, wrote countless articles over his career in addition to his books. In one for Science and Technology News in 2005, he talked about the divide between theories of atheistic evolution and theistic evolution. "Frankly it lies beyond science to prove the matter one way or the other," he wrote. "Science will not collapse if some practitioners are convinced that occasionally there has been creative input in the long chain of being."

In 2006, Gingerich was mentioned in a Slashdot story about geologists reacting to the new definition of "Pluton." He was quoted as saying that he was only peripherally aware of the definition, and because it didn't show up in MS Word's spell check, he didn't think it was that important.

"Gingerich lead a committee of the International Astronomical Union charged with recommending whether Pluto should remain a planet," notes the New York Times. "His panel recommended that it should, but the full membership rejected that idea and instead made Pluto a 'dwarf planet.' That decision left Professor Gingerich somehwat dismayed."

Patents

US Patent Office Proposes Rule To Make It Much Harder To Kill Bad Patents (techdirt.com)

An anonymous reader quotes a report from Techdirt: So, this is bad. Over the last few years, we've written plenty about the so-called "inter partes review" or "IPR" that came into being about a decade ago as part of the "America Invents Act," which was the first major change to the patent system in decades. For much of the first decade of the 2000s, patent trolls were running wild and creating a massive tax on innovation. There were so many stories of people (mostly lawyers) getting vague and broad patents that they never had any intention of commercializing, then waiting for someone to come along and build something actually useful and innovative... and then shaking them down with the threat of patent litigation. The IPR process, while not perfect, was at least an important tool in pushing back on some of the worst of the worst patents. In its most basic form, the IPR process allows nearly anyone to challenge a bad patent and have the special Patent Trial and Appeal Board (PTAB) review the patent to determine if it should have been granted in the first place. Given that a bad patent can completely stifle innovation for decades, this seems like the very least that the Patent Office should offer to try to get rid of innovation-killing bad patents.

However, patent trolls absolutely loathe the IPR process for fairly obvious reasons. It kills their terrible patents. The entire IPR process has been challenged over and over again and (thankfully) the Supreme Court said that it's perfectly fine for the Patent Office to review granted patents to see if they made a mistake. But, of course, that never stops the patent trolls. They've complained to Congress. And, now, it seems that the Patent Office itself is trying to help them out. Recently, the USPTO announced a possible change to the IPR process that would basically lead to limiting who can actually challenge bad patents, and which patents could be challenged.

The wording of the proposed changes seems to be written in a manner to be as confusing as possible. But there are a few different elements to the proposal. One part would limit who can bring challenges to patents under the IPR system, utilizing the power of the director to do a "discretionary denial." For example, it would say that "certain for-profit entities" are not allowed to bring challenges. Why? That's not clear. [...] But the more worrisome change is this one: "Recognizing the important role the USPTO plays in encouraging and protecting innovation by individual inventors, startups, and under-resourced innovators who are working to bring their ideas to market, the Office is considering limiting the impact of AIA post-grant proceedings on such entities by denying institution when certain conditions are met." Basically, if a patent holder is designated as an "individual inventor, startup" or "under-resourced innovator" then their patents are protected from the IPR process. But, as anyone studying this space well knows, patent trolls often present themselves as all three of those things (even though it's quite frequently not at all true). [...] And, again, none of this should matter. A bad patent is a bad patent. Why should the USPTO create different rules that protect bad patents? If the patent is legit, it will survive the IPR process.
The Electronic Frontier Foundation issued a response to the proposed changes: "The U.S. Patent Office has proposed new rules about who can challenge wrongly granted patents. If the rules become official, they will offer new protections to patent trolls. Challenging patents will become far more onerous, and impossible for some. The new rules could stop organizations like EFF, which used this process to fight the Personal Audio 'podcasting patent,' from filing patent challenges altogether."

The digital rights group added: "If these rules were in force, it's not clear that EFF would have been able to protect the podcasting community by fighting, and ultimately winning, a patent challenge against Personal Audio LLC. Personal Audio claimed to be an inventor-owned company that was ready to charge patent royalties against podcasters large and small. EFF crowd-funded a patent challenge and took out the Personal Audio patent after a 5-year legal battle (that included a full IPR process and multiple appeals)."

Databases

Will Submerging Computers Make Data Centers More Climate Friendly? (oregonlive.com)

20 miles west of Portland, engineers at an Intel lab are dunking expensive racks of servers "in a clear bath" made of motor oil-like petrochemicals, reports the Oregonian, where the servers "give off a greenish glow as they silently labor away on ordinary computing tasks." Intel's submerged computers operate just as they would on a dry server rack because they're not bathing in water, even though it looks just like it. They're soaking in a synthetic oil that doesn't conduct electricity. So the computers don't short out.

They thrive, in fact, because the fluid absorbs the heat from the hardworking computers much better than air does. It's the same reason a hot pan cools off a lot more quickly if you soak it in water than if you leave it on the stove.

As data centers grow increasingly powerful, the computers are generating so much heat that cooling them uses exorbitant amounts of energy. The cooling systems can use as much electricity as the computers themselves. So Intel and other big tech companies are designing liquid cooling systems that could use far less electricity, hoping to lower data centers' energy costs by as much as a third — and reducing the facilities' climate impact. It's a wholesale change in thinking for data centers, which already account for 2% of all the electricity consumption in the U.S... Skeptics caution that it may be difficult or prohibitively expensive to overhaul existing data centers to adapt to liquid cooling. Advocates of the shift, including Intel, say a transition is imperative to accommodate data centers' growing thirst for power. "It's really starting to come to a head as we're hitting the energy crisis and the need for climate action globally," said Jen Huffstetler, Intel's chief product sustainability officer...
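
A rough sketch of the arithmetic behind that "as much as a third" figure: if cooling draws about as much power as the computing load itself, the facility's total is twice the IT load, and cutting the cooling overhead to roughly 0.3x the IT load (an assumed value, not from the article) trims the total by about a third:

```python
# Back-of-the-envelope data-center energy comparison. The IT load and
# the liquid-cooling overhead are assumed illustrative numbers.
it_load_kw = 1000.0                   # hypothetical IT load
air_cooling_kw = 1.0 * it_load_kw     # "as much electricity as the computers"
liquid_cooling_kw = 0.3 * it_load_kw  # assumed liquid-cooling overhead

before = it_load_kw + air_cooling_kw
after = it_load_kw + liquid_cooling_kw
savings = 1 - after / before
print(f"total energy reduced by {savings:.0%}")
```

On these assumptions the total drops by about 35%, in line with the "as much as a third" estimate; the actual saving depends entirely on how much overhead the liquid system really carries.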

Cooler computers can be packed more tightly together in data centers, since they don't need space for airflow. Computer manufacturers can pack chips together more tightly on the motherboard, enabling more computing power in the same space. And liquid cooling could significantly reduce data centers' environmental and economic costs. Conventional data centers' evaporative cooling systems require tremendous volumes of water and huge amounts of electricity...

Many other tech companies are backing immersion cooling, too. Google, Facebook and Microsoft are all helping fund immersion cooling research at Oregon State... [T]he timing may finally be right for data centers operators to make the shift away from air cooling to something far more efficient. Intel's Huffstetler said she expects to see liquid cooling become widespread in the next three to five years.

The article notes other challenges:
  • Liquid adds more weight than some buildings' upper floors can support.
  • Some metals degrade faster in liquid than they do in air.
  • And the engineers had to modify the servers by removing their fans — "because they serve no purpose while immersed."

Space

'He's About to Graduate College and Join SpaceX as an Engineer. He's 14.' (yahoo.com)

"Kairan Quazi will probably need someone to drive him to work at SpaceX," writes the Los Angeles Times — because "He's only 14." The teen is scheduled to graduate this month from the Santa Clara University School of Engineering before starting a job as a software engineer at the satellite communications and spacecraft manufacturer... The soft-spoken teen said working with Starlink — the satellite internet team at SpaceX — will allow him to be part of something bigger than himself. That is no small feat for someone who has accomplished so much at such a young age...

The youngster jumped from third grade to a community college, with a workload that he felt made sense. "I felt like I was learning at the level that I was meant to learn," said Kairan, who later transferred to Santa Clara University... Kairan's family told BrainGain Magazine that when he was 9, IQ tests showed that his intelligence was in the 99.9th percentile of the general population. Asked if he's a genius, he recalled his parents telling him, "Genius is an action — it requires solving big problems that have a human impact." Once accepted to the engineering school at Santa Clara University as a transfer student, Kairan felt that he had found his freedom to pursue a career path that allowed him to solve those big problems.

While in college, Kairan and his mother made a list of places where he could apply for an internship. Only one company responded. Lama Nachman, director of the Intelligent Systems Research Lab at Intel, took a meeting with 10-year-old Kairan, who expected it to be brief and thought she would give him the customary "try again in a few years," he said. She accepted him. "In a sea of so many 'no's' by Silicon Valley's most vaunted companies, that ONE leader saying yes ... one door opening ... changed everything," Kairan wrote on his LinkedIn page...

Asked what he plans to wear on his first day, Kairan joked in an email that he plans "to show up in head to toe SpaceX merch. I'll be a walking commercial! Joking aside, I'll probably wear jeans and a t-shirt so I can be taken seriously as an engineer."

Government

Does the US Government Want You to Believe in UFOs? (msn.com) 293

A New York Times columnist considers alternate reasons for the upcoming House hearings with a whistleblower former intelligence official, David Grusch, who claims the US government possesses "intact and partially intact" alien vehicles: This whistle-blower's mere existence is evidence of a fascinating shift in public U.F.O. discourse. There may not be alien spacecraft, but there is clearly now a faction within the national security complex that wants Americans to think there might be alien spacecraft, to give these stories credence rather than dismissal.

The evidence for this shift includes the military's newfound willingness to disclose weird atmospheric encounters. It includes the establishment of the task force that Grusch was assigned to... It also includes other examples of credentialed figures, like the Stanford pathology professor Garry Nolan, who claim they're being handed evidence of extraterrestrial contact. And it includes the range of strange stories being fed to writers willing to operate in the weird-science zone...

I have no definite theory of why this push is happening. Maybe it's because there really is something Out There and we're being prepared for the big reveal... [M]aybe it's a cynical effort to use unexplained phenomena as an excuse to goose military funding. Or maybe it's a psy-op to discredit critics of the national security state...

Intel

Intel Demos Its New 'Backside' Power-Delivery Chip Tech (ieee.org) 28

Next year Intel introduces a new transistor — RibbonFET — and a new way of powering it called "PowerVia."

This so-called "backside power" approach "aims to separate power and I/O wiring, shifting power lines to the back of the wafer," reports Tom's Hardware, which "eliminates any possible interference between the data and power wires and increases logic transistor density." IEEE Spectrum explains this approach "leaves more room for the data interconnects above the silicon," while "the power interconnects can be made larger and therefore less resistive."

And Intel has already done some successful powering tests using it on Intel's current transistors: The resulting cores saw more than a 6 percent frequency boost as well as more compact designs and 30 percent less power loss. Just as important, the tests proved that including backside power doesn't make the chips more costly, less reliable, or more difficult to test for defects. Intel is presenting the details of these tests in Tokyo next week at the IEEE Symposium on VLSI Technology and Circuits...

[C]ores can be made more compact, decreasing the length of interconnects between logic cells, which speeds things up. When the standard logic cells that make up the processor core are laid out on the chip, interconnect congestion keeps them from packing together perfectly, leaving loads of blank space between the cells. With less congestion among the data interconnects, the cells fit together more tightly, with some portions up to 95 percent filled... What's more, the lack of congestion allowed some of the smallest interconnects to spread out a bit, reducing parasitic capacitance that hinders performance...
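The "less resistive" claim above follows from the standard wire-resistance formula R = ρL/(w·t): widening and thickening a power rail directly lowers its resistance. A minimal sketch with purely illustrative dimensions (not Intel's actual geometry):

```python
def wire_resistance(resistivity, length, width, thickness):
    """R = rho * L / (w * t): resistance of a rectangular interconnect."""
    return resistivity * length / (width * thickness)

RHO_CU = 1.7e-8  # copper resistivity, ohm-meters

# Illustrative dimensions only: suppose a backside power rail can be made
# roughly 4x wider and 4x thicker than a cramped frontside signal-layer rail.
front = wire_resistance(RHO_CU, 1e-4, 2e-8, 4e-8)   # frontside rail
back = wire_resistance(RHO_CU, 1e-4, 8e-8, 1.6e-7)  # roomier backside rail

print(back / front)  # 0.0625 -> the backside rail is 16x less resistive
```

Lower rail resistance means less IR drop on the way to the transistors, which is where the reported 30 percent reduction in power loss comes from.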

With the process for PowerVia worked out, the only change Intel will have to make in order to complete its move from Intel 4 to the next node, called 20A, is to the transistor... Success would put Intel ahead of TSMC and Samsung, in offering both nanosheet transistors and backside power.

ISS

Adventure in Space: ISS Astronauts Install Fifth Roll-out Solar Blanket to Boost Power (cbsnews.com) 25

The International Space Station is equipped with four 39-foot (11.8-meter) blankets, reports CBS News. The first one was delivered in December of 2000 — and now it's time for some changes: Two astronauts ventured outside the International Space Station Friday and installed the fifth of six roll-out solar array blankets — iROSAs — needed to offset age-related degradation and micrometeoroid damage to the lab's original solar wings.

Floating in the Quest airlock, veteran Stephen Bowen, making his ninth spacewalk, and crewmate Woody Hoburg, making his first, switched their spacesuits to battery power at 9:25 a.m. EDT, officially kicking off the 264th spacewalk devoted to ISS assembly and maintenance and the seventh so far this year. NASA is in the process of upgrading the ISS's solar power system by adding six iROSAs to the lab's eight existing U.S. arrays. The first four roll-out blankets were installed during spacewalks in 2021 and 2022. Bowen and Hoburg installed the fifth during Friday's spacewalk and plan to deploy the sixth during another excursion next Thursday.

The two new iROSAs were delivered to the space station earlier this week in the unpressurized trunk section of a SpaceX cargo Dragon. The lab's robot arm pulled them out Wednesday and mounted them on the right side of the station's power truss just inboard of the starboard wings... As the station sailed 260 miles above the Great Lakes, the 63-foot-long solar array slowly unwound like a window shade to its full length. Well ahead of schedule by that point, the spacewalkers carried out a variety of get-ahead tasks to save time next week when they float back outside to install the second new iROSA.

They returned to the airlock and began re-pressurization procedures at 3:28 p.m., bringing the 6-hour three-minute spacewalk to a close. With nine spacewalks totaling 60 hours and 22 minutes under his belt, Bowen now ranks fifth on the list of the world's most experienced spacewalkers.

"Combined with the 95-kilowatt output of the original eight panels, the station's upgraded system will provide about 215 kilowatts of power."
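That total is easy to sanity-check: NASA rates each iROSA at roughly 20 kilowatts, so six new arrays on top of the legacy wings land at about 215 kilowatts. A back-of-the-envelope sketch:

```python
ORIGINAL_ARRAYS_KW = 95  # combined output of the eight legacy wings
IROSA_KW = 20            # NASA's approximate per-array rating for an iROSA
NUM_IROSAS = 6

upgraded_kw = ORIGINAL_ARRAYS_KW + NUM_IROSAS * IROSA_KW
print(upgraded_kw)  # 215
```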
Moon

NASA Researchers Think (Microbial) Life Could Survive on the Moon (space.com) 19

In less than two years, NASA plans to have astronauts walking on the moon again — the first time in over half a century. "And one potential surprise could be detecting life on the moon," reports Space.com: New research suggests that future visitors to the lunar south pole region should be on the lookout for evidence of life in super-cold permanently shadowed craters — organisms that could have made the trek from Earth. Microbial life could potentially survive in the harsh conditions near the lunar south pole, suggested Prabal Saxena, a planetary researcher at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "One of the most striking things our team has found is that, given recent research on the ranges in which certain microbial life can survive, there may be potentially habitable niches for such life in relatively protected areas on some airless bodies," Saxena told Space.com.

Indeed, the lunar south pole may possess the properties that can enable survival and potentially even episodic growth of certain microbial life, Saxena said. "We're currently working on understanding which specific organisms may be most suited for surviving in such regions and what areas of the lunar polar regions, including places of interest relevant to exploration, may be most amenable to supporting life," he said. In work presented at a recent science workshop on the potential Artemis 3 landing sites, Saxena and study members reported that the lunar south pole may contain substantial surface niches that could be potentially habitable for a number of microorganisms.

While it's possible organic molecules from Earth might have been hurled to the moon by a meteorite impact, there's a much more likely possibility. A NASA organic geochemist on the study views humans as "the most likely vector, given the extensive data that we have about our history of exploration..." Especially if humans start visiting these temperate radiation-protected sites...
AI

Is Self-Healing Code the Future of Software Development? (stackoverflow.blog) 99

We already have automated processes that detect bugs, test solutions, and generate documentation, notes a new post on Stack Overflow's blog. But beyond that, several developers "have written in the past on the idea of self-healing code. Head over to Stack Overflow's CI/CD Collective and you'll find numerous examples of technologists putting these ideas into practice."

Their blog post argues that self-healing code "is the future of software development." When code fails, it often gives an error message. If your software is any good, that error message will say exactly what was wrong and point you in the direction of a fix. Previous self-healing code programs are clever automations that reduce errors, allow for graceful fallbacks, and manage alerts. Maybe you want to add a little disk space or delete some files when you get a warning that utilization is at 90 percent. Or hey, have you tried turning it off and then back on again?
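The disk-utilization example above is the classic pre-AI form of self-healing: a watchdog that checks a metric and applies a canned remedy. A minimal sketch, where the 90 percent threshold and the `*.tmp` cleanup policy are just illustrative stand-ins for whatever a real system would do:

```python
import shutil
import tempfile
from pathlib import Path

THRESHOLD = 0.90  # act when disk utilization exceeds 90 percent

def disk_utilization(path="/"):
    """Return the fraction of disk space in use at `path`."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def heal(path="/", scratch_dir=None):
    """If utilization crosses the threshold, delete scratch files.
    (Deleting *.tmp files stands in for a real cleanup policy.)"""
    if disk_utilization(path) < THRESHOLD:
        return "ok"
    scratch = Path(scratch_dir or tempfile.gettempdir())
    for f in scratch.glob("*.tmp"):
        f.unlink(missing_ok=True)
    return "cleaned"
```

In practice a loop or cron job would call `heal()` periodically and escalate to an alert if utilization stays high after cleanup.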

Developers love automating solutions to their problems, and with the rise of generative AI, this concept is likely to be applied to the creation, maintenance, and improvement of code at an entirely new level... "People have talked about technical debt for a long time, and now we have a brand new credit card here that is going to allow us to accumulate technical debt in ways we were never able to do before," said Armando Solar-Lezama, a professor at the Massachusetts Institute of Technology's Computer Science & Artificial Intelligence Laboratory, in an interview with the Wall Street Journal. "I think there is a risk of accumulating lots of very shoddy code written by a machine," he said, adding that companies will have to rethink methodologies around how they can work in tandem with the new tools' capabilities to avoid that.

Despite the occasional "hallucination" of non-existent information, Stack Overflow's blog acknowledges that large language models improve when asked to review their responses, identify errors, or show their work.

And they point out the project manager in charge of generative models at Google "believes that some of the work of checking the code over for accuracy, security, and speed will eventually fall to AI." Google is already using this technology to help speed up the process of resolving code review comments. The authors of a recent paper on this approach write that, "As of today, code-change authors at Google address a substantial amount of reviewer comments by applying an ML-suggested edit. We expect that to reduce time spent on code reviews by hundreds of thousands of hours annually at Google scale. Unsolicited, very positive feedback highlights that the impact of ML-suggested code edits increases Googlers' productivity and allows them to focus on more creative and complex tasks...."

Recently, we've seen some intriguing experiments that apply this review capability to code you're trying to deploy. Say a code push triggers an alert on a build failure in your CI pipeline. A plugin triggers a GitHub Action that automatically sends the code to a sandbox where an AI can review the code and the error, then commit a fix. That new code is run through the pipeline again and, if it passes the tests, is deployed... Right now this work happens in the CI/CD pipeline, but [Calvin Hoenes, the plugin's creator] dreams of a world where these kinds of agents can help fix errors that arise from code that's already live in the world. "What's very fascinating is when you actually have in production code running and producing an error, could it heal itself on the fly?" asks Hoenes...
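The test-patch-retest cycle described above can be sketched as a small driver loop. This is only an illustration of the shape of such a pipeline, not the plugin's actual mechanics: `suggest_fix` is a hypothetical placeholder for the model call, and a real system would sandbox the patched code before committing it.

```python
import subprocess

MAX_ATTEMPTS = 3  # cap retries so a confused model cannot churn forever

def run_tests(cmd=("pytest", "-q")):
    """Run the test suite; return (passed, combined output)."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def suggest_fix(source, error_log):
    """Hypothetical placeholder: a real pipeline would send the failing
    source and error log to a model and receive a patched file back."""
    raise NotImplementedError("wire up a model endpoint here")

def self_heal(path, attempts=MAX_ATTEMPTS):
    """Test -> patch -> retest loop over the file at `path`."""
    for _ in range(attempts):
        passed, log = run_tests()
        if passed:
            return True
        with open(path) as f:
            source = f.read()
        patched = suggest_fix(source, log)
        with open(path, "w") as f:
            f.write(patched)
    return run_tests()[0]
```

The bounded retry count is the important design choice: without it, a model that keeps producing broken patches would loop indefinitely on every red build.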

For now, says Hoenes, we need humans in the loop. Will there come a time when computer programs are expected to autonomously heal themselves as they are crafted and grown? "I mean, if you have great test coverage, right, if you have a hundred percent test coverage, you have a very clean, clean codebase, I can see that happening. For the medium, foreseeable future, we probably better off with the humans in the loop."

Last month Stack Overflow itself tried an AI experiment that helped users craft a good title for their question.
Space

Parker Solar Probe Discovers Source of Solar Wind (cnn.com) 31

The New York Times defines the solar wind as "a million-miles-per-hour stream of electrons, protons and other charged particles rushing outward into the solar system."

Now CNN reports that the Parker Solar Probe "has uncovered the source of solar wind." As the probe came within about 13 million miles (20.9 million kilometers) of the sun, its instruments detected fine structures of the solar wind where it is generated near the photosphere, or the solar surface, and captured ephemeral details that disappear once the wind is blasted from the corona... A study detailing the solar findings was published Wednesday in the journal Nature...

There are two types of this wind. The faster solar wind streams from holes in the corona at the sun's poles at a peak speed of 497 miles per second (800 kilometers per second)... The spacecraft's data revealed that the coronal holes act like showerheads, where jets appear on the sun's surface in the form of bright spots, marking where the magnetic field passes in and out of the photosphere. As magnetic fields pass each other, moving in opposite directions within these funnels on the solar surface, they break and reconnect, which sends charged particles flying out of the sun.
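The two speed figures quoted above are the same number in different units, which is easy to verify:

```python
KM_PER_MILE = 1.609344

peak_km_s = 800                 # fast-wind peak speed, kilometers per second
peak_mi_s = peak_km_s / KM_PER_MILE
print(round(peak_mi_s))         # 497, matching the miles-per-second figure
```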

"The photosphere is covered by convection cells, like in a boiling pot of water, and the larger scale convection flow is called supergranulation," said lead study author Stuart D. Bale, a professor of physics at the University of California, Berkeley, in a statement. "Where these supergranulation cells meet and go downward, they drag the magnetic field in their path into this downward kind of funnel. The magnetic field becomes very intensified there because it's just jammed. It's kind of a scoop of magnetic field going down into a drain. And the spatial separation of those little drains, those funnels, is what we're seeing now with solar probe data."

Parker Solar Probe detected highly energetic particles traveling between 10 and 100 times faster than the solar wind, leading the researchers to believe that the fast solar wind is created by the reconnection of magnetic fields. "The big conclusion is that it's magnetic reconnection within these funnel structures that's providing the energy source of the fast solar wind," Bale said. "It doesn't just come from everywhere in a coronal hole, it's substructured within coronal holes to these supergranulation cells. It comes from these little bundles of magnetic energy that are associated with the convection flows."

Moon

China Wants To Launch a Moon-Orbiting Telescope Array As Soon As 2026 (space.com) 32

China is planning to deploy a constellation of satellites in orbit around the moon to create a radio telescope that would enable the study of radio waves longer than 33 feet, providing insights into the "Dark Ages" of the universe. Space.com reports: The array would consist of one "mother" satellite and eight mini "daughter" craft. The mother would process data and communicate with Earth, and the daughters would detect radio signals from the farthest reaches of the cosmos, Xuelei Chen, an astronomer at the China National Space Administration (CNSA), said at the Astronomy From the Moon conference held earlier this year in London. Putting such an array in orbit around the moon would be technically more feasible than building a telescope directly on the lunar surface, a venture that NASA and other space agencies are currently considering as one of the next big steps in astronomy.

"There are a number of advantages in doing this in orbit instead of on the surface because it's engineeringly much simpler," Chen said during the conference. "There is no need for landing and a deployment, and also because the lunar orbital period is two hours, we can use solar power, which is much simpler than doing it on the lunar surface, which, if you want to observe during the lunar night, then you have to provide the energy for almost 14 days." He added that this proposed "Discovering Sky at the Longest Wavelength," or Hongmeng Project, could orbit the moon as early as 2026.

A telescope on the moon, astronomers say, would allow them to finally see cosmic radiation in a part of the electromagnetic spectrum that is impossible to study from Earth's surface: radio waves longer than 33 feet (10 meters), or, in other words, those with frequencies below 30 megahertz (MHz). "If you are looking into the low-frequency part of the electromagnetic spectrum, you'll find that, due to strong absorption [by Earth's atmosphere], we know very little about [the region] below 30 megahertz," Chen said. "It's almost a blank part of the electromagnetic spectrum. So we want to open this last electromagnetic window of the universe."
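The correspondence between the 30 MHz cutoff and the "33 feet" wavelength follows directly from λ = c/f, as a quick sketch confirms:

```python
C = 299_792_458   # speed of light, m/s
FT_PER_M = 3.28084

def wavelength_m(freq_hz):
    """Wavelength from frequency: lambda = c / f."""
    return C / freq_hz

lam = wavelength_m(30e6)         # the 30 MHz cutoff discussed above
print(round(lam, 2))             # 9.99 meters
print(round(lam * FT_PER_M, 1))  # 32.8 feet -- the "33 feet" in the text
```

Anything below 30 MHz has a wavelength longer than about 10 meters, the band Earth's atmosphere absorbs and the lunar array is designed to open up.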
