Robotics

College Warns To 'Avoid All Robots' After Bomb Threat Involving Food Delivery Robots (nbcnews.com) 38

Oregon State University on Tuesday urged students and staff to "avoid all robots" after a bomb threat was reported in Starship food delivery robots. NBC News reports: The warning was issued at 12:20 p.m. local time and by 12:59 p.m., the potentially dangerous bots had been isolated at safe locations, the school said. The robots were being "investigated by" a technician, OSU said in a statement posted at 1:23 p.m. "Remain vigilant for suspicious activity," the school said. Finally, at around 1:45 p.m., the school issued an "all clear" alert. "Emergency is over," the message said. "You may now resume normal activities. Robot inspection continues in a safe location."

A representative for Starship, the company that produces the robots, could not be immediately reached for comment. The company calls itself a "global leader in autonomous delivery" with agreements at a host of universities across the United States.
Developing...
IT

Matter 1.2 is a Big Move For the Smart Home Standard (theverge.com) 64

Matter -- the IoT connectivity standard with ambitions to fix the smart home and make all of our gadgets talk to each other -- has hit version 1.2, adding support for nine new types of connected devices. From a report: Robot vacuums, refrigerators, washing machines, and dishwashers are coming to Matter, as are smoke and CO alarms, air quality sensors, air purifiers, room air conditioners, and fans. It's a crucial moment for the success of the industry-backed coalition that counts 675 companies among its members. This is where it moves from the relatively small categories of door locks and light bulbs to the real moneymakers: large appliances.

The Connectivity Standards Alliance (CSA), the organization behind Matter, released the Matter 1.2 specification this week, a year after launching Matter 1.0, following through on its promise to release two updates a year. Now, appliance manufacturers can add support for Matter to their devices, and ecosystems such as Apple Home, Amazon Alexa, Google Home, and Samsung SmartThings can start supporting the new device types. Yes, this means you should finally be able to control a robot vacuum in the Apple Home app -- not to mention your wine fridge, dishwasher, and washing machine.

The initial feature set for the new device types includes basic function controls (start / stop, change mode) and notifications -- such as the temperature of your fridge, the status of your laundry, or whether smoke is detected. Robot vacuum support is robust -- remote start and progress notifications, cleaning modes (dry vacuum, wet mopping), and alerts for brush status, error reporting, and charging status. But there's no mapping, so you'll still need to use your vacuum app if you want to tell the robot where to go.
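For readers curious what "basic function controls" amount to in practice, here is a minimal, hypothetical Python sketch of a device exposing start/stop, mode changes, and notifications. The class and method names are invented for illustration; this is not the actual Matter SDK or specification:

```python
# Hypothetical sketch of Matter-style "basic function controls" for a new
# appliance device type. All names here (Appliance, change_mode, etc.) are
# invented for illustration and do not come from the real Matter spec or SDK.
from dataclasses import dataclass, field

@dataclass
class Appliance:
    device_type: str                      # e.g. "robot-vacuum", "dishwasher"
    modes: tuple = ("idle",)              # supported operating modes
    mode: str = "idle"
    running: bool = False
    notifications: list = field(default_factory=list)

    def start(self):
        self.running = True
        self.notifications.append(f"{self.device_type}: started ({self.mode})")

    def stop(self):
        self.running = False
        self.notifications.append(f"{self.device_type}: stopped")

    def change_mode(self, mode: str):
        if mode not in self.modes:
            raise ValueError(f"unsupported mode: {mode}")
        self.mode = mode
        self.notifications.append(f"{self.device_type}: mode -> {mode}")

# A robot vacuum with the two cleaning modes mentioned above.
vacuum = Appliance("robot-vacuum", modes=("idle", "dry-vacuum", "wet-mopping"))
vacuum.change_mode("dry-vacuum")
vacuum.start()
vacuum.stop()
print(vacuum.notifications[-1])  # -> robot-vacuum: stopped
```

Note that mapping is deliberately absent from this sketch, just as it is from the 1.2 feature set: the controller sees only coarse state and notifications, not the robot's floor plan.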

Robotics

Amazon Tests Humanoid Robot in Warehouse Automation Push (bloomberg.com) 33

Amazon says it's testing two new technologies to increase automation in its warehouses, including a trial of a humanoid robot. From a report: The humanoid robot, called Digit, is bipedal and can squat, bend and grasp items using clasps that imitate hands, the company said in a blog post Wednesday. It's built by Agility Robotics and will initially be used to help employees consolidate totes that have been emptied of items. Amazon invested in Agility Robotics last year.

[...] In addition to Digit, Amazon is testing a technology called Sequoia, which will identify and sort inventory into containers for employees, who will then pick the items customers have ordered, the company said. Remaining products are then consolidated in bins by a robotic arm called Sparrow, which the company revealed last year. The system is in use at an Amazon warehouse in Houston, the company said in a statement.

AI

Freak Accident in San Francisco Traps Pedestrian Under Robotaxi (msn.com) 104

In downtown San Francisco two vehicles were stopped at a red light on Monday night, reports the Washington Post — a regular car and a Cruise robotaxi. Both vehicles advanced when the light turned green, according to witness accounts and video recorded by the Cruise vehicle's internal cameras and reviewed by The Post. As the cars moved forward, the pedestrian entered the traffic lanes in front of them, according to the video, and was struck by the regular car. The video shows the victim rolling onto that vehicle's windshield and then being flung into the path of the driverless car, which stopped once it collided with the woman. According to Cruise spokesperson Hannah Lindow, the autonomous vehicle "braked aggressively to minimize the impact" but was unable to stop before rolling over the woman and coming to a halt. Photos published by the San Francisco Chronicle show the woman's leg sticking out from underneath the car's left rear wheel.
"According to Cruise, police had directed the company to keep the vehicle stationary, apparently with the pedestrian stuck beneath it," reports the San Francisco Chronicle.

Also from the San Francisco Chronicle: Austin Tutone, a bicycle delivery person, saw the woman trapped underneath the Cruise car and tried to reassure her as they waited for first-responders. "I told her, 'The ambulance is coming' and that she'd be okay. She was just screaming." He shared a photo of the aftermath with The Chronicle that appears to show the car tire on the woman's leg. San Francisco firefighters arrived and used the jaws of life to lift the car off the woman. She was transported to San Francisco General Hospital with "multiple traumatic injuries," said SFFD Capt. Justin Schorr. The victim was in critical condition as of late Tuesday afternoon, according to the hospital.

It appears that once the Cruise car sensed something underneath its rear axle, it came to a halt and turned on its hazard lights, Schorr said. Firefighters obstructed the sensors of the driverless car to alert the Cruise control center. He said representatives from Cruise responded to firefighters and "immediately disabled the car remotely."
More from the San Francisco Chronicle: "When it comes to someone pinned beneath a vehicle, the most effective way to unpin them is to lift the vehicle," Sgt. Kathryn Winters, a spokesperson for the department, said in an interview. Were a driver to move a vehicle with a person lying there, "you run the risk of causing more injury." Once the person is freed, the car must stay in place as police gather evidence including "the location of the vehicle and/or vehicles before, during and after the collision," said Officer Eve Laokwansathitaya, another spokesperson.
The human driver who struck the pedestrian immediately fled the scene, and has not yet been identified.
Robotics

Japan Startup Develops 'Gundam'-Like Robot With $3 Million Price Tag (reuters.com) 36

A Tokyo startup has developed a 4.5-meter-tall, four-wheeled robot modeled after the "Mobile Suit Gundam" from the Japanese animation series. It has a price tag of $3 million. Reuters reports: Called ARCHAX after the avian dinosaur archaeopteryx, the robot has cockpit monitors that receive images from cameras hooked up to the exterior so that the pilot can maneuver the arms and hands with joysticks from inside its torso. The 3.5-ton robot, which will be unveiled at the Japan Mobility Show later this month, has two modes: the upright 'robot mode' and a 'vehicle mode' in which it can travel up to 10 km (6 miles) per hour.

"Japan is very good at animation, games, robots and automobiles so I thought it would be great if I could create a product that compressed all these elements into one," said Ryo Yoshida, the 25-year-old chief executive of Tsubame Industries. "I wanted to create something that says, 'This is Japan.'" Yoshida plans to build and sell five of the machines for the well-heeled robot fan, but hopes the robot could one day be used for disaster relief or in the space industry.

Robotics

Robot 'Monster Wolves' Try to Scare Off Japan's Bears (bbc.co.uk) 44

"Bear attacks in Japan have been rising at an alarming rate, so the city of Takikawa [about 570 miles from Tokyo] installed a robot wolf as a deterrent," reports the BBC. "The robot wolf was originally designed to keep wild animals from farmlands, but is now being used by local governments and managers of highways, golf courses, and pig farms." Digital Trends describes the "Monster Wolf" as "complete with glowing red eyes and protruding fangs." [T]he solar-powered Monster Wolf emits a menacing roar if it detects a nearby bear. It also has a set of flashing LED lights on its tail, and can move its head to appear more real... The robot's design is apparently based on a real wolf that roamed part of the Asian nation more than 100 years ago before it was hunted into extinction.

Japanese news outlet NHK reported earlier this month that bear attacks in the country are at their highest level since records began in 2007. The environment ministry said 53 cases of injuries as a result of such attacks were reported between April and July this year, with at least one person dying following an attack in Hokkaido in May.

Sci-Fi

Could 'The Creator' Change Hollywood Forever? (indiewire.com) 96

At the beginning of The Creator, a narrator describes AI-powered robots that are "more human than human." From the movie site Looper: It's in reference to the novel "Do Androids Dream of Electric Sheep?" by Philip K. Dick, which was adapted into the seminal sci-fi classic, "Blade Runner." The phrase is used as the slogan for the Tyrell Corporation, which designs the androids that take on lives of their own. The saying perfectly encapsulates the themes of "Blade Runner" and, by proxy, "The Creator." If a machine of sufficient intelligence is indistinguishable from humans, then shouldn't it be considered on equal footing with humanity?
The Huffington Post calls it "the pro-AI movie we don't need right now" — but they also praise it as "one of the most astonishing sci-fi theatrical experiences this year." Variety notes the film was co-written and directed by Gareth Edwards (director of the 2014 version of Godzilla and the Star Wars prequel Rogue One), working with Oscar-winning cinematographer Greig Fraser (Dune) after the two collaborated on Rogue One. But what's unique is the way they filmed it: adding visual effects "almost improvisationally afterward.

"Achieving this meant shooting sumptuous natural landscapes in far-flung locales like Thailand or Tibet and building futuristic temples digitally in post-production..."

IndieWire gushes that "This movie looks fucking incredible. To a degree that shames most blockbusters that cost three times its budget." They call it "a sci-fi epic that should change Hollywood forever." Once audiences see how "The Creator" was shot, they'll be begging Hollywood to close the book on blockbuster cinema's ugliest and least transportive era. And once executives see how much (or how little) "The Creator" was shot for, they'll be scrambling to make good on that request as fast as they possibly can.

Say goodbye to $300 million superhero movies that have been green-screened within an inch of their lives and need to gross the GDP of Grenada just to break even, and say hello — fingers crossed — to a new age of sensibly budgeted multiplex fare that looks worlds better than most of the stuff we've been subjected to over the last 20 years while simultaneously freeing studios to spend money on the smaller features that used to keep them afloat. Can you imagine...? How ironic that such fresh hope for the future of hand-crafted multiplex entertainment should come from a film so bullish and sanguine at the thought of humanity being replaced by A.I. [...]

The real reason why "The Creator" is set in Vietnam (and across large swaths of Eurasia) is so that it could be shot in Vietnam. And in Thailand. And in Cambodia, Nepal, Indonesia, and several other beautiful countries that are seldom used as backdrops for futuristic science-fiction stories like this one. This movie was born from the visual possibilities of interpolating "Star Wars"-like tech and "Blade Runner"-esque cyber-depression into primordially expressive landscapes. Greig Fraser and Oren Soffer's dusky and tactile cinematography soaks up every inch of what the Earth has to offer without any concession to motion capture suits or other CGI obstructions, which speaks to the truly revolutionary aspect of this production: Rather than edit the film around its special effects, Edwards reverse-engineered the special effects from a completed edit of his film... Instead of paying a fortune to recreate a flimsy simulacrum of our world on a computer, Edwards was able to shoot the vast majority of his movie on location at a fraction of the price, which lends "The Creator" a palpable sense of place that instantly grounds this story in an emotional truth that only its most derivative moments are able to undo... [D]etails poke holes in the porous border that runs between artifice and reality, and that has an unsurprisingly profound effect on a film so preoccupied with finding ghosts in the shell. Can a robot feel love? Do androids dream of electric sheep? At what point does programming blur into evolution...?

[T]he director has a classic eye for staging action, that he gives his movies room to breathe, and that he knows that the perfect "Kid A" needle-drop (the album, not the song) can do more for a story about the next iteration of "human" life than any of the tracks from Hans Zimmer's score... [T]here's some real cognitive dissonance to seeing a film that effectively asks us to root for a cuter version of ChatGPT. But Edwards and Weitz's script is fascinating for its take on a future in which people have programmed A.I. to maintain the compassion that our own species has lost somewhere along the way; a future in which technology might be a vessel for humanity rather than a replacement for it; a future in which computers might complement our movies rather than replace our cameras.

Privacy

Food Delivery Robots Are Feeding Camera Footage to the LAPD, Internal Emails Show (404media.co) 63

samleecole writes: A food delivery robot company that delivers for Uber Eats in Los Angeles provided video filmed by one of its robots to the Los Angeles Police Department as part of a criminal investigation, 404 Media has learned. The incident highlights the fact that delivery robots that are being deployed to sidewalks all around the country are essentially always filming, and that their footage can and has been used as evidence in criminal trials. Emails obtained by 404 Media also show that the robot food delivery company wanted to work more closely with the LAPD, which jumped at the opportunity.
Moon

Chinese Astronauts May Build a Base Inside a Lunar Lava Tube (universetoday.com) 75

According to Universe Today, China may utilize lunar caves as potential habitats for astronauts on the Moon, offering defense against hazards like radiation, meteorites, and temperature variations. From the report: Different teams of scientists from different countries and agencies have studied the idea of using lava tubes as shelter. At a recent conference in China, Zhang Chongfeng from the Shanghai Academy of Spaceflight Technology presented a study into the underground world of lava tubes. Chinese researchers did fieldwork in Chinese lava tubes to understand how to use them on the Moon. According to Zhang, there's enough similarity between lunar and Earthly lava tubes for one to be an analogue of the other. It starts with their two types of entrances, vertical and sloped. Both worlds have both types.

Most of what we've found on the Moon are vertical-opening tubes, but that may be because of our overhead view. The openings are called skylights, where the ceiling has collapsed and left a debris accumulation on the floor of the tube directly below it. Entering through these requires either flight or some type of vertical lift equipment. Sloped entrances make entry and exit much easier. It's possible that rovers could simply drive into them, though some debris would probably need to be cleared. According to Zhang, this is the preferred entrance that makes exploration easier. China is prioritizing lunar lava tubes at Mare Tranquillitatis (Sea of Tranquility) and Mare Fecunditatis (Sea of Fecundity) for exploration.

China is planning a robotic system that can explore caves like the one in Mare Tranquillitatis. The primary probe will have either wheels or feet and will be built to adapt to challenging terrain and to overcome obstacles. It'll also have a scientific payload. Auxiliary vehicles can separate from the main probe to perform more reconnaissance and help with communications and "energy support." They could be diversified so the mission can meet different challenges. They might include multi-legged crawling probes, rolling probes, and even bouncing probes. These auxiliary vehicles would also have science instruments to study the lunar dust, radiation, and the presence of water ice in the tubes. China is also planning a flight-capable robot that could find its way through lava tubes autonomously using microwave and laser radars.
"China's future plan, after successful exploration, is a crewed base," the report adds. "It would be a long-term underground research base in one of the lunar lava tubes, with a support center for energy and communication at the tube's entrance. The terrain would be landscaped, and the base would include both residential and research facilities inside the tube."

"[R]egardless of when they start, China seems committed to the idea. Ding Lieyun, a top scientist at Huazhong University of Science and Technology, told the China Science Daily that 'Eventually, building habitation beyond the Earth is essential not only for all humanity's quest for space exploration but also for China's strategic needs as a space power.'"
Robotics

Tesla Bot Can Now Sort Objects Autonomously (interestingengineering.com) 54

The official Tesla Optimus account shared an update video showing the progress its humanoid robot has made since it was announced in August 2021. In a video that looks like CGI, you can see Optimus sorting blocks and performing some yoga poses, among other things. Interesting Engineering reports: The video begins with the Tesla Bot aka the Optimus robot performing a self-calibration routine, which is essential for adapting to new environments. It then shows how TeslaBot can use its vision and joint position sensors to accurately locate its limbs in space, without relying on any external feedback. This enables TeslaBot to interact with objects and perform tasks with precision and dexterity.

One of the tasks that Optimus demonstrates is sorting blue and green blocks into matching trays. Tesla Optimus can grasp each block with ease and sort them at a human-like speed. It can also handle dynamic changes in the environment, such as when a human intervenes and moves the blocks around. TeslaBot can quickly adjust to the new situation and resume its task. It can also correct its own errors, such as when a block lands on its side and needs to be rotated.

The video also showcases Tesla Bot's balance and flexibility, as it performs some yoga poses that require standing on one leg and extending its limbs. These poses are not related to any practical workloads, but they show how TeslaBot can control its body and maintain its stability. The video ends with a call for more engineers to join the Tesla Optimus team, as the project is still in development and needs more talent. There is no information on when TeslaBot will be ready for production or commercial use, but the video suggests that it is making rapid progress and using the same software as the Tesla cars.

Robotics

New York City Deploys 420-Pound RoboCop to Patrol Subway Station (gothamist.com) 82

"New York City is now turning to robots to help patrol the Times Square subway station," quipped one local newscast.

The non-profit New York City blog Gothamist describes the robot as "almost as tall as the mayor — but at least three times as wide around the waist," with a maximum speed of 3 miles per hour — but a 360-degree field of vision, equipped with four cameras to send live video (without audio) to the police. A 420-pound, 5-foot-2-inch robocop with a giant camera for a face will begin patrolling the Times Square subway station overnight, the New York Police Department announced Friday morning. At a press conference held underground in the 42nd Street subway station, New York City Mayor Eric Adams said the city is launching a two-month pilot program to test the Knightscope K5 Autonomous Security Robot. During the press conference, the K5 robot — which is shaped like a small, white rocketship — stood silently along with uniformed officers and city officials in suits. Stripes of glowing blue lights indicated it was "on."

The K5 will act as a crime deterrent and provide real-time information on how to best deploy human officers to a safety incident, the mayor said. It features multiple cameras, a button that can connect the public with a real person, and a speaker for live audio communication... During the pilot program, the K5 will patrol the Times Square subway station from midnight to 6 a.m. with a human NYPD handler who will help introduce it to the public. After two months, the mayor said the handler will no longer be necessary, and the robot will go on solo patrol...

Knightscope, which manufactures the robot, reports that it has been deployed to 30 clients in 10 states, including at malls and hospitals. The K5 has been in some sticky situations in other cities. One was toppled and slathered in barbecue sauce in San Francisco, while another was beaten by an intoxicated man in Mountain View, California, according to news reports. Another robot fell into a pool of water outside an office building in Washington, D.C.

When asked whether the robot was at risk of vandalism in New York City, the mayor strode over to it and gave it a few firm shoves. "Let's be clear, this is not a pushover. 420 pounds. This is New York tested," he said.

The city is leasing the robot for $9 an hour — and yes, local newscasts couldn't resist calling it a robocop. One shows the mayor announcing "We will continue to stay ahead of those who want to harm everyday New Yorkers."

Though the robot is equipped with facial recognition capability, that feature will not be activated.
Medicine

Neuralink Is Recruiting Subjects For the First Human Trial of Its Brain-Computer Interface 85

A few months after getting FDA approval for human trials, Neuralink is looking for its first test subjects. The Verge reports: The six-year initial trial, which the Elon Musk-owned company is calling "the PRIME Study," is intended to test Neuralink tech designed to help those with paralysis control devices. The company is looking for people (PDF) with quadriplegia due to cervical spinal cord injury or ALS who are over the age of 22 and have a "consistent and reliable caregiver" to be part of the study.

The PRIME Study (which apparently stands for Precise Robotically Implanted Brain-Computer Interface, even though that acronym makes no sense) is set to research three things at once. The first is the N1 implant, Neuralink's brain-computer device. The second is the R1 robot, the surgical robot that actually implants the device. The third is the N1 User App, the software that connects to the N1 and translates brain signals into computer actions. Neuralink says it's planning to test both the safety and efficacy of all three parts of the system.

Those who participate in the PRIME Study will first participate in an 18-month study that involves nine visits with researchers. After that, they'll spend at least two hours a week on brain-computer interface research sessions and then do 20 more visits over the next five years. Neuralink doesn't say how many subjects it's looking for or when it plans to begin the study but does say it only plans to compensate "for study-related costs" like travel to and from the study location. (Also not clear: where that location is. Neuralink only says it has received approval from "our first hospital site.")
Robotics

Agility Robotics Is Opening a Humanoid Robot Factory In Oregon (cnbc.com) 52

Agility Robotics is wrapping up construction of a factory in Salem, Oregon, where it plans to mass produce its first line of humanoid robots, called Digit. Each robot has two legs and two arms and is engineered to maneuver freely and work alongside humans in warehouses and factories. CNBC reports: The 70,000-square-foot facility, which the company is calling the "RoboFab," is the first of its kind, according to Damion Shelton, co-founder and CEO of Agility Robotics. COO Aindrea Campbell, who was formerly Apple's senior director of iPad operations and an engineering manager at Ford, told CNBC that the facility will have a 10,000 unit annual max capacity when it's fully built out and will employ more than 500 people. For now, though, Agility Robotics is focused on the installation and testing of its first production lines.

Funded by DCVC and Playground Global among venture investors, Agility Robotics beat would-be competitors to the punch, including Tesla with its Optimus initiative, by completing development of production prototype humanoid robots and standing up a factory where it can mass produce them. Shelton told CNBC that his team developed Digit with a human form factor so that the robots can lift, sort and maneuver while staying balanced, and so they could operate in environments where steps or other structures could otherwise limit the use of robotics. The robots are powered with rechargeable lithium ion batteries.

One thing Digit lacks is a five-fingered hand -- instead, the robot's hands look more like a claw or mitten. [...] Digit can traverse stairs, crouch into tight spaces, unload containers and move materials onto or off of a pallet or a conveyor, then help to sort and divide material onto other pallets, according to Agility. The company plans to put the robots to use transporting materials around its own factory, Campbell said. Agility's preferred partners will be first to receive the robots next year, and the company is only selling -- not renting or leasing -- the systems in the near term.

The Military

US Air Force Tests an AI-Powered Drone Aircraft Prototype (msn.com) 65

An anonymous reader shared this report from the New York Times: It is powered into flight by a rocket engine. It can fly a distance equal to the width of China. It has a stealthy design and is capable of carrying missiles that can hit enemy targets far beyond its visual range. But what really distinguishes the Air Force's pilotless XQ-58A Valkyrie experimental aircraft is that it is run by artificial intelligence, putting it at the forefront of efforts by the U.S. military to harness the capacities of an emerging technology whose vast potential benefits are tempered by deep concerns about how much autonomy to grant to a lethal weapon.

Essentially a next-generation drone, the Valkyrie is a prototype for what the Air Force hopes can become a potent supplement to its fleet of traditional fighter jets, giving human pilots a swarm of highly capable robot wingmen to deploy in battle. Its mission is to marry artificial intelligence and its sensors to identify and evaluate enemy threats and then, after getting human sign-off, to move in for the kill... The emergence of artificial intelligence is helping to spawn a new generation of Pentagon contractors who are seeking to undercut, or at least disrupt, the longstanding primacy of the handful of giant firms who supply the armed forces with planes, missiles, tanks and ships. The possibility of building fleets of smart but relatively inexpensive weapons that could be deployed in large numbers is allowing Pentagon officials to think in new ways about taking on enemy forces.

It also is forcing them to confront questions about what role humans should play in conflicts waged with software that is written to kill...

The article adds that the U.S. Air Force plans to build 1,000 to 2,000 AI drones for as little as $3 million apiece. "Some will focus on surveillance or resupply missions, others will fly in attack swarms and still others will serve as a 'loyal wingman' to a human pilot....

"A recently revised Pentagon policy on the use of artificial intelligence in weapons systems allows for the autonomous use of lethal force — but any particular plan to build or deploy such a weapon must first be reviewed and approved by a special military panel."
AI

California Firefighters Are Training AI To Detect Wildfires (nytimes.com) 13

Firefighters are training a robot to scan the horizon for fires. It turns out a lot of things look like smoke. From a report: For years, firefighters in California have relied on a vast network of more than 1,000 mountaintop cameras to detect wildfires. Operators have stared into computer screens around the clock looking for wisps of smoke. This summer, with wildfire season well underway, California's main firefighting agency is trying a new approach: training an artificial intelligence program to do the work. The idea is to harness one of the state's great strengths -- expertise in A.I. -- and deploy it to prevent small fires from becoming the kinds of conflagrations that have killed scores of residents and destroyed thousands of homes in California over the past decade.

Officials involved in the pilot program say they are happy with early results. Around 40 percent of the time, the artificial intelligence software was able to alert firefighters to the presence of smoke before dispatch centers received 911 calls. "It has absolutely improved response times," said Phillip SeLegue, the staff chief of intelligence for the California Department of Forestry and Fire Protection, the state's main firefighting agency better known as Cal Fire. In about two dozen cases, Mr. SeLegue said, the A.I. identified fires that the agency never received 911 calls for. The fires were extinguished when they were still small and manageable.

After an exceptionally wet winter, California's fire season has not been as destructive -- so far -- as in previous years. Cal Fire counts 4,792 wildfires so far this year, lower than the five-year average of 5,422 for this time of year. Perhaps more important, the number of acres burned this year has been only one-fifth of the five-year average of 812,068 acres. The A.I. pilot program, which began in late June and covered six of Cal Fire's command centers, will be rolled out to all 21 command centers starting in September. But the program's apparent success comes with caveats. The system can detect only fires visible to the cameras. And at this stage, humans are still needed to make sure the A.I. program is properly identifying smoke. Engineers for the company that created the software, DigitalPath, based in Chico, Calif., are monitoring the system day and night, and manually vetting every incident that the A.I. identifies as fire.
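The human-in-the-loop arrangement described above can be sketched in a few lines of Python. This is a purely illustrative toy that assumes a vision model emitting a per-frame smoke-confidence score; the function names and threshold are invented, and nothing here reflects DigitalPath's actual (non-public) system:

```python
# Toy sketch of an AI smoke-detection pipeline with mandatory human review.
# The scoring function and ALERT_THRESHOLD are invented for illustration;
# they are not DigitalPath's real design.
ALERT_THRESHOLD = 0.8

def triage(frames, smoke_score, review_queue):
    """Flag frames whose smoke confidence crosses the threshold.

    Every flagged frame goes onto a human review queue before dispatch,
    matching the article's note that engineers manually vet every incident
    the AI identifies as fire.
    """
    alerts = []
    for frame in frames:
        score = smoke_score(frame)
        if score >= ALERT_THRESHOLD:
            review_queue.append(frame)   # a human confirms before dispatch
            alerts.append((frame, score))
    return alerts

# Fake scores standing in for a vision model's per-frame output.
scores = {"cam1_f1": 0.10, "cam1_f2": 0.92, "cam2_f1": 0.85}
queue = []
alerts = triage(scores, scores.get, queue)
print(len(alerts))  # -> 2 frames flagged for human review
```

The design choice the article describes maps onto the `review_queue`: the model never dispatches crews on its own, it only shortens the time before a human looks at a candidate plume.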

EU

Cheese-Makers Track Their Parmesans By Embedding Edible, Blockchain-Enabled Microchips (msn.com) 187

"Italian producers of parmesan cheese have been fighting against imitations for years," writes the Wall Street Journal, adding "Their latest trick to beat counterfeiters is edible microchips.

"Now, makers of Parmigiano-Reggiano, as the original parmesan cheese is officially called, are slapping the microchips on their 90-pound cheese wheels as part of an endless cat-and-mouse game between makers of authentic and fake products." New methods to guarantee the origin of products are being used across the EU. Some wineries are putting serial numbers, invisible ink and holograms on their bottles. So-called DNA fingerprinting of milk bacteria pioneered in Switzerland, which isn't in the EU, is now being tested inside the bloc as a method for identifying cheese. QR codes are also proliferating, including on individual portions of pre-sliced Prosciutto di San Daniele, a raw ham similar to Prosciutto di Parma. A smartphone can be used to show information such as how long the prosciutto has been aged and when it was sliced... The new silicon chips, made by Chicago-based p-Chip, use blockchain technology to authenticate data that can trace the cheese as far back as the producer of the milk used.

The chips have been in advanced testing on more than 100,000 Parmigiano wheels for more than a year. The consortium of producers wants to be sure the chips can stand up to Parmigiano's aging requirement, which is a minimum of one year and can exceed three years for some varieties... The p-Chips can withstand extreme heat or cold, can be read through ice and can withstand years of storage in liquid nitrogen. They have outperformed RFID chips, which are larger, can be more difficult to attach to products, are more fragile and can't survive extreme temperatures, according to p-Chip Chief Technology Officer Bill Eibon. Parmigiano producers also use QR codes, but the codes are easily copied and degrade during the cheese's aging process.

A robot heats the Parmigiano wheel's casein label — a small plaque made of milk protein that is widely used in the cheese industry — and then inserts the chip on top. A hand-held reader can grab the data from the chips, which cost a few cents each and are similar to the ones that some people have inserted under the skin of their pets. The chips can't be read remotely. In lab tests, the chips sat for three weeks in a mock-up of stomach acid without leaking any dangerous material. Eibon went a step further, eating one without suffering any ill effects, but he isn't touting that lest p-Chip face accusations it is tracking people, something that isn't possible because the chips can't be read remotely and can't be read once they are ingested.

"We don't want to be known as the company accused of tracking people," said Eibon. "I ate one of the chips and nobody is tracking me, except my wife, and she uses a different method."

Merck KGaA will soon be using the same chips, the article points out, and the chips "are also being tested in the automotive industry to guarantee the authenticity of car parts.

"The chips could eventually be used on livestock, crops or medicine stored in liquid nitrogen."
AI

New AP Guidelines Lay the Groundwork For AI-Assisted Newsrooms (engadget.com) 11

An anonymous reader quotes a report from Engadget: The Associated Press published standards today for generative AI use in its newsroom. The organization, which has a licensing agreement with ChatGPT maker OpenAI, laid out a fairly restrictive, common-sense set of measures around the burgeoning tech while cautioning its staff not to use AI to make publishable content. Although nothing in the new guidelines is particularly controversial, less scrupulous outlets could view the AP's blessing as a license to use generative AI more excessively or underhandedly.

The organization's AI manifesto underscores a belief that artificial intelligence content should be treated as the flawed tool that it is -- not a replacement for trained writers, editors and reporters exercising their best judgment. "We do not see AI as a replacement of journalists in any way," the AP's Vice President for Standards and Inclusion, Amanda Barrett, wrote in an article about its approach to AI today. "It is the responsibility of AP journalists to be accountable for the accuracy and fairness of the information we share." The article directs its journalists to view AI-generated content as "unvetted source material," to which editorial staff "must apply their editorial judgment and AP's sourcing standards when considering any information for publication." It says employees may "experiment with ChatGPT with caution" but not create publishable content with it. That includes images, too. "In accordance with our standards, we do not alter any elements of our photos, video or audio," it states. "Therefore, we do not allow the use of generative AI to add or subtract any elements." However, it carved an exception for stories where AI illustrations or art are a story's subject -- and even then, it has to be clearly labeled as such.

Barrett warns about AI's potential for spreading misinformation. To prevent the accidental publishing of anything AI-created that appears authentic, she says AP journalists "should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image's origin, and checking for reports with similar content from trusted media." To protect privacy, the guidelines also prohibit writers from entering "confidential or sensitive information into AI tools." Although that's a relatively common-sense and uncontroversial set of rules, other media outlets have been less discerning. [...] It's not hard to imagine other outlets -- desperate for an edge in the highly competitive media landscape -- viewing the AP's (tightly restricted) AI use as a green light to make robot journalism a central figure in their newsrooms, publishing poorly edited / inaccurate content or failing to label AI-generated work as such.
Further reading: NYT Prohibits Using Its Content To Train AI Models
Robotics

Bots Are Better Than Humans At Cracking 'Are You a Robot?' Captcha Tests, Study Finds (independent.co.uk) 78

A recent comprehensive study reveals that automated bots are substantially more efficient than humans at cracking Captcha tests, a widely used security measure on over 100 popular websites. The Independent reports: In the study, scientists assessed 200 of the most popular websites and found 120 still used Captcha. They enlisted 1,000 online participants from diverse backgrounds -- varying in location, age, sex and educational level -- to take 10 Captcha tests on these sites and gauge their difficulty levels. Researchers found many bots described in scientific journals could beat humans at these tests in both speed and accuracy.

Some Captcha tests took human participants between nine and 15 seconds to solve, with an accuracy of about 50 to 84 per cent, while the bots cracked them in less than a second, with near-perfect accuracy. "The bots' accuracy ranges from 85-100 per cent, with the majority above 96 per cent. This substantially exceeds the human accuracy range we observed (50-85 per cent)," scientists wrote in the study. They also found that the bots' solving times are "significantly lower" or nearly the same as humans in almost all cases.

The Military

US Air Force Builds $5B Climate-Resilient 'Base of the Future' with Robot Dogs and AI Security (msn.com) 103

After a hurricane hit Florida, 484 buildings at Tyndall Air Force Base alone were destroyed or damaged beyond repair. Five years later, it's part of a $5 billion, nine-year rebuilding effort the Washington Post describes as a rare "blank slate." The plan is "not merely to rebuild it, but to construct what the U.S. military calls 'the installation of the future,' which will be able to withstand rising seas, stronger storms and other threats..." The rebuild at Tyndall, which is expected to continue into 2027, marks the largest military construction project undertaken by the Pentagon. "Think of it as the Air Force throwing its Costco card down on the table and buying buildings in bulk," said Michael Dwyer, deputy chief of the Natural Disaster Recovery Division. A dizzying array of new technologies and approaches have been incorporated into the effort, from semiautonomous robot dogs patrolling the grounds to artificial intelligence software designed to detect and deter any armed person who enters the base.

But the most robust funding is aimed at making Tyndall more efficient, connected and resilient in the face of a warming world. Structures under construction — from dormitory complexes to a child care center to hangars that will house three new squadrons of the F-35A Lightning II later this year — are being built to withstand winds in excess of 165 mph. Steel frames, high-impact windows, concrete facades and roofing with additional bracing are among the features meant to weather the stronger storms to come.

At nearby Panama City, sea level rise has accelerated in recent years, with federal data showing seas have risen there more than 4 inches since 2010. Planners factored in the potential for as much as 7 feet of sea level rise by the end of the century, and as a result placed the "vast majority" of new buildings at elevations that should be safe from storm surges for decades, Dwyer said. In addition, sensors placed near the low spots of buildings will send alerts the moment a flood threatens. The Air Force also has created a "digital twin" of Tyndall — essentially, a virtual duplicate of the base that allows officials to simulate how roads, buildings and other infrastructure would hold up in different scenarios, such as a hurricane or historic rainfall events.

Other efforts include restoring the beach's 10-foot sand dunes and its rocky shoreline, along with "the installation of submerged oyster reef breakwater that can reduce wave energy and erosion."

But the article points out that the Air Force also has a second hope for its base: "that the lessons unfolding here can be replicated at other bases around the world that will face — or already are facing — similar threats..."
DRM

Google's Nightmare 'Web Integrity API' Wants a DRM Gatekeeper For the Web 163

Google's newest proposed web standard is... DRM? Over the weekend the Internet got wind of this proposal for a "Web Environment Integrity API." From a report: The explainer is authored by four Googlers, including at least one person on Chrome's "Privacy Sandbox" team, which is responding to the death of tracking cookies by building a user-tracking ad platform right into the browser. The intro to the Web Integrity API starts out: "Users often depend on websites trusting the client environment they run in. This trust may assume that the client environment is honest about certain aspects of itself, keeps user data and intellectual property secure, and is transparent about whether or not a human is using it."

The goal of the project is to learn more about the person on the other side of the web browser, ensuring they aren't a robot and that the browser hasn't been modified or tampered with in any unapproved ways. The intro says this data would be useful to advertisers to better count ad impressions, stop social network bots, enforce intellectual property rights, stop cheating in web games, and help financial transactions be more secure. Perhaps the most telling line of the explainer is that it "takes inspiration from existing native attestation signals such as [Apple's] App Attest and the [Android] Play Integrity API." Play Integrity (formerly called "SafetyNet") is an Android API that lets apps find out if your device has been rooted.

Root access allows you full control over the device that you purchased, and a lot of app developers don't like that. So if you root an Android phone and get flagged by the Play Integrity API, several types of apps will just refuse to run. You'll generally be locked out of banking apps, Google Wallet, online games, Snapchat, and some media apps like Netflix. [...] Google wants the same thing for the web. Google's plan is that, during a webpage transaction, the web server could require you to pass an "environment attestation" test before you get any data. At this point your browser would contact a "third-party" attestation server, and you would need to pass some kind of test. If you passed, you would get a signed "IntegrityToken" that verifies your environment is unmodified and points to the content you wanted unlocked. You bring this back to the web server, and if the server trusts the attestation company, you get the content unlocked and finally get a response with the data you wanted.
