AI

AI-Generated Articles Prompt Wikipedia To Downgrade CNET's Reliability Rating (arstechnica.com) 54

Wikipedia has downgraded tech website CNET's reliability rating following extensive discussions among its editors regarding the impact of AI-generated content on the site's trustworthiness. "The decision reflects concerns over the reliability of articles found on the tech news outlet after it began publishing AI-generated stories in 2022," adds Ars Technica. Futurism first reported the news. From the report: Wikipedia maintains a page called "Reliable sources/Perennial sources" that includes a chart featuring news publications and their reliability ratings as viewed from Wikipedia's perspective. Shortly after the CNET news broke in January 2023, Wikipedia editors began a discussion thread on the Reliable Sources project page about the publication. "CNET, usually regarded as an ordinary tech RS [reliable source], has started experimentally running AI-generated articles, which are riddled with errors," wrote a Wikipedia editor named David Gerard. "So far the experiment is not going down well, as it shouldn't. I haven't found any yet, but any of these articles that make it into a Wikipedia article need to be removed." After other editors agreed in the discussion, they began the process of downgrading CNET's reliability rating.

As of this writing, Wikipedia's Perennial Sources list currently features three entries for CNET broken into three time periods: (1) before October 2020, when Wikipedia considered CNET a "generally reliable" source; (2) between October 2020 and present, when Wikipedia notes that the site was acquired by Red Ventures in October 2020, "leading to a deterioration in editorial standards" and saying there is no consensus about reliability; and (3) between November 2022 and January 2023, when Wikipedia considers CNET "generally unreliable" because the site began using an AI tool "to rapidly generate articles riddled with factual inaccuracies and affiliate links."

Futurism reports that the issue with CNET's AI-generated content also sparked a broader debate within the Wikipedia community about the reliability of sources owned by Red Ventures, such as Bankrate and CreditCards.com. Those sites published AI-generated content around the same period of time as CNET. The editors also criticized Red Ventures for not being forthcoming about where and how AI was being implemented, further eroding trust in the company's publications. This lack of transparency was a key factor in the decision to downgrade CNET's reliability rating.
A CNET spokesperson said in a statement: "CNET is the world's largest provider of unbiased tech-focused news and advice. We have been trusted for nearly 30 years because of our rigorous editorial and product review standards. It is important to clarify that CNET is not actively using AI to create new content. While we have no specific plans to restart, any future initiatives would follow our public AI policy."
Social Networks

Supreme Court Hears Landmark Cases That Could Upend What We See on Social Media (cnn.com) 282

The US Supreme Court is hearing oral arguments Monday in two cases that could dramatically reshape social media, weighing whether states such as Texas and Florida should have the power to control what posts platforms can remove from their services. From a report: The high-stakes battle gives the nation's highest court an enormous say in how millions of Americans get their news and information, as well as whether sites such as Facebook, Instagram, YouTube and TikTok should be able to make their own decisions about how to moderate spam, hate speech and election misinformation. At issue are laws passed by the two states that prohibit online platforms from removing or demoting user content that expresses viewpoints -- legislation both states say is necessary to prevent censorship of conservative users.

More than a dozen Republican attorneys general have argued to the court that social media should be treated like traditional utilities such as the landline telephone network. The tech industry, meanwhile, argues that social media companies have First Amendment rights to make editorial decisions about what to show. That makes them more akin to newspapers or cable companies, opponents of the states say. The case could lead to a significant rethinking of First Amendment principles, according to legal experts. A ruling in favor of the states could weaken or reverse decades of precedent against "compelled speech," which protects private individuals from government speech mandates, and have far-reaching consequences beyond social media. A defeat for social media companies seems unlikely, but it would instantly transform their business models, according to Blair Levin, an industry analyst at the market research firm New Street Research.

AI

Scientific Journal Publishes AI-Generated Rat With Gigantic Penis (vice.com) 72

Jordan Pearson reports via Motherboard: A peer-reviewed science journal published a paper this week filled with nonsensical AI-generated images, which featured garbled text and a wildly incorrect diagram of a rat penis. The episode is the latest example of how generative AI is making its way into academia with concerning effects. The paper, titled "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway," was published on Wednesday in the open-access journal Frontiers in Cell and Developmental Biology by researchers from Hong Hui Hospital and Jiaotong University in China. The paper itself is unlikely to be interesting to most people without a specific interest in the stem cells of small mammals, but the figures published with the article are another story entirely. [...]

It's unclear how this all got through the editing, peer review, and publishing process. Motherboard contacted the paper's U.S.-based reviewer, Jingbo Dai of Northwestern University, who said that it was not his responsibility to vet the obviously incorrect images. (The second reviewer is based in India.) "As a biomedical researcher, I only review the paper based on its scientific aspects. For the AI-generated figures, since the author cited Midjourney, it's the publisher's responsibility to make the decision," Dai said. "You should contact Frontiers about their policy of AI-generated figures." Frontiers' policies for authors state that generative AI is allowed, but that it must be disclosed -- which the paper's authors did -- and the outputs must be checked for factual accuracy. "Specifically, the author is responsible for checking the factual accuracy of any content created by the generative AI technology," Frontiers' policy states. "This includes, but is not limited to, any quotes, citations or references. Figures produced by or edited using a generative AI technology must be checked to ensure they accurately reflect the data presented in the manuscript."

On Thursday afternoon, after the article and its AI-generated figures circulated on social media, Frontiers appended a notice to the paper saying that it had corrected the article and that a new version would appear later. It did not specify what exactly was corrected.
UPDATE: Frontiers retracted the article and issued the following statement: "Following publication, concerns were raised regarding the nature of its AI-generated figures. The article does not meet the standards of editorial and scientific rigor for Frontiers in Cell and Development Biology; therefore, the article has been retracted. This retraction was approved by the Chief Executive Editor of Frontiers. Frontiers would like to thank the concerned readers who contacted us regarding the published article."
Science

Firms Churning Out Fake Papers Are Now Bribing Journal Editors (science.org) 32

Nicholas Wise is a fluid dynamics researcher who moonlights as a scientific fraud buster, reports Science magazine. And last June he "was digging around on shady Facebook groups when he came across something he had never seen before." Wise was all too familiar with offers to sell or buy author slots and reviews on scientific papers — the signs of a busy paper mill. Exploiting the growing pressure on scientists worldwide to amass publications even if they lack resources to undertake quality research, these furtive intermediaries by some accounts pump out tens or even hundreds of thousands of articles every year. Many contain made-up data; others are plagiarized or of low quality. Regardless, authors pay to have their names on them, and the mills can make tidy profits.

But what Wise was seeing this time was new. Rather than targeting potential authors and reviewers, someone who called himself Jack Ben, of a firm whose Chinese name translates to Olive Academic, was going for journal editors — offering large sums of cash to these gatekeepers in return for accepting papers for publication. "Sure you will make money from us," Ben promised prospective collaborators in a document linked from the Facebook posts, along with screenshots showing transfers of up to $20,000 or more. In several cases, the recipient's name could be made out through sloppy blurring, as could the titles of two papers. More than 50 journal editors had already signed on, he wrote. There was even an online form for interested editors to fill out...

Publishers and journals, recognizing the threat, have beefed up their research integrity teams and retracted papers, sometimes by the hundreds. They are investing in ways to better spot third-party involvement, such as screening tools meant to flag bogus papers. So cash-rich paper mills have evidently adopted a new tactic: bribing editors and planting their own agents on editorial boards to ensure publication of their manuscripts. An investigation by Science and Retraction Watch, in partnership with Wise and other industry experts, identified several paper mills and more than 30 editors of reputable journals who appear to be involved in this type of activity. Many were guest editors of special issues, which have been flagged in the past as particularly vulnerable to abuse because they are edited separately from the regular journal. But several were regular editors or members of journal editorial boards. And this is likely just the tip of the iceberg.

The spokesperson for one journal publisher tells Science that its editors are receiving bribe offers every week.

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Music

Spotify's Editorial Playlists Are Losing Influence Amid AI Expansion (bloomberg.com) 14

Once a dominant force in music discovery, Spotify's famed playlists like RapCaviar, which significantly influenced mainstream music and artist visibility, are losing ground. As the music industry shifts towards algorithmic suggestions and TikTok emerges as a major music promoter, Spotify is evolving toward more automated music discovery and less emphasis on human-curated playlists, signaling a potential end to the era when a few key playlists could make a star overnight. Bloomberg reports: Enter TikTok. In the late 2010s, as the algorithmically controlled, short-form video app emerged as a growing force in music promotion, Spotify took notice. On an earnings call in 2020, Spotify Chief Executive Officer Daniel Ek noted that users were increasingly opting for algorithmic suggestions and that Spotify would be leaning into the trend. "As we're getting better and better at personalization, we're serving better and better content and more and more of our users are choosing that," he said. From there, Spotify began implementing a number of changes that over time significantly altered the fundamental dynamics of how playlists get composed. Among other things, the company had already introduced a standardized pitching form that all artists and managers must use to submit tracks for playlist consideration. One former employee says the tool was created to foster a more merit-based system with a greater emphasis on data -- and less focus on the taste of individual curators. The goal, in part, was to give independent and smaller artists without the resources to personally court key playlist editors a better chance at placements. It was also a way to better protect the public-facing editors who in the early days were sometimes subjected to harassment from people disgruntled over their musical choices.

As the automated submission system took hold, the editors gradually grew more anonymous and less associated with particular playlists. In a handbook for the editorial team, Spotify instructed curators not to claim ownership of any one playlist. At the same time, Spotify began introducing multiple splashy features meant to encourage algorithm-driven listening, including an AI DJ and Daylist, two features that constantly change to fit listeners' habits and interests. (Spotify says "human expertise" guides the AI DJ.) Last year, Spotify laid off members of the teams involved in making playlists as part of its various cuts. And over time, the shift in emphasis has had consequences outside the company as well. These days, the same music industry sources who in the late 2010s learned to obsess over what was included and excluded from key Spotify playlists have started noticing something else -- it no longer seems to matter as much. Employees at different major labels say they've seen streams coming from RapCaviar drop anywhere from 30% to 50%.

The trend towards automated music discovery at Spotify shows no sign of slowing down. One internal presentation titled "Recapturing the Zeitgeist" encourages editorial curators to better utilize data. According to the people who have seen the plan, in addition to putting together a playlist, editorial curators would tag songs to help the algorithm accurately place them on relevant playlists that are automatically personalized for individual subscribers. The company has also shifted some human-curated playlists to personalized versions, including selections with seven-figure followings, like Housewerk and Indie Pop. These days, Spotify is also promoting something called Discovery Mode, wherein labels and artist teams can submit songs for additional algorithm pushes in exchange for a lower royalty rate. These tracks can only surface on personalized listening sessions, a former employee said, meaning Spotify would have a financial incentive to push people to them over editorially curated playlists. (For now, Discovery Mode songs only surface in radio or autoplay listening sessions.)
The shift toward algorithmic distribution isn't necessarily a bad thing, says Dan Smith, US general manager at Armada, an independent dance label. "The way fans discovered new music was radio back in the day, then Spotify editorial playlists, then there were a few years where people only discovered new music through TikTok," Smith said. "All those things still work ... we're all just trying different ways to make sure songs get to the right people."
Games

Way Too Many Games Were Released On Steam In 2023 (kotaku.com) 93

John Walker, reporting for Kotaku: Steam is by far the most peculiar of online storefronts. Built on top of itself for the last twenty years, Valve's behemothic PC game distributor is a clusterfuck of overlapping design choices, where algorithms rule over coherence, with 2023 seeing over 14,500 games released into the mayhem. Which is too many games. That breaks down to just under 40 a day, although given how people release games, it more accurately breaks down to about 50 every weekday. 50 games a day. On a storefront that goes to some lengths to bury new releases, and even buries pages where you can deliberately list new releases.
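As a quick sanity check of Kotaku's per-day figures, the arithmetic works out as sketched below. The 14,500 total is from the article; the day counts (365 days, roughly 261 weekdays) are my own assumptions, and the weekday-only rate comes out a bit above the article's rounded "about 50":

```python
# Sanity-check Kotaku's per-day figures for 2023 Steam releases.
total_releases = 14_500   # figure quoted in the article
days_in_year = 365
weekdays_in_year = 261    # approximate weekday count (assumption)

# Spread evenly over every day of the year.
per_day = total_releases / days_in_year

# Spread over weekdays only, since most games launch Monday-Friday.
per_weekday = total_releases / weekdays_in_year

print(round(per_day, 1))      # just under 40 a day, as the article says
print(round(per_weekday, 1))  # mid-50s; the article rounds this to "about 50"
```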

Compared to 2022, that's an increase of nearly 2,000 games, up almost 5,000 from five years ago. There's no reason to expect that growth to diminish any time soon. It's a volume of games that no individual could ever hope to keep up with, and neither could any gaming site. Not even the biggest sites in the industry could afford an editorial team capable of playing 50 games a day to find and write about those worth highlighting. Realistically, not even a tenth of the games. And that's not least because, of those 50 games per day, about 48 will be absolute dross. On one level, Steam represents a wonderful democracy for gaming, where any developer willing to stump up the $100 entry fee can release their game on the platform, with barely any restrictions. On another level, however, it's a disaster for about 99 percent of releases, which stand absolutely no chance of garnering any attention, no matter their quality. The solution: human storefront curation, which Valve has never shown any intention of providing.

AI

AI Models May Enable a New Era of Mass Spying, Says Bruce Schneier (arstechnica.com) 37

An anonymous reader quotes a report from Ars Technica: In an editorial for Slate published Monday, renowned security researcher Bruce Schneier warned that AI models may enable a new era of mass spying, allowing companies and governments to automate the process of analyzing and summarizing large volumes of conversation data, fundamentally lowering barriers to spying activities that currently require human labor. In the piece, Schneier notes that the existing landscape of electronic surveillance has already transformed the modern era, becoming the business model of the Internet, where our digital footprints are constantly tracked and analyzed for commercial reasons.

Spying, by contrast, can take that kind of economically inspired monitoring to a completely new level: "Spying and surveillance are different but related things," Schneier writes. "If I hired a private detective to spy on you, that detective could hide a bug in your home or car, tap your phone, and listen to what you said. At the end, I would get a report of all the conversations you had and the contents of those conversations. If I hired that same private detective to put you under surveillance, I would get a different report: where you went, whom you talked to, what you purchased, what you did." Schneier says that current spying methods, like phone tapping or physical surveillance, are labor-intensive, but the advent of AI significantly reduces this constraint. Generative AI systems are increasingly adept at summarizing lengthy conversations and sifting through massive datasets to organize and extract relevant information. This capability, he argues, will not only make spying more accessible but also more comprehensive. "This spying is not limited to conversations on our phones or computers," Schneier writes. "Just as cameras everywhere fueled mass surveillance, microphones everywhere will fuel mass spying. Siri and Alexa and 'Hey, Google' are already always listening; the conversations just aren't being saved yet." [...]

In his editorial, Schneier raises concerns about the chilling effect that mass spying could have on society, cautioning that the knowledge of being under constant surveillance may lead individuals to alter their behavior, engage in self-censorship, and conform to perceived norms, ultimately stifling free expression and personal privacy. So what can people do about it? Anyone seeking protection from this type of mass spying will likely need to look toward government regulation to keep it in check since commercial pressures often trump technological safety and ethics. [...] Schneier isn't optimistic on that front, however, closing with the line, "We could prohibit mass spying. We could pass strong data-privacy rules. But we haven't done anything to limit mass surveillance. Why would spying be any different?" It's a thought-provoking piece, and you can read the entire thing on Slate.

China

'Global Science is Splintering Into Two - And This is Becoming a Problem' 168

The United States and China are pursuing parallel scientific tracks. To solve crises on multiple fronts, the two roads need to become one, Nature's editorial board wrote Wednesday. From the post: It's no secret that research collaborations between China and the United States -- among other Western countries -- are on a downward trajectory. Early indicators of a possible downturn have been confirmed by more sources. A report from Japan's Ministry of Education, Culture, Sports, Science and Technology, published in August, for instance, stated that the number of research articles co-authored by scientists in the two countries had fallen in 2021, the first annual drop since 1993. Meanwhile, data from Nature Index show that China-based scientists' propensity to collaborate internationally has been waning, when looking at the authorship of papers in the Index's natural-science journals.

Nature reported last month that China's decoupling from the countries loosely described as the West mirrors its strengthening of science links with low- and middle-income countries (LMICs), as part of its Belt and Road Initiative. There are many good reasons for China to be boosting science in LMICs, which could sorely do with greater research funding and capacity building. But this is also creating parallel scientific systems -- one centred on North America and Europe, and the other on China. The biggest challenges faced by humanity, from combating climate change to ending poverty, are embodied in a globally agreed set of targets, the United Nations Sustainable Development Goals (SDGs).

Approaching them without shared knowledge can only slow down progress by creating competing systems for advancing and implementing solutions. It's a scenario that the research community must be more aware of and work to avoid. Nature Index offers some reasons as to why collaboration between China and the West is declining. Travel restrictions during the COVID-19 pandemic took their toll, limiting collaborations and barring new ones from being forged. Geopolitical tensions have led many Western governments to restrict their research partnerships with China, on national-security grounds, and vice versa.
AI

'Hallucinate' Chosen As Cambridge Dictionary's Word of the Year (theguardian.com) 23

Cambridge Dictionary's word of the year for 2023 is "hallucinate," a verb that took on a new meaning with the rise in popularity of artificial intelligence chatbots. The Guardian reports: The original definition of the chosen word is to "seem to see, hear, feel, or smell" something that does not exist, usually because of "a health condition or because you have taken a drug." It now has an additional meaning, relating to when artificial intelligence systems such as ChatGPT, which generates text that mimics human writing, "hallucinates" and produces false information. The word was chosen because the new meaning "gets to the heart of why people are talking about AI," according to a post on the dictionary site.

Generative AI is a "powerful" but "far from perfect" tool, "one we're all still learning how to interact with safely and effectively -- this means being aware of both its potential strengths and its current weaknesses." The dictionary added a number of AI-related entries this year, including large language model (or LLM), generative AI (or GenAI), and GPT (an abbreviation of Generative Pre-trained Transformer). "AI hallucinations remind us that humans still need to bring their critical thinking skills to the use of these tools," continued the post. "Large language models are only as reliable as the information their algorithms learn from. Human expertise is arguably more important than ever, to create the authoritative and up-to-date information that LLMs can be trained on."

Government

America's Net Neutrality Question: Should the FCC Define the Internet as a 'Common Carrier'? (fcc.gov) 132

The Washington Post's editorial board looks at America's "net neutrality" debate.

But first they note that America's communications-regulating FCC has "limited authority to regulate unless broadband is considered a 'common carrier' under the Telecommunications Act of 1996." The FCC under President Barack Obama moved to reclassify broadband so it could regulate broadband companies; the FCC under President Donald Trump reversed the change. Dismayed advocates warned the world that, without the protections in place, the internet would break. You'll never guess what happened next: nothing. Or, at least, almost nothing. The internet did not break, and internet service providers for the most part did not block and they did not throttle.

All the same, today's FCC, under Chairwoman Jessica Rosenworcel, has just moved to re-reclassify broadband. The interesting part is that her strongest argument doesn't have much to do with net neutrality, but with some of the other benefits the country could see from having a federal watchdog keeping an eye on the broadband business... Broadband is an essential service... Yet there isn't a single government agency with sufficient authority to oversee this vital tool. Asserting federal authority over broadband would empower regulation of any blocking, throttling or anti-competitive paid traffic prioritization that broadband providers might engage in. But it could also help ensure the safety and security of U.S. networks.

The FCC has, on national security grounds, removed authorization for companies affiliated with adversary states, such as China's Huawei, from participating in U.S. telecommunications markets. The agency can do this for phone carriers. But it can't do it for broadband, because it isn't allowed to. Or consider public safety during a crisis. The FCC doesn't have the ability to access the data it needs to know when and where there are broadband outages — much less the ability to do anything about those outages if they are identified. Similarly, it can't impose requirements for network resiliency to help prevent those outages from occurring in the first place — during, say, a natural disaster or a cyberattack.

The agency has ample power to police the types of services that are becoming less relevant in American life, such as landline telephones, and little power to police those that are becoming more important every day.

The FCC acknowledges this power would also allow it to prohibit "throttling" of content. But the Post's editorial also argues that, here in 2023, such a rule is "unlikely to have any major effect on the broadband industry in either direction... Substantial consequences have only become less likely as high-speed bandwidth has become less limited."
Television

Jon Stewart's Apple TV Plus Show Ends, Reportedly Over Coverage of AI and China (theverge.com) 115

A user writes: Multiple outlets are reporting that Apple TV Plus has cancelled Jon Stewart's popular show The Problem with Jon Stewart, reportedly over editorial disagreements with regards to planned stories on the People's Republic of China and AI. Fans and haters of Apple will both recall that Apple recently made changes to AirDrop, one of the few effective means Chinese dissidents and protesters had for exchanging information off-grid at scale, and will ask why Apple is apparently not only willing, but eager, to carry water for the PRC, overriding both human rights and practical business concerns in the process. "Apple approached Stewart directly and expressed its need for the host and his team to be 'aligned' with the company's views on topics discussed," reports The Verge, citing The Hollywood Reporter. "Rather than falling in line when Apple threatened to cancel the show, Stewart reportedly decided to walk."
Businesses

Bandcamp Slashes Nearly Half Its Staff After Epic Sale (sfchronicle.com) 61

Aidin Vaziri reports via the San Francisco Chronicle: Epic Games has initiated layoffs at Bandcamp, the Oakland-based online music distribution platform it recently sold to Songtradr. Among those affected were members of Bandcamp Daily, the platform's editorial arm, as confirmed by former staff members on social media channels. "About half the company was laid off today," senior editor JJ Skolnik announced on X (formerly Twitter) on Monday morning. This move comes weeks after Songtradr's acquisition of Bandcamp was announced on Sept. 28. The company did not disclose how many employees were impacted by the cuts.

Songtradr, a Santa Monica-based licensing company, had previously stated that not all Bandcamp employees would be absorbed after the platform's sale from Epic, citing the service's financial situation as the basis for workforce adjustments. [...] The sale comes as Epic cuts around 16% of its workforce, about 830 employees, in the face of lower profits that were outpaced by growing expenses.

Businesses

'I'm a Luddite - and Why You Should Be One Too' (stltoday.com) 211

Los Angeles Times technology columnist Brian Merchant has written a book about the 1811 Luddite rebellion against industrial technology, decrying "entrepreneurs and industrialists pushing for new, dubiously legal, highly automated and labor-saving modes of production."

In a new piece he applauds the spirit of the Luddites. "The kind of visionaries we need now are those who see precisely how certain technologies are causing harm and who resist them when necessary." The parallels to the modern day are everywhere. In the 1800s, entrepreneurs used technology to justify imposing a new mode of work: the factory system. In the 2000s, CEOs used technology to justify imposing a new mode of work: algorithmically organized gig labor, in which pay is lower and protections scarce. In the 1800s, hosiers and factory owners used automation less to overtly replace workers than to deskill them and drive down their wages. Digital media bosses, call center operators and studio executives are using AI in much the same way. Then, as now, the titans used technology both as a new mode of production and as an idea that allowed them to ignore long-standing laws and regulations. In the 1800s, this might have been a factory boss arguing that his mill exempted him from a statute governing apprentice labor. Today, it's a ride-hailing app that claims to be a software company so it doesn't have to play by the rules of a cab firm.

Then, as now, leaders dazzled by unregulated technologies ignored their potential downsides. Then, it might have been state-of-the-art water frames that could produce an incredible volume of yarn — but needed hundreds of vulnerable child laborers to operate. Today, it's a cellphone or a same-day delivery, made possible by thousands of human laborers toiling in often punishing conditions.

Then, as now, workers and critics sounded the alarm...

Resistance is gathering again, too. Amazon workers are joining union drives despite intense opposition. Actors and screenwriters are striking and artists and illustrators have called for a ban of generative AI in editorial outlets. Organizing, illegal in the Luddites' time, has historically proved the best bulwark against automation. But governments must also step up. They must offer robust protections and social services for those in precarious positions. They must enforce antitrust laws. Crucially, they must develop regulations to rein in the antidemocratic model of technological development wherein a handful of billionaires and venture capital firms determine the shape of the future — and who wins and loses in it.

The clothworkers of the 1800s had the right idea: They believed everyone should share in the bounty of the amazing technologies their work makes possible.

That's why I'm a Luddite — and why you should be one, too.

So whatever happened to the Luddites? The article reminds readers that the factory system "took root" and "brought prosperity for some, but it created an immiserated working class."

"The 200 years since have seen breathtaking technological innovation — but much less social innovation in how the benefits are shared."
Transportation

Privately-Owned High-Speed Rail Opens New Line in Florida, Kills Pedestrian (thepointsguy.com) 220

At 11 a.m. Friday in Orlando, Florida, a train completed its 240-mile journey from Miami, inaugurating a new line from Brightline that reaches speeds of up to 125 miles per hour and reduces the journey to just under three hours. "This is going to revolutionize transportation not just in the country and the state of Florida but right here in Central Florida and really just make our backyard bigger," Brightline's director of public affairs Katie Mitzner told a local news station.

Ironically, within hours a different Brightline train had struck and killed a pedestrian. "Brightline trains have the highest death rate in the U.S.," reports one local news station, "fatally striking 98 people since Miami-West Palm operations began — about one death for every 32,000 miles its trains travel, according to an ongoing Associated Press analysis." A police spokesperson said the death appeared to be a suicide.

"None of the accidents have been determined to be Brightline's fault," writes The Points Guy, "and the company has spent millions of dollars on safety improvements at grade crossings. It also launched a public-relations push to encourage all residents along its corridor to commit to staying safe. However, it is a very real and ongoing element of this service in Florida. We hope these efforts will continue to further reduce these incidents in communities that see frequent Brightline trains coming through."

The Points Guy also shared photos in their blog post describing what it was like to take a ride on America's only privately owned and operated inter-city passenger railroad: When the train ultimately pulled out of the station, a surreal feeling washed over me. Those of us on the inaugural service were the first passengers to ride the rails along this stretch of Florida's east coast in more than 55 years. Florida East Coast Railway, which still owns the tracks and operates frequent freight trains along them, ceased passenger service on July 31, 1968... Each seat has multiple power outlets, and the Wi-Fi truly was high-speed based on my experience and the test I ran. I was even able to successfully join (and participate in) our morning editorial team call on Zoom...

The scenery along the route was simply spectacular... With no grade crossings and fencing on both sides, we reached 125 mph for the final stretch of the journey. The cars along the highway stood no chance of keeping up as we traversed the 30-plus miles in only 18 minutes as the tower at Orlando International Airport came into view... With plans to expand to Tampa and construction underway on its planned Los Angeles-to-Las Vegas route, we likely haven't heard the last from Brightline as it seeks to transform train service in the United States.

"I think what Brightline has done here has laid the blueprint for how high-speed rail can be built in America with private dollars versus government funding," investor Ryn Rosberg told a local news site. "It's much more efficient and it gets done a lot quicker."

"There have been colorful station openings, lawsuits, threats of lawsuits, threats of legislation and yes, fatal accidents," writes the Palm Beach Post, "but Brightline train passengers can now take the train from any of its five South Florida stations to visit the Disney World, Universal Studios or Sea World tourist attractions."
The Media

Can Philanthropy Save Local Newspapers? (washingtonpost.com) 122

70 million Americans live in a county without a newspaper, according to a 2022 report cited in this editorial by the Washington Post's editorial board:

Who's to blame? The internet, mostly. Whereas deep-pocketed advertisers formerly relied on newspapers to reach their customers, they took to the audience-targeting capabilities of Facebook or Google. Web-based marketplaces also siphoned newspapers' once-robust revenue from classified ads.
But the Post emphasizes one positive new development: "a large pile of cash." In an initiative announced this month, 22 donor organizations, including the Knight Foundation and the John D. and Catherine T. MacArthur Foundation, are teaming up to provide more than $500 million to boost local news over five years — an undertaking called Press Forward... The injection of more than a half-billion dollars is sure to help the quest for a durable and replicable business model.

The even bigger imperative, however, is to elevate local news on the philanthropic food chain so that national and hometown funders prioritize this pivotal American institution. Failure on this front places more pressure on public policy solutions, and government activism mixes poorly with independent journalism...

One of the goals for Press Forward, accordingly, is building out the infrastructure — "from legal support to membership programs" — relied upon by local news providers to deliver their product. Jim Brady, vice president of journalism at the Knight Foundation, says it's easier than ever for news entrepreneurs to launch a local site because they can plug into existing technologies hammered out by their predecessors — and there's more development work still to fund on this front.

So where to go from here? Local philanthropic interests across the country could take a cue from the Press Forward partners and invest in the news organizations down the street.

Movies

Is Rotten Tomatoes 'Erratic, Reductive, and Easily Hacked'? (vulture.com) 43

Rotten Tomatoes celebrated its 25th year of assigning scores to movies based on their aggregate reviews. Now Vulture writes that Rotten Tomatoes "can make or break" movies, "with implications for how films are perceived, released, marketed, and possibly even green-lit". But unfortunately, the site "is also erratic, reductive, and easily hacked."

Vulture tells the story of a movie-publicity company contacting "obscure, often self-published critics" to say the film's teams "feel like it would benefit from more input from different critics" — while making undisclosed payments of $50 or more. A critic who asked if it was okay to pan the movie was told that "super nice" critics move their bad reviews onto sites not included in Rotten Tomatoes scores.

Vulture says that after it brought this scheme to the site's attention, Rotten Tomatoes "delisted a number of the company's movies from its website and sent a warning to writers who reviewed them." But is there a larger problem? Filmmaker Paul Schrader even opines that "Audiences are dumber. Normal people don't go through reviews like they used to. Rotten Tomatoes is something the studios can game. So they do...." A third of U.S. adults say they check Rotten Tomatoes before going to the multiplex, and while movie ads used to tout the blurbage of Jeffrey Lyons and Peter Travers, now they're more likely to boast that a film has been "Certified Fresh...."

Another problem — and where the trickery often begins — is that Rotten Tomatoes scores are posted after a movie receives only a handful of reviews, sometimes as few as five, even when those reviews are an unrepresentative sample. This is sort of like a cable-news network declaring an Election Night winner after a single county reports its results. But studios see it as a feature, since, with a little elbow grease, they can sometimes fool people into believing a movie is better than it is.

Here's how. When a studio is prepping the release of a new title, it will screen the film for critics in advance. It's a film publicist's job to organize these screenings and invite the writers they think will respond most positively. Then that publicist will set the movie's review embargo in part so that its initial Tomatometer score is as high as possible at the moment when it can have maximal benefits for word of mouth and early ticket sales... [I]n February, the Tomatometer score for Ant-Man and the Wasp: Quantumania debuted at 79 percent based on its first batch of reviews. Days later, after more critics had weighed in, its rating sank into the 40s. But the gambit may have worked. Quantumania had the best opening weekend of any movie in the Ant-Man series, at $106 million. In its second weekend, with its rottenness more firmly established, the film's grosses slid 69 percent, the steepest drop-off in Marvel history.
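The arithmetic behind those swings is simple: the Tomatometer is essentially the percentage of counted reviews marked "fresh," so a small, friendly early sample can sit far above where the score lands once everyone has weighed in. A minimal sketch (the function and the review counts are illustrative, not Rotten Tomatoes' actual implementation):

```python
def tomatometer(reviews):
    """Percent of reviews counted as 'fresh' (positive), rounded to a whole number."""
    if not reviews:
        raise ValueError("no reviews yet")
    fresh = sum(1 for is_fresh in reviews if is_fresh)
    return round(100 * fresh / len(reviews))

# Hypothetical numbers: an embargo lifts with 5 mostly friendly reviews...
early = [True, True, True, True, False]           # 4 of 5 fresh
# ...then 45 more critics weigh in, mostly negative.
all_reviews = early + [True] * 15 + [False] * 30  # 19 of 50 fresh

print(tomatometer(early))        # 80
print(tomatometer(all_reviews))  # 38
```

With only five hand-picked reviews counted, a single pan moves the score by 20 points; at 50 reviews, the same pan moves it by just 2 — which is why the timing of the first batch matters so much.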

In studios' defense, Rotten Tomatoes' hastiness in computing its scores has made it practically necessary to cork one's bat. In a strategic blunder in May, Disney held the first screening of Indiana Jones and the Dial of Destiny at Cannes, the world's snootiest film festival, from which the first 12 reviews begot an initial score of 33 percent. "What they should've done," says Publicist No. 1, "was have simultaneous screenings in the States for critics who might've been more friendly." A month and a half later, Dial of Destiny bombed at the box office even though friendly critics eventually lifted its rating to 69 percent. "They had a low Rotten Tomatoes score just sitting out there for six weeks before release, and that was deadly," says a third publicist.

Music

Apple To Acquire Major Classical Music Label BIS Records (macrumors.com) 26

Apple will acquire the major Swedish classical music record label BIS Records, intending to fold it into Apple Music Classical and Platoon. MacRumors reports: BIS Records was founded in 1973 by Robert von Bahr. The label focuses on a range of classical music, with particular focus on works that are not well represented by existing recordings. It is an award-winning name in the world of classical music, acclaimed for its vast catalog and impressive audio quality. The label celebrates its 50th anniversary this week. The company announced its impending acquisition by Apple earlier today.

BIS is set to become a part of Apple Music Classical and the Apple-owned label Platoon. Apple acquired Platoon, a London-based A&R startup focused on discovering rising music artists, in 2018. In 2021, Apple announced that it had purchased the classical music streaming service Primephonic and would be folding it into Apple Music via a new app dedicated to the genre. Apple released the Apple Music Classical app in March. The app offers a simpler interface for interacting with classical music specifically. Unlike the main Apple Music app, Apple Music Classical allows users to search by composer, work, conductor, catalog number, and more. Users can get more detailed information from editorial notes and descriptions.

The Almighty Buck

Epic's New Program Lets Developers Keep Their Revenue In Exchange For Exclusivity (theverge.com) 24

An anonymous reader quotes a report from The Verge: Epic Games will let developers keep 100 percent of their net revenues from the Epic Games Store for six months if they choose to make their games or apps exclusives for that time through its new First Run program, the company announced on Wednesday. Typically, Epic lets developers keep 88 percent of their revenues, with the company taking a 12 percent cut. For developers who launch a product through First Run, the split will return to 88 / 12 once the six months are up.

Developers who choose to participate in the Epic First Run program will see a few other benefits as well. Epic says First Run games and apps will be presented to Store users with "new exclusive badging, homepage placements, and dedicated collections" and will be featured in "relevant store campaigns including sales, events, and editorial as applicable." The program is open now, and the first products that will be eligible to be part of the program must launch on or after October 16th. [...] However, developers can be a part of First Run and still release their products on their own stores.
Here's what Epic says about which products are eligible: "A new release game or app which has not been previously released on another third-party PC store or included in a subscription service available on another third-party PC store. Games or apps with a pre-existing exclusivity deal with the Epic Games Store are not eligible for the program."
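The revenue mechanics described above reduce to a simple rule: a First Run developer keeps 100 percent of net revenue for the first six months of exclusivity, and 88 percent thereafter (the standard split). A rough sketch of the payout math (the function name and dollar figures are hypothetical):

```python
def developer_take(net_revenue, months_since_launch, first_run=False):
    """Developer's share of Epic Games Store net revenue.

    Standard split: developer keeps 88%, Epic takes a 12% cut.
    First Run: developer keeps 100% for the first six months,
    after which the split reverts to 88/12.
    """
    if first_run and months_since_launch <= 6:
        return net_revenue
    return net_revenue * 0.88

# Hypothetical: $100,000 in net revenue, in month 3 and again in month 9.
print(developer_take(100_000, 3, first_run=True))   # 100000 (keeps it all)
print(developer_take(100_000, 9, first_run=True))   # 88000.0
print(developer_take(100_000, 3, first_run=False))  # 88000.0
```

In other words, the program's upside is the waived 12 percent cut during the exclusivity window; after six months a First Run title earns the same share as any other store release.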
AI

New AP Guidelines Lay the Groundwork For AI-Assisted Newsrooms (engadget.com) 11

An anonymous reader quotes a report from Engadget: The Associated Press published standards today for generative AI use in its newsroom. The organization, which has a licensing agreement with ChatGPT maker OpenAI, listed a fairly restrictive and common-sense list of measures around the burgeoning tech while cautioning its staff not to use AI to make publishable content. Although nothing in the new guidelines is particularly controversial, less scrupulous outlets could view the AP's blessing as a license to use generative AI more excessively or underhandedly.

The organization's AI manifesto underscores a belief that artificial intelligence content should be treated as the flawed tool that it is -- not a replacement for trained writers, editors and reporters exercising their best judgment. "We do not see AI as a replacement of journalists in any way," the AP's Vice President for Standards and Inclusion, Amanda Barrett, wrote in an article about its approach to AI today. "It is the responsibility of AP journalists to be accountable for the accuracy and fairness of the information we share." The article directs its journalists to view AI-generated content as "unvetted source material," to which editorial staff "must apply their editorial judgment and AP's sourcing standards when considering any information for publication." It says employees may "experiment with ChatGPT with caution" but not create publishable content with it. That includes images, too. "In accordance with our standards, we do not alter any elements of our photos, video or audio," it states. "Therefore, we do not allow the use of generative AI to add or subtract any elements." However, it carved an exception for stories where AI illustrations or art are a story's subject -- and even then, it has to be clearly labeled as such.

Barrett warns about AI's potential for spreading misinformation. To prevent the accidental publishing of anything AI-created that appears authentic, she says AP journalists "should exercise the same caution and skepticism they would normally, including trying to identify the source of the original content, doing a reverse image search to help verify an image's origin, and checking for reports with similar content from trusted media." To protect privacy, the guidelines also prohibit writers from entering "confidential or sensitive information into AI tools." Although that's a relatively common-sense and uncontroversial set of rules, other media outlets have been less discerning. [...] It's not hard to imagine other outlets -- desperate for an edge in the highly competitive media landscape -- viewing the AP's (tightly restricted) AI use as a green light to make robot journalism a central figure in their newsrooms, publishing poorly edited / inaccurate content or failing to label AI-generated work as such.
Further reading: NYT Prohibits Using Its Content To Train AI Models
Google

CNET Deletes Thousands of Old Articles To Game Google Search (gizmodo.com) 48

According to Gizmodo, CNET has deleted thousands of old articles over the past few months in a bid to improve its performance in Google Search results. From the report: Archived copies of CNET's author pages show the company deleted small batches of articles prior to the second half of July, but then the pace increased. Thousands of articles disappeared in recent weeks. A CNET representative confirmed that the company was culling stories but declined to share exactly how many it has taken down. The move adds to recent controversies over CNET's editorial strategy, which has included layoffs and experiments with error-riddled articles written by AI chatbots.

"Removing content from our site is not a decision we take lightly. Our teams analyze many data points to determine whether there are pages on CNET that are not currently serving a meaningful audience. This is an industry-wide best practice for large sites like ours that are primarily driven by SEO traffic," said Taylor Canada, CNET's senior director of marketing and communications. "In an ideal world, we would leave all of our content on our site in perpetuity. Unfortunately, we are penalized by the modern internet for leaving all previously published content live on our site."

CNET shared an internal memo about the practice. Removing, redirecting, or refreshing irrelevant or unhelpful URLs "sends a signal to Google that says CNET is fresh, relevant and worthy of being placed higher than our competitors in search results," the document reads. According to the memo about the "content pruning," the company considers a number of factors before it "deprecates" an article, including SEO, the age and length of the story, traffic to the article, and how frequently Google crawls the page. The company says it weighs historical significance and other editorial factors before an article is taken down. When an article is slated for deletion, CNET says it maintains its own copy, and sends the story to the Internet Archive's Wayback Machine. The company also says current staffers whose articles are deprecated will be alerted at least 10 days ahead of time.
What does Google have to say about this? According to the company's Public Liaison for Google Search, Danny Sullivan, Google recommends against the practice. "Are you deleting content from your site because you somehow believe Google doesn't like 'old' content? That's not a thing! Our guidance doesn't encourage this," Sullivan said in a series of tweets.

If a website has an individual page with outdated content, that page "isn't likely to rank well. Removing it might mean, if you have a massive site, that we're better able to crawl other content on the site. But it doesn't mean we go, 'Oh, now the whole site is so much better' because of what happens with an individual page," Sullivan wrote. "Just don't assume that deleting something only because it's old will improve your site's SEO magically."
