Social Networks

Reddit Grows, Seeks More AI Deals, Plans 'Award' Shops, and Gets Sued (yahoo.com) 45

Reddit reported its first results since going public in late March. Yahoo Finance reports: Daily active users increased 37% year over year to 82.7 million. Weekly active unique users rose 40% from the prior year. Total revenue improved 48% to $243 million, nearly doubling the growth rate from the prior quarter, due to strength in advertising. The company delivered adjusted operating profits of $10 million, versus a $50.2 million loss a year ago. [Reddit CEO Steve] Huffman declined to say when the company would be profitable on a net income basis, noting it's a focus for the management team. Other areas of focus include rolling out a new user interface this year, introducing shopping capabilities, and searching for another artificial intelligence content licensing deal like the one with Google.
Bloomberg notes that already Reddit "has signed licensing agreements worth $203 million in total, with terms ranging from two to three years. The company generated about $20 million from AI content deals last quarter, and expects to bring in more than $60 million by the end of the year."

And elsewhere Bloomberg writes that Reddit "plans to expand its revenue streams outside of advertising into what Huffman calls the 'user economy' — users making money from others on the platform... " In the coming months Reddit plans to launch new versions of awards, which are digital gifts users can give to each other, along with other products... Reddit also plans to continue striking data licensing deals with artificial intelligence companies, expanding into international markets and evaluating potential acquisition targets in areas such as search, he said.
Meanwhile, ZDNet notes that this week a Reddit announcement "introduced a new public content policy that lays out a framework for how partners and third parties can access user-posted content on its site." The post explains that more and more companies are using unsavory means to access user data in bulk, including Reddit posts. Once a company gets this data, there's no limit to what it can do with it. Reddit will continue to block "bad actors" that use unauthorized methods to get data, the company says, but it's taking additional steps to keep users safe from the site's partners.... Reddit still supports using its data for research: It's creating a new subreddit — r/reddit4researchers — to support these initiatives, and partnering with OpenMined to help improve research. Private data is, however, going to stay private.

If a company wants to use Reddit data for commercial purposes, including advertising or training AI, it will have to pay. Reddit made this clear by saying, "If you're interested in using Reddit data to power, augment, or enhance your product or service for any commercial purposes, we require a contract." To be clear, Reddit is still selling users' data — it's just making sure that unscrupulous actors have a tougher time accessing that data for free and researchers have an easier time finding what they need.

And finally, there's some court action, according to the Register. Reddit "was sued by an unhappy advertiser who claims that internet giga-forum sold ads but provided no way to verify that real people were responsible for clicking on them." The complaint [PDF] was filed this week in a U.S. federal court in northern California on behalf of LevelFields, a Virginia-based investment research platform that relies on AI. It says the biz booked pay-per-click ads on the discussion site starting September 2022... That arrangement called for Reddit to use reasonable means to ensure that LevelFields's ads were delivered to and clicked on by actual people rather than bots and the like. But according to the complaint, Reddit broke that contract...

LevelFields argues that Reddit is in a particularly good position to track click fraud because it's serving ads on its own site, as opposed to third-party properties where it may have less visibility into network traffic... Nonetheless, LevelFields's effort to obtain IP address data to verify the ads it was billed for went unfulfilled. The social media site "provided click logs without IP addresses," the complaint says. "Reddit represented that it was not able to provide IP addresses."
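The dispute turns on why IP addresses matter for verifying clicks. As a purely hypothetical sketch (the function name, log format, and threshold below are illustrative, not anything from the filing), an advertiser holding IP-annotated click logs could at least flag implausibly repetitive sources, while logs stripped of IPs leave every click unattributable:

```python
from collections import Counter

def flag_suspect_ips(click_log, threshold=20):
    """Count clicks per IP and flag any IP exceeding the threshold.

    click_log: list of dicts like {"ip": "203.0.113.7"}. Entries
    missing an "ip" field (as in the logs Reddit reportedly provided)
    cannot be attributed, so they are tallied separately.
    """
    counts = Counter()
    unattributable = 0
    for click in click_log:
        ip = click.get("ip")
        if ip is None:
            unattributable += 1
        else:
            counts[ip] += 1
    # Any single IP with an implausible number of clicks is suspect.
    suspects = {ip: n for ip, n in counts.items() if n > threshold}
    return suspects, unattributable
```

With the "ip" field absent, every click lands in the unattributable bucket, which is essentially the position LevelFields says the IP-free click logs left it in.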

"The plaintiffs aspire to have their claim certified as a class action," the article adds — along with an interesting statistic.

"According to Juniper Research, 22 percent of ad spending last year was lost to click fraud, amounting to $84 billion."
AI

OpenAI's Sam Altman on iPhones, Music, Training Data, and Apple's Controversial iPad Ad (youtube.com) 37

OpenAI CEO Sam Altman gave an hour-long interview to the "All-In" podcast (hosted by Chamath Palihapitiya, Jason Calacanis, David Sacks and David Friedberg). And speaking on technology's advance, Altman said "Phones are unbelievably good.... I personally think the iPhone is like the greatest piece of technology humanity has ever made. It's really a wonderful product."


Q: What comes after it?

Altman: I don't know. I mean, that was what I was saying. It's so good, that to get beyond it, I think the bar is quite high.

Q: You've been working with Jony Ive on something, right?

Altman: We've been discussing ideas, but I don't — like, if I knew...


Altman said later he thought voice interaction "feels like a different way to use a computer."

But the conversation turned to Apple in another way. It happened in a larger conversation where Altman said OpenAI has "currently made the decision not to do music, and partly because exactly these questions of where you draw the lines..."

Altman: Even the world in which — if we went and, let's say we paid 10,000 musicians to create a bunch of music, just to make a great training set, where the music model could learn everything about song structure and what makes a good, catchy beat and everything else, and only trained on that — let's say we could still make a great music model, which maybe we could. I was posing that as a thought experiment to musicians, and they were like, "Well, I can't object to that on any principle basis at that point — and yet there's still something I don't like about it." Now, that's not a reason not to do it, um, necessarily, but it is — did you see that ad that Apple put out... of like squishing all of human creativity down into one really thin iPad...?

There's something about — I'm obviously hugely positive on AI — but there is something that I think is beautiful about human creativity and human artistic expression. And, you know, for an AI that just does better science, like, "Great. Bring that on." But an AI that is going to do this deeply beautiful human creative expression? I think we should figure out — it's going to happen. It's going to be a tool that will lead us to greater creative heights. But I think we should figure out how to do it in a way that preserves the spirit of what we all care about here.

What about creators whose copyrighted materials are used for training data? Altman had a ready answer — but also some predictions for the future. "On fair use, I think we have a very reasonable position under the current law. But I think AI is so different that for things like art, we'll need to think about them in different ways..." Altman: I think the conversation has historically been very caught up on training data, but it will increasingly become more about what happens at inference time, as training data becomes less valuable and what the system does accessing information in context, in real-time... what happens at inference time will become more debated, and what the new economic model is there.
Altman gave the example of an AI which was never trained on any Taylor Swift songs — but could still respond to a prompt requesting a song in her style. Altman: And then the question is, should that model, even if it were never trained on any Taylor Swift song whatsoever, be allowed to do that? And if so, how should Taylor get paid? So I think there's an opt-in, opt-out in that case, first of all — and then there's an economic model.
Altman also wondered whether there are lessons in the history and economics of music sampling...
Red Hat Software

RHEL (and Rocky and Alma Linux) 9.4 Released - Plus AI Offerings (almalinux.org) 19

Red Hat Enterprise Linux 9.4 has been released. But also released is Rocky Linux 9.4, reports 9to5Linux: Rocky Linux 9.4 also adds openSUSE's KIWI next-generation appliance builder as a new image build workflow and process for building images that are feature complete with the old images... Under the hood, Rocky Linux 9.4 includes the same updated components from the upstream Red Hat Enterprise Linux 9.4.
This week also saw the release of Alma Linux 9.4 stable (the "forever-free enterprise Linux distribution... binary compatible with RHEL"). The Register points out that while Alma Linux is "still supporting some aging hardware that the official RHEL 9.4 drops, what's new is largely the same in them both."

And last week also saw the launch of the AlmaLinux High-Performance Computing and AI Special Interest Group (SIG). HPCWire reports: "AlmaLinux's status as a community-driven enterprise Linux holds incredible promise for the future of HPC and AI," said Hayden Barnes, SIG leader and Senior Open Source Community Manager for AI Software at HPE. "Its transparency and stability empowers researchers, developers and organizations to collaborate, customize and optimize their computing environments, fostering a culture of innovation and accelerating breakthroughs in scientific research and cutting-edge AI/ML."
And this week, InfoWorld reported: Red Hat has launched Red Hat Enterprise Linux AI (RHEL AI), described as a foundation model platform that allows users to more seamlessly develop and deploy generative AI models. Announced May 7 and available now as a developer preview, RHEL AI includes the Granite family of open-source large language models (LLMs) from IBM, InstructLab model alignment tools based on the LAB (Large-Scale Alignment for Chatbots) methodology, and a community-driven approach to model development through the InstructLab project, Red Hat said.
AI

Did OpenAI, Google and Meta 'Cut Corners' to Harvest AI Training Data? (indiatimes.com) 58

What happened when OpenAI ran out of English-language training data in 2021?

They just created a speech recognition tool that could transcribe the audio from YouTube videos, reports The New York Times, as part of an investigation arguing that tech companies "including OpenAI, Google and Meta have cut corners, ignored corporate policies and debated bending the law" in their search for AI training data. [Alternate URL here.] Some OpenAI employees discussed how such a move might go against YouTube's rules, three people with knowledge of the conversations said. YouTube, which is owned by Google, prohibits use of its videos for applications that are "independent" of the video platform. Ultimately, an OpenAI team transcribed more than 1 million hours of YouTube videos, the people said. The team included Greg Brockman, OpenAI's president, who personally helped collect the videos, two of the people said. The texts were then fed into a system called GPT-4...

At Meta, which owns Facebook and Instagram, managers, lawyers and engineers last year discussed buying the publishing house Simon & Schuster to procure long works, according to recordings of internal meetings obtained by the Times. They also conferred on gathering copyrighted data from across the internet, even if that meant facing lawsuits. Negotiating licenses with publishers, artists, musicians and the news industry would take too long, they said.

Like OpenAI, Google transcribed YouTube videos to harvest text for its AI models, five people with knowledge of the company's practices said. That potentially violated the copyrights to the videos, which belong to their creators. Last year, Google also broadened its terms of service. One motivation for the change, according to members of the company's privacy team and an internal message viewed by the Times, was to allow Google to be able to tap publicly available Google Docs, restaurant reviews on Google Maps and other online material for more of its AI products...

Some Google employees were aware that OpenAI had harvested YouTube videos for data, two people with knowledge of the companies said. But they didn't stop OpenAI because Google had also used transcripts of YouTube videos to train its AI models, the people said. That practice may have violated the copyrights of YouTube creators. So if Google made a fuss about OpenAI, there might be a public outcry against its own methods, the people said.

The article adds that some tech companies are now even developing "synthetic" information to train AI.

"This is not organic data created by humans, but text, images and code that AI models produce — in other words, the systems learn from what they themselves generate."
AI

Apple Will Revamp Siri To Catch Up To Its Chatbot Competitors (nytimes.com) 22

An anonymous reader quotes a report from the New York Times: Apple's top software executives decided early last year that Siri, the company's virtual assistant, needed a brain transplant. The decision came after the executives Craig Federighi and John Giannandrea spent weeks testing OpenAI's new chatbot, ChatGPT. The product's use of generative artificial intelligence, which can write poetry, create computer code and answer complex questions, made Siri look antiquated, said two people familiar with the company's work, who didn't have permission to speak publicly. Introduced in 2011 as the original virtual assistant in every iPhone, Siri had been limited for years to individual requests and had never been able to follow a conversation. It often misunderstood questions. ChatGPT, on the other hand, knew that if someone asked for the weather in San Francisco and then said, "What about New York?" that user wanted another forecast.

The realization that new technology had leapfrogged Siri set in motion the tech giant's most significant reorganization in more than a decade. Determined to catch up in the tech industry's A.I. race, Apple has made generative A.I. a tent pole project -- the company's special, internal label that it uses to organize employees around once-in-a-decade initiatives. Apple is expected to show off its A.I. work at its annual developers conference on June 10 when it releases an improved Siri that is more conversational and versatile, according to three people familiar with the company's work, who didn't have permission to speak publicly. Siri's underlying technology will include a new generative A.I. system that will allow it to chat rather than respond to questions one at a time. The update to Siri is at the forefront of a broader effort to embrace generative A.I. across Apple's business. The company is also increasing the memory in this year's iPhones to support its new Siri capabilities. And it has discussed licensing complementary A.I. models that power chatbots from several companies, including Google, Cohere and OpenAI.
Further reading: Apple Might Bring AI Transcription To Voice Memos and Notes
AI

Bumble's Dating 'AI Concierge' Will Date Hundreds of Other People's 'Concierges' For You (fortune.com) 63

An anonymous reader quotes a report from Fortune: Imagine this: you've "dated" 600 people in San Francisco without having typed a word to any of them. Instead, a busy little bot has completed the mindless 'getting-to-know-you' chatter on your behalf, and has told you which people you should actually get off the couch to meet. That's the future of dating, according to Whitney Wolfe Herd -- and she'd know. Wolfe Herd is the founder and executive chair of Bumble, a meeting and networking platform that prompted women to make the first move. While the platform has now changed this aspect of its algorithm, Wolfe Herd said the company would always keep its "North Star" in mind: "A safer, kinder digital platform for more healthy and more equitable relationships. Always putting women in the driver's seat -- not to put men down -- but to actually recalibrate the way we all treat each other."

Like any platform, Bumble is now navigating itself in a world of AI -- which means rethinking how humans will interact with each other in an increasing age of chatbots. Wolfe Herd told the Bloomberg Technology Summit in San Francisco this week it could streamline the matching process. "If you want to get really out there, there is a world where your [AI] dating concierge could go and date for you with other dating concierges," she told host Emily Chang. "Truly. And then you don't have to talk to 600 people. It will scan all of San Francisco for you and say: 'These are the three people you really ought to meet.'" And forget catch-ups with friends, swapping notes on your love life -- AI can be that metaphorical shoulder to cry on.

Artificial intelligence -- which has seen massive amounts of investment since OpenAI disrupted the market with its ChatGPT large language model -- can help coach individuals on how to date and present themselves in the best light to potential partners. "So, for example, you could in the near future be talking to your AI dating concierge and you could share your insecurities," Wolfe Herd explained. "'I've just come out of a break-up, I've got commitment issues,' and it could help you train yourself into a better way of thinking about yourself." "Then it could give you productive tips for communicating with other people," she added. If these features do indeed come to Bumble in the future, they will impact the experience of millions.

AI

Apple Might Bring AI Transcription To Voice Memos and Notes (appleinsider.com) 18

Apple's plans for AI on the iPhone could bring real-time transcription to its Voice Memos and Notes apps, according to a report from AppleInsider: People familiar with the matter have told us that Apple has been working on AI-powered summarization and greatly enhanced audio transcription for several of its next-gen operating systems. The new features are expected to enable significant improvements in efficiency for users of its staple Notes, Voice Memos, and other apps. Apple is currently testing the capabilities as feature additions to several app updates scheduled to arrive with the release of iOS 18 later in 2024. They're also expected to make their way to the corresponding apps in macOS 15 and iPadOS 18.
AI

CEO of World's Biggest Ad Firm Targeted By Deepfake Scam 11

The head of the world's biggest advertising group was the target of an elaborate deepfake scam that involved an AI voice clone. From a report: The CEO of WPP, Mark Read, detailed the attempted fraud in a recent email to leadership, warning others at the company to look out for calls claiming to be from top executives. Fraudsters created a WhatsApp account with a publicly available image of Read and used it to set up a Microsoft Teams meeting that appeared to be with him and another senior WPP executive, according to the email obtained by the Guardian.

During the meeting, the impostors deployed a voice clone of the executive as well as YouTube footage of them. The scammers impersonated Read off-camera using the meeting's chat window. The scam, which was unsuccessful, targeted an "agency leader," asking them to set up a new business in an attempt to solicit money and personal details. "Fortunately the attackers were not successful," Read wrote in the email. "We all need to be vigilant to the techniques that go beyond emails to take advantage of virtual meetings, AI and deepfakes."
AI

Will Chatbots Eat India's IT Industry? (economist.com) 61

Economist: What is the ideal job to outsource to AI? Today's AIs, in particular the ChatGPT-like generative sort, have a leaky memory, cannot handle physical objects and are worse than humans at interacting with humans. Where they excel is in manipulating numbers and symbols, especially within well-defined tasks such as writing bits of computer code. This happens to be the forte of giant existing outsourcing businesses -- India's information-technology companies. Seven of them, including the two biggest, Tata Consultancy Services (TCS) and Infosys, collectively laid off 75,000 employees last year. The firms say this reduction, equivalent to about 4% of their combined workforce, has nothing to do with AI and reflects the broader slowdown in the tech sector. In reality, they say, AI is an opportunity, not a threat.

Business services are critical to India's economy. The sector employs 5 million people, or less than 1% of Indian workers, but contributes 7% of GDP and nearly a quarter of total exports. Simple services such as call centres account for a fifth of those foreign revenues. Three-fifths are generated by IT services such as moving data to the computing cloud. The rest comes from sophisticated processes tailored for individual clients. Capital Economics, a research firm, calculates that an extreme case, in which AI wiped out the industry entirely and the resources were not reallocated, would knock nearly one percentage point off annual GDP growth over the next decade in India. In a likelier scenario of "a slow demise," the country would grow 0.3-0.4 percentage points less fast. The simplest jobs are the most vulnerable. Data from Upwork, a freelancing platform, shows that earnings for uncomplicated writing tasks like copy-editing fell by 5% between ChatGPT's launch in November 2022 and April 2023, relative to roles less affected by AI. In the year after DALL-E 2, an image-creation model, was launched in April 2022, wages for jobs like graphic design fell by 7-14%. Some companies are using AI to deal with simple customer-service requests and repetitive data-processing tasks. In April K. Krithivasan, chief executive of TCS, predicted that "maybe a year or so down the line" chatbots could do much of the work of a call-centre employee. In time, he mused, AI could foretell gripes and alleviate them before a customer ever picks up the phone.
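To put Capital Economics' "0.3-0.4 percentage points less fast" figure in perspective, a quick compounding exercise shows how a small annual drag accumulates over a decade. The 6% baseline growth rate used below is an illustrative assumption, not a number from the article:

```python
def gdp_level_after(years, annual_growth):
    """Compound an economy's GDP level (starting at 1.0) over several years."""
    level = 1.0
    for _ in range(years):
        level *= 1 + annual_growth
    return level

# Illustrative: 6% baseline growth vs. 0.35 points slower ("slow demise").
baseline = gdp_level_after(10, 0.06)
slower = gdp_level_after(10, 0.0565)
shortfall = 1 - slower / baseline  # fraction of GDP forgone after a decade
```

At these assumed rates, a 0.35-point annual drag leaves the economy roughly 3% smaller after ten years than it would otherwise have been.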

Medicine

The Most Detailed 3D Reconstruction of Human Brain Tissue (interestingengineering.com) 25

An anonymous reader quotes a report from Interesting Engineering: Imagine exploring the intricate world within a single cubic millimeter of human brain tissue. It might seem insignificant, but within that tiny space lies a universe of complexity -- 57,000 individual cells, 230 millimeters of blood vessels, and a staggering 150 million synapses, the junctions where neurons communicate. All this information translates to a mind-boggling 1,400 terabytes of data. That's the kind of groundbreaking achievement researchers from Harvard and Google have just accomplished.

Leading the charge at Harvard is Professor Jeff Lichtman, a renowned expert in brain structure. Partnering with Google AI, Lichtman's team has co-created the most detailed 3D reconstruction of a human brain fragment to date. This intricate map, published in Science, offers an unprecedented view of the human temporal cortex, the region responsible for memory and other higher functions. Envision a piece of brain tissue roughly half the size of a rice grain but magnified to reveal every cell and its web of neural connections in vivid detail. This remarkable feat is the culmination of nearly a decade of collaboration between Harvard and Google. Lichtman's expertise in electron microscopy imaging is combined with Google's cutting-edge AI algorithms. [...]

The newly published map in Science reveals previously unseen details of brain structure. One such discovery is a rare but powerful set of axons, each connected by up to 50 synapses, potentially influencing a significant number of neighboring neurons. The team also encountered unexpected structures, like a small number of axons forming intricate whorls. Since the sample came from a patient with epilepsy, it's unclear if these formations are specific to the condition or simply uncommon occurrences.

AI

Apple To Power AI Tools With In-House Server Chips This Year (bloomberg.com) 17

Apple will deliver some of its upcoming AI features this year via data centers equipped with its own in-house processors, part of a sweeping effort to infuse its devices with AI capabilities. From a report: The company is placing high-end chips -- similar to ones it designed for the Mac -- in cloud-computing servers designed to process the most advanced AI tasks coming to Apple devices, according to people familiar with the matter. Simpler AI-related features will be processed directly on iPhones, iPads and Macs, said the people, who asked not to be identified because the plan is still under wraps.

The move is part of Apple's much-anticipated push into generative artificial intelligence -- the technology behind ChatGPT and other popular tools. The company is playing catch-up with Big Tech rivals in the area but is poised to lay out an ambitious AI strategy at its Worldwide Developers Conference on June 10. Apple's plan to use its own chips and process AI tasks in the cloud was hatched about three years ago, but the company accelerated the timeline after the AI craze -- fueled by OpenAI's ChatGPT and Google's Gemini -- forced it to move more quickly. The first AI server chips will be the M2 Ultra, which was launched last year as part of the Mac Pro and Mac Studio computers, though the company is already eyeing future versions based on the M4 chip.

IT

OpenAI Considers Allowing Users To Create AI-Generated Pornography (theguardian.com) 108

OpenAI, the company behind ChatGPT, is exploring whether users should be allowed to create AI-generated pornography and other explicit content with its products. From a report: While the company stressed that its ban on deepfakes would continue to apply to adult material, campaigners suggested the proposal undermined its mission statement to produce "safe and beneficial" AI. OpenAI, which is also the developer of the DALL-E image generator, revealed it was considering letting developers and users "responsibly" create what it termed not-safe-for-work (NSFW) content through its products. OpenAI said this could include "erotica, extreme gore, slurs, and unsolicited profanity."

It said: "We're exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts ... We look forward to better understanding user and societal expectations of model behaviour in this area." The proposal was published as part of an OpenAI document discussing how it develops its AI tools. Joanne Jang, an employee at the San Francisco-based company who worked on the document, told the US news organisation NPR that OpenAI wanted to start a discussion about whether the generation of erotic text and nude images should always be banned from its products. However, she stressed that deepfakes would not be allowed.

China

Deepfakes of Your Dead Loved Ones Are a Booming Chinese Business (technologyreview.com) 57

An anonymous reader quotes a report from MIT Technology Review: Once a week, Sun Kai has a video call with his mother. He opens up about work, the pressures he faces as a middle-aged man, and thoughts that he doesn't even discuss with his wife. His mother will occasionally make a comment, like telling him to take care of himself -- he's her only child. But mostly, she just listens. That's because Sun's mother died five years ago. And the person he's talking to isn't actually a person, but a digital replica he made of her -- a moving image that can conduct basic conversations. They've been talking for a few years now. After she died of a sudden illness in 2019, Sun wanted to find a way to keep their connection alive. So he turned to a team at Silicon Intelligence, an AI company based in Nanjing, China, that he cofounded in 2017. He provided them with a photo of her and some audio clips from their WeChat conversations. While the company was mostly focused on audio generation, the staff spent four months researching synthetic tools and generated an avatar with the data Sun provided. Then he was able to see and talk to a digital version of his mom via an app on his phone.

"My mom didn't seem very natural, but I still heard the words that she often said: 'Have you eaten yet?'" Sun recalls of the first interaction. Because generative AI was a nascent technology at the time, the replica of his mom can say only a few pre-written lines. But Sun says that's what she was like anyway. "She would always repeat those questions over and over again, and it made me very emotional when I heard it," he says. There are plenty of people like Sun who want to use AI to preserve, animate, and interact with lost loved ones as they mourn and try to heal. The market is particularly strong in China, where at least half a dozen companies are now offering such technologies and thousands of people have already paid for them. In fact, the avatars are the newest manifestation of a cultural tradition: Chinese people have always taken solace from confiding in the dead.

The technology isn't perfect -- avatars can still be stiff and robotic -- but it's maturing, and more tools are becoming available through more companies. In turn, the price of "resurrecting" someone -- also called creating "digital immortality" in the Chinese industry -- has dropped significantly. Now this technology is becoming accessible to the general public. Some people question whether interacting with AI replicas of the dead is actually a healthy way to process grief, and it's not entirely clear what the legal and ethical implications of this technology may be. For now, the idea still makes a lot of people uncomfortable. But as Silicon Intelligence's other cofounder, CEO Sima Huapeng, says, "Even if only 1% of Chinese people can accept [AI cloning of the dead], that's still a huge market."

AI

Researchers Warned Against Using AI To Peer Review Academic Papers (semafor.com) 17

Researchers should not be using tools like ChatGPT to automatically peer review papers, warned organizers of top AI conferences and academic publishers worried about maintaining intellectual integrity. From a report: With recent advances in large language models, researchers have been increasingly using them to write peer reviews -- a time-honored academic tradition that examines new research and assesses its merits, showing a person's work has been vetted by other experts in the field. Asking ChatGPT to analyze manuscripts and critique research that the reviewer has not actually read, they warn, undermines the peer review process. To tackle the problem, AI and machine learning conferences are now thinking about updating their policies, as some guidelines don't explicitly ban the use of AI to process manuscripts, and the language can be fuzzy.

The Conference and Workshop on Neural Information Processing Systems (NeurIPS) is considering setting up a committee to determine whether it should update its policies around using LLMs for peer review, a spokesperson told Semafor. At NeurIPS, researchers should not "share submissions with anyone without prior approval" for example, while the ethics code at the International Conference on Learning Representations (ICLR), whose annual confab kicked off Tuesday, states that "LLMs are not eligible for authorship." Representatives from NeurIPS and ICLR said "anyone" includes AI, and that authorship covers both papers and peer review comments. A spokesperson for Springer Nature, an academic publishing company best known for its top research journal Nature, said that experts are required to evaluate research and leaving it to AI is risky.

Programming

Stack Overflow is Feeding Programmers' Answers To AI, Whether They Like It or Not 90

Stack Overflow's new deal giving OpenAI access to its API as a source of data has rankled users who posted their questions and answers about coding problems in conversations with other humans. From a report: Users say that when they attempt to alter their posts in protest, the site is retaliating by reversing the alterations and suspending the users who carried them out.

A programmer named Ben posted a screenshot yesterday of the change history for a post seeking programming advice, which they'd updated to say that they had removed the question to protest the OpenAI deal. "The move steals the labour of everyone who contributed to Stack Overflow with no way to opt-out," read the updated post. The text was reverted less than an hour later. A moderator message Ben also included says that Stack Overflow posts become "part of the collective efforts" of other contributors once made and that they should only be removed "under extraordinary circumstances." The moderation team then said it was suspending his account for a week while it reached out "to avoid any further misunderstandings."
AI

Google DeepMind's 'Leap Forward' in AI Could Unlock Secrets of Biology (theguardian.com) 29

Researchers have hailed another "leap forward" for AI after Google DeepMind unveiled the latest version of its AlphaFold program, which can predict how proteins behave in the complex symphony of life. From a report: The breakthrough promises to shed fresh light on the biological machinery that underpins living organisms and drive advances in fields from antibiotics and cancer therapy to new materials and resilient crops. "It's a big milestone for us," said Demis Hassabis, the chief executive of Google DeepMind and of the spin-off, Isomorphic Labs, which co-developed AlphaFold3. "Biology is a dynamic system and you have to understand how properties of biology emerge through the interactions between different molecules."

Earlier versions of AlphaFold focused on predicting the 3D structures of 200 million proteins, the building blocks of life, from their chemical constituents. Knowing what shape a protein takes is crucial because it determines how the protein will function -- or malfunction -- inside a living organism. AlphaFold3 was trained on a global database of 3D molecular structures and goes a step further by predicting how proteins will interact with the other molecules and ions they encounter. When asked to make a prediction, the program starts with a cloud of atoms and steadily reshapes it into the most accurate predicted structure. Writing in Nature, the researchers describe how AlphaFold3 can predict how proteins interact with other proteins, ions, strands of genetic code, and smaller molecules, such as those developed for medicines. In tests, the program's accuracy varied from 62% to 76%.
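The "cloud of atoms steadily reshaped into a structure" description refers to a diffusion-style generative process. AlphaFold3's actual model is far more sophisticated, but as a loose, purely illustrative sketch of the idea -- every name, coordinate, and constant below is invented, not from DeepMind's code -- the iterative refinement can be caricatured like this:

```python
import random

def denoise_toy(target, steps=50, noise_scale=5.0, seed=0):
    """Toy diffusion-style refinement: start from a noisy 'cloud' of 3D
    points and iteratively nudge each point toward the target structure.
    (In the real model, a learned network -- not the known answer --
    supplies the direction of each update.)"""
    rng = random.Random(seed)
    # Initial cloud: target positions heavily perturbed by Gaussian noise.
    cloud = [[c + rng.gauss(0, noise_scale) for c in atom] for atom in target]
    for _ in range(steps):
        alpha = 0.2  # fraction of the remaining displacement removed per step
        cloud = [[p + alpha * (t - p) for p, t in zip(atom, tgt)]
                 for atom, tgt in zip(cloud, target)]
    return cloud

# Hypothetical 3-atom 'structure' (coordinates in arbitrary units).
target = [[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [1.5, 1.5, 0.0]]
result = denoise_toy(target)
max_err = max(abs(p - t)
              for atom, tgt in zip(result, target)
              for p, t in zip(atom, tgt))
```

After 50 refinement steps the cloud has collapsed onto the target geometry; the toy converges only because it cheats by looking at the answer, which is exactly the role the trained network plays in the real system.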

Apple

Apple Slammed By Users Over iPad Pro 'Crush' Ad (venturebeat.com) 172

Less than 24 hours after Apple held a special event to unveil the new, record-thin (0.20 inch, the thinnest Apple device yet) iPad Pro with the M4 chip inside, which the company says is optimized for AI, it is facing a loud and fast-spreading public backlash to one of its new marquee video advertisements promoting the device -- a spot called "Crush." VentureBeat: The video features a giant industrial hydraulic press machine -- a device category famous for appearing in viral videos over the last decade and a half -- literally pressing down upon and destroying dozens of objects and creative instruments, from trumpets to cans of paint. The ad concludes with the press lifting to reveal these objects have somehow been transformed into a new iPad Pro. The metaphor and messaging are pretty obvious: the iPad Pro can subsume and replace all these older, legacy instruments and technologies, in a sleeker, more portable, and more powerful form factor than ever before.

It's analogous to observations and advertisements other fans and creatives have made in the past about how PCs and smartphones replaced nearly all the individual gadgets of yore -- stereo radios/boom boxes, journals, calculators, drawing pads, typewriters, video cameras -- by offering many of the same core capabilities in a smaller, unified, more portable form factor. [...] People are revolted by the bluntness of Apple's metaphor: the destruction of beloved traditional instruments and objects that people hold in high esteem and invest with intangible value for their creative potential, and the overarching, perhaps unintentional, message that Apple wants to literally flatten creativity and violently crush the creative tools of yesterday in favor of a multi-hundred-dollar piece of luxury technology whose operating system and ecosystem of applications it tightly controls and restricts.

United States

US Eyes Curbs on China's Access To AI Software Behind Apps Like ChatGPT (reuters.com) 27

The Biden administration is poised to open up a new front in its effort to safeguard U.S. AI from China with preliminary plans to place guardrails around the most advanced AI models, the core software of artificial intelligence systems like ChatGPT, Reuters reported Wednesday. From the report: The Commerce Department is considering a new regulatory push to restrict the export of proprietary or closed-source AI models, whose software and training data are kept under wraps, three people familiar with the matter said. Any action would complement a series of measures put in place over the last two years to block the export of sophisticated AI chips to China, in an effort to slow Beijing's development of the cutting-edge technology for military purposes. Even so, it will be hard for regulators to keep pace with the industry's fast-moving developments.

Currently, nothing is stopping U.S. AI giants like Microsoft-backed OpenAI, Alphabet's Google DeepMind and rival Anthropic, which have developed some of the most powerful closed source AI models, from selling them to almost anyone in the world without government oversight. Government and private sector researchers worry U.S. adversaries could use the models, which mine vast amounts of text and images to summarize information and generate content, to wage aggressive cyber attacks or even create potent biological weapons. To develop an export control on AI models, the sources said the U.S. may turn to a threshold contained in an AI executive order issued last October that is based on the amount of computing power it takes to train a model. When that level is reached, a developer must report its AI model development plans and provide test results to the Commerce Department.
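The threshold mechanism described above is simple in principle: estimate how much compute a training run consumes, and trigger the reporting requirement once it crosses the line set in the executive order. A minimal sketch, assuming the commonly cited ~6 FLOPs per parameter per training token rule of thumb and treating the threshold value as illustrative:

```python
# Illustrative only: the rule-of-thumb estimate (~6 FLOPs per parameter
# per token) and the threshold value are approximations, not the
# regulation's exact accounting method.
REPORTING_THRESHOLD_OPS = 1e26  # order of magnitude cited for the Oct. 2023 executive order

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate of total training compute."""
    return 6.0 * n_params * n_tokens

def requires_reporting(n_params: float, n_tokens: float) -> bool:
    return training_flops(n_params, n_tokens) >= REPORTING_THRESHOLD_OPS

# Example: a 70B-parameter model trained on 15T tokens uses roughly
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs -- well under a 1e26 threshold.
below = requires_reporting(7e10, 1.5e13)
```

Under this estimate, only training runs one to two orders of magnitude larger than today's typical frontier models would cross the reporting line, which is why the threshold is framed as a forward-looking trigger rather than a restriction on current systems.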

China

US Revokes Intel, Qualcomm Licenses To Sell Chips To Huawei (msn.com) 241

An anonymous reader quotes a report from MSN: The US has revoked licenses allowing Huawei to buy semiconductors from Qualcomm and Intel, according to people familiar with the matter, further tightening export restrictions against the Chinese telecom equipment maker. Withdrawal of the licenses affects US sales of chips for use in Huawei phones and laptops, according to the people, who discussed the move on condition of anonymity. House Foreign Affairs Committee Chairman Michael McCaul confirmed the administration's decision in an interview Tuesday. He said the move is key to preventing China from developing advanced AI. "It's blocking any chips sold to Huawei," said McCaul, a Texas Republican who was briefed about the license decisions for Intel and Qualcomm. "Those are two companies we've always worried about being a little too close to China."

While the decision may not affect a significant volume of chips, it underscores the US government's determination to curtail China's access to a broad swathe of semiconductor technology. Officials are also considering sanctions against six Chinese firms that they suspect could supply chips to Huawei, which has been on a US trade restrictions list since 2019. [...] Qualcomm recently said that its business with Huawei is already limited and will soon shrink to nothing. It has been allowed to supply the Chinese company with chips that provide older 4G network connections. It's prohibited from selling ones that allow more advanced 5G access.

Supercomputing

Defense Think Tank MITRE To Build AI Supercomputer With Nvidia (washingtonpost.com) 44

An anonymous reader quotes a report from the Washington Post: A key supplier to the Pentagon and U.S. intelligence agencies is building a $20 million supercomputer with buzzy chipmaker Nvidia to speed deployment of artificial intelligence capabilities across the U.S. federal government, the MITRE think tank said Tuesday. MITRE, a federally funded, not-for-profit research organization that has supplied U.S. soldiers and spies with exotic technical products since the 1950s, says the project could improve everything from Medicare to taxes. "There's huge opportunities for AI to make government more efficient," said Charles Clancy, senior vice president of MITRE. "Government is inefficient, it's bureaucratic, it takes forever to get stuff done. ... That's the grand vision, is how do we do everything from making Medicare sustainable to filing your taxes easier?" [...] The MITRE supercomputer will be based in Ashburn, Va., and should be up and running late this year. [...]

Clancy said the planned supercomputer will run 256 Nvidia graphics processing units, or GPUs, at a cost of $20 million. This counts as a small supercomputer: The world's fastest supercomputer, Frontier in Tennessee, boasts 37,888 GPUs, and Meta is seeking to build one with 350,000 GPUs. But MITRE's computer will still eclipse Stanford's Natural Language Processing Group's 68 GPUs, and will be large enough to train large language models to perform AI tasks tailored for government agencies. Clancy said all federal agencies funding MITRE will be able to use this AI "sandbox." "AI is the tool that is solving a wide range of problems," Clancy said. "The U.S. military needs to figure out how to do command and control. We need to understand how cryptocurrency markets impact the traditional banking sector. ... Those are the sorts of problems we want to solve."
