AI

Call Center Workers Are Tired of Being Mistaken for AI (bloomberg.com) 83

Bloomberg reports: By the time Jessica Lindsey's customers accuse her of being an AI, they are often already shouting. For the past two years, her work as a call center agent for outsourcing company Concentrix has been punctuated by people at the other end of the phone demanding to speak to a real human. Sometimes they ask her straight, 'Are you an AI?' Other times they just start yelling commands: 'Speak to a representative! Speak to a representative...!' Skeptical customers are already frustrated from dealing with the automated system that triages calls before they reach a person. So when Lindsey starts reading from her AmEx-approved script, callers are infuriated by what they perceive to be another machine. "They just end up yelling at me and hanging up," she said, leaving Lindsey sitting in her home office in Oklahoma, shocked and sometimes in tears. "Like, I can't believe I just got cut down at 9:30 in the morning because they had to deal with the AI before they got to me...."

In Australia, Canada, Greece and the US, call center agents say they've been repeatedly mistaken for AI. These people, who spend hours talking to strangers, are experiencing surreal conversations, where customers ask them to prove they are not machines... [Seth, a US-based Concentrix worker] said he is asked if he's AI roughly once a week. In April, one customer quizzed him for around 20 minutes about whether he was a machine. The caller asked about his hobbies, about how he liked to go fishing when not at work, and what kind of fishing rod he used. "[It was as if she wanted] to see if I glitched," he said. "At one point, I felt like she was an AI trying to learn how to be human...."

Sarah, who works in benefits fraud-prevention for the US government — and asked to use a pseudonym for fear of being reprimanded for talking to the media — said she is mistaken for AI between three or four times every month... Sarah tries to change her inflections and tone of voice to sound more human. But she's also discovered another point of differentiation with the machines. "Whenever I run into the AI, it just lets you talk, it doesn't cut you off," said Sarah, who is based in Texas. So when customers start to shout, she now tries to interrupt them. "I say: 'Ma'am (or Sir). I am a real person. I'm sitting in an office in the southern US. I was born.'"

EU

How a Crewless, AI-Enhanced Vessel Will Patrol Denmark's and NATO's Waters (euronews.com) 5

After past damage to undersea cables, Denmark will boost its surveillance of Baltic Sea and North Sea waters by deploying four uncrewed surface vessels — about 10 meters long — equipped with drones and AI, reports Euronews.

The founder and CEO of Saildrone, the company that makes the vessels, says they will work "like a truck" that "carries the sensors," and that "we use on-board sophisticated machine learning and AI to fuse that data to give us a full picture of what's above and below the surface." Powered by solar and wind energy, they can operate autonomously for months at sea. [Saildrone] said the autonomous sailboats can support operations such as illegal fishing detection, border enforcement, and strategic asset protection... The four "Voyagers" will first operate in a three-month trial, as Denmark and NATO allies aim to extend their maritime presence, especially around critical undersea infrastructure such as fibre optic cables and power lines. NATO and its allies have increased sea patrols following several incidents.

Graphics

Graphic Artists In China Push Back On AI and Its Averaging Effect (theverge.com) 33

Graphic artists in China are pushing back against AI image generators, which they say "profoundly shifts clients' perception of their work, specifically in terms of how much that work costs and how much time it takes to produce," reports The Verge. "Freelance artists or designers working in industries with clients that invest in stylized, eye-catching graphics, like advertising, are particularly at risk." From the report: Long before AI image generators became popular, graphic designers at major tech companies and in-house designers for large corporate clients were often instructed by managers to crib aesthetics from competitors or from social media, according to one employee at a major online shopping platform in China, who asked to remain anonymous for fear of retaliation from their employer. Where a human would need to understand and reverse engineer a distinctive style to recreate it, AI image generators simply create randomized mutations of it. Often, the results will look like obvious copies and include errors, but other graphic designers can then edit them into a final product.

"I think it'd be easier to replace me if I didn't embrace [AI]," the shopping platform employee says. Early on, as tools like Stable Diffusion and Midjourney became more popular, their colleagues who spoke English well were selected to study AI image generators to increase in-house expertise on how to write successful prompts and identify what types of tasks AI was useful for. Ultimately, it was useful for copying styles from popular artists that, in the past, would take more time to study. "I think it forces both designers and clients to rethink the value of designers," Jia says. "Is it just about producing a design? Or is it about consultation, creativity, strategy, direction, and aesthetic?" [...]

Across the board, though, artists and designers say that AI hype has negatively impacted clients' view of their work's value. Now, clients expect a graphic designer to produce work on a shorter timeframe and for less money, which also has its own averaging impact, lowering the ceiling for what designers can deliver. As clients lower budgets and squish timelines, the quality of the designers' output decreases. "There is now a significant misperception about the workload of designers," [says Erbing, a graphic designer in Beijing who has worked with several ad agencies and asked to be called by his nickname]. "Some clients think that since AI must have improved efficiency, they can halve their budget." But this perception runs contrary to what designers spend the majority of their time doing, which is not necessarily just making any image, Erbing says.

EU

Denmark To Tackle Deepfakes By Giving People Copyright To Their Own Features (theguardian.com) 48

An anonymous reader quotes a report from The Guardian: The Danish government is to clamp down on the creation and dissemination of AI-generated deepfakes by changing copyright law to ensure that everybody has the right to their own body, facial features and voice. The Danish government said on Thursday it would strengthen protection against digital imitations of people's identities with what it believes to be the first law of its kind in Europe. Having secured broad cross-party agreement, the department of culture plans to submit a proposal to amend the current law for consultation before the summer recess and then submit the amendment in the autumn. It defines a deepfake as a very realistic digital representation of a person, including their appearance and voice.

The Danish culture minister, Jakob Engel-Schmidt, said he hoped the bill before parliament would send an "unequivocal message" that everybody had the right to the way they looked and sounded. He told the Guardian: "In the bill we agree and are sending an unequivocal message that everybody has the right to their own body, their own voice and their own facial features, which is apparently not how the current law is protecting people against generative AI." He added: "Human beings can be run through the digital copy machine and be misused for all sorts of purposes and I'm not willing to accept that."

The changes to Danish copyright law will, once approved, theoretically give people in Denmark the right to demand that online platforms remove such content if it is shared without consent. It will also cover "realistic, digitally generated imitations" of an artist's performance without consent. Violation of the proposed rules could result in compensation for those affected. The government said the new rules would not affect parodies and satire, which would still be permitted.

"Of course this is new ground we are breaking, and if the platforms are not complying with that, we are willing to take additional steps," said Engel-Schmidt.

He expressed hope that other European countries will follow suit and warned that "severe fines" will be imposed if tech platforms fail to comply.

AI

Fed Chair Powell Says AI Is Coming For Your Job 68

Federal Reserve Chair Jerome Powell told the U.S. Senate that while AI hasn't yet dramatically impacted the economy or labor market, its transformative effects are inevitable -- though the timeline remains uncertain. The Register reports: Speaking to the US Senate Banking Committee on Wednesday to give his semiannual monetary policy report, Powell told elected officials that AI's effect on the economy to date is "probably not great" yet, but it has "enormous capabilities to make really significant changes in the economy and labor force." Powell declined to predict how quickly that change could happen, only noting that the final leap from a shiny new technology to practical implementation can be a slow one.

"What's happened before with technology is that it seems to take a long time to be implemented," Powell said. "That last phase has tended to take longer than people expect." AI is likely to follow that trend, Powell asserted, but he has no idea what sort of timeline that puts on the eventual economy-transforming maturation point of artificial intelligence. "There's a tremendous uncertainty about the timing of [economic changes], what the ultimate consequences will be and what the medium term consequences will be," Powell said. [...]

That continuation will be watched by the Fed, Powell told Senators, but that doesn't mean he'll have the power to do anything about it. "The Fed doesn't have the tools to address the social issues and the labor market issues that will arise from this," Powell said. "We just have interest rates."
Advertising

A Developer Built a Real-World Ad Blocker For Snap Spectacles (uploadvr.com) 11

An anonymous reader quotes a report from UploadVR: Software developer Stijn Spanhove used the newest SDK features of Snap OS to build a prototype of [a real-world ad blocker for Snap Spectacles]. If you're unfamiliar, Snap Spectacles are a bulky AR glasses development kit available to rent for $99/month. They run Snap OS, the company's made-for-AR operating system, and developers build apps called Lenses for them using Lens Studio or WebXR.

Spanhove built the real-world ad blocker using the new Depth Module API of Snap OS, integrated with the vision capability of Google's Gemini AI via the cloud. The Depth Module API caches depth frames, meaning that coordinate results from cloud vision models can be mapped to positions in 3D space. This enables detecting and labeling real-world objects, for example. Or, in the case of Spanhove's project, projecting a red rectangle onto real-world ads.
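The mapping described above — take a 2D detection from a cloud vision model, look up the cached depth at that pixel, and unproject the result into 3D — is standard pinhole-camera geometry. The sketch below illustrates the idea in plain Python under stated assumptions: the `unproject` helper, the bounding box, the depth value, and the camera intrinsics are all hypothetical, not the actual Snap OS Depth Module API.

```python
# Illustrative sketch: mapping a 2D cloud-vision detection into 3D
# using a cached depth value and pinhole camera intrinsics.
# All names and numbers here are assumed for illustration.

def unproject(u, v, depth, fx, fy, cx, cy):
    """Map pixel (u, v) at the given depth (meters) to camera-space XYZ."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Suppose the cloud model returns a bounding box for a detected ad,
# and the depth frame cached at capture time gives the distance at
# the box center.
box = {"x_min": 300, "y_min": 200, "x_max": 500, "y_max": 360}
u = (box["x_min"] + box["x_max"]) / 2   # 400.0
v = (box["y_min"] + box["y_max"]) / 2   # 280.0
depth_m = 2.5  # looked up from the cached depth frame at (u, v)

# Assumed intrinsics for a 640x480 sensor.
point = unproject(u, v, depth_m, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(point)  # → (0.4, 0.2, 2.5): where a blocking rectangle would be anchored
```

Caching the depth frame matters because the cloud round-trip takes time: the detection coordinates must be unprojected against the depth captured with the same camera frame, not the current one.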

However, while the software approach used for Spanhove's real-world ad blocker is sound, two fundamental hardware limitations mean it wouldn't be a practical way to avoid seeing ads in your reality. First, the imagery rendered by see-through transparent AR systems like Spectacles isn't fully opaque. Thus, as you can see in the demo clip, the ads are still visible through the blocking rectangle. The other problem is that see-through transparent AR systems have a very limited field of view — in the case of Spectacles, just 46 degrees diagonal. So ads are only "blocked" when you're looking directly at them, and you'll still see them when you're not.

Privacy

Facebook Is Asking To Use Meta AI On Photos In Your Camera Roll You Haven't Yet Shared (techcrunch.com) 19

Facebook is prompting users to opt into a feature that uploads photos from their camera roll -- even those not shared on the platform -- to Meta's servers for AI-driven suggestions like collages and stylized edits. While Meta claims the content is private and not used for ads, opting in allows the company to analyze facial features and retain personal data under its broad AI terms, raising privacy concerns. TechCrunch reports: The feature is being suggested to Facebook users when they're creating a new Story on the social networking app. Here, a screen pops up and asks if the user will opt into "cloud processing" to allow creative suggestions. As the pop-up message explains, by clicking "Allow," you'll let Facebook generate new ideas from your camera roll, like collages, recaps, AI restylings, or photo themes. To work, Facebook says it will upload media from your camera roll to its cloud (meaning its servers) on an "ongoing basis," based on information like time, location, or themes.

The message also notes that only you can see the suggestions, and the media isn't used for ad targeting. However, by tapping "Allow," you are agreeing to Meta's AI Terms. This allows your media and facial features to be analyzed by AI, it says. The company will additionally use the date and presence of people or objects in your photos to craft its creative ideas. [...] According to Meta's AI Terms around image processing, "once shared, you agree that Meta will analyze those images, including facial features, using AI. This processing allows us to offer innovative new features, including the ability to summarize image contents, modify images, and generate new content based on the image," the text states.

The same AI terms also give Meta's AIs the right to "retain and use" any personal information you've shared in order to personalize its AI outputs. The company notes that it can review your interactions with its AIs, including conversations, and those reviews may be conducted by humans. The terms don't define what Meta considers personal information, beyond saying it includes "information you submit as Prompts, Feedback, or other Content." We have to wonder whether the photos you've shared for "cloud processing" also count here.

China

DeepSeek Faces Ban From Apple, Google App Stores In Germany 15

Germany's data protection commissioner has urged Apple and Google to remove Chinese AI startup DeepSeek from their app stores due to concerns about data protection. Reuters reports: Commissioner Meike Kamp said in a statement on Friday that she had made the request because DeepSeek illegally transfers users' personal data to China. The two U.S. tech giants must now review the request promptly and decide whether to block the app in Germany, she added, though her office has not set a precise timeframe. According to its own privacy policy, DeepSeek stores numerous pieces of personal data, such as requests to its AI program or uploaded files, on computers in China.

"DeepSeek has not been able to provide my agency with convincing evidence that German users' data is protected in China to a level equivalent to that in the European Union," [Commissioner Meike Kamp] said. "Chinese authorities have far-reaching access rights to personal data within the sphere of influence of Chinese companies," she added. The commissioner said she took the decision after asking DeepSeek in May to meet the requirements for non-EU data transfers or else voluntarily withdraw its app. DeepSeek did not comply with this request, she added.

AI

Big Accounting Firms Fail To Track AI Impact on Audit Quality, Says Regulator (ft.com) 21

The six largest UK accounting firms do not formally monitor how automated tools and AI impact the quality of their audits, the regulator has found, even as the technology becomes embedded across the sector. From a report: The Financial Reporting Council on Thursday published its first AI guide alongside a review of the way firms were using automated tools and technology, which found "no formal monitoring performed by the firms to quantify the audit quality impact of using" them.

The watchdog found that audit teams in the Big Four firms -- Deloitte, EY, KPMG and PwC -- as well as BDO and Forvis Mazars were increasingly using this technology to perform risk assessments and obtain evidence. But it said that the firms primarily monitored the tools to understand how many teams were using them for audits, "typically for licensing purposes," rather than to assess their impact on audit quality.

Businesses

Uber In Talks With Founder Travis Kalanick To Fund Self-Driving Car Deal (nytimes.com) 1

Facing mounting competition from autonomous taxi services like Waymo, Uber is in early talks to help fund Travis Kalanick's potential acquisition of Pony.ai's U.S. subsidiary (source paywalled; alternative source). If completed, the deal would reunite Kalanick with Uber (now under CEO Dara Khosrowshahi) and position Pony.ai to operate independently of its Chinese parent amid rising U.S. regulatory pressures. The New York Times reports: The company, Pony.ai, was founded in Silicon Valley in 2016 but has its main presence in China, and has permits to operate robot taxis and trucks in the United States and China. The talks are preliminary, said the people, who were not authorized to speak about the confidential conversations. Mr. Kalanick will run Pony if the deal is completed, they said. It is unclear what role, if any, Uber would take in Pony as an investor. Financial details of the potential transaction could not be determined. Pony went public last year in the United States, raising $260 million in a share sale. Its market capitalization stands around $4.5 billion.

If the deal goes through, Mr. Kalanick, 48, will remain in his day job running CloudKitchens, a virtual restaurant start-up that he founded after leaving Uber in 2017. He would also work more closely with Dara Khosrowshahi, who took over as Uber's chief executive after Mr. Kalanick's ouster. The discussions are the starkest sign yet that Uber is under pressure from Waymo, the driverless car unit spun out of Google, and other autonomous car services. When Mr. Kalanick was Uber's chief executive, the company tried developing autonomous vehicle technology. It then bought Otto, a self-driving trucking start-up run by Anthony Levandowski, a former Google engineer. Google later sued Mr. Levandowski for theft of trade secrets and sued Uber to bar it from using its self-driving technology.

Under Mr. Khosrowshahi, Uber has taken a different tack to self-driving cars. The company has struck roughly 18 partnerships with autonomous vehicle companies like Wayve, May Mobility and WeRide to bring pilot programs for driverless car services into Europe, the Middle East and Asia. The goal, Mr. Khosrowshahi has said in podcast interviews, has been to put "as many cars on Uber's network as possible." He has maintained that while autonomous vehicles are growing steadily, ride-hailing networks will have both human and robot drivers for years.

Advertising

As AI Kills Search Traffic, Google Launches Offerwall To Boost Publisher Revenue (techcrunch.com) 37

An anonymous reader quotes a report from TechCrunch: Google's AI search features are killing traffic to publishers, so now the company is proposing a possible solution. On Thursday, the tech giant officially launched Offerwall, a new tool that allows publishers to generate revenue beyond the more traffic-dependent options, like ads.

Offerwall lets publishers give their sites' readers a variety of ways to access their content, including through options like micropayments, taking surveys, watching ads, and more. In addition, Google says that publishers can add their own options to the Offerwall, like signing up for newsletters. The new feature is available for free in Google Ad Manager after earlier tests with 1,000 publishers that spanned over a year.

While no broad case studies were shared, India's Sakal Media Group implemented Google Ad Manager's Offerwall feature and saw a 20% revenue boost and up to 2 million more impressions in three months. Overall, publishers testing Offerwall experienced an average 9% revenue lift, with some seeing between 5% and 15%.

Youtube

YouTube Search Gets Its Own Version of Google's AI Overviews 8

Google is bringing its AI Overviews-like feature to YouTube in the form of an "AI-powered search results carousel." The Verge reports: As shown in a video, the search results carousel will show a big video clip up top, thumbnails to a selection of other relevant video clips directly under that, and an AI-generated bit of text responding to your query. To see a full video, tap on the big clip at the top of the carousel.

The feature is currently only accessible on iOS and Android and for videos in English and will be available to test until July 30th, per the YouTube experiments page. Additionally, only a "randomly selected number of Premium members" will have access to it, YouTube says in a support document.

AI

Who Needs Accenture in the Age of AI? (economist.com) 30

Accenture is facing mounting challenges as AI threatens to disrupt the consulting industry the company helped build. The Dublin-based firm, which made its fortune advising clients on adapting to new technologies from the internet to cloud computing, now confronts the same predicament as generative AI reshapes business operations.

The company's new generative AI contracts slowed to $100 million in the most recent quarter, down from $200 million per quarter last year. Technology partners including Microsoft and SAP are increasingly integrating AI directly into their offerings, allowing systems to work immediately without extensive consulting support. Newcomers like Palantir are embedding their own engineers with customers, enabling clients to bypass traditional consultants.

Between 2015 and 2024, Accenture generated a 370% total return by helping companies navigate technological transitions. The firm reached a $250 billion valuation in February before losing $60 billion in market value. CEO Julie Sweet insists that the company is reorganizing around "reinvention services." A recent survey found 42% of companies abandoned most AI initiatives, up from 17% a year ago.

AI

Study Finds LLM Users Have Weaker Understanding After Research (msn.com) 111

Researchers at the University of Pennsylvania's Wharton School found that people who used large language models to research topics demonstrated weaker understanding and produced less original insights compared to those using Google searches.

The study, involving more than 4,500 participants across four experiments, showed LLM users spent less time researching, exerted less effort, and wrote shorter, less detailed responses. In the first experiment, over 1,100 participants researched vegetable gardening using either Google or ChatGPT. Google users wrote longer responses with more unique phrasing and factual references. A second experiment with nearly 2,000 participants presented identical gardening information either as an AI summary or across mock webpages, with Google users again engaging more deeply and retaining more information.

AI

Salesforce CEO Says 30% of Internal Work Is Being Handled by AI (yahoo.com) 44

Salesforce chief executive Marc Benioff said Thursday his company has automated a significant chunk of work with AI, another example of a firm touting the labor-replacing potential of the emerging technology. From a report: "AI is doing 30% to 50% of the work at Salesforce now," Benioff said in an interview, pointing at job functions including software engineering and customer service.

[...] Salesforce has said that use of AI internally has allowed it to hire fewer people. The San Francisco-based software company is focused on selling an AI product that promises to handle tasks such as customer service without human supervision. Benioff said that tool has reached about 93% accuracy, including for large customers such as Walt Disney.

Facebook

Meta Beats Copyright Suit From Authors Over AI Training on Books (bloomberglaw.com) 83

An anonymous reader shares a report: Meta escaped a first-of-its-kind copyright lawsuit from a group of authors who alleged the tech giant hoovered up millions of copyrighted books without permission to train its generative AI model called Llama.

San Francisco federal Judge Vince Chhabria ruled Wednesday that Meta's decision to use the books for training is protected under copyright law's fair use defense, but he cautioned that his opinion is more a reflection on the authors' failure to litigate the case effectively. "This ruling does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful," Chhabria said.

Microsoft

Microsoft Sued By Authors Over Use of Books in AI Training (reuters.com) 15

Microsoft has been hit with a lawsuit by a group of authors who claim the company used their books without permission to train its Megatron artificial intelligence model. From a report: Kai Bird, Jia Tolentino, Daniel Okrent and several others alleged that Microsoft used pirated digital versions of their books to teach its AI to respond to human prompts. Their lawsuit, filed in New York federal court on Tuesday, is one of several high-stakes cases brought by authors, news outlets and other copyright holders against tech companies including Meta Platforms, Anthropic and Microsoft-backed OpenAI over alleged misuse of their material in AI training.

[...] The writers alleged in the complaint that Microsoft used a collection of nearly 200,000 pirated books to train Megatron, an algorithm that gives text responses to user prompts.

Businesses

Bernie Sanders Says If AI Makes Us So Productive, We Should Get a 4-Day Work Week (techcrunch.com) 181

Senator Bernie Sanders called for a four-day work week during a recent interview with podcaster Joe Rogan, arguing that AI productivity gains should benefit workers rather than just technology companies and corporate executives. Sanders proposed reducing the standard work week to 32 hours when AI tools increase worker productivity, rather than eliminating jobs entirely.

"Technology is gonna work to improve us, not just the people who own the technology and the CEOs of large corporations," Sanders said. "You are a worker, your productivity is increasing because we give you AI, right? Instead of throwing you out on the street, I'm gonna reduce your work week to 32 hours."

Education

Majority of US K-12 Teachers Now Using AI for Lesson Planning, Grading (apnews.com) 21

A Gallup and Walton Family Foundation poll found 6 in 10 US teachers in K-12 public schools used AI tools for work during the past school year, with higher adoption rates among high school educators and early-career teachers. The survey of more than 2,000 teachers nationwide conducted in April found that those using AI tools weekly estimate saving about six hours per week.

About 8 in 10 teachers using AI tools report time savings on creating worksheets, assessments, quizzes and administrative work. About 6 in 10 said AI improves their work quality when modifying student materials or providing feedback. However, approximately half of teachers worry student AI use will diminish teens' critical thinking abilities and independent problem-solving persistence.

Programming

'The Computer-Science Bubble Is Bursting' 128

theodp writes: "The job of the future might already be past its prime," writes The Atlantic's Rose Horowitch in The Computer-Science Bubble Is Bursting. "For years, young people seeking a lucrative career were urged to go all in on computer science. From 2005 to 2023, the number of comp-sci majors in the United States quadrupled. All of which makes the latest batch of numbers so startling. This year, enrollment grew by only 0.2 percent nationally, and at many programs, it appears to already be in decline, according to interviews with professors and department chairs. At Stanford, widely considered one of the country's top programs, the number of comp-sci majors has stalled after years of blistering growth. Szymon Rusinkiewicz, the chair of Princeton's computer-science department, told me that, if current trends hold, the cohort of graduating comp-sci majors at Princeton is set to be 25 percent smaller in two years than it is today. The number of Duke students enrolled in introductory computer-science courses has dropped about 20 percent over the past year."

"But if the decline is surprising, the reason for it is fairly straightforward: Young people are responding to a grim job outlook for entry-level coders. In recent years, the tech industry has been roiled by layoffs and hiring freezes. The leading culprit for the slowdown is technology itself. Artificial intelligence has proved to be even more valuable as a writer of computer code than as a writer of words. This means it is ideally suited to replacing the very type of person who built it. A recent Pew study found that Americans think software engineers will be most affected by generative AI. Many young people aren't waiting to find out whether that's true."

Meanwhile, writing in the Communications of the ACM, Orit Hazzan and Avi Salmon ask: Should Universities Raise or Lower Admission Requirements for CS Programs in the Age of GenAI? "This debate raises a key dilemma: should universities raise admission standards for computer science programs to ensure that only highly skilled problem-solvers enter the field, lower them to fill the gaps left by those who now see computer science as obsolete due to GenAI, or restructure them to attract excellent candidates with diverse skill sets who may not have considered computer science prior to the rise of GenAI, but who now, with the intensive GenAI and vibe coding tools supporting programming tasks, may consider entering the field?"

Slashdot Top Deals