Google

Google's Plans for Chrome Extensions 'Won't Really Help Security', Argues EFF (eff.org) 35

Is Google making the wrong response to the DataSpii report on a "catastrophic data leak"? The EFF writes: In response to questions about DataSpii from Ars Technica, Google officials pointed out that they have "announced technical changes to how extensions work that will mitigate or prevent this behavior." Here, Google is referring to its controversial set of proposed changes to curtail extension capabilities, known as Manifest V3.

As both security experts and the developers of extensions that will be greatly harmed by Manifest V3, we're here to tell you: Google's statement just isn't true. Manifest V3 is a blunt instrument that will do little to improve security while severely limiting future innovation... The only part of Manifest V3 that goes directly to the heart of stopping DataSpii-like abuses is banning remotely hosted code. You can't ensure extensions are what they appear to be if you give them the ability to download new instructions after they're installed.

But you don't need the rest of Google's proposed API changes to stop this narrow form of bad extension behavior. What Manifest V3 does do is stifle innovation...

The EFF makes the following arguments against Google's proposal:
  • Manifest V3 will still allow extensions to observe the same data as before, including what URLs users visit and the contents of pages users visit.
  • Manifest V3 won't change anything about how "content scripts" work...another way to extract user browsing data.
  • Chrome will still allow users to give extensions permission to run on all sites.
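
The first two bullet points follow from how extensions are declared. As a minimal sketch, a hypothetical Manifest V3 manifest (the extension name and script file are invented; the keys are standard Manifest V3 fields) can still register a content script on every site:

```json
{
  "manifest_version": 3,
  "name": "Hypothetical Extension",
  "version": "1.0",
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"]
    }
  ]
}
```

Once a user grants access, `content.js` runs inside each page and can read its full contents, which is the data-extraction path the EFF says Manifest V3 leaves untouched.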

In response, Google argued to Forbes that the EFF "fails to account for the proposed changes to how permissions work. It is the combination of these two changes, along with others included in the proposal, that would have prevented or significantly mitigated incidents such as this one."

But the EFF's technology projects director also gave Forbes their response. "We agree that Google isn't killing ad-blockers. But they are killing a wide range of security and privacy enhancing extensions, and so far they haven't justified why that's necessary."

And in the same article, security researcher Sean Wright added that Google's proposed change "appears to do little to prevent rogue extensions from obtaining information from loaded sites, which is certainly a privacy issue and it looks as if the V3 changes don't help."

The EFF suggests Google just do a better job of reviewing extensions.


Facebook

Did WhatsApp Backdoor Rumor Come From 'Unanswered Questions' and 'Leap of Faith' For Closed-Source Encryption Products? (forbes.com) 105

On Friday technologist Bruce Schneier wrote that after reviewing responses from WhatsApp, he's concluded that reports of a pre-encryption backdoor are a false alarm. He also says he got an equally strong confirmation from WhatsApp's Privacy Policy Manager Nate Cardozo, who Facebook hired last December from the EFF. "He basically leveraged his historical reputation to assure me that WhatsApp, and Facebook in general, would never do something like this."

Schneier has also added the words "This story is wrong" to his original blog post. "The only source for that post was a Forbes essay by Kalev Leetaru, which links to a previous Forbes essay by him, which links to a video presentation from a Facebook developers conference." But that Forbes contributor has also responded, saying that he'd first asked Facebook three times about when they'd deploy the backdoor in WhatsApp -- and never received a response.

Asked again on July 25th about the company's plans for "moderating end to end encrypted conversations such as WhatsApp by using on device algorithms," a company spokesperson did not dispute the statement, instead pointing to Zuckerberg's blog post calling for precisely such filtering in its end-to-end encrypted products including WhatsApp [apparently this blog post], but declined to comment when asked for more detail about precisely when such an integration might happen... [T]here are myriad unanswered questions, with the company declining to answer any of the questions posed to it regarding why it is investing in building a technology that appears to serve little purpose outside filtering end-to-end encrypted communications and which so precisely matches Zuckerberg's call. Moreover, beyond its F8 presentation, given Zuckerberg's call for filtering of its end-to-end encrypted products, how does the company plan on accomplishing this apparent contradiction with the very meaning of end-to-end encryption?

The company's lack of transparency and unwillingness to answer even the most basic questions about how it plans to balance the protections of end-to-end encryption in its products including WhatsApp with the need to eliminate illegal content reminds us of the giant leap of faith we take when we use closed encryption products whose source we cannot review... Governments are increasingly demanding some kind of compromise regarding end-to-end encryption that would permit them to prevent such tools from being used to conduct illegal activity. What would happen if WhatsApp were to receive a lawful court order from a government instructing it to insert such content moderation within the WhatsApp client and provide real-time notification to the government of posts that match the filter, along with a copy of the offending content?

Asked about this scenario, Carl Woog, Director of Communications for WhatsApp, stated that he was not aware of any such cases to date and noted that "we've repeatedly defended end-to-end encryption before the courts, most notably in Brazil." When it was noted that the Brazilian case involved the encryption itself, rather than a court order to install a real-time filter and bypass directly within the client before and after the encryption process at national scale, which would preserve the encryption, Woog initially said he would look into providing a response, but ultimately did not respond.

Given Zuckerberg's call for moderation of the company's end-to-end encryption products and given that Facebook's on-device content moderation appears to answer directly to this call, Woog was asked whether its on-device moderation might be applied in future to its other end-to-end encrypted products rather than WhatsApp. After initially saying he would look into providing a response, Woog ultimately did not respond.

Here are the exact words from Zuckerberg's March blog post, which said Facebook is "working to improve our ability to identify and stop bad actors across our apps by detecting patterns of activity or through other means, even when we can't see the content of the messages, and we will continue to invest in this work."

Electronic Frontier Foundation

EFF Warns Proposed Law Could Create 'Life-Altering' Copyright Lawsuits (forbes.com) 117

Forbes reports: In July, members of the federal Senate Judiciary Committee chose to move forward with a bill targeting copyright abuse with a more streamlined way to collect damages, but critics say that it could still allow big online players to push smaller ones around -- and even into bankruptcy.

Known as the Copyright Alternative in Small-Claims Enforcement (or CASE) Act, the bill was reintroduced in the House and Senate this spring by a roster of bipartisan lawmakers, with endorsements from such groups as the Copyright Alliance and the Graphic Artists' Guild. Under the bill, the U.S. Copyright Office would establish a new 'small claims-style' system for seeking damages, overseen by a three-person Copyright Claims Board. Owners of digital content who see that content used without permission would be able to file a claim for damages up to $15,000 for each work infringed, and $30,000 in total, if they registered their content with the Copyright Office, or half those amounts if they did not.

"Easy $5,000 copyright infringement tickets won't fix copyright law," argues the EFF, in an article shared by long-time Slashdot reader SonicSpike: The bill would supercharge a "copyright troll" industry dedicated to filing as many "small claims" on as many Internet users as possible in order to make money through the bill's statutory damages provisions. Every single person who uses the Internet and regularly interacts with copyrighted works (that's everyone) should contact their Senators to oppose this bill...

[I]f Congress passes this bill, the timely registration requirement will no longer be a requirement for no-proof statutory damages of up to $7,500 per work. In other words, nearly every photo, video, or bit of text on the Internet can suddenly carry a $7,500 price tag if uploaded, downloaded, or shared even if the actual harm from that copying is nil. For many Americans, where the median income is $57,652 per year, this $7,500 price tag for what has become regular Internet behavior would result in life-altering lawsuits from copyright trolls that will exploit this new law.

Facebook

Facebook Insists No Security 'Backdoor' Is Planned for WhatsApp (medium.com) 56

An anonymous reader shares a report: Billions of people use the messaging tool WhatsApp, which added end-to-end encryption for every form of communication available on its platform back in 2016. This ensures that conversations between users and their contacts -- whether they occur via text or voice calls -- are private, inaccessible even to the company itself. But several recent posts published to Forbes' blogging platform call WhatsApp's future security into question. The posts, which were written by contributor Kalev Leetaru, allege that Facebook, WhatsApp's parent company, plans to detect abuse by implementing a feature to scan messages directly on people's phones before they are encrypted. The posts gained significant attention: A blog post by technologist Bruce Schneier rehashing one of the Forbes posts has the headline "Facebook Plans on Backdooring WhatsApp." It is a claim Facebook unequivocally denies.

"We haven't added a backdoor to WhatsApp," Will Cathcart, WhatsApp's vice president of product management, wrote in a statement. "To be crystal clear, we have not done this, have zero plans to do so, and if we ever did, it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise, which is why we are opposed to it."

UPDATE: Later on Friday, technologist Bruce Schneier wrote that after reviewing responses from WhatsApp, he's concluded that reports of a pre-encryption backdoor are a false alarm. He also says he got an equally strong confirmation from WhatsApp's Privacy Policy Manager Nate Cardozo, who Facebook hired last December from the EFF. "He basically leveraged his historical reputation to assure me that WhatsApp, and Facebook in general, would never do something like this."

Encryption

Is Facebook Planning on Backdooring WhatsApp? (schneier.com) 131

Bruce Schneier: This article points out that Facebook's planned content moderation scheme will result in an encryption backdoor into WhatsApp: "In Facebook's vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user's device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted. The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as a true wiretapping service. Facebook's model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once."

Once this is in place, it's easy for the government to demand that Facebook add another filter -- one that searches for communications that they care about -- and alert them when it gets triggered. Of course alternatives like Signal will exist for those who don't want to be subject to Facebook's content moderation, but what happens when this filtering technology is built into operating systems?
Separately, The Guardian reports: British, American and other intelligence agencies from English-speaking countries have concluded a two-day meeting in London amid calls for spies and police officers to be given special, backdoor access to WhatsApp and other encrypted communications. The meeting of the "Five Eyes" nations -- the UK, US, Australia, Canada and New Zealand -- was hosted by the new home secretary, Priti Patel, in an effort to coordinate efforts to combat terrorism and child abuse.
UPDATE (8/2/2019): On Friday technologist Bruce Schneier wrote that after reviewing responses from WhatsApp, he's concluded that reports of a pre-encryption backdoor are a false alarm. He also says he got an equally strong confirmation from WhatsApp's Privacy Policy Manager Nate Cardozo, who Facebook hired last December from the EFF. "He basically leveraged his historical reputation to assure me that WhatsApp, and Facebook in general, would never do something like this."

Electronic Frontier Foundation

EFF Argues For 'Empowerment, Not Censorship' Online (eff.org) 62

An activism director and a legislative analyst at the EFF have co-authored an essay arguing that the key to children's safety online "is user empowerment, not censorship," reporting on a recent hearing by the U.S. Senate's Judiciary Committee: While children do face problems online, some committee members seemed bent on using those problems as an excuse to censor the Internet and undermine the legal protections for free expression that we all rely on, including kids. Don't censor users; empower them to choose... [W]hen lawmakers give online platforms the impossible task of ensuring that every post meets a certain standard, those companies have little choice but to over-censor.

During the hearing, Stephen Balkam of the Family Online Safety Institute provided an astute counterpoint to the calls for a more highly filtered Internet, calling to move the discussion "from protection to empowerment." In other words, tech companies ought to give users more control over their online experience rather than forcing all of their users into an increasingly sanitized web. We agree.

It's foolish to think that one set of standards would be appropriate for all children, let alone all Internet users. But today, social media companies frequently make censorship decisions that affect everyone. Instead, companies should empower users to make their own decisions about what they see online by letting them calibrate and customize the content filtering methods those companies use. Furthermore, tech and media companies shouldn't abuse copyright and other laws to prevent third parties from offering customization options to people who want them.

The essay also argues that Congress "should closely examine companies whose business models rely on collecting, using, and selling children's personal information..."

"We've highlighted numerous examples of students effectively being forced to share data with Google through the free or low-cost cloud services and Chromebooks it provides to cash-strapped schools. We filed a complaint with the FTC in 2015 asking it to investigate Google's student data practices, but the agency never responded."
Android

Privacy-Focused Android Q Still Lets Advertisers Track You (sdtimes.com) 63

"The upcoming version of the Android operating system is taking a strong focus on privacy," reports SD Times, "but the Electronic Frontier Foundation (EFF) believes it could still do better." Android Q's new privacy features include: user control over app access to device location, new limits on access to files in shared external storage, restrictions on launching activities, and restrictions on access to the device's hardware and sensors... "However, in at least one area, Q's improvements are undermined by Android's continued support of a feature that allows third-party advertisers, including Google itself, to track users across apps," Bennett Cyphers, engineer for the EFF, wrote in a post. "Furthermore, Android still doesn't let users control their apps' access to the Internet, a basic permission that would address a wide range of privacy concerns."

According to Cyphers, while Android Q has new restrictions on non-resettable device identifiers, it will allow unrestricted access for its own tracking identifier [called "advertising ID"]... "Facebook and other targeting companies allow businesses to upload lists of ad IDs that they have collected in order to target those users on other platforms," he wrote... "On Android, there is no way for the user to control which apps can access the ID, and no way to turn it off. While we support Google taking steps to protect other hardware identifiers from unnecessary access, its continued support of the advertising ID -- a "feature" designed solely to support tracking -- undercuts the company's public commitment to privacy," he wrote...

Cyphers also noted that while Apple's iOS has similar identifiers for advertisers that contradict its privacy campaign, it does enable users to turn off the tracking.

In fact, Android Q also ships with an "opt out of ad personalization" checkbox where users can indicate that they don't want Google's identifier to track them, Cyphers reports -- but "the checkbox doesn't affect the ad ID in any way.

"It only encodes the user's 'preference', so that when an app asks Android whether a user wants to be tracked, the operating system can reply 'no, actually they don't.' Google's terms tell developers to respect this setting, but Android provides no technical safeguards to enforce this policy."
AI

Will Machine Learning Build Up Dangerous 'Intellectual Debt'? (newyorker.com) 206

Long-time Slashdot reader JonZittrain is an international law professor at Harvard Law School, and an EFF board member. Wednesday he contacted us to share his new article in the New Yorker: I've been thinking about what happens when AI gives us seemingly correct answers that we wouldn't have thought of ourselves, without any theory to explain them. These answers are a form of "intellectual debt" that we figure we'll repay -- but too often we never get around to it, or even know where it's accruing.

A more detailed (and unpaywalled) version of the essay draws a little on how and when it makes sense to pile up technical debt, asking the same questions about intellectual debt.

The first article argues that new AI techniques "increase our collective intellectual credit line," adding that "A world of knowledge without understanding becomes a world without discernible cause and effect, in which we grow dependent on our digital concierges to tell us what to do and when."

And the second article has a great title. "Intellectual Debt: With Great Power Comes Great Ignorance." It argues that machine learning "at its best gives us answers as succinct and impenetrable as those of a Magic 8-Ball -- except they appear to be consistently right." And it ultimately raises the prospect that humanity "will build models dependent on, and in turn creating, underlying logic so far beyond our grasp that they defy meaningful discussion and intervention..."
AT&T

EFF Hits AT&T With Class-Action Lawsuit For Selling Customers' Location To Bounty Hunters (vice.com) 53

An anonymous reader quotes a report from Motherboard: Tuesday, the Electronic Frontier Foundation (EFF) filed a class action lawsuit against AT&T and two data brokers over their sale of AT&T customers' real-time location data. The lawsuit seeks an injunction against AT&T, which would ban the company from selling any more customer location data and ensure that any already sold data is destroyed. The move comes after multiple Motherboard investigations found AT&T, T-Mobile, Sprint, and Verizon sold their customers' data to so-called location aggregators, which then ended up in the hands of bounty hunters and bail bondsmen.

The lawsuit, focused on those impacted in California, represents three Californian AT&T customers. Katherine Scott, Carolyn Jewel, and George Pontis are all AT&T customers who were unaware the company sold access to their location. The class action complaint says the three didn't consent to the sale of their location data. The complaint alleges that AT&T violated the Federal Communications Act by not properly protecting customers' real-time location data; and the California Unfair Competition Law and the California Consumers Legal Remedies Act for misleading its customers around the sale of such data. It also alleges AT&T and the location aggregators it sold data through violated the California Constitutional Right to Privacy.
The lawsuit highlights AT&T's Privacy Policy that says "We will not sell your personal information to anyone, for any purpose. Period."

An AT&T spokesperson said in a statement "While we haven't seen this complaint, based on our understanding of what it alleges we will fight it. Location-based services like roadside assistance, fraud protection, and medical device alerts have clear and even life-saving benefits. We only share location data with customer consent. We stopped sharing location data with aggregators after reports of misuse."
AT&T

Data Broker LocationSmart Will Fight Class Action Lawsuit Over Selling AT&T Data (vice.com) 30

A broker that helped sell AT&T customers' real-time location data says it will fight a class action lawsuit against it. From a report: The broker, called LocationSmart, was involved in a number of data selling and cybersecurity incidents, including selling location data that ended up in the hands of bounty hunters. "LocationSmart will fight this lawsuit because the allegations of wrongdoing are meritless and rest on recycled falsehoods," a LocationSmart spokesperson said in an emailed statement. LocationSmart did not point to any specific part of the lawsuit to support these claims. On Tuesday, activist group the Electronic Frontier Foundation (EFF) and law firm Pierce Bainbridge filed a class action lawsuit against LocationSmart, another data broker called Zumigo, and telecom giant AT&T. The lawsuit's plaintiffs are three California residents who say they did not consent to AT&T selling their real-time location data through the data brokers. The lawsuit alleges all three companies violated the California Constitutional Right to Privacy, and seeks monetary damages as well as an injunction against AT&T to ensure the deletion of any sold data.
Privacy

What Happens When Landlords Can Get Cheap Surveillance Software? (slate.com) 167

"Cheap surveillance software is changing how landlords manage their tenants and what laws police can enforce," reports Slate.

For example, there's a private company contracting with property managers that says they now have 475 security cameras in place and can sometimes scan more than 1.5 million license plates in a week. (According to Clayton Burnett, Watchtower Security's director of "innovation and new technology".) Burnett's company regularly hands over location data to police, he says, as evidence for cases large and small. But that investigative firepower also comes in handy for more routine landlord-tenant affairs. They've investigated tree trimmers charging for a day of work they didn't do and caught people dumping trash on private property. Sometimes, he says, a tenant will claim her car was hit in the building's parking lot and ask for free rent. His company can search for her plate and see that one day, she left the lot with her bumper intact and then came back later with a dent in it. Probably once a week, Burnett says, Watchtower uses it to prove that a tenant has "a buddy crashing on their couch," violating their lease. "Normally, there's some limit to how long they can stay, like five days," he says, "and we can prove they're going over that." One search, and they have proof that that buddy has been coming over every night for a month.

I was wondering how tenants felt about this, and I asked Burnett whether anyone had ever complained about the license plate readers. "No," he said with a laugh. "I'd say they probably don't know about it...."

[A]s the technology has matured, it's gotten in the hands of organizations that, five years ago, would never have been able to consider it. Small-town police departments can suddenly afford to conduct surveillance at a massive scale. Neighborhood homeowners associations and property managers are buying up cameras by the dozen. And in many jurisdictions, cheap automatic license plate reader (ALPR) cameras are creeping into neighborhoods -- with almost nothing restricting how they're used besides the surveiller's own discretion....

If you know that a bald guy in a gray Toyota illegally dumped trash in your lawn, the police won't try to track him down. But if they have the plate, enforcing lower-level crime becomes much easier. Several of the property managers and homeowners associations I spoke to emphasized that this is one of the main benefits of their ALPR systems. Along with burglaries, they're mostly concerned about people breaking into cars to steal personal belongings; police wouldn't investigate that before, but now homeowners associations can do the investigation for them and hand over the evidence. As Burnett put it, "[Police] are not going to be able to investigate [a small crime] unless we hand it to them on a silver platter. Which we've done plenty of times."

The article points out that today's software can detect dents on cars and watch for specific bumper stickers (or Lyft tags) -- and often the software can be retrofitted to existing traffic cameras. A contractor working with police in one Pennsylvania county says they've now "virtually gated" an entire 20,000-person town south of Pittsburgh. "Any way you can come in and out, you're on camera."

A senior investigative researcher at the EFF points out that "Now a cop can look up your license plate and see where you've been for the past two years."
EU

Microsoft Office 365: Now Illegal In Many Schools in Germany (zdnet.com) 137

"Schools in the central German state of Hesse [population: 6 million] have been told it's now illegal to use Microsoft Office 365," reports ZDNet: The state's data-protection commissioner has ruled that using the popular cloud platform's standard configuration exposes personal information about students and teachers "to possible access by US officials".

That might sound like just another instance of European concerns about data privacy or worries about the current US administration's foreign policy. But in fact the ruling by the Hesse Office for Data Protection and Information Freedom is the result of several years of domestic debate about whether German schools and other state institutions should be using Microsoft software at all.

Besides the details that German users provide when they're working with the platform, Microsoft Office 365 also transmits telemetry data back to the US. Last year, investigators in the Netherlands discovered that that data could include anything from standard software diagnostics to user content from inside applications, such as sentences from documents and email subject lines. All of which contravenes the EU's General Data Protection Regulation, or GDPR, the Dutch said...

To allay privacy fears in Germany, Microsoft invested millions in a German cloud service, and in 2017 Hesse authorities said local schools could use Office 365. If German data remained in the country, that was fine, Hesse's data privacy commissioner, Michael Ronellenfitsch, said. But in August 2018 Microsoft decided to shut down the German service. So once again, data from local Office 365 users would be data transmitted over the Atlantic. Several US laws, including 2018's CLOUD Act and 2015's USA Freedom Act, give the US government more rights to ask for data from tech companies.

ZDNet also quotes Austrian digital-rights advocate Max Schrems, who summarizes the dilemma. "If data is sent to Microsoft in the US, it is subject to US mass-surveillance laws. This is illegal under EU law."
United States

House Lawmakers Demand End To Warrantless Collection of Americans' Data (techcrunch.com) 111

Two House lawmakers are pushing an amendment that would effectively defund a massive data collection program run by the National Security Agency unless the government promises to not intentionally collect data of Americans. TechCrunch reports: The bipartisan amendment -- just 15 lines in length -- would compel the government to not knowingly collect communications -- like emails, messages and browsing data -- on Americans without a warrant. Reps. Justin Amash (R-MI, 3rd) and Zoe Lofgren (D-CA, 19th) have already garnered the support from some of the largest civil liberties and rights groups, including the ACLU, the EFF, FreedomWorks, New America and the Sunlight Foundation.

Under the current statute, the NSA can use its Section 702 powers to collect and store the communications of foreign targets located outside the U.S. by tapping into the fiber cables owned and run by U.S. telecom giants. But this massive data collection effort also inadvertently vacuums up Americans' data, who are typically protected from unwarranted searches under the Fourth Amendment. The government has consistently declined to say how many Americans are caught up in the NSA's data collection. For the 2018 calendar year, the government said it made more than 9,600 warrantless searches of Americans' communications, up 28% year-over-year.

Google

YouTube's Crackdown on Violent Extremism Mistakenly Whacks Channels Fighting Violent Extremism (boingboing.net) 313

AmiMoJo shares an article by Cory Doctorow: Wednesday, YouTube announced that it would shut down, demonetize and otherwise punish channels that promoted violent extremism, "supremacy" and other forms of hateful expression; predictably enough, this crackdown has caught some of the world's leading human rights campaigners, who publish YouTube channels full of examples of human rights abuses in order to document them and prompt the public and governments to take action....

Some timely reading: Caught in the Net: The Impact of "Extremist" Speech Regulations on Human Rights Content, a report by the Electronic Frontier Foundation's Jillian C. York: "The examples highlighted in this document show that casting a wide net into the Internet with faulty automated moderation technology not only captures content deemed extremist, but also inadvertently captures useful content like human rights documentation, thus shrinking the democratic sphere. No proponent of automated content moderation has provided a satisfactory solution to this problem."

A British history teacher living in Romania complained Wednesday that his YouTube channel had been banned completely from YouTube, possibly over its documenting of propaganda speeches from World War II. He tweeted that he was frustrated that "15 years of materials for #HistoryTeacher community have ended so abruptly."

Later that same day, his account was restored -- but he's still concerned about other YouTube accounts. "It's absolutely vital that @YouTube work to undo the damage caused by their indiscriminate implementation as soon as possible," he tweeted Wednesday. "Access to important material is being denied wholesale as many other channels are left branded as promoting hate when they do nothing of the sort."
Advertising

Google Struggles To Justify Why It's Restricting Ad Blockers In Chrome (vice.com) 178

An anonymous reader quotes a report from Vice News: Google has found itself under fire for plans to limit the effectiveness of popular ad blocking extensions in Chrome. While Google says the changes are necessary to protect the "user experience" and improve extension security, developers and consumer advocates say the company's real motive is money and control. In the wake of ongoing backlash to the proposal, Chrome software security engineer Chris Palmer took to Twitter this week to claim the move was intended to help improve the end-user browsing experience, and paid enterprise users would be exempt from the changes.

Chrome security leader Justin Schuh also said the changes were driven by privacy and security concerns. Ad-blocker developers, however, aren't buying it. uBlock Origin developer Raymond Hill, for example, argued this week that if user experience were the goal, there were other solutions that wouldn't hamstring existing extensions. "Web pages load slow because of bloat, not because of the blocking ability of the webRequest API -- at least for well crafted extensions," Hill said. He argued that Google's motivation had little to do with the end-user experience, and far more to do with protecting advertising revenue from the rising popularity of ad-blocking extensions.
The team behind the EFF's Privacy Badger ad-blocking extension also spoke out against the changes. "Google's claim that these new limitations are needed to improve performance is at odds with the state of the internet," the organization said. "Sites today are bloated with trackers that consume data and slow down the user experience. Tracker blockers have improved the performance and user experience of many sites. Why not let independent developers innovate where the Chrome team isn't?"
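For background on the API at issue: under Manifest V2, blockers register a blocking `webRequest` listener whose code runs on every request and can cancel it; Manifest V3 replaces this with `declarativeNetRequest`, where the extension instead ships a static rule list that Chrome evaluates itself, so the extension's own code never sees the request. A minimal rule file might look like this (a sketch following Chrome's documented rule schema; the hostname is a placeholder):

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||ads.example.com^",
      "resourceTypes": ["script", "image"]
    }
  }
]
```

Because rules like this are fixed and declarative, the browser can enforce them efficiently, but extensions lose the ability to run arbitrary filtering logic per request, which is the capability the developers quoted above are defending.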
Electronic Frontier Foundation

Redditor Allowed To Stay Anonymous, Court Rules (cnet.com) 131

Online free speech has been given a victory, with a federal court ruling that a Redditor can remain anonymous in a copyright lawsuit. From a report: This means anyone from around the globe who posts on Reddit can still rely on First Amendment protections for anonymous free speech, because Reddit is a US platform with a US audience. The Electronic Frontier Foundation fought on behalf of Reddit commenter Darkspilver, a Jehovah's Witness who posted public and internal documents from The Watch Tower Bible and Tract Society online. Watch Tower subpoenaed Reddit to provide identity information on Darkspilver for the court case, but the EFF filed a motion to quash this, citing "deep concerns that disclosure of their identity would cause them to be disfellowshipped by their community." In February 2019, Darkspilver posted an advertisement by the Jehovah's Witness organization that asks for donations, as well as a chart showing what personal data the organization keeps. Watch Tower said both of these were copyrighted items. The Redditor argued it was fair use, because he posted the ad for commentary and criticism purposes.
Government

Critics Call White House Social Media Bias Survey A 'Data Collection Ploy' (sfgate.com) 199

An anonymous reader quotes the Washington Post: Venky Ganesan, a partner at technology investor Menlo Ventures, told The Washington Post that the White House's new survey about bias on social media is "pure kabuki theatre" and an attempt to curry political points with conservatives. He said the Trump administration's repeated accusations that tech companies censor conservative voices are unfounded because even though most Silicon Valley executives are liberal or libertarian, they wouldn't let politics get in the way of their primary goal: making money...

The Internet Association, a trade association representing Facebook, Google and other tech companies, also pushed back on President Trump's repeated accusations that their products are biased against conservatives. The association says the platforms are open and enable the speech of all Americans -- including the president himself. "That's why the president uses Twitter so much," said Michael Beckerman, the Internet Association's chief executive. "He actually used Twitter for this particular announcement, which is perhaps ironic."

The article adds that the Trump administration "declined to tell The Washington Post what it planned to do with the data it's amassing." But on Twitter the New York Times technology columnist Kevin Roose argued that the survey "is just going to be used to assemble a voter file, which Trump will then pay Facebook millions of dollars to target with ads about how biased Facebook is."

Vice also believes it's a "craven data collection ploy" and "an elaborate way of getting people to subscribe to the White House's email list," adding "If this whole enterprise feels shady, that's because it is... The site isn't even hosted on a government server, but was created with Typeform, a Spain-based web tool that lets anyone set up simple surveys." Mashable also notes that the site "also just so happens to have an absolutely bonkers privacy policy" which includes allowing the White House to edit everything that's submitted.

Electronic Frontier Foundation

Censorship 'Can't Be The Only Answer' To Anti-Vax Misinformation, Argues EFF (eff.org) 313

Despite the spread of anti-vaccine misinformation, "censorship cannot be the only answer," argues the EFF, adding that "removing entire categories of speech from a platform does little to solve the underlying problems."

"Tech companies and online platforms have other ways to address the rapid spread of disinformation, including addressing the algorithmic 'megaphone' at the heart of the problem and giving users control over their own feeds... " Anti-vax information is able to thrive online in part because it exists in a data void in which available information about vaccines online is "limited, non-existent, or deeply problematic." Because the merit of vaccines has long been considered a decided issue, there is little recent scientific literature or educational material to take on the current mountains of disinformation. Thus, someone searching for recent literature on vaccines will likely find more anti-vax content than empirical medical research supporting vaccines. Censoring anti-vax disinformation won't address this problem.

Even attempts at the impossible task of wiping anti-vax disinformation from the Internet entirely will put it beyond the reach of researchers, public health professionals, and others who need to be able to study it and understand how it spreads. In a worst-case scenario, well-intentioned bans on anti-vax content could actually make this problem worse. Facebook, for example, has over-adjusted in the past to the detriment of legitimate educational health content...

Platforms must address one of the root causes behind disinformation's spread online: the algorithms that decide what content users see and when. And they should start by empowering users with more individualized tools that let them understand and control the information they see.... Users shouldn't be held hostage to a platform's proprietary algorithm. Instead of serving everyone "one algorithm to rule them all" and giving users just a few opportunities to tweak it, platforms should open up their APIs to allow users to create their own filtering rules for their own algorithms. News outlets, educational institutions, community groups, and individuals should all be able to create their own feeds, allowing users to choose who they trust to curate their information and share their preferences with their communities.
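The user-controlled filtering the EFF describes can be sketched concretely. Below is a hypothetical toy model, not any platform's real API: a "rule" is just a predicate over posts, anyone can publish rules, and a user's feed applies whichever rules that user has chosen to subscribe to.

```python
# Toy sketch of user-composable feed filtering (hypothetical; no real
# platform API is being modeled). A rule is a predicate over posts.
from dataclasses import dataclass, field
from typing import Callable, List, Set

@dataclass
class Post:
    author: str
    text: str
    tags: Set[str] = field(default_factory=set)

Rule = Callable[[Post], bool]  # True = keep the post

def build_feed(posts: List[Post], rules: List[Rule]) -> List[Post]:
    """Keep only posts accepted by every rule the user subscribed to."""
    return [p for p in posts if all(rule(p) for rule in rules)]

# Rules could be published by anyone -- a news outlet, a community
# group, or the user -- and shared like filter lists.
mute_author: Rule = lambda p: p.author != "spam_account"
require_tag: Rule = lambda p: "outdoors" in p.tags

posts = [
    Post("alice", "New trail photos", {"outdoors"}),
    Post("spam_account", "Buy now!!!", {"ads"}),
    Post("bob", "Hot take", {"politics"}),
]

feed = build_feed(posts, [mute_author, require_tag])
print([p.author for p in feed])  # ['alice']
```

The point of the design is that curation moves out of a single proprietary ranking function and into small, swappable rules that users pick and combine themselves.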

Government

California's Politicians Rush To Gut Internet Privacy Law With Pro-Tech Giant Amendments (theregister.co.uk) 59

The right for Californians to control the private data that tech companies hold on them may be undermined today at a critical committee hearing in Sacramento. The Register reports: The Privacy And Consumer Protection Committee will hold a special hearing on Tuesday afternoon to discuss and vote on nine proposed amendments to the California Consumer Privacy Act (CCPA) -- which was passed last year in the U.S. state but has yet to come into force. Right now, the legislation is undergoing tweaks at the committee stage. Privacy advocates are warning that most of the proposals before the privacy committee are influenced by the very industry that the law was supposed to constrain: big tech companies like Google, Facebook, and Amazon.

In most cases, the amendments seek to add carefully worded exemptions to the law that would benefit business at the cost of consumer rights. But most upsetting to privacy folk is the withdrawal of an amendment by Assembly member Buffy Wicks (D-15th District) that incorporated changes that would enhance consumer data privacy rights. Wicks' proposal would have given consumers more of a say in what is done with their personal data and more power to sue companies that break the rules. But the Assembly member pulled the measure the day before the hearing because it was not going to get the necessary votes. If a measure is voted down it cannot be reintroduced in that legislative session.

Privacy

Corporate Surveillance: When Employers Collect Data on Their Workers (cnbc.com) 54

An anonymous reader quotes CNBC: The emergence of sensor and other technologies that let businesses track, listen to and even watch employees while on company time is raising concern about corporate levels of surveillance... Earlier this year, Amazon received a patent for an ultrasonic bracelet that can detect a warehouse worker's location and monitor their interaction with inventory bins by using ultrasonic sound pulses. The system can track when and where workers put in or remove items from the bins. An Amazon spokesperson said the company has "no plans to introduce this technology" but that, if implemented in the future, it could free up associates' hands, which now hold scanners to check and fulfill orders.

Walmart last year patented a system that lets the retail giant listen in on workers and customers. The system can track employee "performance metrics" and check that employees are performing their jobs efficiently and correctly by listening for sounds such as the rustling of bags or the beeps of scanners at the checkout line, and it can determine the number of items placed in bags and the number of bags used. Sensors can also capture sounds from guests talking while in line and determine whether employees are greeting guests. Walmart spokesman Kory Lundberg said the company doesn't have any immediate plans to implement the system.

Logistics company UPS has been using sensors in its delivery trucks to track usage, making sure drivers are wearing seat belts and that maintenance is up to date.

Companies are also starting to analyze digital data, such as emails and calendar info, in the hopes of squeezing more productivity out of their workers. Microsoft's Workplace Analytics lets employers monitor data such as time spent on email, meeting time or time spent working after hours. Several enterprises, including Freddie Mac and CBRE, have tested the system.

A senior staff attorney for the EFF argues that new consumer privacy laws may not apply to employees. The article also cites a recent survey by Accenture in which 62% of executives "said their companies are using new technologies to collect data on people -- from the quality of work to safety and well-being" -- even though "fewer than a third said they feel confident they are using the data responsibly."

Yet the leader of Accenture's talent and organization practice argues that workforce data "could boost revenue by 6.4%. This has encouraged workers to be open to responsible use of data, but they want to know that they will get benefits and return on their time."
