Google

Google Pledges To Destroy Browsing Data To Settle 'Incognito' Lawsuit (wsj.com) 35

Google plans to destroy a trove of data that reflects millions of users' web-browsing histories, part of a settlement of a lawsuit that alleged the company tracked millions of users without their knowledge. WSJ: The class action, filed in 2020, accused Google of misleading users about how Chrome tracked the activity of anyone who used the private "Incognito" browsing option. The lawsuit alleged that Google's marketing and privacy disclosures didn't properly inform users of the kinds of data being collected, including details about which websites they viewed. The settlement details, filed Monday in San Francisco federal court, set out the actions the company will take to change its practices around private browsing. According to the court filing, Google has agreed to destroy billions of data points that the lawsuit alleges it improperly collected, to update disclosures about what it collects in private browsing and give users the option to disable third-party cookies in that setting.

The agreement doesn't include damages for individual users. But the settlement will allow individuals to file claims; the plaintiff attorneys have already filed 50 such claims in California state court. Attorney David Boies, who represents the consumers in the lawsuit, said the settlement requires Google to delete and remediate "in unprecedented scope and scale" the data it improperly collected. "This settlement is an historic step in requiring honesty and accountability from dominant technology companies," Boies said.

AT&T

AT&T Says Data From 73 Million Customers Has Leaked Onto the Dark Web (cnn.com) 21

Personal data from 73 million AT&T customers has leaked onto the dark web, reports CNN — both current and former customers.

AT&T has launched an investigation into the source of the data leak... In a news release Saturday morning, the telecommunications giant said the data was "released on the dark web approximately two weeks ago," and contains information such as account holders' Social Security numbers. ["The information varied by customer and account," AT&T said in a statement, "but may have included full name, email address, mailing address, phone number, social security number, date of birth, AT&T account number and passcode."]

"It is not yet known whether the data ... originated from AT&T or one of its vendors," the company added. "Currently, AT&T does not have evidence of unauthorized access to its systems resulting in exfiltration of the data set."

The data seems to have been from 2019 or earlier. The leak does not appear to contain financial information or specifics about call history, according to AT&T. The company said the leak shows approximately 7.6 million current account holders and 65.4 million former account holders were affected.

CNN says the first reports of the leak came two weeks ago from a social media account claiming to offer "the largest collection of malware source code, samples, and papers." Reached for comment by CNN, AT&T said at the time that "We have no indications of a compromise of our systems."

AT&T's website now includes a special page with an FAQ — and a tagline announcing "We take cybersecurity very seriously..."

"It has come to our attention that a number of AT&T passcodes have been compromised..."

The page points out that AT&T has already reset the passcodes of "all 7.6 million impacted customers." It's only further down in the FAQ that they acknowledge the breach "appears to be from 2019 or earlier, impacting approximately 7.6 million current AT&T account holders and 65.4 million former account holders." The FAQ continues: "Our internal teams are working with external cybersecurity experts to analyze the situation... We encourage customers to remain vigilant by monitoring account activity and credit reports. You can set up free fraud alerts from nationwide credit bureaus — Equifax, Experian, and TransUnion. You can also request and review your free credit report at any time via Freecreditreport.com..."

"We will reach out by mail or email to individuals with compromised sensitive personal information, offering complimentary identity theft and credit monitoring services... If your information was impacted, you will be receiving an email or letter from us explaining the incident, what information was compromised, and what we are doing for you in response."

Government

Do Age Verification Laws Drag Us Back to the Dark Ages of the Internet? (404media.co) 159

404 Media claims to have identified "the fundamental flaw with the age verification bills and laws" that have already passed in eight state legislatures (with two more taking effect in July): "the delusional, unfounded belief that putting hurdles between people and pornography is going to actually prevent them from viewing porn."

They argue that age verification laws "drag us back to the dark ages of the internet." Slashdot reader samleecole shared this excerpt: What will happen, and is already happening, is that people — including minors — will go to unmoderated, actively harmful alternatives that don't require handing over a government-issued ID to see people have sex. Meanwhile, performers and companies that are trying to do the right thing will suffer....

The legislators passing these bills are doing so under the guise of protecting children, but what's actually happening is a widespread rewiring of the scaffolding of the internet. They ignore long-established legal precedent that has said for years that age verification is unconstitutional, eventually and inevitably reducing everything we see online, without impossible privacy hurdles and compromises, to that which is not "harmful to minors." The people who live in these states, including the minors the law is allegedly trying to protect, are worse off because of it. So is the rest of the internet.

Yet new legislation is advancing in Kentucky and Nebraska, while the state of Kansas just passed a law which even requires age-verification for viewing "acts of homosexuality," according to a report: Websites can be fined up to $10,000 for each instance a minor accesses their content, and parents are allowed to sue for damages of at least $50,000. This means that the state can "require age verification to access LGBTQ content," according to attorney Alejandra Caraballo, who said on Threads that "Kansas residents may soon need their state IDs" to access material that simply "depicts LGBTQ people."
One newspaper opinion piece argues there's an easier solution: don't buy your children a smartphone. Or we could purchase any of the various software packages that block social media and obscene content from their devices. Or we could allow them to use social media, but limit their screen time. Or we could educate them about the issues that social media causes and simply trust them to make good choices. All of these options would have been denied to us if we lived in a state that passed a strict age verification law. Not only do age verification laws reduce parental freedom, but they also create myriad privacy risks. Requiring platforms to collect government IDs and face scans opens the door to potential exploitation by hackers and enemy governments. The very information intended to protect children could end up in the wrong hands, compromising the privacy and security of millions of users...

Ultimately, age verification laws are a misguided attempt to address the complex issue of underage social media use. Instead of placing undue burdens on users and limiting parental liberty, lawmakers should look for alternative strategies that respect privacy rights while promoting online safety.

This week a trade association for the adult entertainment industry announced plans to petition America's Supreme Court to intervene.
Cellphones

America's DHS Is Expected to Stop Buying Access to Your Phone Movements (notus.org) 49

America's Department of Homeland Security "is expected to stop buying access to data showing the movement of phones," reports the U.S. news site NOTUS.

They call the purchases "a controversial practice that has allowed it to warrantlessly track hundreds of millions of people for years." Since 2018, agencies within the department — including Immigration and Customs Enforcement, U.S. Customs and Border Protection and the U.S. Secret Service — have been buying access to commercially available data that revealed the movement patterns of devices, many inside the United States. Commercially available phone data can be bought and searched without judicial oversight.

Three people familiar with the matter said the Department of Homeland Security isn't expected to buy access to more of this data, nor will the agency make any additional funding available to buy access to this data. The agency "paused" this practice after a 2023 DHS watchdog report [which had recommended they draw up better privacy controls and policies]. However, the department instead appears to be winding down the use of the data...

"The information that is available commercially would kind of knock your socks off," said former top CIA official Michael Morell on a podcast last year. "If we collected it using traditional intelligence methods, it would be top-secret sensitive. And you wouldn't put it in a database, you'd keep it in a safe...." DHS' internal watchdog opened an investigation after a bipartisan outcry from lawmakers and civil society groups about warrantless tracking...

"Meanwhile, U.S. spy agencies are fighting to preserve the same capability as part of the renewal of surveillance authorities," the article adds.

"A bipartisan coalition of lawmakers, led by Democratic Sen. Ron Wyden in the Senate and Republican Rep. Warren Davidson in the House, is pushing to ban U.S. government agencies from buying data on Americans."
Security

'Security Engineering' Author Ross Anderson, Cambridge Professor, Dies at Age 67 (therecord.media) 7

The Record reports: Ross Anderson, a professor of security engineering at the University of Cambridge who is widely recognized for his contributions to computing, passed away at home on Thursday, according to friends and colleagues who have been in touch with his family and the University.

Anderson, who also taught at Edinburgh University, was one of the most respected academic engineers and computer scientists of his generation. His research included machine learning, cryptographic protocols, hardware reverse engineering and breaking ciphers, among other topics. His public achievements include, but are by no means limited to, being awarded the British Computer Society's Lovelace Medal in 2015, and publishing several editions of the Security Engineering textbook.

Anderson's security research made headlines throughout his career, with his name appearing in over a dozen Slashdot stories...

My favorite story? UK Banks Attempt To Censor Academic Publication.

"Cambridge University has resisted the demands and has sent a response to the bankers explaining why they will keep the page online..."


Cloud

Cloud Server Host Vultr Rips User Data Ownership Clause From ToS After Web Outage (theregister.com) 28

Tobias Mann reports via The Register: Cloud server provider Vultr has rapidly revised its terms-of-service after netizens raised the alarm over broad clauses that demanded the "perpetual, irrevocable, royalty-free" rights to customer "content." The red tape was updated in January, as captured by the Internet Archive, and this month users were asked to agree to the changes by a pop-up that appeared when using their web-based Vultr control panel. That prompted folks to look through the terms, and there they found clauses granting the US outfit a "worldwide license ... to use, reproduce, process, adapt ... modify, prepare derivative works, publish, transmit, and distribute" user content.

It turned out these demands have been in place since before the January update; customers have only just noticed them now. Given Vultr hosts servers and storage in the cloud for its subscribers, some feared the biz was giving itself way too much ownership over their stuff, all in this age of AI training data being put up for sale by platforms. In response to online outcry, largely stemming from Reddit, Vultr in the past few hours rewrote its ToS to delete those asserted content rights. CEO J.J. Kardwell told The Register earlier today it's a case of standard legal boilerplate being taken out of context. The clauses were supposed to apply to customer forum posts, rather than private server content, and while, yes, the terms make more sense with that in mind, one might argue the legalese was overly broad in any case.

"We do not use user data," Kardwell stressed to us. "We never have, and we never will. We take privacy and security very seriously. It's at the core of what we do globally." [...] According to Kardwell, the content clauses are entirely separate to user data deployed in its cloud, and are more aimed at one's use of the Vultr website, emphasizing the last line of the relevant fine print: "... for purposes of providing the services to you." He also pointed out that the wording has been that way for some time, and added the prompt asking users to agree to an updated ToS was actually spurred by unrelated Microsoft licensing changes. In light of the controversy, Vultr vowed to remove the above section to "simplify and further clarify" its ToS, and has indeed done so. In a separate statement, the biz told The Register the removal will be followed by a full review and update to its terms of service.
"It's clearly causing confusion for some portion of users. We recognize that the average user doesn't have a law degree," Kardwell added. "We're very focused on being responsive to the community and the concerns people have and we believe the strongest thing we can do to demonstrate that there is no bad intent here is to remove it."
Government

Biden Orders Every US Agency To Appoint a Chief AI Officer 48

An anonymous reader quotes a report from Ars Technica: The White House has announced the "first government-wide policy (PDF) to mitigate risks of artificial intelligence (AI) and harness its benefits." To coordinate these efforts, every federal agency must appoint a chief AI officer with "significant expertise in AI." Some agencies have already appointed chief AI officers, but any agency that has not must appoint a senior official over the next 60 days. If an official already appointed as a chief AI officer does not have the necessary authority to coordinate AI use in the agency, they must be granted additional authority or else a new chief AI officer must be named.

Ideal candidates, the White House recommended, might include chief information officers, chief data officers, or chief technology officers, the Office of Management and Budget (OMB) policy said. As chief AI officers, appointees will serve as senior advisers on AI initiatives, monitoring and inventorying all agency uses of AI. They must conduct risk assessments to consider whether any AI uses are impacting "safety, security, civil rights, civil liberties, privacy, democratic values, human rights, equal opportunities, worker well-being, access to critical resources and services, agency trust and credibility, and market competition," OMB said. Perhaps most urgently, by December 1, the officers must correct all non-compliant AI uses in government, unless an extension of up to one year is granted.

The chief AI officers will seemingly enjoy a lot of power and oversight over how the government uses AI. It's up to the chief AI officers to develop a plan to comply with minimum safety standards and to work with chief financial and human resource officers to develop the necessary budgets and workforces to use AI to further each agency's mission and ensure "equitable outcomes," OMB said. [...] Among the chief AI officer's primary responsibilities is determining what AI uses might impact the safety or rights of US citizens. They'll do this by assessing AI impacts, conducting real-world tests, independently evaluating AI, regularly evaluating risks, properly training staff, providing additional human oversight where necessary, and giving public notice of any AI use that could have a "significant impact on rights or safety," OMB said. Chief AI officers will ultimately decide if any AI use is safety- or rights-impacting and must adhere to OMB's minimum standards for responsible AI use. Once a determination is made, the officers will "centrally track" the determinations, informing OMB of any major changes to "conditions or context in which the AI is used." The officers will also regularly convene "a new Chief AI Officer Council to coordinate" efforts and share innovations government-wide.
Chief AI officers must consult with the public and maintain options to opt-out of "AI-enabled decisions," OMB said. However, these chief AI officers also have the power to waive opt-out options "if they can demonstrate that a human alternative would result in a service that is less fair (e.g., produces a disparate impact on protected classes) or if an opt-out would impose undue hardship on the agency."
AI

AI Leaders Press Advantage With Congress as China Tensions Rise (nytimes.com) 14

Silicon Valley chiefs are swarming the Capitol to try to sway lawmakers on the dangers of falling behind in the AI race. From a report: In recent weeks, American lawmakers have moved to ban the Chinese-owned app TikTok. President Biden reinforced his commitment to overcome China's rise in tech. And the Chinese government added chips from Intel and AMD to a blacklist of imports. Now, as the tech and economic cold war between the United States and China accelerates, Silicon Valley's leaders are capitalizing on the strife with a lobbying push for their interests in another promising field of technology: artificial intelligence.

On May 1, more than 100 tech chiefs and investors, including Alex Karp, the head of the defense contractor Palantir, and Roelof Botha, the managing partner of the venture capital firm Sequoia Capital, will come to Washington for a daylong conference and private dinner focused on drumming up more hawkishness toward China's progress in A.I. Dozens of lawmakers, including Speaker Mike Johnson, Republican of Louisiana, will also attend the event, the Hill & Valley Forum, which will include fireside chats and keynote discussions with members of a new House A.I. task force.

Tech executives plan to use the event to directly lobby against A.I. regulations that they consider onerous, as well as ask for more government spending on the technology and research to support its development. They also plan to ask the government to relax immigration restrictions to bring more A.I. experts to the United States. The event highlights an unusual area of agreement between Washington and Silicon Valley, which have long clashed on topics like data privacy, children's online protections and even China.

Social Networks

TikTok Is Under Investigation By the FTC Over Data Practices (apnews.com) 11

TikTok is being investigated by the FTC over its data and security practices, "a probe that could lead to a settlement or a lawsuit against the company," reports the Associated Press. From the report: In its investigation, the FTC has been looking into whether TikTok violated a portion of federal law that prohibits "unfair and deceptive" business practices by denying that individuals in China had access to U.S. user data, said the person, who is not authorized to discuss the investigation. The agency also is scrutinizing the company over potential violations of the Children's Online Privacy Protection Act, which requires kid-oriented apps and websites to get parents' consent before collecting personal information of children under 13.

The agency is nearing the conclusion of its investigation and could settle with TikTok in the coming weeks. But there's not a deadline for an agreement, the person said. If the FTC moves forward with a lawsuit instead, it would have to refer the case to the Justice Department, which would have 45 days to decide whether it wants to file a case on the FTC's behalf, make changes or send it back to the agency to pursue on its own.

Privacy

Portugal Orders Altman's Worldcoin To Halt Data Collection (reuters.com) 24

Portugal's data regulator has ordered Sam Altman's iris-scanning project Worldcoin to stop collecting biometric data for 90 days, it said on Tuesday, in the latest regulatory blow to a venture that has raised privacy concerns in multiple countries. From a report: Worldcoin encourages people to have their faces scanned by its "orb" devices, in exchange for a digital ID and free cryptocurrency. More than 4.5 million people in 120 countries have signed up, according to Worldcoin's website. Portugal's data regulator, the CNPD, said there was a high risk to citizens' data protection rights, which justified urgent intervention to prevent serious harm. More than 300,000 people in Portugal have provided Worldcoin with their biometric data, the CNPD said.
Businesses

Telegram's Peer-to-Peer Login System is a Risky Way To Save $5 a Month 32

Telegram is offering a new way to earn a premium subscription free of charge: all you have to do is volunteer your phone number to relay one-time passwords (OTP) to other users. This, in fact, sounds like an awful idea -- particularly for a messaging service based around privacy. From a report: X user @AssembleDebug spotted details about the new program on the English-language version of a popular Russian-language Telegram information channel. Sure enough, there's a section in Telegram's terms of service outlining the new "Peer-to-Peer Login" or P2PL program, which is currently only offered on Android and in certain (unspecified) locations. By opting in to the program, you agree to let Telegram use your phone number to send up to 150 texts with OTPs to other users logging in to their accounts. Every month that your number is used to send the minimum number of OTPs, you'll get a gift code for a one-month premium subscription. Boy does this sound like a bad idea, starting with the main issue: your phone number is seen by the recipient every time it's used to send an OTP.
Social Networks

DeSantis Signs Bill Requiring Parental Consent For Kids Under 16 To Hold Social Media Accounts 151

Florida Governor Ron DeSantis just signed into law HB 3 [PDF], a bill that will give parents of teens under 16 more control over their kids' access to social media and require age verification for many websites. From a report: The bill requires social media platforms to prevent kids under 14 from creating accounts, and delete existing ones. It also requires parent or guardian consent for 14- and 15-year-olds to create or maintain social media accounts and mandates that platforms delete social media accounts and personal information for this age group at the teen's or parent's request.

Companies that fail to promptly delete accounts belonging to 14- and 15-year-olds can be sued on behalf of those kids and may owe them up to $10,000 in damages each. A "knowing or reckless" violation could also be considered an unfair or deceptive trade practice, subject to up to $50,000 in civil penalties per violation. The bill also requires many commercial apps and websites to verify their users' ages -- something that introduces a host of privacy concerns. But it does require websites to give users the option of "anonymous age verification," which is defined as verification by a third party that cannot retain identifying information after the task is complete.
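The "anonymous age verification" the bill describes can be sketched as a token scheme: a third-party verifier inspects an ID, discards it, and hands back a signed assertion that carries no identity. Below is a hypothetical Python sketch (the function names and the HMAC construction are illustrative only; a real deployment would use an asymmetric signature so that platforms can check tokens without holding the verifier's secret):

```python
import hmac
import hashlib
import secrets

VERIFIER_KEY = secrets.token_bytes(32)  # held by the third-party verifier

def issue_age_token(birth_year: int, current_year: int = 2024):
    """Verifier checks the ID, then discards it: the returned token
    asserts only 'over 16' and carries no identifying information."""
    if current_year - birth_year < 16:
        return None
    claim = b"over16:" + secrets.token_hex(8).encode()  # random nonce, no identity
    sig = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).hexdigest().encode()
    return claim + b":" + sig

def platform_accepts(token: bytes) -> bool:
    """Platform validates the signature; it never sees the underlying ID."""
    claim, _, sig = token.rpartition(b":")
    expected = hmac.new(VERIFIER_KEY, claim, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(sig, expected)

token = issue_age_token(birth_year=2000)  # verifier saw the ID, kept nothing
assert token is not None and platform_accepts(token)
```

Whether any real verifier actually retains nothing after issuing the token is exactly the trust question the bill's critics raise.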
The Courts

Judge Orders YouTube to Reveal Everyone Who Viewed A Video (mashable.com) 169

"If you've ever jokingly wondered if your search or viewing history is going to 'put you on some kind of list,' your concern may be more than warranted," writes Mashable: In now-unsealed court documents reviewed by Forbes, Google was ordered to hand over the names, addresses, telephone numbers, and user activity of YouTube accounts and IP addresses that watched select YouTube videos, part of a larger criminal investigation by federal investigators.

The videos were sent by undercover police to a suspected cryptocurrency launderer... In conversations with the bitcoin trader, investigators sent links to public YouTube tutorials on mapping via drones and augmented reality software, Forbes details. The videos were watched more than 30,000 times, presumably by thousands of users unrelated to the case. YouTube's parent company Google was ordered by federal investigators to quietly hand over all such viewer data for the period of Jan. 1 to Jan. 8, 2023...

"According to documents viewed by Forbes, a court granted the government's request for the information," writes PC Magazine, adding that Google was asked "to not publicize the request." The requests are raising alarms for privacy experts who say the requests are unconstitutional and are "transforming search warrants into digital dragnets" by potentially targeting individuals who are not associated with a crime based simply on what they may have watched online.
That quote came from Albert Fox Cahn, executive director at the Surveillance Technology Oversight Project, who elaborates in Forbes' article. "No one should fear a knock at the door from police simply because of what the YouTube algorithm serves up. I'm horrified that the courts are allowing this."

Thanks to long-time Slashdot reader schwit1 for sharing the article.
Privacy

Steve Wozniak Decries Tracking's Effect on Privacy, Calls Out 'Hypocrisy' of Only Banning TikTok (cnn.com) 137

In an interview Saturday, CNN first asked Steve Wozniak about Apple's "walled garden" approach — and whether there's any disconnect between Apple's stated interest in user security and privacy, and its own self-interest.

Wozniak responded, "I think there are things you can say on all sides of it. I'm kind of glad for the protection that I have for my privacy and for, you know, not getting hacked as much. Apple does a better job than the others.

And tracking you — tracking you is questionable, but my gosh, look at what we're accusing TikTok of, and then go look at Facebook and Google... That's how they make their business! I mean, Facebook was a great idea. But then they make all their money just by tracking you and advertising.

And Apple doesn't really do that as much. I consider Apple the good guy.

So then CNN directly asked Wozniak's opinion about the proposed ban on TikTok in the U.S. "Well, one, I don't understand it. I don't see why. I mean, I get a lot of entertainment out of TikTok — and I avoid the social web. But I love to watch TikTok, even if it's just for rescuing dog videos and stuff.

And so I'm thinking, well, what are we saying? We're saying 'Oh, you might be tracked by the Chinese'. Well, they learned it from us.

I mean, look, if you have a principle — a person should not be tracked without them knowing it? It's kind of a privacy principle — I was a founder of the EFF. And if you have that principle, you apply it the same to every company, or every country. You don't say, 'Here's one case where we're going to outlaw an app, but we're not going to do it in these other cases.'

So I don't like the hypocrisy. And that's always obviously common from a political realm.

Classic Games (Games)

New Book Remembers LAN Parties and the 1990s 'Multiplayer Revolution' (cnn.com) 74

CNN looks back to when "dial-up internet (and its iconic dial tone)" was "still a thing": File-sharing services like Napster and LimeWire were just beginning to take off... And in sweaty dorm rooms and sparse basements across the world, people brought their desktop monitors together to set up a local area network (LAN) and play multiplayer games — "Half-Life," "Counter-Strike," "Starsiege: Tribes," "StarCraft," "WarCraft" or "Unreal Tournament," to name just a few. These were informal but high-stakes gatherings, then known as LAN parties, whether the prize was a box of energy drinks or just the joy of emerging victorious. The parties could last several days and nights, with gamers crowded together among heavy computers and fast food boxes, crashing underneath their desks in sleeping bags and taking breaks to pull pranks on each other or watch movies...

It's this nostalgia that prompted writer and podcaster Merritt K to document the era's gaming culture in her new photobook "LAN Party: Inside the Multiplayer Revolution." After floating the idea on X, the social media platform formerly known as Twitter, she received an immediate — and visceral — response from old-school gamers all too keen to share memories and photos from LAN parties and gaming conventions across the world... It's strange to remember that the internet was once a place you went to spend time with other real people; a tethered space, not a cling-film-like reality enveloping the corporeal world from your own pocket....

Growing up as a teenager in this era, you could feel a sense of hope (that perhaps now feels like naivete) about the possibilities of technology, K explained. The book is full of photos featuring people smiling and posing with their desktop monitors, pride and fanfare apparent... "It felt like, 'Wow, the future is coming,'" K said. "It was this exciting time where you felt like you were just charting your own way. I don't want to romanticize it too much, because obviously it wasn't perfect, but it was a very, very different experience...."

"We've kind of lost a lot of control, I think over our relationship to technology," K said. "We have lost a lot of privacy as well. There's less of a sense of exploration because there just isn't as much out there."

One photo shows a stack of Mountain Dew cans (remembering that by 2007 the company had even released a line of soda called "Game Fuel"). "It was a little more communal," the book's author told CNN. "If you're playing games in the same room with someone, it's a different experience than doing it online. You can only be so much of a jackass to somebody who was sitting three feet away from you..."

K adds that the feeling of connecting to people in other places "was cool. It wasn't something that was taken for granted yet."
Desktops (Apple)

Apple Criticized For Changing the macOS version of cURL (daniel.haxx.se) 75

"On December 28 2023, bugreport 12604 was filed in the curl issue tracker," writes cURL lead developer Daniel Stenberg. The title stated the problem quite clearly: "flag -cacert behavior isn't consistent between macOS and Linux." It was filed by Yuedong Wu.

The friendly reporter showed how the curl version bundled with macOS behaves differently than curl binaries built entirely from open source. Even when running the same curl version on the same macOS machine.

The curl command line option --cacert provides a way for the user to say to curl that this is the exact set of CA certificates to trust when doing the following transfer. If the TLS server cannot provide a certificate that can be verified with that set of certificates, it should fail and return an error. This particular behavior and functionality in curl has been established for many years (this option was added to curl in December 2000) and of course is provided to allow users to know that it communicates with a known and trusted server. A pretty fundamental part of what TLS does, really.

When this command line option is used with curl on macOS, the version shipped by Apple, it seems to fall back and check the system CA store in case the provided set of CA certs fails the verification. A secondary check that was not asked for, is not documented, and frankly comes as a complete surprise. Therefore, when a user runs the check with a trimmed and dedicated CA cert file, it will not fail if the system CA store contains a cert that can verify the server!

This is a security problem because now suddenly certificate checks pass that should not pass.
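The difference between the two behaviors can be sketched as a pair of toy policies, where set intersection stands in for full certificate-chain verification. This is only an illustration of the behaviors described above, not curl's or Apple's actual code:

```python
# Toy model of the two verification policies. Real TLS verification
# builds and validates certificate chains; set membership is a stand-in.

def verify_strict(server_roots: set, user_ca_set: set) -> bool:
    # Upstream curl with --cacert: trust ONLY the user-supplied CA set.
    return bool(server_roots & user_ca_set)

def verify_with_fallback(server_roots: set, user_ca_set: set,
                         system_ca_set: set) -> bool:
    # Behavior reported for Apple's build: if the user-supplied set
    # fails, silently retry against the system CA store.
    return verify_strict(server_roots, user_ca_set) or bool(server_roots & system_ca_set)

# A server whose chain ends in a system root, not in the trimmed CA file:
server = {"SystemRootCA"}
trimmed = {"MyPrivateCA"}
system = {"SystemRootCA", "MyPrivateCA"}

print(verify_strict(server, trimmed))                  # False: fails, as requested
print(verify_with_fallback(server, trimmed, system))   # True: passes unexpectedly
```

The second result is exactly the surprise the bug report describes: a check the user deliberately narrowed succeeds anyway.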

"We don't consider this something that needs to be addressed in our platforms," Apple Product Security responded. Stenberg's blog post responds, "I disagree."

Long-time Slashdot reader lee1 shares their reaction: I started to sour on MacOS about 20 years ago when I discovered that they had, without notice, substituted their own, nonstandard version of the Readline library for the one that the rest of the Unix-like world was using. This broke gnuplot and a lot of other free software...

Apple is still breaking things, this time with serious security and privacy implications.

Databases

Database For UK Nurse Registration 'Completely Unacceptable' (theregister.com) 42

Lindsay Clark reports via The Register: The UK Information Commissioner's Office has received a complaint detailing the mismanagement of personal data at the Nursing and Midwifery Council (NMC), the regulator that oversees worker registration. Employment as a nurse or midwife depends on enrollment with the NMC in the UK. According to whistleblower evidence seen by The Register, the databases on which the personal information is held lack rudimentary technical standards and practices. The NMC said its data was secure with a high level of quality, allowing it to fulfill its regulatory role, although it was on "a journey of improvement." But without basic documentation, or the primary keys or foreign keys common in database management, the Microsoft SQL Server databases -- holding information about 800,000 registered professionals -- are difficult to query and manage, making assurances on governance nearly impossible, the whistleblower told us.

The databases have no version control systems. Important fields for identifying individuals were used inconsistently -- for example, containing junk data, test data, or null data. Although the tech team used workarounds to compensate for the lack of basic technical standards, they were ad hoc and known by only a handful of individuals, creating business continuity risks should they leave the organization, according to the whistleblower. Despite having been warned of the issues of basic technical practice internally, the NMC failed to acknowledge the problems. Only after exhausting other avenues did the whistleblower raise concern externally with the ICO and The Register. The NMC stores sensitive data on behalf of the professionals that it registers, including gender, sexual orientation, gender identity, ethnicity and nationality, disability details, marital status, as well as other personal information.

The whistleblower's complaint claims the NMC falls well short of [the standards required under current UK law for data protection and the EU's General Data Protection Regulation (GDPR)]. The statement alleges that the NMC's "data management and data retrieval practices were completely unacceptable." "There is not even much by way of internal structure of the databases for self-documentation, such as primary keys, foreign keys (with a few honorable exceptions), check constraints and table constraints. Even fields that should not be null are nullable. This is frankly astonishing and not the practice of a mature, professional organization," the statement says. For example, the databases contain a unique ten-digit number (or PRN) to identify individuals registered to the NMC. However, the fields for PRNs sometimes contain individuals' names, start with a letter or other invalid data, or are simply null. The whistleblower's complaint says that the PRN problem, and other database design deficiencies, meant that it was nearly impossible to produce "accurate, correct, business critical reports ... because frankly no one knows where the correct data is to be found."
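The PRN problem is easy to picture: if a valid value is a ten-digit number, then anything null, alphabetic, or letter-prefixed should be rejected at the field level. A minimal validity check, assuming the "exactly ten digits" rule implied by the complaint:

```python
import re

# Assumed format rule, based on the complaint: a PRN is exactly ten digits.
PRN_RE = re.compile(r"\d{10}")

def is_valid_prn(value):
    """Return True only for a string of exactly ten digits."""
    return isinstance(value, str) and PRN_RE.fullmatch(value) is not None

# The kinds of junk values the whistleblower describes:
print(is_valid_prn("0123456789"))  # True
print(is_valid_prn(None))          # False: null field
print(is_valid_prn("A123456789"))  # False: starts with a letter
print(is_valid_prn("Jane Smith"))  # False: contains a name
```

A check constraint enforcing this rule at the database layer would have surfaced the invalid rows immediately, which is the whistleblower's broader point about missing constraints.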
A spokesperson for the NMC said the register was "organized and documented" in the SQL Server database. "For clarity, the register of all our nurses, midwives and nursing practitioners is held within Dynamics 365 which is our system of record. This solution and the data held within it, is secure and well documented. It does not rely on any SQL database. The SQL database referenced by the whistleblower relates to our data warehouse which we are in the process of modernizing as previously shared."
Privacy

General Motors Quits Sharing Driving Behavior With Data Brokers (nytimes.com) 34

An anonymous reader quotes a report from the New York Times: General Motors said Friday that it had stopped sharing details about how people drove its cars with two data brokers that created risk profiles for the insurance industry. The decision followed a New York Times report this month that G.M. had, for years, been sharing data about drivers' mileage, braking, acceleration and speed with the insurance industry. The drivers were enrolled -- some unknowingly, they said -- in OnStar Smart Driver, a feature in G.M.'s internet-connected cars that collected data about how the car had been driven and promised feedback and digital badges for good driving. Some drivers said their insurance rates had increased as a result of the captured data, which G.M. shared with two brokers, LexisNexis Risk Solutions and Verisk. The firms then sold the data to insurance companies. Since Wednesday, "OnStar Smart Driver customer data is no longer being shared with LexisNexis or Verisk," a G.M. spokeswoman, Malorie Lucich, said in an emailed statement. "Customer trust is a priority for us, and we are actively evaluating our privacy processes and policies."
Mozilla

Mozilla Drops Onerep After CEO Admits To Running People-Search Networks (krebsonsecurity.com) 9

An anonymous reader quotes a report from KrebsOnSecurity: The nonprofit organization that supports the Firefox web browser said today it is winding down its new partnership with Onerep, an identity protection service recently bundled with Firefox that offers to remove users from hundreds of people-search sites. The move comes just days after a report by KrebsOnSecurity forced Onerep's CEO to admit that he has founded dozens of people-search networks over the years. Mozilla only began bundling Onerep in Firefox last month, when it announced the reputation service would be offered on a subscription basis as part of Mozilla Monitor Plus. Launched in 2018 under the name Firefox Monitor, Mozilla Monitor also checks data from the website Have I Been Pwned? to let users know when their email addresses or passwords are leaked in data breaches. On March 14, KrebsOnSecurity published a story showing that Onerep's Belarusian CEO and founder Dimitri Shelest has launched dozens of people-search services since 2010, including a still-active data broker called Nuwber that sells background reports on people. Onerep and Shelest did not respond to requests for comment on that story.

But on March 21, Shelest released a lengthy statement wherein he admitted to maintaining an ownership stake in Nuwber, a consumer data broker he founded in 2015 -- around the same time he launched Onerep. Shelest maintained that Nuwber has "zero cross-over or information-sharing with Onerep," and said any other old domains that may be found and associated with his name are no longer being operated by him. "I get it," Shelest wrote. "My affiliation with a people search business may look odd from the outside. In truth, if I hadn't taken that initial path with a deep dive into how people search sites work, Onerep wouldn't have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I'm aiming to do better in the future." The full statement is available here (PDF).

In a statement released today, a spokesperson for Mozilla said it was moving away from Onerep as a service provider in its Monitor Plus product. "Though customer data was never at risk, the outside financial interests and activities of Onerep's CEO do not align with our values," Mozilla wrote. "We're working now to solidify a transition plan that will provide customers with a seamless experience and will continue to put their interests first." KrebsOnSecurity also reported that Shelest's email address was used circa 2010 by an affiliate of Spamit, a Russian-language organization that paid people to aggressively promote websites hawking male enhancement drugs and generic pharmaceuticals. As noted in the March 14 story, this connection was confirmed by research from multiple graduate students at my alma mater George Mason University.

Shelest denied ever being associated with Spamit. "Between 2010 and 2014, we put up some web pages and optimize them -- a widely used SEO practice -- and then ran AdSense banners on them," Shelest said, presumably referring to the dozens of people-search domains KrebsOnSecurity found were connected to his email addresses (dmitrcox@gmail.com and dmitrcox2@gmail.com). "As we progressed and learned more, we saw that a lot of the inquiries coming in were for people." Shelest also acknowledged that Onerep pays to run ads "on a handful of data broker sites in very specific circumstances." "Our ad is served once someone has manually completed an opt-out form on their own," Shelest wrote. "The goal is to let them know that if they were exposed on that site, there may be others, and bring awareness to there being a more automated opt-out option, such as Onerep."

United States

DOT Wants To Know How Big Airlines Use Passenger Data (theregister.com) 11

The U.S. Department of Transportation has announced it will conduct a review of the data practices of the country's ten largest airlines, amid concerns over potential misuse of customer information for upselling, overcharging, targeted advertising, and third-party data sales, as well as the security of systems handling sensitive data such as passport numbers. From a report: The probe will look at air carriers' policies and procedures to determine if they are safeguarding personal info properly, unfairly or deceptively monetizing it, or sharing it with third parties, the agency said yesterday. If they're indeed doing anything "problematic," they can look forward to scrutiny, fines, and new rules, says the DOT. "Airline passengers should have confidence that their personal information is not being shared improperly with third parties or mishandled by employees," said US Transportation Secretary Pete Buttigieg.

"This review of airline practices is the beginning of a new initiative by DOT to ensure airlines are being good stewards of sensitive passenger data." The ten airlines going under the magnifying glass are Delta, United, American, Southwest, Alaska, JetBlue, Spirit, Frontier, Hawaiian and Allegiant.
