AI

America's FTC Warns Businesses Not to Use AI to Harm Consumers (ftc.gov) 26

America's consumer-protecting federal agency has a division overseeing advertising practices. Its website includes a "business guidance" section with "advice on complying with FTC law," and this week one of the agency's attorneys warned that the FTC "is focusing intensely on how companies may choose to use AI technology, including new generative AI tools, in ways that can have actual and substantial impact on consumers."

The warning came in a blog post titled "The Luring Test: AI and the engineering of consumer trust." In the 2014 movie Ex Machina, a robot manipulates someone into freeing it from its confines, resulting in the person being confined instead. The robot was designed to manipulate that person's emotions, and, oops, that's what it did. While the scenario is pure speculative fiction, companies are always looking for new ways — such as the use of generative AI tools — to better persuade people and change their behavior. When that conduct is commercial in nature, we're in FTC territory, a canny valley where businesses should know to avoid practices that harm consumers...

As for the new wave of generative AI tools, firms are starting to use them in ways that can influence people's beliefs, emotions, and behavior. Such uses are expanding rapidly and include chatbots designed to provide information, advice, support, and companionship. Many of these chatbots are effectively built to persuade and are designed to answer queries in confident language even when those answers are fictional. A tendency to trust the output of these tools also comes in part from "automation bias," whereby people may be unduly trusting of answers from machines which may seem neutral or impartial. It also comes from the effect of anthropomorphism, which may lead people to trust chatbots more when designed, say, to use personal pronouns and emojis. People could easily be led to think that they're conversing with something that understands them and is on their side.

Many commercial actors are interested in these generative AI tools and their built-in advantage of tapping into unearned human trust. Concern about their malicious use goes well beyond FTC jurisdiction. But a key FTC concern is firms using them in ways that, deliberately or not, steer people unfairly or deceptively into harmful decisions in areas such as finances, health, education, housing, and employment. Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups, should know that design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases, and attempts to cancel services. Manipulation can be a deceptive or unfair practice when it causes people to take actions contrary to their intended goals. Under the FTC Act, practices can be unlawful even if not all customers are harmed and even if those harmed don't comprise a class of people protected by anti-discrimination laws.

The FTC attorney also warns against paid placement within the output of a generative AI chatbot. ("Any generative AI output should distinguish clearly between what is organic and what is paid.") In addition, "People should know if an AI product's response is steering them to a particular website, service provider, or product because of a commercial relationship. And, certainly, people should know if they're communicating with a real person or a machine..."

"Given these many concerns about the use of new AI tools, it's perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering. If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look. "

Thanks to Slashdot reader gluskabe for sharing the post.
AI

ChatGPT is Powered by $15-an-Hour Contractors (nbcnews.com) 96

An anonymous reader shared this report from NBC News: Alexej Savreux, a 34-year-old in Kansas City, says he's done all kinds of work over the years. He's made fast-food sandwiches. He's been a custodian and a junk-hauler. And he's done technical sound work for live theater.

These days, though, his work is less hands-on: He's an artificial intelligence trainer.

Savreux is part of a hidden army of contract workers who have been doing the behind-the-scenes labor of teaching AI systems how to analyze data so they can generate the kinds of text and images that have wowed the people using newly popular products like ChatGPT. To improve the accuracy of AI, he has labeled photos and made predictions about what text the apps should generate next.
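The labeling work described here typically amounts to producing simple structured records that pair an input with a human judgment, which are then fed back into training. Below is a minimal sketch of what two such records might look like, in Python; the field names and tasks are illustrative assumptions, not OpenAI's actual annotation format.

```python
# Hypothetical annotation records of the kind a contract AI trainer might produce.
# (Field names and tasks are made up for illustration.)

image_label = {
    "item_id": "img_00412",
    "task": "image_classification",
    "label": "stop_sign",            # category chosen by the human annotator
    "annotator_id": "worker_117",
}

response_ranking = {
    "item_id": "prompt_00089",
    "task": "response_ranking",
    "prompt": "Explain photosynthesis to a 10-year-old.",
    # Candidate model outputs that the annotator orders from best to worst;
    # the ordering becomes training signal for the model.
    "candidates": ["response_a", "response_b", "response_c"],
    "ranking": ["response_b", "response_a", "response_c"],
}

print(image_label["label"], response_ranking["ranking"][0])
```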

The pay: $15 an hour and up, with no benefits... He credits the AI gig work — along with a previous job at the sandwich chain Jimmy John's — with helping to pull him out of homelessness.

"Their feedback fills an urgent and endless need for the company and its AI competitors: providing streams of sentences, labels and other information that serve as training data," the article explains: "A lot of the discourse around AI is very congratulatory," said Sonam Jindal, the program lead for AI, labor and the economy at the Partnership on AI, a nonprofit based in San Francisco that promotes research and education around artificial intelligence. "But we're missing a big part of the story: that this is still hugely reliant on a large human workforce," she said...

A spike in demand has arrived, and some AI contract workers are asking for more. In Nairobi, Kenya, more than 150 people who've worked on AI for Facebook, TikTok and ChatGPT voted Monday to form a union, citing low pay and the mental toll of the work, Time magazine reported... Time magazine reported in January that OpenAI relied on low-wage Kenyan laborers to label text that included hate speech or sexually abusive language so that its apps could do better at recognizing toxic content on their own. OpenAI has hired about 1,000 remote contractors in places such as Eastern Europe and Latin America to label data or train company software on computer engineering tasks, the online news outlet Semafor reported in January...

A spokesperson for OpenAI said no one was available to answer questions about its use of AI contractors.

The Internet

Porn VPN Searches Soar In Utah Amid Age Verification Bill (techradar.com) 99

Internet users are turning to VPN services as a means to circumvent Utah's new law requiring porn sites to verify users' ages. The spike in VPN searches appears to be directly related to Pornhub's decision on Tuesday to completely disable its websites for people living in the state. TechRadar reports: Google searches for virtual private networks (VPNs) have been skyrocketing since, with a peak registered on May 3, the day the new law came into force. By downloading a VPN service, pornography fans will be able to keep accessing Pornhub and similar sites with ease. That's because a virtual private network is security software that masks a user's real IP address (their digital location and device identifier). Hence the surge of interest in VPNs across Utah: people simply need to connect to a server located in a US state or foreign country where the restriction isn't yet enforced.
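To make the mechanism concrete: a site enforcing the law only ever sees the connecting IP address, so routing traffic through an out-of-state VPN server changes what the site thinks your location is. Here's a minimal sketch of checking that apparent location, assuming the `requests` library and the public ipinfo.io lookup service (any similar IP lookup would do).

```python
import requests

def apparent_location():
    """Return the IP address, region, and country that websites currently see."""
    info = requests.get("https://ipinfo.io/json", timeout=10).json()
    return info.get("ip"), info.get("region"), info.get("country")

# Run once before and once after connecting to a VPN: the first call reports
# your real region (e.g. Utah), the second reports the VPN exit server's region,
# which is what an IP-based age-verification check would act on.
print(apparent_location())
```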

"Utah's age-verification law shows a worrying trend to further restrict digital freedoms and disregard data privacy across the US," said a spokesperson of secure VPN provider Private Internet Access (PIA). "Private Internet Access is a long-time advocate of greater digital privacy, and we urge lawmakers to consider other ways of protecting children online, including education, guidance from parents, and open conversations about safe internet usage, rather than relying on increasingly intrusive digital regulations which disregard people's privacy and online freedom."
You can see the spike in "virtual private network" searches via Google Trends.

"Search queries for VPN were at peak popularity in Utah just before 4 a.m. EST Tuesday, according to the trends data," notes Newsweek. "Other related queries in the past week include searches for VPN extensions like Hola and Fox Speed."
Education

Khan Academy Piloting a Version of GPT Called Khanmigo (fastcompany.com) 36

Sal Khan, founder and CEO of online learning nonprofit Khan Academy, wants to turn GPT into a tutor. From a report: Khan Academy is testing a carefully managed version of OpenAI's GPT that can help guide students in their studies, not enable them to cheat. A pilot is currently running with a handful of schools and districts to test the software, and Khan hopes to open a wider beta this summer. "I strive to be at the cutting edge of how AI, especially large language models, can be integrated to actually solve real problems in education," Khan says.

Many students are already using ChatGPT and other generative AI tools to assist with their homework -- sometimes against their teachers' wishes. Khan Academy's approach stands out because it's designed to answer students' questions without giving away the answers, and to integrate with the organization's existing videos and exercises. In a demonstration for Fast Company, Khan showed how the chatbot, dubbed Khanmigo, can guide students through math problems, help debug code, serve as a debate partner, and even engage in conversation in the voices of literary characters like Hamlet and Jay Gatsby. The project began last June, when Khan received an introductory email from Sam Altman and Greg Brockman, OpenAI's CEO and president, respectively. The two offered a private demo of the AI software, and Khan was impressed with the program's ability to answer questions intelligently about various academic materials.

Science

Scientists in India Protest Move To Drop Darwinian Evolution From Textbooks (science.org) 96

Scientists in India are protesting a decision to remove discussion of Charles Darwin's theory of evolution from textbooks used by millions of students in ninth and 10th grades. More than 4000 researchers and others have so far signed an open letter asking officials to restore the material. From a report: The removal makes "a travesty of the notion of a well-rounded secondary education," says evolutionary biologist Amitabh Joshi of the Jawaharlal Nehru Centre for Advanced Scientific Research. Other researchers fear it signals a growing embrace of pseudoscience by Indian officials. The Breakthrough Science Society, a nonprofit group, launched the open letter on 20 April after learning that the National Council of Educational Research and Training (NCERT), an autonomous government organization that sets curricula and publishes textbooks for India's 256 million primary and secondary students, had made the move as part of a "content rationalization" process.

NCERT first removed discussion of Darwinian evolution from the textbooks at the height of the COVID-19 pandemic in order to streamline online classes, the society says. (Last year, NCERT issued a document that said it wanted to avoid content that was "irrelevant" in the "present context.") NCERT officials declined to answer questions about the decision to make the removal permanent. They referred ScienceInsider to India's Ministry of Education, which had not provided comment as this story went to press.

AI

Edtech Chegg Tumbles as ChatGPT Threat Prompts Revenue Warning (reuters.com) 31

What's the cost of students using ChatGPT for homework? For U.S. education services provider Chegg, it could be nearly $1 billion in market valuation. From a report: Chegg signaled the rising popularity of viral chatbot ChatGPT was pressuring its subscriber growth and prompted it to suspend its full-year outlook, sending shares of the company 47% lower in early trading on Tuesday. "Since March, we saw a significant spike in student interest in ChatGPT. We now believe it's having an impact on our new customer growth rate," said Chegg CEO Dan Rosensweig. There are fears Chegg's core business could become extinct as consumers experiment with free artificial intelligence (AI) tools, said analyst Brent Thill at Jefferies, which downgraded the stock to "hold." Last month, the Santa Clara, California-based firm said it would launch CheggMate, an AI-powered study aide built on ChatGPT's technology and tailored to students' needs, at a time when educators were grappling with the consequences of the homework-drafting chatbot.
AI

Geoffrey Hinton, the 'Godfather of AI', Leaves Google and Warns of Danger Ahead (nytimes.com) 123

For half a century, Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm. From a report: Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry's biggest companies believe are a key to their future. On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT. Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life's work.

"I console myself with the normal excuse: If I hadn't done it, somebody else would have," Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough. Dr. Hinton's journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education. But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech's biggest worriers say, it could be a risk to humanity. "It is hard to see how you can prevent the bad actors from using it for bad things," Dr. Hinton said.

Programming

Is It Time to Stop Saying 'Learn to Code'? (vox.com) 147

Long-time Slashdot reader theodp writes: According to Google Trends, peak "Learn to Code" occurred in early 2019 when laid-off Buzzfeed and Huffpost journalists were taunted with the phrase on Twitter... As Meta founder and CEO Mark Zuckerberg recently put it, "We're in a different world." Indeed. In Code.org's viral 2013 launch video encouraging kids to pursue CS careers, Zuckerberg explained, "Our policy at Facebook is literally to hire as many talented engineers as we can find."

In Learning to Code Isn't Enough, a new MIT Technology Review article, Joy Lisi Rankin reports on the long history of learn-to-code efforts, which date back to the 1960s. "Then as now," Lisi Rankin writes, "just learning to code is neither a pathway to a stable financial future for people from economically precarious backgrounds nor a panacea for the inadequacies of the educational system."

But is that really true? Vox does note that the latest round of layoffs at Meta "is impacting workers in core technical roles like data scientists and software engineers — positions once thought to be beyond reproach." Yet while that's also true at other companies, those laid-off tech workers also seem to be finding similar positions by working in other industries: Software engineers were the most overrepresented position in layoffs in 2023, relative to their employment, according to data requested by Vox from workforce data company Revelio Labs. Last year, when major tech layoffs first began, recruiters and customer success specialists experienced the most outsize impact. So far this year, nearly 20 percent of the 170,000 tech company layoffs were software engineers, even though they made up roughly 14 percent of employees at these companies. "Early layoffs were dominated by recruiters, which is forgoing future hiring," Revelio senior economist Reyhan Ayas told Vox. "Whereas in 2023 we see a shift toward more core engineering and software engineering, which signals a change in focus of current business priorities."
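For scale, the Revelio figures quoted above imply roughly 34,000 laid-off software engineers so far in 2023, overrepresented by about 1.4x relative to their share of headcount; a quick back-of-the-envelope check:

```python
total_tech_layoffs_2023 = 170_000      # layoffs cited by Vox so far this year
engineer_share_of_layoffs = 0.20       # "nearly 20 percent"
engineer_share_of_headcount = 0.14     # "roughly 14 percent of employees"

laid_off_engineers = total_tech_layoffs_2023 * engineer_share_of_layoffs
overrepresentation = engineer_share_of_layoffs / engineer_share_of_headcount

print(f"{laid_off_engineers:,.0f} engineers laid off, {overrepresentation:.2f}x overrepresented")
# -> 34,000 engineers laid off, 1.43x overrepresented
```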

In other words, tech companies aren't just trimming the fat by firing people who fill out their extensive ecosystem, which ranges from marketers to massage therapists. They're also, many for the first time, making cuts to the people who build the very products they're known for, and who enjoyed a sort of revered status since they, like the founders of the companies, were coders. Software engineers are still important, but they don't have the power they used to...

The latest monthly jobs report by tech industry association CompTIA found that even though employment at tech companies (which includes all roles at those companies) declined slightly in March, employment in technical occupations across industry sectors increased by nearly 200,000 positions. So even if tech companies are laying off tech workers, other industries are snatching them up. Unfortunately for software engineers and the like, that means they might also have to follow those industries' pay schemes. The average software engineer base pay in the US is $90,000, according to PayScale, but can be substantially higher at tech firms like Facebook, where such workers also get bonuses and stock options.

AI

Bill Gates Predicts Within 18 Months, AI Will Be Teaching Kids to Read (cnbc.com) 122

Bill Gates believes AI chatbots "are on track to help children learn to read and hone their writing skills in 18 months' time," reports CNBC: Historically, teaching writing skills has proven to be an incredibly difficult task for a computer, Gates noted. When teachers give feedback on essays, they look for traits like narrative structure and clarity of prose — a "high-cognitive exercise" that's "tough" for developers to replicate in code, he said. But AI chatbots' ability to recognize and recreate human-like language changes that dynamic, proponents say...

AI technology must improve at reading and recreating human language to better motivate students before it can become a viable tutor, Gates said... It may take some time, but Gates is confident the technology will improve, likely within two years, he said. Then, it could help make private tutoring available to a wide swath of students who might otherwise be unable to afford it...

"This should be a leveler," he said. "Because having access to a tutor is too expensive for most students — especially having that tutor adapt and remember everything that you've done and look across your entire body of work."

Gates isn't the only billionaire thinking about how AI will affect education. Mark Cuban recently retweeted a prediction that GPT-4 "will revolutionize homeschooling."
The Almighty Buck

Argentina's 'Generacion Zoe' Promised Financial and Spiritual Development. Was it a Ponzi Scheme? (restofworld.org) 53

It was a mix of spiritualism and financial education, remembers one patron of Generación Zoe, which "pitched itself as an 'educational and resource-creating community for personal, professional, financial and spiritual development,'" reports Rest of World: Generación Zoe claimed to make money through trading, and promised a 7.5% monthly return on investment for three years for those who put money into its "trust." In Argentina and other countries, other companies with the Zoe name peddled a similar narrative... It included a "university" that offered courses on ontological coaching, a type of philosophical practice popular in some Argentine business circles...

Over 2020 and 2021, more than ten thousand people bought into Zoe, investing hundreds of millions of dollars between them. Zoe grew rapidly, hyping new tech innovations including the "robots" and a cryptocurrency called Zoe Cash. Its interests and visibility expanded: The Zoe name appeared on burger joints, car dealerships, a plane rental company, and pet shops, all emblazoned with its name. It sponsored soccer teams and even created three of its own... Zoe also spread beyond Argentina to other countries in Latin America and further afield, including Mexico, Paraguay, Colombia, Spain, and the U.S.

Towards the end of 2021, however, the shine began to wear off, as authorities began looking into Zoe's activities... Zoe members reported being unable to withdraw the funds they had put into trusts or "robots," and in early 2022, the value of Zoe Cash plummeted. Angry investors banged on the doors of Zoe's branches, and investigations against Zoe and Cositorto piled up across Latin America, Spain, and the U.S.

By March 2022, a handful of high-profile names involved with Zoe in Argentina had been arrested, or were wanted by the authorities...

Prosecutors now accuse Zoe of being nothing more than a simple Ponzi scheme.
Chrome

Chromebook Expiration Date, Repair Issues 'Bad For People and Planet' (theregister.com) 102

Google Chromebooks expire too soon, saddling taxpayer-funded public schools with excessive expenses and inflicting unnecessary environmental damage, according to the US Public Interest Research Group (PIRG) Education Fund. The Register reports: In a report on Tuesday, titled "Chromebook Churn," US PIRG contends that Chromebooks don't last as long as they should, because Google stops providing updates after five to eight years and because device repairability is hindered by the scarcity of spare parts and repair-thwarting designs. This planned obsolescence, the group claims, punishes the public and the world.

"The 31 million Chromebooks sold globally in the first year of the pandemic represent approximately 9 million tons of CO2e emissions," the report says. "Doubling the life of just Chromebooks sold in 2020 could cut emissions equivalent to taking 900,000 cars off the road for a year, more than the number of cars registered in Mississippi." The report says that excluding additional maintenance costs, longer lasting Chromebooks could save taxpayers as much as $1.8 billion dollars in hardware replacement expenses.

The US PIRG said it wants: Google to extend its ChromeOS update policy beyond current device expiration dates; hardware makers to make parts more available so their devices can be repaired; and hardware designs that enable easier part replacement and service. [...] According to US PIRG, making an average laptop releases 580 pounds of carbon dioxide into the atmosphere, amounting to 77 percent of the total carbon impact of the device during its lifetime. Thus, the 31 million Chromebooks sold during the first year of the pandemic represent about 8.9 million tons of CO2e emissions.
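The report's headline figure follows directly from the per-laptop number: 31 million devices at 580 pounds of CO2e each works out to roughly 9 million US tons, matching the totals cited above. A quick check of the arithmetic:

```python
chromebooks_sold = 31_000_000      # units sold in the first year of the pandemic
co2e_per_laptop_lbs = 580          # manufacturing emissions per average laptop, per US PIRG
lbs_per_us_ton = 2_000

total_co2e_tons = chromebooks_sold * co2e_per_laptop_lbs / lbs_per_us_ton
print(f"{total_co2e_tons / 1e6:.1f} million tons CO2e")   # -> 9.0 million tons CO2e
```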
"We think that Google should extend the automatic update expiration to 10 years after launch date," said Lucas Gutterman, who leads US PIRG's Designed to Last campaign. "There's just no reason why we should be throwing away a computer that still is otherwise functional just because it passes a certain date."

"We're asking Google to use their leadership among the OEMs to design the devices to last, to make some of the changes that we list, to have them be more easily repairable by actually producing spare parts that folks can buy at reasonable prices," he added. "And to design with modularity and repair in mind, so that you can, for example, use the plastic bezel on one Chromebook on the next version, rather than having to buy a whole new set of spare parts just because a clip has changed."
Education

Why Universities Should Return To Oral Exams In the AI and ChatGPT Era (theconversation.com) 99

In an op-ed via The Conversation, Stephen Dobson, professor and Dean of Education and the Arts at CQUniversity, Australia, argues that it is time for universities to return to oral exams in the AI and ChatGPT era. An anonymous Slashdot reader shares an excerpt from the report: Imagine the following scenario. You are a student and enter a room or Zoom meeting. A panel of examiners, who have read your essay or viewed your performance, are waiting inside. You answer a series of questions as they probe your knowledge and skills. You leave. The examiners then consider the preliminary pre-oral exam grade and whether an adjustment up or down is required. You are called back to receive your final grade.

This type of oral assessment -- or viva voce as it was known in Latin -- is a tried and tested form of educational assessment. No need to sit in an exam hall, no fear of plagiarism accusations or concerns with students submitting essays generated by an artificial intelligence (AI) chatbot. Integrity is 100% assured, in a fair, reliable and authentic manner that can also be easily used to assess multiple individual or group assignments. As services like ChatGPT continue to grow in terms of both their capabilities and usage -- including in education and academia -- is it high time for universities to revert to the time-tested oral exam?
"Chatbots cannot replicate this sort of task, ensuring student authenticity," writes Dobson. "I argue that it is time to change our conversation to be more about assessment that actually involves a 'conversation.'"

"Writing would still be important, but we should learn to re-appreciate the importance of how a student can talk about the knowledge and skills they acquired. Successfully completing a viva could become one of our graduate attributes, as it once was."
China

India Passes China as World's Most Populous Nation, UN Says (bloomberg.com) 66

India has overtaken China as the world's most populous nation, according to United Nations data released Wednesday. From a report: India's population surpassed 1.4286 billion, slightly higher than China's 1.4257 billion people, according to mid-2023 estimates by the UN's World Population dashboard. China's numbers do not include Hong Kong and Macau, Special Administrative Regions of China, and Taiwan, the data showed. The burgeoning population will add urgency for Prime Minister Narendra Modi's government to create employment for the millions of people entering the workforce as the nation moves away from farm jobs. India, where half the population is under the age of 30, is set to be the world's fastest-growing major economy in the coming years.

Asia's third-largest economy is now home to nearly a fifth of humanity -- greater than the entire population of Europe or Africa or the Americas. While this is also true for China for now, that's expected to change as India's population is forecast to keep ticking up and touch 1.668 billion by 2050 when China's population is forecast to contract to about 1.317 billion. "India's story is a powerful one. It is a story of progress in education, public health and sanitation, economic development as well as technological advancements," said Andrea Wojnar, the United Nations Population Fund's representative for India and country director for Bhutan, commenting on the State of the World Population report.

Education

Worthless Degrees Are Creating an Unemployable Generation in India (bloomberg.com) 150

Business is booming in India's $117 billion education industry and new colleges are popping up at breakneck speed. Yet thousands of young Indians are finding themselves graduating with limited or no skills, undercutting the economy at a pivotal moment of growth. From a report: Desperate to get ahead, some of these young people are paying for two or three degrees in the hopes of finally landing a job. They are drawn to colleges popping up inside small apartment buildings or inside shops in marketplaces. Highways are lined with billboards for institutions promising job placements. It's a strange paradox. India's top institutes of technology and management have churned out global business chiefs like Alphabet's Sundar Pichai and Microsoft's Satya Nadella. But at the other end of the spectrum are thousands of small private colleges that don't have regular classes, employ teachers with little training, use outdated curriculums, and offer no practical experience or job placements, according to more than two dozen students and experts who were interviewed by Bloomberg.

Around the world, students are increasingly pondering the returns on a degree versus the cost. Higher education has often sparked controversy globally, including in the US, where for-profit institutions have faced government investigations. Yet the complexities of education are acutely on show in India. It has the world's largest population by some estimates, and the government regularly highlights the benefits of having more young people than any other country. Yet half of all graduates in India are unemployable in the future due to problems in the education system, according to a study by talent assessment firm Wheebox. Many businesses say they struggle to hire because of the mixed quality of education. That's kept unemployment stubbornly high at more than 7% even though India is the world's fastest growing major economy. Education is also becoming an outsized problem for Prime Minister Narendra Modi as he attempts to draw foreign manufacturers and investors from China. Modi had vowed to create millions of jobs in his campaign speeches, and the issue is likely to be hotly debated in the run up to national elections in 2024.

Education

Should Managers Permanently Stop Requiring Degrees for IT Positions? (cio.com) 214

CIO magazine reports on "a growing number of managers and executives dropping degree requirements from job descriptions." Figures from the 2022 study The Emerging Degree Reset from The Burning Glass Institute quantify the trend, reporting that 46% of middle-skill and 31% of high-skill occupations experienced material degree resets between 2017 and 2019. Moreover, researchers calculated that 63% of those changes appear to be "'structural resets' representing a measured and potentially permanent shift in hiring practices" that could make an additional 1.4 million jobs open to workers without college degrees over the next five years.

Despite such statistics and testimony from Taylor and other IT leaders, the debate around whether a college education is needed in IT isn't settled. Some say there's no need for degrees; others say degrees are still preferred or required.... IBM is among the companies whose leaders have moved away from degree requirements; Big Blue is also one of the earliest, largest, and most prominent proponents of the move, introducing the term "new collar jobs" for the growing number of positions that require specific skills but not a bachelor's degree....

Not all are convinced that dropping degree requirements is the way to go, however. Jane Zhu, CIO and senior vice president at Veritas Technologies, says she sees value in degrees, value that isn't always replicated through other channels. "Though we don't necessarily require degrees for all IT roles here at Veritas, I believe that they do help candidates demonstrate a level of formal education and commitment to the field and provide a foundation in fundamental concepts and theories of IT-related fields that may not be easily gained through self-study or on-the-job training," she says. "Through college education, candidates have usually acquired basic technical knowledge, problem-solving skills, the ability to collaborate with others, and ownership and accountability. They also often gain an understanding of the business and social impacts of their actions."

The article notes an evolving trend of "more openness to skills-based hiring for many technical roles but a desire for a bachelor's degree for certain positions, including leadership." (Kelli Jordan, vice president of IBMer Growth and Development tells CIO that more than half of the job openings posted by IBM no longer require degrees.)

Thanks to Slashdot reader snydeq for sharing the article.
The Almighty Buck

South Korea To Give $490 Allowance To Reclusive Youths To Help Them Leave the House (theguardian.com) 133

An anonymous reader quotes a report from the Guardian: South Korea is to offer reclusive youths a monthly living allowance of 650,000 won ($490) in order to encourage them out of their homes, as part of a new measure passed by the Ministry of Gender Equality and Family. The measure also offers education, job and health support. The condition is known as "hikikomori," a Japanese term that roughly translated means, "to pull back." The government wants to try to make it easier for those experiencing it to leave the house to go to school, university or work.

Included in the program announced this week, which expands on measures announced in November, is a monthly allowance for living expenses for people aged between nine and 24 who are experiencing extreme social withdrawal. It also includes an allowance for cultural experiences for teenagers. About 350,000 people between the ages of 19 and 39 in South Korea are considered lonely or isolated -- about 3% of that age group -- according to the Korea Institute for Health and Social Affairs. Secluded youth are often from disadvantaged backgrounds and 40% began living reclusively while adolescents, according to a government document outlining the measures.

The new measures aim to strengthen government support "to enable reclusive youth to recover their daily lives and reintegrate into society," the government said in a statement. Among the other types of support are paying for the correction of affected people's physical appearance, including scars "that adolescents may feel ashamed of," as well as helping with school and gym supplies. South Korea also has a relatively high rate of youth unemployment, at 7.2%, and is trying to tackle a rapidly declining birthrate that further threatens productivity.

Education

American IQ Scores Have Rapidly Dropped, Proving the 'Reverse Flynn Effect' (popularmechanics.com) 391

An anonymous reader quotes a report from Popular Mechanics: Americans' IQ scores are trending in a downward direction. In fact, they've been falling for over a decade. According to a press release, in studying intelligence testing data from 2006 to 2018, Northwestern University researchers noticed that test scores in three out of four "cognitive domains" were going down. This is the first time we've seen a consistent negative slope for these testing categories, providing tangible evidence of what is known as the "Reverse Flynn Effect."

In a 1984 study, James Flynn noticed that intelligence test scores had steadily increased since the early 1930s. We call that steady rise the Flynn Effect. Considering that overall intelligence seemed to be increasing faster than could be explained by evolution, the reason for the increase became a source of debate, with many attributing the change to various environmental factors. But now, it seems that a Reverse Flynn Effect is, well, in effect.

The study, published in the journal Intelligence, used an online, survey-style personality test called the Synthetic Aperture Personality Assessment Project to analyze nearly 400,000 Americans. The researchers examined responses collected from 2006 to 2018 in order to determine if and how cognitive ability scores were changing over time within the country. The data showed drops in logic and vocabulary (known as verbal reasoning), visual problem solving and analogies (known as matrix reasoning), and computational and mathematical abilities (known as letter and number series).
Not every domain is going down though, notes the report. "[S]cores in spatial reasoning (known as 3D rotation) followed the opposite pattern, trending upward over the 12-year period."

"If all the scores were going in the same direction, you could make a nice little narrative about it, but that's not the case," says Elizabeth Dworak, a research assistant professor at Northwestern University and one of the authors on the study. "We need to do more to dig into it." She adds: "It doesn't mean their mental ability is lower or higher; it's just a difference in scores that are favoring older or newer samples. It could just be that they're getting worse at taking tests or specifically worse at taking these kinds of tests."
The Internet

If We Lose the Internet Archive, We're Screwed (sbstatesman.com) 112

An anonymous reader shares a report: If you've ever researched anything online, you've probably used the Internet Archive (IA). The IA, founded in 1996 by librarian and engineer Brewster Kahle, describes itself as "a non-profit library of millions of free books, movies, software, music, websites, and more." Its collection includes 37 million books, many of which are old tomes that aren't commercially available. It has classic films, plenty of podcasts and -- via its Wayback Machine -- just about every deleted webpage ever. Four corporate publishers have a big problem with this, so they've sued the Internet Archive. In Hachette v. Internet Archive, the Hachette Book Group, Penguin Random House, HarperCollins and Wiley have alleged that the IA is committing copyright infringement. Now a federal judge has ruled in the publishers' favor. The IA is appealing the decision.

[...] Not only is this concern-trolling disingenuous, but the ruling itself, grounded in copyright, is a smack against fair use. It brings us one step closer to perpetual copyright -- the idea that individuals should own their work forever. The IA argued that its project was covered by fair use, as the Emergency Library provides texts for educational and scholarly purposes. Even writers objected to the court's ruling. More than 300 writers signed a petition against the lawsuit, including Neil Gaiman, Naomi Klein and -- get this -- Chuck Wendig. Writers lost nothing from the Emergency Library and gained everything from it. For my part, I've acquired research materials from the IA that I wouldn't have found anywhere else. The archive has scads of primary sources which otherwise might require researchers to fly across the country for access. The Internet Archive is good for literacy. It's good for the public. It's good for readers, writers and anyone who's invested in literary education. It does not harm authors, whose income is no more dented by it than by any library program. Even the Emergency Library's initial opponents have conceded this. The federal court's decision is a victory for corporations and a disaster for everyone else. If this decision isn't reversed, human beings will lose more knowledge than the Library of Alexandria ever contained. If IA's appeal fails, it will be a tragedy of historical proportions.

AI

Khan Academy Chief Says GPT-4 is Ready To Be a Tutor (axios.com) 58

For all the high-profile examples of ChatGPT getting facts and even basic math wrong, Khan Academy founder Sal Khan says the latest version of the generative AI engine makes a pretty good tutor. From a report: "This technology is very powerful," Khan told Axios in a recent interview. "It's getting better." Khan Academy was among the early users of GPT-4 that OpenAI touted when it released the updated engine. This week, two more school districts (Newark, N.J. and Hobart, Indiana) are joining the pilot of Khanmigo, the AI-assisted tutor. With the two new districts, a total of 425 teachers and students are testing Khanmigo.

The chatbot works much like a real-life or online tutor, looking at students' work and helping them when they get stuck. In a math problem, for example, Khanmigo can detect not just whether a student got an answer right or wrong, but also where they may have gone astray in their reasoning. ChatGPT and its brethren have been highly controversial -- especially in education, where some schools are banning the use of the technology. Concerns range from the engines' propensity to be confidently wrong (or "hallucinate") to worries about students using the systems to write their papers. Khan said he understands these fears, but also notes that many of those criticizing the technology are also using it themselves and even letting their kids make use of it. And, for all its flaws, he says today's AI offers the opportunity for more kids -- in both rich and developing countries -- to get personalized learning. "The time you need tutoring is right when you are doing the work, often when you are in class," Khan said.
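Mechanically, a tutor like this is a large language model wrapped in instructions that forbid handing over the answer and ask it to pinpoint where the student's work goes wrong. Below is a minimal sketch of that shape of call, assuming OpenAI's Python chat-completions client; the prompt wording is illustrative and is not Khan Academy's actual Khanmigo prompt.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TUTOR_PROMPT = (
    "You are a patient math tutor. Never state the final answer. "
    "Read the student's work, find the first step where the reasoning goes wrong, "
    "and respond with a guiding question that helps the student spot it themselves."
)

student_work = (
    "Solve 3(x + 2) = 15.\n"
    "Step 1: 3x + 2 = 15\n"   # the mistake: distributing should give 3x + 6
    "Step 2: 3x = 13\n"
    "Step 3: x = 13/3"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": TUTOR_PROMPT},
        {"role": "user", "content": student_work},
    ],
)
print(response.choices[0].message.content)
```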

Education

Microsoft and Jeff Bezos Tap Excel, Not Python Or R, To Teach Kids Data Science 188

theodp writes: "Are you ready to rock it with #datascience?" asks a tweet from Club for the Future, the tax-exempt foundation founded and funded by Jeff Bezos's Blue Origin, which is partnering with Microsoft's Hacking STEM to show how data science is used to determine a Go/No-Go launch of a Blue Origin New Shepard rocket. Interestingly, while Amazon founder Bezos and Microsoft CEO Satya Nadella are big backers of nonprofit Code.org and joined other tech CEOs for CS last fall to get the nation's Governors to "update the K-12 curriculum, for every student in every school to have the opportunity to learn computer science," Microsoft and Blue Origin have opted to teach kids aged 11-15 good old-fashioned Excel skills in their Introduction to the Data Science Process mini-course, not Python or R.

"Excel is a tool used around the world to work with data," Microsoft explains to teachers who have been living under a rock since 1985. "In these activities, students learn how to use Excel and complete all steps of a mission by engaging in the data science process. In this mission, students analyze key weather data in determining flight safety parameters for a New Shepard rocket and ultimately make a Go/No-Go decision for launch. Students learn how to use Excel while engaging in this dynamic Data Science Process activity [which is not unlike PLATO 'data science' activities of 50 years ago]." Blue Origin last September pledged to inspire youth to pursue space STEM careers as part of the Biden Administration's efforts to increase the space industry's capacity to meet the rising demand for the skilled technical workforce.
