

Gen AI Is Not Replacing Jobs Or Hurting Wages At All, Say Economists
An anonymous reader quotes a report from The Register: Instead of depressing wages or taking jobs, generative AI chatbots like ChatGPT, Claude, and Gemini have had almost no wage or labor impact so far -- a finding that calls into question the huge capital expenditures required to create and run AI models. In a working paper released earlier this month, economists Anders Humlum and Emilie Vestergaard looked at the labor market impact of AI chatbots on 11 occupations, covering 25,000 workers and 7,000 workplaces in Denmark in 2023 and 2024.
Many of these occupations have been described as being vulnerable to AI: accountants, customer support specialists, financial advisors, HR professionals, IT support specialists, journalists, legal professionals, marketing professionals, office clerks, software developers, and teachers. Yet after Humlum, assistant professor of economics at the Booth School of Business, University of Chicago, and Vestergaard, a PhD student at the University of Copenhagen, analyzed the data, they found the labor and wage impact of chatbots to be minimal. "AI chatbots have had no significant impact on earnings or recorded hours in any occupation," the authors state in their paper.
The report should concern the tech industry, which has hyped AI's economic potential while plowing billions into infrastructure meant to support it. Early this year, OpenAI admitted that it loses money per query even on its most expensive enterprise SKU, while companies like Microsoft and Amazon are starting to pull back on their AI infrastructure spending in light of low business adoption past a few pilots. The problem isn't that workers are avoiding generative AI chatbots -- quite the contrary. The tools simply aren't yet translating into actual economic benefits. "The adoption of these chatbots has been remarkably fast," Humlum told The Register. "Most workers in the exposed occupations have now adopted these chatbots. Employers are also shifting gears and actively encouraging it. But then when we look at the economic outcomes, it really has not moved the needle."
Humlum said while there are gains and time savings to be had, "there's definitely a question of who they really accrue to. And some of it could be the firms -- we cannot directly look at firm profitability. Some of it could also just be that you save some time on existing tasks, but you're not really able to expand your output and therefore earn more. So it's like it saves you time writing emails. But if you cannot really take on more work or do something else that is really valuable, then that will put a damper on how much we should actually expect those time savings to affect your earning ability, your total hours, your wages."
"In terms of economic outcomes, when we're looking at hard metrics -- in the administrative labor market data on earnings, wages -- these tools have really not made a difference so far," said Humlum. "So I think that that puts in some sense an upper bound on what return we should expect from these tools, at least in the short run. My general conclusion is that any story that you want to tell about these tools being very transformative, needs to contend with the fact that at least two years after [the introduction of AI chatbots], they've not made a difference for economic outcomes."
Re: (Score:3)
How can something that doesn't exist at all "end all work"?
Re: (Score:2)
Well, when you take the universe and divide it by
Re: (Score:2)
That's because Gen AI will be available at the end of this year. Sam Altman, that AI bro, said so. Pichai thinks 25% of all coding work is already being done by AI at Google (yes, he too is hallucinating. He does not like workplace discrimination against bots). Nadella is waiting for his Agents, of Agentic workflow fame, to get out of diapers.
Next at 9 PM. Mass layoffs hit AI researchers.
Some economist, 1894 (Score:2, Insightful)
Re:Some economist, 1894 (Score:5, Interesting)
Motor vehicles and tractors had almost no impact on the number of PEOPLE employed in transport and agriculture. The truck drivers union is called the Teamsters because before they drove trucks they drove teams of horses.
Re:Some economist, 1894 (Score:4, Insightful)
Motor vehicles and tractors had almost no impact on the number of PEOPLE employed in transport and agriculture.
That looks true only if you compare to the productivity gains. The number of people employed per ton of produce generated went WAY down.
The truck drivers union is called the Teamsters because before they drove trucks they drove teams of horses.
A team of horses pulled one little wagon and at minimum two people were involved in its operation and management. A semi-truck is operated by one person and carries far more. You just provided a strong counterargument to your central point, thanks for that.
Re: (Score:3)
And while nobody went unemployed
[citation needed]
Re: (Score:2)
You seem to be missing the poster's point. He wasn't saying motor vehicles replaced *people*, he was saying motor vehicles replaced *horses*, but that wouldn't have been apparent in economic metrics a decade before the technology became fully practical. AI is still rapidly developing, and it's just not good enough yet to do to people what the tractor did to horses.
Re: (Score:2)
Re: (Score:2)
Unless there is a reference, this isn't "some economist in 1894", but some dumb troll in 2025.
Re: (Score:2)
I see no reason to compare apples and oranges. You are suggesting that because some economist got it wrong in 1894, economists of today must be getting gen AI wrong. They may get it wrong, but that is not connected to your single economist. This is why people fail at logic: they can't establish any relevant connections, so any BS passes for insight.
this fits well (Score:2)
Re: (Score:2)
This analogy fails on so many levels.
Thus far, any increased productivity from genai gets absorbed by the need for much more product. It's not like transportation, where the growth in need is limited and an increase in transport capability will reduce the number of workers needed to transport things (which incidentally also hasn't been the case; there are a hell of a lot more people working in logistics today than in 1894). In fields where genai is being used, the demand has always been vastly higher than prod
Managers always want someone to blame (Score:3)
Re: (Score:2)
The plan is not to remove all use of AI, but to make sure that it is correct. Which is a lot more work for human volunteers, unfortunately.
That's only because we are (Score:3, Interesting)
In a few years things will settle down on the R&D front, but the jobs taken by LLMs won't be coming back.
I can tell you that right off the bat most YouTubers have already switched to using AI for their thumbnails; there are several artists who used to make a living from that and can't anymore. Also the dumb little TV commercials for local businesses are often AI generated.
Also remember AI is largely going to be a productivity increaser. If we had a functional and competitive economy then in theory we might see overall job increases in some ways. But we don't enforce antitrust law, so what's going to happen is companies are going to bank those productivity increases instead of using them to compete because, frankly, they don't have to compete.
Re: (Score:2)
Re: (Score:2)
YouTubers have already switched to using AI for the thumbnails
Sounds like a good way for some Youtubers to get blacklisted and then become unable to get adequate art.
Is that really worth getting a few cheap thumbnails?
Most artists are against generative AI creating images and have made it clear they will not do any work for anyone with any involvement in AI. And there is ongoing litigation due to the copyright issues from AIs training off others' art.
The few Youtubers who are doing that could
Doesn't seem to have affected their views (Score:4, Insightful)
So they will use non-US artists (Score:3)
The fantasy of Americans that they are dependent on local services should have faded by now. Hollywood has discovered overseas productions, and LA is losing jobs as a result.
Re: (Score:1)
> I can tell you that right off the bat most YouTubers have already switched to using AI for their thumbnails; there are several artists who used to make a living from that and can't anymore. Also the dumb little TV commercials for local businesses are often AI generated.
The question is whether, when measured at the level of an economy, you see the needle move more than normal fluctuations. That's the real question.
Yes, some YouTubers may be using some genAI to produce thumbnails. But is it that they fired their thumbnail
Re: (Score:1)
And it turns out that, for the vast majority of youtubers, it never did make any sense to pay somebody else to make a thumbnail, AI or not.
https://www.youtube.com/watch?... [youtube.com]
Unless the artist wants to get paid a tiny percent of the total revenue generated by the uploader per video, which in this case would mean pennies, then they never would have agreed to even do the job to begin with.
And I have to say, most of the thumbnails you see on youtube, even since well before AI got involved, are basically crap. For
Re: (Score:2)
I'm currently engaged in a hopeless war against the tide: I block every channel with misleading or manipulative attention-grabbing headlines or thumbnails.
It's 100% pissing into the wind, but whatever, gives me an outlet for my annoyance.
So I think you're kind of missing my point (Score:2)
When that bubble bursts, and it will, the only thing left is going to be the AI systems that replaced jobs. And all the jobs that were created building the systems and figuring out which AI systems worked and which didn't are going to just go away.
There's a lot of AI equivalents of online pet food companies rig
Re: (Score:2)
As highlighted in the summary, no one wants to pay for the service at its real cost, let alone at a profit. IBM went through all this over a decade ago.
OpenAI and the likes are just investment leeches.
I've thought about that (Score:2)
Have you ever heard the question: if they automate all the jobs, who is going to buy their products? It's an old saying that comes to us from Henry Ford.
The 1% have thought of that and it eats them alive. If you've ever seen a 1%er trying to hang out with one of us, they are absolutely disgusted by us. The one that always sticks with me is Mitt Romney and the look of disgust on his and his wife's faces while he was running for president.
Re: (Score:2)
You are taught when you are a young impressionable kid that Washington chose not to be a king, but what you are not taught is that he was so staggeringly wealthy he didn't see the point of being a king.
Washington made a lot of money in real estate, but his net worth at his death (around $500 million in today's dollars) makes him a piker in comparison to other world leaders. William the Conqueror, for example, was worth about $200 billion in today's dollars. Putin's net worth is estimated to be in the neighborhood of $200 billion.
Re: (Score:2)
It's natural enough for employers to want to replace workers with automation. Workers cost money that they would rather keep. There isn't any intrinsic moral sin with this, but the consequences are problematic for those of us who depend on our jobs to survive (you know, basically everyone).
Fortunately, workers are a large voting demographic so when the problems get bad there will be high interest in political action that, one way or another, preserves our livelihoods. So, more than a few politicians will
Re: (Score:2)
The main drive for automation is not saving money on workers, but increases in quality, reduction of waste, and higher production rates. It's mainly used to increase output and reduce losses rather than to remove workers. The exceptions are things like first-line customer support and other really menial jobs, where worker turnover is high and the output is extremely low margin and not quality sensitive.
Re: (Score:2)
Yes, it's far too early to be measuring the ultimate effect AI will have on employment.
Economic theory and history don't point to a single, simple, straightforward effect of all productivity-enhancing technologies upon employment. Take the automotive assembly line: this was a massive increase in worker productivity, resulting in market prices for cars dropping dramatically and market demand rising. That increase in demand means that many more people are employed in auto manufacturing than there would be if we st
Something's (not) Rotten in Denmark (Score:2)
Re: (Score:2)
The ones laying off staff are US companies. Not European companies. As a general observation, skilled workers in Europe not only have job security, but are being headhunted by companies desperate for more workforce.
Manual labor and such is a different matter, and accounts for the majority of unemployment.
Nobody's getting fired because of AI, except in some edge cases like first-line support, where nobody wants to work anyway. Automation in general, in Europe, is done to increase quality and output. And AI is not great for that
Re: Something's (not) Rotten in Denmark (Score:2)
Transformative (Score:1)
My general conclusion is that any story that you want to tell about these tools being very transformative, needs to contend with the fact that at least two years after [the introduction of AI chatbots], they've not made a difference for economic outcomes."
"Oh well, I'm rich!" - Nvidia CEO Jensen Huang. AI did make an economic difference, it inflated the bank accounts of hype-sters selling this AI infrastructure.
/. Headlines (Score:3)
This economist should read some /. headlines.
The efforts to not hire people are real and well-documented.
I might think they'll fail and AI might eventually be a net positive productivity aid, but the claim that the CEOs aren't trying to do it is flat out wrong.
To a tiny degree I envy the Professional Managerial Class for getting paid handsome salaries to be wrong all the time but ultimately their work is not meaningful and living a parasitic life isn't worth the investment.
Re: (Score:1)
I knew a guy who managed an animation team, and they all got laid off because customers cancelled projects, specifically to go to a smaller, cheaper competitor that uses AI tools extensively.
Small example, maybe it doesn't move the needle.
I see on social media tons of artsy people complain about AI image generation. Besides the usual complaint -- that training an AI on art without the creator's permission constitutes IP infringement -- they do seem *really* upset that regular people are using image
Re: (Score:2)
The effort of musko to "reduce government spending by $2T" was also real and well-documented.
It also failed spectacularly.
Not everything that pops into the head of the CEO is valid, probable or possible.
Re: (Score:2)
What is important is WHY Elmo is failing. First, there is not much "waste, fraud, and abuse" in administering gov. programs. The real money is what they spend on the dear old American people in the form of health care, etc. You can argue whether that is WFA but Elmo won't find it, no matter how much he and his minder lie about it.
More to the point, if we take SS as an example: walking in and arbitrarily firing people is not going to make the system more efficient. Has he met Americans? They aren't a particularly
Re: (Score:2)
It failed because the goal was never "cutting waste" or whatever, it was a power grab and a grift attempt.
Gen Alpha is gonna be pissed (Score:4, Interesting)
Grandpa can't keep up with these gens anymore.
Re: (Score:2)
First there's Baby Boomers, then Gen X, Gen Y, Gen Z, Gen Alpha .... Now ... Gen AI ?
Grandpa can't keep up with these gens anymore.
Gen Al? What’s next, Au as in A U kids get off my lawn!
Re: (Score:2)
now, seriously, when we look at the economic outco (Score:2)
So, AI isn't increasing productivity; in other words, all this AI bullshit does is waste energy... and capture your attention so you can't do more productive things. I'm not even going to pose that as a question.
Complete whoosh (Score:2)
Re: (Score:2)
step 1: enhance workers productivity
step 2: ????
step 3: entire society profit?!
Re: (Score:2)
Step 2 is taxation and redistribution
Too soon to tell (Score:2)
This technology is under evaluation at most companies, and for us it has been a mixed bag. Eventually companies that automate all their employees away will find that there is no market for their products and services, and they will fail. But at least they created tons of short-term value for the shareholders that are now broke. Do you think people on UBI will invest in markets? Do you think young adults saddled with massive debt loads and a high cost of living will invest in markets and consume products/service
Re: (Score:2)
They're acting like evaluation is over and it's already a sure bet. The level of spending just doesn't match up with reality.
AI doesn't hurt jobs. It replaces them. (Score:2)
Jobs still available won't have a wage-sink. If you need a human, you're going to have to pay living wages appropriate for the amount of time and effort the human put into acquiring his necessary skills. Obviously.
And human positions that aren't needed any more aren't communicated as decommissioned. They're simply not listed anymore.
It's a clearly observable truth that with industrialized IT and AI, human brain-power is only needed in a fraction of the amount it was 2 decades ago. And the rate of change due to AI i
Living wages - nah - what you have to pay them (Score:2)
'you're going to have to pay living wages appropriate for the amount of time and effort the human put into acquiring his necessary skills'
Only in the very long term. As the demand for those skills falls, the wages paid will fall and companies will only pay what they have to. As long as kids are being fooled into training in those skills, the supply won't change much.
Re: (Score:2)
What is a clearly observable truth is that with the increased capacity to automate things like code generation (improvements in compilers, IDEs, source control and so on), the demand for code has exploded, and the need for human brain power with it. Currently there is no way for human brain power to even remotely satiate the demand for more code.
Using generative AI to enhance productivity will not reverse this trend. If anything, it will accelerate it.
Unless a massive breakthrough in how generative AI works
LLMs can't do more complex and nuanced tasks (Score:5, Interesting)
I've been following LLMs for a while now, particularly for coding. As someone with decades of experience, I really struggle to find a use for it. I can type fast, read code fast, and I can imagine what the solution is going to look like, and most of my time is spent understanding the application in the broader context of how it's being used in real life. The software is part of a larger system, and we're really building that system, not just the software.
I can see how LLM coding is useful for people who just don't know how to code. They're instantly getting (mostly) working code, but only code that looks like things people have written before, and only for solutions to simple problems. It's basically like StackOverflow but a little bit more convenient. Don't get me wrong... I use StackOverflow when I need to know how to interact with some arcane operating system or framework API. Yet even so, an LLM still hallucinates and makes annoying mistakes.
The problem I have with all the younger programmers using AI-generated code, and vibe-coding in particular, is that they're going to spend 5 or 10 years doing this and not pick up the skills that currently make a programmer with 5 or 10 years of experience so valuable.
I appreciate that there are a few jobs where you just create whole new apps or simple programs to solve simple problems, spending a few weeks or a couple months on it, and then you walk away from it, but I'd argue that's fairly low skill programming. There's a lot of programmers like me working on big continuously-evolving legacy systems whose lifespan literally spans decades. These systems need to be structured such that people can still navigate them, modify them, and improve on them 5 or 10 years down the road. From what I can tell, LLMs can't scratch the surface of these large systems because their token buffer is nowhere near big enough, and the codebase will have its own domain specific quirks which the LLM won't have any context for. Many of these systems contain domain specific languages (DSLs) inside of them. Plus, the LLM would also need to understand the very unique context that the system is working in, and that would mean training it on the idiosyncrasies of how your small set of users go about their daily business.
Finally, we're also starting to get tools that let us peer into LLMs and see how they work, and the evidence is pretty mundane: LLMs aren't reasoning or "thinking." They're just text prediction engines with no insight other than connections between tokens that came before and tokens that likely come after this point in the text. There's no there there. What we have is massive investment into this new technology that most people are realizing isn't much more than a glorified auto-complete, and the investors are getting worried, and the people making LLMs need to keep cranking out bolder and bolder claims so that investors keep shoveling money at them. It's a typical technology bubble. Are there a few niche industries where LLMs will probably find a use? Absolutely. Will it replace all workers? No way.
* That isn't to say all AI is useless. Clearly both image and video generative AI is useful and is going to find wide adoption for both useful and harmful purposes. Also, image classifying AI is already useful, for instance, when sorting waste recycling streams. But neither of these are LLMs. I suspect the two main things LLMs will be used for in the future will be cheating on homework, and generating scam emails and propaganda posts across the internet, and I wonder if the LLM industry can be sustained when its only revenue streams are expected to come from those sources.
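To make the "glorified auto-complete" framing concrete, here is a toy next-word predictor (purely illustrative; real LLMs use enormous learned models and far longer contexts, but the basic objective, predicting the next token, is the same; the tiny corpus is made up):

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which word in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen after `word`, if any.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (followed "the" twice, vs. "mat" once)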
Re: (Score:2)
If the devs knew how to debug code, that is one thing, but if they expect to just wave a magic wand and have a mound of code work, then panic when it doesn't, that is bad. It reminds me of someone who played video games for five years, and claimed to be a Windows admin... yes, they did stuff related to it, but Windows admin duties... not really.
I can see AI as a tool for a programmer, just like an optimizing compiler... but trusting it to do everything... no way. We have not gotten rid of hand-tuned assembly
Re: (Score:2)
I've been following LLMs for a while now, particularly for coding. As someone with decades of experience, I really struggle to find a use for it. I can type fast, read code fast, and I can imagine what the solution is going to look like, and most of my time is spent understanding the application in the broader context of how it's being used in real life. The software is part of a larger system, and we're really building that system, not just the software.
I am in a similar boat. And what I see is that LLMs are essentially a force multiplier. If you don't know what you are doing they help you a little bit, but not that much.
On the other hand, with your programming expertise you probably can gain some productivity, mostly because they are pretty good at trivial tasks. Just last week, I needed to extract logs from a tool to get activity timestamps, merge close timestamps into intervals, and generate some reporting visuals. It's not a groundbreaking problem, but I
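The kind of script involved might look something like this minimal sketch (the function name, the five-minute gap threshold, and the sample data are illustrative assumptions, not the commenter's actual code):

from datetime import datetime, timedelta

def merge_into_intervals(timestamps, max_gap=timedelta(minutes=5)):
    # Collapse activity timestamps into (start, end) intervals,
    # starting a new interval whenever the gap exceeds max_gap.
    intervals = []
    for ts in sorted(timestamps):
        if intervals and ts - intervals[-1][1] <= max_gap:
            intervals[-1] = (intervals[-1][0], ts)  # extend the current interval
        else:
            intervals.append((ts, ts))              # start a new interval
    return intervals

# Example: the first two timestamps merge; the third starts a new interval
# because the gap is larger than five minutes.
logs = [datetime(2025, 1, 1, 9, 0), datetime(2025, 1, 1, 9, 3), datetime(2025, 1, 1, 10, 0)]
print(merge_into_intervals(logs))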
Re: (Score:2)
It can do joe jobs... reformatting, simple refactoring, generating short paragraphs of code where you don't have to consult the documentation for syntax. If you save 1 hour out of an 8-hour day, you're getting roughly a 15% boost in efficiency (a day's output in 7 hours instead of 8)... not a solution to all problems, but not a bad improvement.
Re: (Score:3)
As someone with decades of experience, I really struggle to find a use for it.
I've also been a full-time software developer for the last 30 years, and it sounds to me like you are very deeply entrenched in some specific languages / APIs / frameworks that you seldom work outside of. So I can understand not finding a use for these LLMs if you're doing the same kind of stuff you've been doing for a long time.
In my case, outside of my main gigs (which is mainly LAMP stuff), I do a ton of very diverse things. I have iOS and Android apps on the market (Swift and Java respectively), do a lot of embe
Re: (Score:2)
Re: LLMs can't do more complex and nuanced tasks (Score:2)
I have found genAI to be useful for menial coding tasks like "write a benchmark that compares the CPU performance of these two functions for this use case". I think that's about it so far. It's not nothing.
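For reference, the output of that sort of prompt might look something like this minimal sketch, using only the standard library (func_a and func_b are stand-in implementations, not functions from the comment):

import timeit

def func_a(n=1000):
    return sum(i * i for i in range(n))       # generator-based sum

def func_b(n=1000):
    return sum([i * i for i in range(n)])     # list-comprehension-based sum

# Time 10,000 calls of each candidate and report wall-clock seconds.
for name, fn in (("func_a", func_a), ("func_b", func_b)):
    seconds = timeit.timeit(fn, number=10_000)
    print(f"{name}: {seconds:.3f}s for 10,000 calls")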
Re: (Score:3)
>> I really struggle to find a use for it
I find uses for it every day, maybe you aren't looking very hard. One typical situation would be where you have to fix bugs in some unfamiliar code that was written in a big hurry years ago by people who are long gone. There is no documentation or comments. Ask the AI to explain that code to you and generate a README about it, which is a huge help. And then frequently you can just describe a bug and it will either spot some likely culprit or write some investigative cod
Too soon (Score:2, Interesting)
This is Denmark, and up to 2 years ago.
We're surely in the "too soon to tell" phase, particularly for a European country with decent labour protections, where laying off a significant number of your employees both takes time and is expensive in payoff terms; you're surely going to evaluate AI for longer, and then maybe reduce headcount by the slower but cheaper methods such as natural wastage and re-deployment.
This is not to say AI will, or won't, cause significant job losses, just that this study seems pre
Re: (Score:3)
I agree. Why should we think Denmark is representative? And why should we think the AI of 2 years ago was anything near as powerful as it is now.
Liar, Liar, Pants on Fire! (Score:3)
Stories from Slashdot... (Score:2)
History repeats (Score:2)
The dot-com boom around 2000 had a similar problem: lots of interesting ideas, but too many entrepreneurs had difficulty making a profit, and when investors realized this, they backed off, triggering a slump. It took a decade or so before enough figured out the profit thing to bring investors back.
Re: (Score:2)
I think we have an issue with our economy - wealth concentration. There is a class of people who have far more than they need and don't know what to do with it except try for 'moar'.
They jump on the latest buzzword and build obvious crap to milk money from the people below them who have just enough to try investing but not enough experience to understand they're getting fleeced.
Most of the things you see attracting attention simply shouldn't.
AI Journalists (Score:2)
IDEs are not hurting software developers (Score:2)
Tools like IDEs and even high-level programming languages made software development orders of magnitude faster. The need for developers still did not decrease.
Better tools mean better results, not less need for the people using the tools. AI isn't doing anything without a human using it. Watch less I, Robot and explore more of how the tools you're fearing actually work. Especially artists, who have been used to generative fill, background removal, smart selection and a ton of other AI features for many years, should
Re: (Score:1)
You must not have noticed the last few years of layoffs?
Companies are NOT hiring people because they wishfully think AI will eliminate some of the jobs. So it has already cut jobs, but they don't show up in the numbers because those are future jobs; the effect is also masked by economic conditions.
Higher productivity = less labor = fewer jobs. Productivity gains don't increase labor; they reduce it, and almost all of the gain goes to the owners too.
An artist can take a day to make it from scratch or they can prompt engineer and modify AI out
Bullshit. I'll list 12 (Score:2)
1: MSN
2: Google
3: Dukaan
4: IKEA
5: BlueFocus
6: Salesforce
7: Duolingo
8: Turnitin
9: Klarna
10: Best Buy
11: Dell https://www.businessinsider.co... [businessinsider.com]
12: Workday: https://www.msn.com/en-us/mone... [msn.com]
https://tech.co/news/companies... [tech.co]
This list does not include companies that have announced plans to replace workers. (which includes IBM)
show of hands (Score:2)
Has anyone here had an AI customer service chatbot help them with something so that they did not have to communicate with an actual person?
* - For Carefully chosen categories. In Denmark. (Score:2)
There are some large caveats to this "study".
This is only for Denmark, which has actual, effective worker protection laws, not the US where you can be fired and replaced at any time for any reason.
These categories are very specific. Do they have many call centers in Denmark? What are "Marketing professionals"? Is that just the ones doing market research and media strategy, or does that include visual and audio artists? It matters because the latter ones are the people that got their business cut off at t