An anonymous reader quotes a report from Ars Technica: In a blog post, YouTube outlined more specific definitions of hate speech and the kinds of incendiary content that won't be eligible for monetization. The post spells out three categories, the broadest being "hateful content." YouTube defines this as anything that "promotes discrimination or disparages or humiliates an individual or group of people on the basis of the individual's or group's race, ethnicity, or ethnic origin, nationality, religion, disability, age, veteran status, sexual orientation, gender identity, or other characteristic associated with systematic discrimination or marginalization." The second category is "inappropriate use of family entertainment characters": content showing kid-friendly characters in "violent, sexual, vile, or otherwise inappropriate behavior," even if the content is satirical or a parody. The final category, "incendiary and demeaning content," is somewhat broader: anything "gratuitously" demeaning or shaming toward an individual or group is ineligible for ads. The updated guidelines respond to creators who asked YouTube to clarify what will and will not be deemed advertiser-friendly. YouTube acknowledges that its systems still aren't perfect, but says it is doing its best to inform creators while maintaining support for advertisers. YouTube also launched a new course in its Creator Academy that creators can take to learn how to make "content appealing for a broad range of advertisers."