There has been a lot of debate about how to handle hate speech online. Some people want hate speech removed wherever it appears, while others want to protect free speech at all costs. Major technology companies like Google plan to use technology to identify and remove hate speech from their platforms. As tempting as it is to silence these people, I think automating the suppression of hate speech is a very bad idea.
Online hate speech suppression technology will seek to apply subjective, human judgment to online speech, but software is currently incapable of emulating human judgment. So here is how it will probably work: online speech will be parsed and evaluated for political correctness, various factors will be weighed, and a prediction will be produced, something like: Model’s predicted likelihood that the comment is politically correct: 0.093407306%.
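To make that concrete, here is a minimal sketch of what such a scoring pipeline might look like. Everything in it is invented for illustration: the keyword weights, the bias, and the removal threshold are made-up numbers, and a real system would learn its parameters from labeled training data rather than from a hand-written table.

```python
# Purely illustrative sketch of an automated comment-scoring pipeline.
# The keyword weights, bias, and threshold are invented for this example;
# a real system would learn its parameters from labeled training data.
import math
import re

WEIGHTS = {"please": 0.8, "thanks": 0.6, "idiot": -1.5, "stupid": -1.2}
BIAS = 0.1
THRESHOLD = 0.5  # assumed cutoff below which content is removed

def score_comment(text: str) -> float:
    """Return the model's predicted likelihood that the comment is acceptable."""
    tokens = re.findall(r"[a-z']+", text.lower())
    z = BIAS + sum(WEIGHTS.get(tok, 0.0) for tok in tokens)
    return 1.0 / (1.0 + math.exp(-z))  # squash the score to a probability

comment = "Only an idiot would believe something this stupid."
p = score_comment(comment)
print(f"Model's predicted likelihood that the comment is acceptable: {p:.6%}")
if p < THRESHOLD:
    print("Comment automatically flagged and removed.")
```

Note what is missing from the output: any statement of why. The pipeline produces a number, compares it to a cutoff, and acts.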
Naturally, Google will seek to fully automate this process. It is quite possible that the algorithms will flag content that has not received a single complaint from an actual human. Content found in violation of Google’s terms of use will be automatically flagged and removed. You won’t be told how your content violated the terms, because that information could be used by an adversarial system to defeat the content filters. It is also quite possible that Google will be unable to trace back the calculations that produced the prediction about your content. As has already been demonstrated, technology companies don’t bother with niceties like an appeal process or arbitration. They have no intention of giving you any recourse against a decision made by software!
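To see why “tracing back the calculations” may be genuinely infeasible, consider a toy feed-forward network. This is again a sketch with made-up inputs and weights, not any real Google system; the point is what the intermediate numbers look like.

```python
# Toy feed-forward network with made-up weights, to show why a prediction
# can be impossible to "trace back". Nothing below corresponds to a real
# system; the inputs and weights are invented for illustration.
import math

def relu(x: float) -> float:
    return max(0.0, x)

# Invented 3-number input: [comment length, profanity count, reply depth]
x = [42.0, 1.0, 3.0]

# Two hidden units, then one output unit (all weights made up).
W1 = [[0.02, -0.7, 0.1], [-0.01, 0.9, -0.3]]
b1 = [0.5, -0.2]
W2 = [1.3, -2.1]
b2 = 0.4

h = [relu(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(W1, b1)]
z = sum(w * hi for w, hi in zip(W2, h)) + b2
p = 1.0 / (1.0 + math.exp(-z))

# The hidden activations are just numbers; none of them "means" anything
# a human could cite as the reason for the flag.
print("hidden activations:", h)
print(f"flag probability: {p:.4f}")
```

Even at this toy scale, the two hidden activations explain nothing. Scale the same arithmetic up to millions of weights and the demand to “show your work” becomes hopeless.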
Now consider just how ridiculous this abuse of power could become. Let’s suppose artificial intelligence agents were created to serve as racist-recognition software. AI is already used for voice recognition, image recognition, and face recognition, so why not racist recognition? A deep neural network could evaluate your online profile and your online history to predict the likelihood that you are a racist. We won’t know why the system identified you as a racist. All we will know is that you are one. The software told us so!
The problem with this technology is that it is not sufficiently nuanced: it cannot distinguish sarcasm from sincerely held positions. For example, I was being intentionally a little absurd in the previous paragraph, and artificial intelligence will be unable to detect that. My wit is just too subtle. Policing online speech for political correctness also disadvantages contrarians, people who entertain contrary ideas for the sake of intellectual dexterity. Often the best way to raise an important issue is to push an idea to its limits, and this has certainly gotten me into a little trouble in the past. For example, I once pushed the notion of preying upon the mentally ill to explore the idea that some segments of society see them as a useful resource, a pool of credulous fools to be exploited. It is a very noxious idea, preying on people precisely because their judgment is impaired, but there just may be a key insight in it. Anyway, I found out that psychiatrists don’t like being accused of being predators.
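To make the sarcasm problem concrete, here is a toy illustration assuming a naive keyword-counting filter. The flagged-word list is invented for the example; the point is that a sincere endorsement and an obviously ironic one contain the same trigger words and therefore receive identical scores.

```python
# Sketch of why a naive filter cannot detect sarcasm. The flagged-word
# list is invented for this example; the point is that the sincere and
# the ironic sentence contain the same trigger words and so score alike.
import re

FLAGGED_WORDS = {"preying", "exploit", "exploited", "fools"}

def naive_flag_score(text: str) -> int:
    """Count flagged words; a real filter would be fancier but no wiser."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(tok in FLAGGED_WORDS for tok in tokens)

sincere = "Preying on credulous fools is a sound strategy."
sarcastic = "Oh yes, preying on credulous fools, what a sound strategy."

print("sincere score:  ", naive_flag_score(sincere))    # 2
print("sarcastic score:", naive_flag_score(sarcastic))  # also 2: the irony is invisible
```

The words are identical; the meaning is opposite. Nothing in a word count, however weighted, captures the difference.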
Liberals have become very intolerant of dissent. You cannot disagree with them in the slightest without triggering them, and they then begin loudly calling for a shutdown of the debate. Well, I suppose suppressing your opponent’s speech is good strategy when you can’t think of a counterargument: silencing him lets you win arguments you would otherwise lose. If a liberal were to create an artificial intelligence that “thinks differently,” I bet he would pull the plug on it. That is just how they roll now. I remember when liberals were better than this.