{"id":21939,"date":"2016-11-19T01:06:54","date_gmt":"2016-11-19T05:06:54","guid":{"rendered":"https:\/\/thesocietypages.org\/cyborgology\/?p=21939"},"modified":"2016-11-19T01:06:54","modified_gmt":"2016-11-19T05:06:54","slug":"muting-doesnt-equal-silence-or-safety","status":"publish","type":"post","link":"https:\/\/thesocietypages.org\/cyborgology\/2016\/11\/19\/muting-doesnt-equal-silence-or-safety\/","title":{"rendered":"Muting Doesn&#8217;t Equal Silence (or safety)"},"content":{"rendered":"<figure id=\"attachment_21940\" aria-describedby=\"caption-attachment-21940\" style=\"width: 500px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/thesocietypages.org\/cyborgology\/files\/2016\/11\/insert3.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"size-large wp-image-21940\" src=\"https:\/\/thesocietypages.org\/cyborgology\/files\/2016\/11\/insert3-500x281.jpg\" alt=\"Response to harassment report issued by EA's COO Peter Moore in Dec. 2014\" width=\"500\" height=\"281\" srcset=\"https:\/\/thesocietypages.org\/cyborgology\/files\/2016\/11\/insert3-500x281.jpg 500w, https:\/\/thesocietypages.org\/cyborgology\/files\/2016\/11\/insert3-250x140.jpg 250w, https:\/\/thesocietypages.org\/cyborgology\/files\/2016\/11\/insert3-400x225.jpg 400w, https:\/\/thesocietypages.org\/cyborgology\/files\/2016\/11\/insert3-768x431.jpg 768w, https:\/\/thesocietypages.org\/cyborgology\/files\/2016\/11\/insert3.jpg 1024w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><\/a><figcaption id=\"caption-attachment-21940\" class=\"wp-caption-text\">Response to harassment report issued by EA&#8217;s COO Peter Moore in Dec. 
2014<\/figcaption><\/figure>\n<p>It&#8217;s probably appropriate that amidst a torrent of harassment and abuse directed at marginalized people following the election of noted internet troll Donald Trump, Twitter would roll out a new feature that purports to allow users to protect themselves against harassment, abuse, and unwanted interaction and content more generally. Essentially, it functions as an extension of the &#8220;mute&#8221; feature, with broader and more powerful applications. It allows users to block specific keywords from appearing in their notifications, as well as to mute conversation threads they&#8217;re @ed in, effectively removing themselves from the conversation.<\/p>\n<p><!--more--><\/p>\n<p>In a lot of ways, this seems like a good feature and a useful tool. Among other things, it addresses problems with Twitter&#8217;s abuse reporting system, where people reporting abusive tweets are told that the tweets in question don&#8217;t violate Twitter&#8217;s anti-abuse policy. As Del Harvey, Twitter&#8217;s head of &#8220;trust and safety&#8221;, <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2016-11-15\/twitter-finally-answers-critics-adding-tools-to-curb-abuse-and-harassment\">explains it:<\/a><\/p>\n<blockquote><p>We really tried to look at why &#8212; why did we not catch this? And maybe the person who did that front-line review didn&#8217;t have the cultural or historical context for why this was a threat or why this was abuse.<\/p><\/blockquote>\n<p>In that same Bloomberg piece, it&#8217;s noted that there&#8217;s also a new option to report &#8220;hateful conduct&#8221;, and that abuse team members are being retrained in things like &#8220;cultural issues&#8221;. Also good. 
Especially right now, when &#8211; despite Melania Trump&#8217;s charmingly quixotic stated mission to protect everyone from her husband on Twitter &#8211; there&#8217;s likely to be a significant upswing in this kind of profound ugliness, probably for a long time.<\/p>\n<p>Here&#8217;s the problem, though. And it&#8217;s more of a quibble, but it&#8217;s worth the quibbling.<\/p>\n<p>The primary thrust of Twitter&#8217;s new initiative is oriented toward the target. By which I mean, what looks like putting power in the hands of a user actually has the potential to put <em>responsibility<\/em> on them for their own safety. Which a lot of people would probably think is perfectly reasonable, and I agree &#8211; to a point.<\/p>\n<p>The issue is that it&#8217;s very easy to do something like this &#8211; toss something into someone&#8217;s lap for them to use &#8211; and adopt the assumption that this is the best strategy for dealing with the deeper problem. Which isn&#8217;t that abusers are able to reach their targets. It&#8217;s that <em>the abusers are there at all.<\/em><\/p>\n<p>Here&#8217;s where someone says <em>hey, that&#8217;s the internet, what do you expect?<\/em> And yeah, I know. Believe me, I know. But what I expect? Is more than putting responsibility on a user in the guise &#8211; even if it&#8217;s not entirely a guise &#8211; of giving them power. I understand that it&#8217;s very difficult to kick these people out and keep them out. I understand that it&#8217;s just about impossible. I appreciate that Twitter does seem to be doing work in that direction. But it&#8217;s not enough. What I expect is that we&#8217;ll create spaces where we don&#8217;t have to worry about muting these people because they never start talking in the first place.<\/p>\n<p>There&#8217;s also the issue of how, when you successfully ignore something while not removing it, you can actually enable its presence. 
Which is not to say that users shouldn&#8217;t take full advantage of this feature, but instead to say that Twitter should remember that just because you can&#8217;t hear it, that doesn&#8217;t mean it isn&#8217;t there.<\/p>\n<p>And it doesn&#8217;t mean it isn&#8217;t getting worse.<\/p>\n<p>There&#8217;s more work to do.<\/p>\n<p><em>Sunny is on Twitter &#8211; <a href=\"https:\/\/twitter.com\/dynamicsymmetry\">@dynamicsymmetry<\/a><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>It&#8217;s probably appropriate that amidst a torrent of harassment and abuse directed at marginalized people following the election of noted internet troll Donald Trump, Twitter would roll out a new feature that purports to allow users to protect themselves against harassment and abuse and general unwanted interaction and content. Essentially it functions as an extension [&hellip;]<\/p>\n","protected":false},"author":1760,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[9967],"tags":[],"class_list":["post-21939","post","type-post","status-publish","format-standard","hentry","category-commentary"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/21939","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/cyb
orgology\/wp-json\/wp\/v2\/users\/1760"}],"replies":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/comments?post=21939"}],"version-history":[{"count":1,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/21939\/revisions"}],"predecessor-version":[{"id":21941,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/21939\/revisions\/21941"}],"wp:attachment":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/media?parent=21939"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/categories?post=21939"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/tags?post=21939"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}