{"id":22480,"date":"2017-03-24T09:30:47","date_gmt":"2017-03-24T13:30:47","guid":{"rendered":"https:\/\/thesocietypages.org\/cyborgology\/?p=22480"},"modified":"2017-04-02T06:18:19","modified_gmt":"2017-04-02T10:18:19","slug":"accident-tourist-1","status":"publish","type":"post","link":"https:\/\/thesocietypages.org\/cyborgology\/2017\/03\/24\/accident-tourist-1\/","title":{"rendered":"Accident Tourist: Driverless car crashes, ethics, machine learning"},"content":{"rendered":"<figure id=\"attachment_22494\" aria-describedby=\"caption-attachment-22494\" style=\"width: 3618px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/thesocietypages.org\/cyborgology\/files\/2017\/03\/lovestreet2.jpg\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-22494 size-full\" src=\"https:\/\/thesocietypages.org\/cyborgology\/files\/2017\/03\/lovestreet2.jpg\" alt=\"\" width=\"3618\" height=\"1620\" srcset=\"https:\/\/thesocietypages.org\/cyborgology\/files\/2017\/03\/lovestreet2.jpg 3618w, https:\/\/thesocietypages.org\/cyborgology\/files\/2017\/03\/lovestreet2-250x112.jpg 250w, https:\/\/thesocietypages.org\/cyborgology\/files\/2017\/03\/lovestreet2-400x179.jpg 400w, https:\/\/thesocietypages.org\/cyborgology\/files\/2017\/03\/lovestreet2-768x344.jpg 768w, https:\/\/thesocietypages.org\/cyborgology\/files\/2017\/03\/lovestreet2-500x224.jpg 500w\" sizes=\"auto, (max-width: 3618px) 100vw, 3618px\" \/><\/a><figcaption id=\"caption-attachment-22494\" class=\"wp-caption-text\">Street view, Calcutta, 2010.<\/figcaption><\/figure>\n<p>&nbsp;<\/p>\n<p><span style=\"font-family: Georgia,serif\">A<\/span><span style=\"font-family: Georgia,serif\">ccording to its author JG Ballard, <\/span><a href=\"https:\/\/en.wikipedia.org\/wiki\/Crash_(J._G._Ballard_novel)\"><span style=\"font-family: Georgia,serif\"><i>Crash<\/i><\/span><\/a><span style=\"font-family: Georgia,serif\"> is <\/span><span style=\"font-family: Georgia,serif\">&#8216;<\/span><span style=\"font-family: 
the first porn">
Georgia,serif\">the first porn<\/span><span style=\"font-family: Georgia,serif\">ographic novel based on technology&#8217;<\/span><span style=\"font-family: Georgia,serif\">. <\/span><span style=\"font-family: Georgia,serif\">From bizarre to oddly pleasing, the book&#8217;s narrative drifts from one erotically charged description to another of mangled human bodies and machines fused together in the moment of a car crash. The \u201cTV scientist\u201d, Vaughan, and his motley crew of car crash fetishists seek out crashes in the making, even causing them, just for the thrill of it. The protagonist of the tale, a James Ballard, gets deeply entangled with Vaughan and his ragged band after his own experience of a crash. Vaughan&#8217;s ultimate sexual fantasy is to die in a head-on collision with the actress Elizabeth Taylor. <\/span><\/p>\n<p><span style=\"font-family: Helvetica,sans-serif\"><span style=\"font-family: Georgia,serif\">Like an STS scholar-version of Vaughan, I imagine what it may be like to arrive on the scene of a driverless car crash, and draw maps to understand what happened. Scenario planning is one kind of map-making to plan for &#8216;unthinkable futures&#8217;. <\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">The &#8216;scenario&#8217; is a phenomenon that became prominent during the Korean War, and through the following decades of the Cold War, to allow the US army to plan its strategy in the event of nuclear disaster. Peter Galison <a href=\"http:\/\/galison.scholar.harvard.edu\/files\/andrewhsmith\/files\/galison_futureofscenarios.pdf\">describes scenarios<\/a> as a \u201cliterature of future war\u201d, \u201clocated somewhere between a story outline and ever more sophisticated role-playing war games\u201d, \u201ca staple of the new futurism\u201d. 
Since then, scenario-planning has been adopted by a range of organisations, and features in the modelling of risk and the identification of errors. Galison cites the Boston Group as having written a scenario &#8211; their very first one &#8211; in which feminist epistemologists, historians, and philosophers of science running amok might present a threat to the release of radioactive waste from the Cold War (&#8220;A Feminist World, 2091&#8221;). <\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">The applications of the <a href=\"http:\/\/www.philosopherstoolkit.com\/the-trolley-problem.php\">Trolley Problem<\/a> to driverless car crashes are a sort of scenario planning exercise. Now familiar to most readers of mainstream technology reporting, <span style=\"color: #000000\"><span lang=\"en-GB\">the Trolley Problem is presented as a series of hypothetical situations with different outcomes derived from a pitting of <a href=\"http:\/\/www.philosophybasics.com\/branch_consequentialism.html\">consequentialism<\/a> against <a href=\"http:\/\/www.philosophybasics.com\/branch_deontology.html\">deontological ethics<\/a>. 
<\/span><\/span><span style=\"color: #000000\"><span lang=\"en-GB\">Trolley Problems are constructed as either\/or scenarios where a single choice must be made. <\/span><\/span><span style=\"color: #000000\"><span lang=\"en-GB\">MIT&#8217;s <a href=\"http:\/\/moralmachine.mit.edu\/\">Moral Machine project<\/a> materialises this thought experiment with an online template in which <\/span><\/span><span style=\"color: #000000\"><span lang=\"en-GB\">the user completes scenarios, <\/span><\/span><span style=\"color: #000000\"><span lang=\"en-GB\">instructing the driverless car about which kinds of pedestrians to avoid in the case of brake failure: runaway criminals, pets, children, parents, athletes, old people, or fat people. Patrick Lin has a short animation describing the application of the Trolley Problem in the driverless car scenario.<br \/>\n<\/span><\/span><\/span><\/span><\/p>\n<p><iframe loading=\"lazy\" title=\"The ethical dilemma of self-driving cars - Patrick Lin\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/ixIoDYVfKA0?list=PLqNM8qLQYuaZ1wdk6fRctCvrkV2I1HzBw\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p>&nbsp;<\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">The applications of the Trolley Problem, as well as the Pascalian Wager (<a href=\"https:\/\/www.researchgate.net\/publication\/309764271_Autonomous_vehicles_and_moral_uncertainty\">Bhargava, 2016<\/a>), are attempts to create what-if scenarios. 
These scenarios guide the technical development of what has become both a holy grail and a smokescreen in talking about autonomous driving (in the absence of an actual autonomous vehicle): how can we be sure that driverless car software will make the right assessment about the value of human lives in the case of an accident?<br \/>\n<\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">These scenarios and their outcomes are being referred to as the &#8216;ethics&#8217; of autonomous driving. In the development of driverless cars we see an ambition for what James Moor refers to as an &#8216;explicit ethical agent&#8217; &#8211; one that is able to calculate the best action in an ethical dilemma. I resist this construction of ethics because I claim that relationships with complex machines, and the outcomes of our interactions with them, are not either\/or, and are perhaps far more entangled, especially in crashes. There is an assumption in applications of Trolley Problems that crashes take place in certain specific ways, that ethical outcomes can be computed in a logical fashion, and that this will amount to an accountability for machine behaviour. There is an assumption that the driverless car is a kind of neoliberal subject itself, which can somehow account for its behaviour just as it might some day just drive itself (thanks to Martha Poon for this phrasing). <\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">Crash scenarios are particular moments that are being planned for and against; what kinds of questions does the crash event allow us to ask about how we&#8217;re constructing relationships with and about machines with artificial intelligence technologies in them? 
I claim that when we say \u201cethics\u201d in the context of hypothetical driverless car crashes, what we&#8217;re really saying is \u201cwho is accountable for this?\u201d, or \u201cwho will pay for this?\u201d, or \u201chow can we regulate machine intelligence?\u201d, or \u201chow should humans apply artificially intelligent technologies in different areas?\u201d. Some of these questions can be drawn out in terms of an accident involving a Tesla semi-autonomous vehicle.<\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\"><span style=\"color: #000000\"><span lang=\"en-GB\">In May 2016, a US Navy veteran was test-driving a Model S Tesla semi-autonomous vehicle. The test driver, who was allegedly watching a Harry Potter movie at the time with the car in &#8216;autopilot&#8217; mode, drove into a large tractor trailer whose white surface was mistaken by the computer vision software for the bright sky. Thus the car did not stop, and went straight into the truck. 
<\/span><\/span><span style=\"color: #000000\"><span lang=\"en-GB\">The fault, it seemed, was the driver&#8217;s for trusting the autopilot mode, as the <\/span><\/span><a href=\"https:\/\/www.tesla.com\/blog\/tragic-loss\"><span lang=\"en-GB\">Tesla statement <\/span><\/a><span style=\"color: #000000\"><span lang=\"en-GB\">after the event says. <\/span><\/span>The term &#8216;autopilot&#8217; in a semi-autonomous car is perhaps misleading for those familiar with autopilot in airplanes, so much so that the German government has told Tesla that <a href=\"https:\/\/techcrunch.com\/2016\/10\/10\/german-report-calls-teslas-autopilot-a-hazard\/\">it cannot use the word &#8216;autopilot&#8217; in the German market<\/a>.<\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">In order to understand what might have happened in the Tesla case, it is necessary to look at applications of computer vision and machine learning in driverless cars. A driverless car is equipped with a variety of sensors and cameras that record objects around it. These objects are identified by specialised deep learning algorithms called neural nets. 
Neural nets are distinct in that they can be programmed to build internal models for identifying features of a dataset, and can learn how those features are related without being explicitly programmed to do so (<a href=\"https:\/\/devblogs.nvidia.com\/parallelforall\/deep-learning-self-driving-cars\/\">NVIDIA 2016<\/a>; <a href=\"https:\/\/arxiv.org\/pdf\/1604.07316v1.pdf\">Bojarski et al 2016<\/a>).<\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">Computer vision software makes an image of an object and breaks that image up into small parts \u2013 edges, lines, corners, colour gradients and so on. By looking at billions of images, neural nets can identify patterns in how combinations of edges, lines, corners and gradients come together to constitute different objects; this is their &#8216;model making&#8217;. The expectation is that such software can identify a ball, a cat, or a child, and that this identification will enable the car&#8217;s software to make a decision about how to react<\/span><\/span>.<\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">Yet, this is a technology still in development, and there is the possibility for much confusion. So, things that are yellow, or things that have faces and two ears on top of the head \u2013 things which share features such as shape, or in which edges, gradients, and lines come together in similar ways \u2013 can be misread until the software sees enough examples to distinguish them. The more visually complex something is \u2013 lacking solid edges, curves, or single colours \u2013 or the faster, smaller, or more flexible an object is, the more difficult it is to read. 
So, computer vision in cars is shown to have a <a href=\"http:\/\/spectrum.ieee.org\/cars-that-think\/transportation\/self-driving\/the-selfdriving-cars-bicycle-problem\">&#8216;bicycle problem&#8217;<\/a> because bicycles are difficult to identify, do not always have a specific, structured shape, and can move at different speeds. <\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">In the case of the Tesla crash, the software misread the large expanse of the side of the trailer truck for the sky. It is possible that the machine learning model was not trained well enough to make the distinction. <span style=\"color: #000000\">The Tesla crash suggests that there was both an error in the computer vision and machine learning software, and a lapse on the part of the test driver, who misunderstood what autopilot meant. How, then, are these separate conditions to be understood as part of the dynamic that resulted in the crash? How might an ethics be imagined for this sort of crash, one that comes from an unfortunate entanglement of machine error and human error?<\/span><\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">Looking away from driverless car crashes, and to aviation crashes instead, a specific narrative around the idea of accountability for crashes emerges. 
<\/span>Galison, again, in his astounding chapter, <i>An Accident of History<\/i>, meticulously unpacks aviation crashes and how they are investigated, documented and recorded. In doing so he makes the point that it is near impossible to escape the entanglements between human and machine actors involved in an accident. And that in attempting to identify how and why a crash occurred, we find a <span style=\"font-size: medium\">\u201crecurrent strain between a drive to ascribe final causation to human factors and an equally powerful, countervailing drive to assign agency to technological factors.\u201d <\/span><\/span> Galison finds that in accidents, human action and material agency are entwined to the point that causal chains both seem to terminate at particular, critical actions, as well as radiate out towards human interactions and organisational cultures. Yet, what is embedded in accident reporting is the desire for a \u201csingle point of culpability\u201d, as Alexander Brown puts it, which never seems to come.<\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">Brown&#8217;s own accounts of accidents and engineering at NASA, and Diane Vaughan&#8217;s landmark ethnography about the reasons for the <i>Challenger<\/i> Space Shuttle crash suggest the same: organisational culture and bureaucratic processes, faulty design, a wide swathe of human errors, and combinations of these are all implicated in how crashes of complex vehicles occur. 
<\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\"><span style=\"color: #000000\">Anyone who has ever been in a car crash probably agrees that correctly identifying exactly what happened is a challenging task, to say the least. How can the multiple, parallel conditions present in driving and car crashes be conceptualised? Rather than an &#8216;ethics of driverless cars&#8217; as a set of programmable rules for appropriate action, could it be imagined as a process by which an assemblage of people, social groups, cultural codes, institutions, regulatory standards, infrastructures, technical code, and engineering is framed in terms of its interactions? Mike <\/span>Ananny suggests that \u201ctechnology ethics emerges from a mix of institutionalized codes, professional cultures, technological capabilities, social practices, and individual decision making.\u201d He shows that ethics is not a \u201ctest to be passed or a culture to be interrogated but a complex social and <i>cultural achievement<\/i>\u201d (emphasis in original). 
<\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\">What the Trolley Problem scenario and the applications of machine learning in driving suggest is that we&#8217;re seeing a shift in how ethics is being constructed: from accounting for crashes after the fact, to pre-empting them (though the automotive industry has been using computer-simulated crash modeling for over twenty years); <span style=\"color: #000000\">from ethics that is about values or reasoning, to ethics based on datasets of correct responses, and, crucially, to ethics as the outcome of software engineering. Specifically in the context of driverless cars, there is the shift from ethics as a framework of \u201cvalues for living well and dying well\u201d, as <a href=\"https:\/\/www.theguardian.com\/books\/2015\/jan\/21\/drone-theory-by-gregoire-chamayou-review-provocative-investigation\">Gregoire Chamayou puts it<\/a>, to a framework for \u201ckilling well\u201d, or &#8216;necroethics&#8217;.<\/span> <\/span><\/span><\/p>\n<p><span style=\"font-family: Georgia,serif\"><span style=\"font-size: medium\"><span style=\"color: #000000\"><span lang=\"en-GB\">Perhaps the unthinkable scenario to confront is that ethics is not a machine-learned response, nor an end-point, but a series of <\/span><\/span><span style=\"color: #000000\"><span lang=\"en-GB\">socio-technical<\/span><\/span><span 
style=\"color: #000000\"><span lang=\"en-GB\">, technical, human, and post-human relationships, <\/span><\/span><span style=\"color: #000000\"><span lang=\"en-GB\">ontologies,<\/span><\/span><span style=\"color: #000000\"><span lang=\"en-GB\"> and exchanges. <\/span><\/span><span style=\"color: #000000\"><span lang=\"en-GB\">These challenging and intriguing scenarios are yet to be mapped.<br \/>\n<\/span><\/span><\/span><\/span><\/p>\n<p><em>Maya is\u00a0 a PhD candidate at Leuphana University and is Director of Applied Research at <a href=\"https:\/\/tacticaltech.org\">Tactical Tech<\/a>. She can be reached via <a href=\"https:\/\/twitter.com\/mayameme\">twitter\u00a0<\/a><\/em><\/p>\n<p><em>(<\/em><em><span style=\"font-family: Liberation Sans,sans-serif\"><span style=\"font-family: Georgia,serif\">Some ideas in this paper have been <\/span><span style=\"font-family: Georgia,serif\">developed <\/span><span style=\"font-family: Georgia,serif\">for a <\/span><span style=\"font-family: Georgia,serif\">paper <\/span><span style=\"font-family: Georgia,serif\">submitted for publication in a peer-reviewed journal, <a href=\"http:\/\/www.aprja.net\/\">APRJA<\/a>)<\/span><\/span><\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; According to its author JG Ballard, Crash is &#8216;the first pornographic novel based on technology&#8217;. From bizarre to oddly pleasing, the book&#8217;s narrative drifts from one erotically charged description to another of mangled human bodies and machines fused together in the moment of a car crash. 
The \u201cTV scientist\u201d, Vaughan, and his motley crew [&hellip;]<\/p>\n","protected":false},"author":2082,"featured_media":22494,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[9967],"tags":[],"class_list":["post-22480","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-commentary"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/thesocietypages.org\/cyborgology\/files\/2017\/03\/lovestreet2.jpg","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/22480","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/users\/2082"}],"replies":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/comments?post=22480"}],"version-history":[{"count":11,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/22480\/revisions"}],"predecessor-version":[{"id":22518,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/posts\/22480\/revisions\/22518"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/media\/22494"}],"wp:attachment":[{"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/media?parent=22480"}],"wp:term":[{"taxonomy":"category","embeddable":true,"
href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/categories?post=22480"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thesocietypages.org\/cyborgology\/wp-json\/wp\/v2\/tags?post=22480"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}