Complicating Disability Studies’ Relationship to Medicine

One of Disability Studies’ major hang-ups is its default position with respect to the field of medicine and—by extension—to medical practitioners. The adversarial stance of DS towards medicine (and doctors) stems largely from the former’s repudiation of the medical model of disability, according to which—as defined by Disability Studies scholars—individual disabled people are identified as “problems” to be “fixed” or “cured.” [1]

The graphic below, borrowed from the website of the Democracy Disability and Society Group, nicely illustrates the medical model of disability as theorized by DS scholars and activists:

[Image: the medical model of disability]

Image credit: Democracy Disability and Society Group (ddsg.org).

Before I dive into my discussion of DS’s positions vis-à-vis “the medical model,” I’d like to clarify that in my own work I make no distinction between “impairment” and “disability,” preferring instead to utilize “disability” to designate the complex matrix of physical/material and socio-cultural phenomena that together produce conditions of disablement for some people. [2]

The main issue that I have with Disability Studies’ framing of “the medical model” in its current incarnation is that it presumes the following:

  • Medicine and physicians are always paternalistic.
  • Recipients of “medical care” are always “passive” and “disempowered.”
  • There is no gray area between the extremes of “cure” and “do nothing” when it comes to medicine.
  • All “medical” care is bad.

It is worth noting that the definition of “medical model”—a term coined by psychiatrist R.D. Laing to describe the working model for training physicians and from which the related “medical model of disability” derives—was articulated in the 1970s. It therefore bears little resemblance to the working models employed by physicians in the 21st century, especially newer generations of doctors who have moved away from paternalistic attitudes and tend to view them as outdated and ineffective. [3]

The Democracy Disability and Society Group graphic includes both “impairments” (aka “disabilities”) and “chronic illness,” but I’m puzzled as to why they occupy separate categories considering chronic illnesses are in fact disabilities. A disability (again, the graphic uses “impairment” to denote what I call “disability”) is quite simply a mode of functioning that differs from that of the majority of people. For instance: if the majority of people have two legs, then having only one leg is a “disability” because it involves a physical form (and consequently a mode of ambulation) that differs from that of the majority of the population. If most people do not perceive sights and sounds as overstimulating but someone with Autism does, then Autism is a disability because it involves sensory/cognitive processing modes that differ from those of the majority. It logically follows that if most people have immune systems characterized by a common baseline level of inflammation, then people with immune systems characterized by higher-than-average inflammation levels (manifesting in a variety of conditions with names like MS, Rheumatoid Arthritis, Crohn’s, etc.) are configured immunologically in a way that differs from the majority of the population and consequently must operate differently from their immunologically “standard” counterparts. In other words: yes, chronic illness (defined as “ongoing immunological inflammation that differs from that found in the majority of the population”) is a disability.

A couple of factors contribute to the “classical” separation within DS between “chronic illness” and “disability.” As shown in the graphic, disability is traditionally viewed as a “physical, mental, [or] sensory” difference, but overwhelmingly “mere” physical differences are prized, with the “ideal” disabled person being an “otherwise healthy” individual with a motor impairment (e.g., a missing limb, spinal cord injury, or war injury) necessitating either a wheelchair or a prosthesis. Within the hierarchy of disability—yes, there is a hierarchy—Deaf and blind people are also prized, since they are “otherwise healthy.” [4] A quick Google image search of the keyword “disability,” while admittedly not scientifically rigorous, provides a terrific example of the hierarchy of disability at play.

My proposal is that this emphasis on “health” as the standard by which people are included or excluded as “disabled” is as outdated as the paternalistic style of medical practice. By emphasizing the image of disability as “mere” physical variation in “otherwise healthy” individuals, Disability Studies is very problematically helping to enshrine the ideal of “health” as well as colluding in the over-arching cultural rhetoric of “health as morality,” wherein immunological variation is code for “immorality” and even “inferiority.” By clinging to mainstream ideals of “health,” Disability Studies works to achieve greater equality for some disabled people by actively oppressing others. For a field allegedly committed to social justice and equality, upholding this kind of hierarchy of oppression is unacceptable.

Because chronic illnesses are often imperceptible [5], they tend to be overlooked by the general public (including the DS community), and this lack of perception seems to be the second key determinant—besides the prevailing rhetoric of “health”—in their exclusion from disability and Disability Studies. Everyone knows when a paraplegic person enters the room: he’s using a wheelchair. The Deaf person, in signing, not only communicates but also performs his or her Deafness. The blind person with a cane or dark glasses is identifiable as blind. Being identifiable, even by laypeople, as disabled is important to the validation of “disability identity” precisely because of DS’s internalization of cultural ideals of “health.” Disability Studies’ idealization of “health” and its emphasis on perceptible forms of disability are inextricably intertwined.

In contrast with “classically” acknowledged forms of disability like Deafness, blindness, using a wheelchair or prosthesis, etc., chronic illnesses are often not perceptible to the general public. The crucial point here is that chronic illnesses are frequently only perceived (and perceptible) by *medical* professionals—and even then indirectly, via analysis of complex physical exams, blood work, and so forth. They are thus prone to being reflexively (if incorrectly) “medicalized” by default and rejected by DS scholars and activists as “something other than disability.”

It is both poignant and ironic that, while people with perceptible disabilities are more likely to suffer discrimination and exclusion by the non-disabled public by virtue of their disabilities being perceptible, people with imperceptible disabilities (such as chronic illnesses) are routinely excluded from Disability Studies as “other-than-disabled” or “non-disabled” for (in part) the opposite reason. [6]

Disability Studies’ rejection of “the medical model,” combined with immunologically disabled people’s configuration or placement within that model, contributes to conditions that foster the exclusion of chronically ill people from disability and from DS. DS “needs” to reject chronically ill people because it “needs” to reject “the medical model,” and chronically ill people are stubbornly enmeshed within that model. Chronically ill people are treated by the field as “the problem” in need of “cure” or “fixing”—and this “cure” or “fix” is accomplished through segregation, which takes the form of exclusion from the category of “disability.” Oh what a tangled web we weave when nearly an entire field uses the very same working model it claims to loathe as a virtual blueprint for casting off certain members of its own group! [7]

Instead of rejecting chronic illness as “not disability” simply because it doesn’t fit into the established paradigm of “the medical model of disability” as formulated by Disability Studies scholars and activists, what if we flipped the lens? What if we asked what recognizing chronic illness as a disability could potentially do for our existing understanding of “the medical model of disability”?

One of the first shifts that would occur would pertain to our views on medicine, medical care, and physician-patient relationships. The experiences of people with chronic illnesses (aka “immunological disabilities”) in the realm of medicine often bear little resemblance to the invariably negative and fatalistic views of medicine propagated by leading DS scholars. For starters, since chronic illnesses are not “curable,” there tends to be minimal—if any—fixation on the notion of “cure” on the part of the physician. When and if an insistence on “cure” does occur, it is generally on the part of the chronically ill person, and my argument would be that it is because that particular person has been indoctrinated into the rhetoric of “cure” by organizations like the National MS Society, the Arthritis Foundation, etc. (and on a larger scale, by contemporary society’s worship of “health”). This is no different from an individual paraplegic person expressing his/her desire to not be paraplegic, or an individual blind person maintaining that they would prefer to be sighted. What is different is that chronically ill people receive far less support from the general public should they choose to resist the rhetoric of “cure,” coupled with far more (organizational and social) pressure to adhere to this harmful rhetoric. If charities and organizations such as the NMSS and the AF continue to foster the idea that chronic illness is an “evil” and that “cure” is the only solution, then many chronically ill people will continue to succumb to pressure to internalize these views, even when doing so proves disempowering and unproductive.

The relationships between chronically ill (aka “immunologically disabled”) people and their physicians are typically long-term ones that emphasize continuity of care, partnership, interdependence, and support. Far from being “passive recipients” of care, we are engaged participants in a dynamic that contributes to our own care and that of others. Far from having “cure” (or even “treatment”) imposed on us, we are empowered to provide input regarding how we would like to approach our disability (and how we would like others, including our doctors, to approach it). Notice that I deliberately use terms like “care” and “approach to” instead of “cure” or “fix.” The latter terms simply fail to describe my experience within the context of medicine, and so I avoid them.

An immunomodulatory drug—the type of drug most people with immunological disabilities use—is best viewed as a prosthesis. In The End of Normal: Identity in a Biocultural Era, Lennard Davis affirms: “A drug would be a prosthesis if it restored or imitated some primary state that appears to be natural and useful” (64). Davis makes this statement in the context of his argument that SSRIs are not “chemical prostheses” for depression, since happiness is not a “primary state” of being and since there is compelling evidence to suggest that SSRIs do not actually work (Davis 55-60). His assertion is relevant to my position in this blog post since, unlike SSRIs, immunomodulatory drugs do “restor[e] or imitat[e] some primary state” (levels of immunological inflammation and patterns of immunological behavior more consistent with those of people without autoimmune conditions) that “appea[r] to be natural and useful” (“natural” in the sense that these altered levels and patterns are consistent with those of people without autoimmune conditions, and “useful” in that they restore—to one extent or another—“normal” immunological function in individuals with altered patterns of immune activity). Like a paraplegic deciding which model of wheelchair to use or an amputee picking the perfect prosthesis, those of us with chronic immunological conditions have input into which (if any) immunomodulator to use. If the chosen prosthesis (wheelchair, artificial limb, chemical compound) turns out to be ineffective or uncomfortable, we can choose a different one.

Interestingly, because specialists who care for patients with a particular condition (like Multiple Sclerosis or Crohn’s) often maintain active research agendas that focus on the condition in which they specialize, their relationships with patients are best characterized as mutually interdependent. The physician needs the patient (or at least some patients) to consent to participating in clinical trials and providing data that will facilitate the physician’s own research, while the patient needs the physician not only to periodically assess his or her function, but also to prescribe (or provide access to) what are in effect chemical prostheses that enable “normal” function.

The fact that these chemical prostheses are not accessible without recourse to a physician is arbitrary. By this I mean that it is not difficult to imagine an alternate capitalist universe in which 3D printers (with which wheelchair users can now print portable ramps) or even Braille are made for “limited use only” and controlled as tightly as immunomodulatory drugs are now. Wheelchair users got lucky in that they don’t require a new prescription every 30 days and a “co-pay” (imagine a monthly “user’s fee” for a wheelchair) to access the adaptive technology that is their wheelchair or 3D printer. Blind people got lucky in that they don’t require “prior authorization” to use Braille. There is nothing “special” about immunomodulatory drugs—meaning, nothing inherent in the drugs themselves or even the delivery system—that somehow makes them “medical” in contrast to so-called “non-medical” tech like 3D printers, Braille, and wheelchairs. It just worked out that groups of people figured out how to manufacture, control, and ultimately profit off of immunomodulatory drugs before they figured out how to do the same with Braille or 3D printers. Or maybe they figured out ways to make immunomodulatory drugs more profitable than Braille or 3D printers. It doesn’t matter. My point is that immunological prostheses are no more “inherently medical” than any other prostheses. They became medicalized because certain people figured out how to profit off of them by tying them into the established medical system. This is utterly random.

Given the randomness of the system in place; the evolving role of physicians (with shifts toward “patient-centered care” instead of “paternalistic medicine” and relationships of mutual interdependence between both parties rather than unilateral dependence running from patient to physician only); and medicine’s accepted position as an intermediary that, for some disabled people, controls access to certain types of chemical prostheses that have been arbitrarily classified as “medical,” it seems to me that it might be high time to question and, indeed, to complicate Disability Studies’ relationship to medicine. To move forward with such a paradigm shift, the field needs to stop medicalizing chronic illness. It needs to stop labeling people with chronic illnesses (immunological disabilities) as a “problem” in need of “curing” or “fixing” through exclusion from the category of “disability.” It needs to take another look at the so-called “medical model”—one it mimics in its treatment of the chronically ill while simultaneously decrying as “undesirable” for all other disabled people. To do this, the field will need to confront its existing hierarchy of disability and seek to trouble the notion that a disability must be perceptible to laypeople in order to “count.” But most importantly, Disability Studies will need to acknowledge that its “medical model of disability” no longer corresponds to the outdated “medical model” of medicine on which it is based—and that the widening gap between the two threatens to quash the growth of the field.

[***FIRST DRAFT: WEDNESDAY, JUNE 11th, 2014. 23:01H EDT***]

Notes

1 – I specifically add the clumsy verbiage “as defined by Disability Studies scholars” to emphasize that medical professionals themselves would be unlikely to identify with this view of their own profession. As such, “the medical model of disability” needs to be understood within the context of its formulation by DS scholars and activists. The “model” is not neutral or objective; it is a specific framing of the field of medicine and of medical professionals by people with disabilities and/or their allies, many of whom aggressively oppose any kind of “medical” intervention.

For further reading and some helpful diagrams illustrating differences between “medical” and “social” models of disability, please consult the following pages:

http://ddsg.org.uk/taxi/medical-model.html

http://ddsg.org.uk/taxi/social-model.html

http://ukdisabilityhistorymonth.com/the-social-model/2011/9/13/understanding-the-social-model-of-disability-the-medical-mod.html

2 - For an expanded discussion of my views on the “impairment/disability binary,” see this thread and this document (especially pages 2-3 and notes on page 20).

3 – The NY Times piece is by a cardiologist who discusses grappling with tensions between paternalism and autonomy, and the Forbes article is by a physician criticizing what she refers to as “dinosaur physicians”—that is, “old guard” M.D.s who still practice rigidly paternalistic medicine.

4 – Many Deaf people do not view themselves as disabled, since Deafness can also be conceptualized as a cultural and linguistic difference rather than a “disability” per se.

5 – “(Im)perceptible disabilities” is a phrase coined by Stephanie Kerschbaum as a preferable alternative to the ocularcentric “(in)visible disabilities.”

6 – “In part” because DS’s enshrinement of “health” should not be underestimated as a motivating factor in the exclusion of chronically ill people, either.

7 – When scholars within DS do write about medicine, they tend to focus on eugenics, end-of-life care, and assisted suicide, thereby perpetuating the stereotype that medicine equals “sickness and death only.” See recent work by Lennard Davis (The End of Normal: Identity in a Biocultural Era, 2013), especially Chapter 7, and Tom Shakespeare (Disability Rights and Wrongs, 2006), especially Part II.


Triggernometry Redux: The “Trigger Warning” as Speech Act


An addendum to my earlier post on “trigger warnings,” inspired by a very late night discussion on Facebook:

**

The “trigger warning” can be viewed as a speech act. Considered as such, the act it performs is indirectly declarative; it (pro)claims for oneself and/or others the identity of “victim.” Because in the United States, in particular, the identity of “victim” is culturally enshrined, the deployment of the “trigger warning” is in essence an assertion of “moral superiority.” (It functions much like “not having privilege,” as described in Gawker’s playful online series “The Privilege Tournament.”)

The (paradoxically privileged) status of “victim” confers upon its owner(s) the (unquestioned and unquestionable, because “sacred”) right to exert control over narratives (including the speech of other people, especially “non-victims”)—a right understood as unimpeachable owing to the (pro)claimed, privileged status of “victim” and the authority this status bestows.

This is what George Will meant when he stated that victimhood is a privileged status, and it is just about the only thing he got right in his op-ed. He didn’t mean (or say) that it is a privilege to be raped; he meant that the status of “victim” comes with certain privileges. His greatest taboo, of course, was in exposing the culture of victimhood as one of power and in pointing out that the position of “victim”—at least in contemporary U.S. society—is one of power.

In other words, Will’s “transgression” consists of naming the power that the label “victim” intends to occlude, and upon whose occlusion the maintenance of that power depends. In exposing both the underlying mechanisms of power at play and their occlusion, Will’s op-ed threatens to subvert the authority of “victimhood.” It is primarily for this reason that he is currently being skewered online, although no one skewering him is openly admitting that this is the reason—for doing so would force his critics to even more clearly detail the power structures underpinning the culture of “victimhood.”

[***DRAFT: WEDNESDAY, JUNE 11th, 2014. 17:44 EDT***]


Triggernometry


So “trigger warnings” are back in the news again.

I’ve been reading along for months while concertedly refraining from making any sort of public comment on the discussion, but now I feel kind of obligated since it keeps raging on and I’ve already participated with a certain degree of vigor on closed forums and in Facebook feeds.

This post will consist of two parts: the first in which I express my personal views on “trigger warnings,” and the second in which I offer a brief cultural analysis of the “trigger warning” in hopes of shifting the collective conversation in a new direction.

PART I.

One of the people who spearheaded the resurrection of “trigger warnings”–specifically their use on college campuses–is a sophomore named Bailey Loverin who attends UC Santa Barbara. Loverin has articulated her arguments in favor of implementing campus-wide policies apropos of “trigger warnings” on such national platforms as the NY Times and USA Today:

From music to movies, content and trigger warnings are everywhere. We accept them as a societal standard. 

With these introductory sentences, the author concedes that the impetus behind her support of “trigger warnings” on syllabi stems, at least in part, from having grown up in a society in which “warning labels” appear before films, on music albums, on food, and so on. Ms. Loverin is so used to the ubiquitous presence of warning labels that extending them even further seems not only “natural,” but positive.

“Warning labels” in the United States are a relatively recent trend that began in 1938 under the Federal Food, Drug, and Cosmetic Act. Although they started with food, they quickly spread to tobacco, alcohol, and then finally to music in the late eighties and early nineties. The movie ratings system, meanwhile, dates to the late 1960s, though it took on its current form in the 1990s.

What Loverin does not acknowledge in her opening paragraphs is that these content warnings began to proliferate because of the uptick in frivolous lawsuits in the U.S. and the desire of companies to engage in what is essentially “defensive advertising”—strategically warning “consumers” beforehand about any and all possible risks associated with their products or services so that said “consumers” cannot sue companies for millions of dollars, claiming the companies “failed to warn” them of any particular risk factor.

A recent frivolous lawsuit provides a classic example of this phenomenon (and makes me wonder if we’ll soon see a new set of “warning labels” on sneakers): a Portland pimp, Sirgiorgio Clardy, sued Nike for 100 million dollars after being convicted and sentenced to 100 years in prison for brutally beating a john who had refused to pay for the services of one of his prostitutes. Clardy’s argument?

[...] Nike, Chairman Phil Knight and other executives failed to warn consumers that the shoes could be used as a weapon to cause serious injury or death.

Clardy’s lawsuit against Nike is pending.

Regarding this aspect of Loverin’s apology for the “trigger warning,” I am inclined to agree with Tressie McMillan Cottom, who writes:

[...] the “student-customer” movement is the soft power arm of the neo-liberal corporatization of higher education. No one should ever be uncomfortable because students do not pay to feel things like confusion or anger. That sounds very rational until we consider how the student-customer model doesn’t silence power so much as it stifles any discourse about how power acts on people.

You can read McMillan Cottom’s full post on the subject here.

What bothers me about the “trigger warning” is this: it implies that it is my responsibility, as a speaker and writer, to preemptively modulate the emotional and psychological responses of anyone who might hear or read my words—rather than the responsibility of those individuals to learn how to modulate and/or regulate their own emotional responses to my words (and to the world in general).

More importantly, though, it seems to me that the mass deployment of the “trigger warning” threatens to perpetuate a cycle of victimization and helplessness: people are allowed to bypass material that might disturb them emotionally or psychologically, and thus potentially avoid ever learning how to modulate their own thoughts, reactions, and emotions when confronted unexpectedly with disturbing stimuli.  In this sense, “trigger warnings” are the helicopter parents of language: in seeking to protect, they inadvertently enable large numbers of people to remain walking wounds of unhealed trauma.

In fact, much of the available literature on trauma and PTSD advocates against the kind of maladaptive coping mechanism to which the “trigger warning” caters. One particularly apt passage of the Handbook of PTSD: Science and Practice (2010) flatly states:

Negative reinforcement of fear through behavioral avoidance is the primary process that is postulated to sustain, and even promote, the maladaptive fear response. Typical behavior avoidance manifested by traumatized individuals includes avoidance of stimuli associated with the traumatized event, not disclosing or discussing the traumatic event with others, social isolation, and dissociation. (41)

Translated into plain English, this quotation says: “Avoiding stimuli associated with a trauma as a result of fear leads to the perpetuation of both the avoidant response and the fear.” Or, even simpler: “Avoiding triggers perpetuates trauma and the ugly feelings associated with it.”

So much for the declarations of Loverin and others that “trigger warnings” “avert trauma.” Not only do they not “avert trauma,” they may actually serve to perpetuate the trauma and associated feelings of panic, in addition to stalling the healing process, which can only be initiated and sustained by confronting the trauma.

Much like well-meaning but overbearing parents who think they are doing right by their children when they refuse to let them play outside or intrusively moderate their children’s fights, “trigger warnings” do more harm than good to the very population they aim to “protect.”

And while Loverin alleges that “["Trigger warnings" are not] an excuse to avoid challenging subjects; instead, they offer students with post-traumatic stress disorder control over the situation so that they can interact with difficult material,” it is difficult for me to see how the function of a “trigger warning” is anything but an invitation to do precisely that—avoid the subject matter, leave the classroom, and engage in other maladaptive coping strategies.

Exploiting the trope of the “mad student” so familiar from recent media reports and capably analyzed by scholar Margaret Price in her monograph Mad at School: Rhetorics of Mental Disability and Academic Life (2011), Loverin then goes on, in her USA Today op-ed, to paint the following grim picture of the “traumatized student”:

If students are suddenly confronted by material that makes them ill, black out or react violently, they are effectively prevented from learning. If their reaction happens in the classroom, they’ve halted the learning environment. No professor is going to teach over the rape victim who stumbles out in hysterics or the veteran who drops under a chair shouting.

Furthermore, seeing these reactions will leave other students shaken and hesitant to engage. With a trigger warning, a student can prepare to deal with the content. (bold emphasis mine)

Here, again, it is possible to see how proponents of the “trigger warning” are advocating for strategies of trauma avoidance—on the part of students with PTSD, of faculty and staff, and of students without PTSD who share classroom space with those who have PTSD. “Trigger warnings,” according to Loverin, will cut down on classroom outbursts and avoid “disturbing” everyone involved. It is not at all difficult to see the specters of Eric Harris and Dylan Klebold or Kip Kinkel or Seung-Hui Cho lurking between the lines of Loverin’s text.

It is as though Loverin is suggesting that one kind of “trigger warning” will help prevent another, more gruesome “trigger warning”—that of the school shooting. While this type of neat and tidy logic may be very appealing to administrators, it is largely fallacious since the reasons for school shootings have very little to do with PTSD and “trigger warnings” and a lot to do with, basically, the availability of guns and our enshrinement of a culture of violence in the United States.

A claim I’ve heard repeated in various blog posts and op-eds by those in favor of the “trigger warning” stands out: namely, that post-traumatic or distressed reactions by students “hinder” or “prevent” learning. (Loverin takes it a step further, citing “halted learning environments” for both the student experiencing PTSD and others present in the classroom. Interestingly, her description flirts with the idea that witnessing another’s trauma is in and of itself a form of trauma—an argument parallel to the one which asserts that, for any victim of a past trauma, witnessing evidence of similar trauma in the present is always already traumatic.) When I read the passage above, though, I see something quite different: I see an opportunity to engage with the classroom (students and events) in real-time and to use that engagement to promote learning. I believe, in short, that pain can be a site of learning both for those who experience it and those who bear witness to it.

I am not in favor, obviously, of inflicting pain for the sake of inflicting it—that would be sadism. What I am suggesting is that it’s OK for classrooms to be messy, human places where messy, human reactions occur, and that I think it’s better for us to engage with them as they transpire than attempt to curtail them before they can take place. I do not buy the assertion that incidents such as those Loverin describes “prevent learning.”

One aspect of Loverin’s piece which I find compelling is her focus on the concept of “control.” She reiterates a couple of times that trauma victims need to feel “control”—indeed, mastery of trauma entails regaining this feeling. Where we disagree is about how that mastery should unfold and over what—or whom—that control should be exerted. My position is that mastery of trauma is best achieved by confronting trauma rather than seeking to avoid it and that learning to modulate one’s own emotions in a diverse array of settings and when faced with a wide range of subject matter is a good way to regain a sense of “control.” Seeking to exert control over course content or classroom discussions (or other people) for the sake of (unhealthily) avoiding one’s trauma is not.

Which brings me to another observation: whenever I have seen demands for “trigger warnings” deployed, they seem to be deployed by whoever wishes to regulate either a conversational topic or the manner in which it is being articulated. That is, I see “trigger warnings” being used strategically to silence some voices. I’m reminded again of Tressie McMillan Cottom’s “student-customer” model, since the question of who is attempting to exert control over the discourse has a lot to do with social class (and probably race as well).

I once read somewhere: “Being rich means being able to choose what one does and does not experience in life.” We could modify this statement to read: “The richer you are, the more control you have over what you do and do not experience in life.” It is reasonable to assume that places like Oberlin College, UC Santa Barbara, and Rutgers—three institutions of higher learning embroiled in debates about “trigger warnings”—are by and large populated by students from comfortably upper-middle-class families (or above). [1]

These students—more so than poor students—see themselves as “consumers,” which makes sense since the more disposable income you and your family have, the more you engage in patterns of consumption and, more importantly, the more you experience consumer choice. To give a quick, concrete example of this phenomenon at work: if you’re poor and going food shopping, you typically go to the cheapest grocery store around and look for the least expensive food item available (like Ramen). Your range of “choice” becomes limited to whatever is cheapest or—on a good day—to several equally cheap items. Conversely, if you’re upper-middle-class or wealthy, you have the ability to exercise choice over which supermarket you will shop at and then, once there, over which products you will purchase and, within any given food category, which brands you will select. Your horizon of choice is noticeably greater than that of someone with a fraction of your income, so you experience “choice” at every level of your shopping process. You grow accustomed to “choice.”

With “trigger warnings,” students are applying “consumer choice” models to education. This is not necessarily problematic in and of itself and, as some have pointed out, may even be beneficial in empowering students to participate actively in shaping their own learning. The quandary arises when one begins to consider who exactly is exerting their “right” to “consumer choice” through the arm of “trigger warnings.”

In the real lives of people not privileged enough to selectively choose what they will and will not be exposed to, “trigger warnings” do not exist. And it seems to me that we are currently more interested in protecting some students from mention of trauma than we are in protecting others from actual trauma. In a climate where, just yesterday, Johns Hopkins University suspended an entire fraternity for, among other crimes, gang rape, we appear more invested in “protecting” students with PTSD from reminders of past trauma than we do in protecting all students from lived experiences of trauma. In the process, we may also be discouraging students who do experience trauma on campus or while enrolled in our institutions from speaking or writing about their experiences, for fear of “triggering” their peers.

We are creating an environment where speaking, naming, or showing trauma is becoming more taboo than actually traumatizing another human being through an act of violence—and this is a problem, particularly for students from less-privileged socio-economic backgrounds who may leave our classrooms and encounter repeated, ongoing violence at home and in their communities. These students often cannot “choose to avoid” or even “prepare themselves beforehand” for repeated encounters with trauma, for it is happening all around them—to them—on a daily basis. We are coming dangerously close to fostering a culture of silence around trauma that threatens to perhaps “protect”—temporarily, for avoidance is not an effective long-term strategy for dealing with trauma—more privileged students while both failing to protect and silencing less privileged ones. Only if you are privileged enough to experience an end to your lived trauma do you have the time—the luxury, the choice—of insisting that literary and cultural objects reminiscent of your original trauma bear “warning labels.” Only if your lived trauma is not relentless does it even occur to you that you might be able to avoid confronting it (despite the fact that all evidence shows that failure to confront trauma is detrimental to recovery).

Unless you are fortunate enough to exert the kind of control over the rest of your life that you would propose to exert over potentially “triggering” material, avoiding that material in the (more or less) safe space of a classroom will in no way prepare you for what you will encounter after you graduate. On the contrary, you will likely be forced to deal with unanticipated “triggers” on a regular basis—at your job, in your neighborhood, when you travel. The question of “trigger warnings” then evolves into one about whether you’d rather learn how to modulate a panic attack in class or in a boardroom, at the university or the next time you’re deployed for military duty. My take on this is that the classroom and university—where stakes are still relatively low and support is available—would be preferable training grounds for learning how to successfully process trauma.

PART II.


I’d like to contemplate the possibility that demands for “trigger warnings” may not be what they seem, at face value, to be. Up to this point, I’ve dissected Bailey Loverin’s op-ed about these “warnings” and formulated some of my personal objections and challenges to the concept of “trigger warnings” as they intersect with issues of disability and class.

From a Disability Studies perspective, it is reasonable to ask not only whether “trigger warnings” do more harm than good (as I did above, in Part I), but also what it is that we do when we maintain, as David Perry does in “Should Shakespeare come with a warning label?,” that:

The classroom is not a therapist’s clinic [...] Moreover, it’s a decision for a patient and a therapist or doctor to decide and advise a university, rather than for faculty or administrators to decide for themselves.

I’m not really sure that we can have it both ways. If “the classroom is not a therapist’s clinic” and the decision about when, how, and where a student should or should not be exposed to subject matter is “for [...] a therapist and doctor to decide and advise a university,” then why are we even talking about implementing blanket policies on “trigger warnings” in university environments? (Perry himself is not arguing in favor of these blanket policies, but instead indicating that our existing systems of ADA accommodations policies can and should adequately address the needs of students with PTSD, and I am generally inclined to agree with him.)

I quote Perry at this juncture because I have read similar sentiments in tweets and Facebook posts by academics over the past several months—minus Perry’s astute qualification that our existing disability policies can and should sufficiently address the concerns of students like Loverin. For those academics who clamor “we are not therapists” but also support blanket “trigger warning” policies: your position appears internally contradictory.

Also from a Disability Studies perspective, it is worth pondering the advantages and/or drawbacks of such blanket policies. Does a failure to implement them effectively “medicalize” PTSD in a way that would be considered undesirable within the larger framework of Disability Studies? In other words, when we reject blanket policies on “trigger warnings” and instead direct students towards individualized solutions (via therapists and doctors, medication, and ADA accommodations), are we in essence “medicalizing” PTSD–and by extension disability in general? What might this question reveal to us about relationships between (mental) illness and disability as perceived by DS scholars? By the public?

What fascinates me about the idea of over-arching “trigger warning” policies is that, whereas ADA accommodations are tailored towards individual students—with all students enrolled in a given school presumed non-disabled until and unless they declare themselves disabled by requesting accommodations [2]—“trigger warning” policies operate via the inverse principle. They preemptively assume all students are in fact traumatized (or vulnerable to the effects of PTSD). Thus, from a purely theoretical point of view, blanket “trigger warning” policies are quite progressive since they assume disability—not able-bodied/mindedness—as the default state. In so doing, the policies fall more in line with “social model” approaches to disability; they identify the problem as residing in society instead of in the bodies/minds of disabled individuals, with these blanket policies acting as the ideological equivalent of an adaptive or assistive technology. If all this is true, then what we’re witnessing is a potentially revolutionary paradigm shift in the way we view mental/psychological disability.

The two types of trauma victims most often cited as needing the “protection” of blanket “trigger warning” policies are soldiers and rape victims. I question why we would be engaged in a discussion now, as a society, about whether or not we wish to move forward with the paradigm shift I’ve just described. Temporarily putting aside my arguments about the “student-consumer,” etc. — why now? I wonder if the desire for “trigger warnings” communicates something about us on a macro level, as a culture. For if, as I have insisted, we as a culture tend to avoid facing trauma—we suppress it, silence it—and if “trigger warnings” are about exerting control (however maladaptive the strategy may be), then perhaps we as a culture are struggling to modulate and control our own large-scale trauma: our nation’s legacy of violence.

When I re-read Ms. Loverin’s stereotypes of the “hysterical” rape victim and the “shouting” soldier along with that of the student-witnesses who become “shaken and hesitant to engage,” my mind pans reflexively through a Rolodex of events: 9/11; the wars in Iraq and Afghanistan; the financial crisis of 2008; years of gun violence in schools; the Marathon bombings; mass incarceration of U.S. citizens; natural disasters; rape on college campuses.

I remember that students of Ms. Loverin’s age have, for all intents and purposes, never known a world without war, natural disaster, gun violence, terrorism. And I wonder if the ongoing debate surrounding “trigger warnings” might actually be about something far greater, albeit unspoken—an expression of our students’ desire to try and mitigate collective cultural traumas. An attempt, if you will, to exert some control.

[***FIRST DRAFT: TUESDAY, MAY 20th, 2014. 23:45H EDT***]

**

Notes

1 – A complete breakdown of data (including reported family income) for UC – Santa Barbara students is accessible here, in .PDF format. If anyone can find data on Oberlin, please do contact me; I did some fishing but was unable to find anything like “average family income” for students enrolled. Here (also in .PDF format) is some information on demographics at Rutgers, with a breakdown by campus within the Rutgers system as well. Apparently (thanks, David!) one indirect measure of student/parent income is the percentage of students at a given institution who receive Pell Grants. Information for any institution about the percentage of its students who receive Pell Grants can be accessed here. In 2012, 31% of Rutgers students received Pell Grants. According to the figures posted in the U.S. News report, this would place Rutgers somewhere in the middle socioeconomically; far more students at Rutgers receive Pell Grants than at Oberlin, yet more students at UC Santa Barbara (whose overall student body is far from impoverished) receive Pell Grants than at Rutgers.

2 – That is, the very framework of “accommodations” presumes a “default” of able-bodiedness.

"The Falling Man," by Richard Drew.

“The Falling Man,” by Richard Drew.

 


“Can You Get Me Into College?” – Midnight in Southie


Photo by Valéria M. Souza

It was midnight and we sat on the jungle gym of a South Boston playground designated as being “for ages 8-12” and “requiring upper body strength and coordination.”

We both had some degree of “upper body strength and coordination,” but neither of us was 8-12.

The young man, who had abandoned his skateboard nearby to come talk to me, interrupted my vaguely clumsy acrobatics on the monkey bars to ask: “Yo, what are you doing? Like, why are you on here?”

I dropped to the ground.

“I saw you skateboarding,” I said.

“Yeah—so?”

The retort was a bit defensive, challenging. Did he think I was a cop or something? “No, I mean—I don’t care. I just wanted to ask you: do you skate here at night? Do people bother you? Like: tell you to leave? Or is this place chill? That’s all….”

Instantly he relaxed. His shoulders dropped as he shrugged, open-palmed. “Oh, no—it’s cool. Nobody ever bothers us. They’ll tell us to leave during the day, but at night nobody cares.”

“So, like, you think I could come here a few times a week and climb and nobody would bother me?”

“Yeah, for sure. No one’s going to give you a hard time.”

“Cool—thanks.”

“But why are you climbing?”

“I’m training. Practicing.”

“For what?”

I smiled. Silence.

“C’mon—you’re not gonna tell me?”

“I can’t,” I replied.

“Are you gonna climb a mountain?”

“Maybe. Maybe I am.”

“You’re not gonna climb a mountain, I can tell. Are you like sponsored by Red Bull or something?”

“Haha—no. I am most definitely not sponsored by Red Bull or anyone else.”

We faced each other on one of the metal platforms in the playground.

“Do you mind if I smoke a bowl?” he asked.

“I’d rather you not.”

“OK—I won’t then. How old are you?”

“How old do you think I am?”

“Like 20-something.”

“I’m 34. What about you?”

“22. Listen—OK—can I ask you a question then?”

“Sure.”

“How do you feel about, like, dating younger people? Like would you date someone my age?”

“I would not,” I answered calmly. “To me that’s waaay too young. I’m a college professor. My students are 18-22. That would be like dating a student. That’s really weird, and I would never do it.”

Suddenly he stood up, his body a lightning bolt striking the air between us. Gone was the casual, off-hand questioning. Gone was the interest in smoking a bowl. “Wait. You’re a college professor?”

“Yeah—here: give me your phone.” I Googled myself, then loaded the faculty page from the university where I worked. “Here, that’s me. Read.”

He read. He looked at my faculty picture, then at me. Again at my faculty picture, then back at me.

“I need to talk to you,” he insisted, handing the phone back to his friend with terse instructions to bookmark that page, yo—the one she’s on. “How do you like….get into college?”

I squinted, unsure of what he meant. A specific college? College in general? Which aspect of “getting in”? This was a far cry from some of the elite universities at which I’d taught—places where students were already richer, savvier, and better-traveled at 18 than I’d be at 80. Those kids attended Milton Academy and Phillips Exeter and had schedules of meticulously planned extracurricular activities and spoke fluent Mandarin. Or fluent French. Those kids had SAT prep and could afford to do unpaid internships because their parents were rich and they didn’t need to work for money. Those kids—so smart and cosmopolitan and sure of themselves—were so different from me. From us.

“What do you mean?”

“I mean like….the whole process. Look. No one in my family has ever gone to college. Nobody knows what to do. The counselors at my high school didn’t help us. I try to research and I know which schools I want to get into, but I don’t know the process.”

“Wow. OK—well, you’re right. It is a process. There are a lot of steps involved. Hmmm. OK. We’ve got to fill out applications and financial aid stuff and…”

He interrupted, rattling off a list of four or five elite out-of-state schools he dreamed of attending and asking if we would have to complete a FAFSA. I blinked. This kid was obviously intelligent and had done his homework. He had a short list of schools. He could list the characteristics of each one that he found especially attractive. He knew the FAFSA existed. He was doing the best he could with what he had—and what he had was very little.


Photo by Valéria M. Souza

“OK,” I probed, “what’s your GPA?”

“Like 2-point-something.”

I sighed. “OK—that’s not high enough for the schools you’ve listed. So we’re going to have to do something a little bit strategic. Let me know what you think: first we get you into a lower-tier public school or community college here in Mass. I know you want to go out of state, but your GPA is not high enough yet. So you do a year at one of those lower-tier schools and you get straight As, and then we rig it so you can transfer out to one of your dream schools.”

“Straight As?”

“Straight As. You can be poor and brilliant or rich and mediocre, but you can’t be poor and mediocre. It just doesn’t work that way.”

He nodded in agreement. “I feel you. Straight As.”

“You’re going to have to work hard.”

There was a long pause. He fiddled with his marijuana and looked down. I felt my heart twisting. Not out of pity. Out of deep sadness because of all the people who had failed this kid. This bright, driven, earnest kid.

“Will you help me get into college?” he asked.

The request was so simple. A hand reaching across a divide, grasping. Hoping for someone to grab it and not let go. I remembered my own trajectory, long and far. I felt another twist in my chest for this boy who was just like I had been, once upon a time. I remembered filling out the FAFSA by myself at the kitchen counter in my Mom’s condo. I remembered trying to write a persuasive letter to the Financial Aid Office that included the phrase “onerous mortgage payments.” I remembered taking the SAT twice and with zero preparation beforehand. I remembered applying to only one school—NYU—because I wanted to go there and because nobody had introduced me to the concept of “the safety school.”

I placed my hands on two horizontal, parallel bars and pushed, lifting myself upwards ever so slightly, my feet maybe 3 inches off the ground. I still had a lot of work to do; my upper body strength was total shit. Need to build muscle, I thought, and lowered my body back down to the ground: “Yes. I will help you get into college.”

With those words, he was like a child in front of whom I’d just set a birthday cake. His eyes burned, two lit candles.

“You’ve done this before, haven’t you?”

“It’s my job.”

“You’ve gotten other people into college before.”

“There’s a name for this,” I said. “It’s called ‘being an advisor.'”

“You’re my advisor now?”

“I am your advisor.”

It was spontaneous. He threw his arms around me. He hugged me tight, pressing his fingertips into my vertebrae. I hugged back.

He didn’t want to let go. We had to exchange emails and cell numbers. He had to make sure he had the right information. He could not lose track of me.

“I promise you, I’m not going anywhere.”

Still, he had to make sure.

“I’ve wanted to go to college since I was in high school and I tried—I tried—but nobody could ever explain it to me. My family, they’re good people but they just don’t know anything about it. They never went to college. I tried asking people for help and nobody could ever help me. You’re the first person who has ever known how to help me get into college. I can’t lose you.”

“I know what that’s like. It’s hard. But I promise you, I’m not going to disappear. So let’s do this. Let’s get you into college.”

Grinning.

“Tell you what: you get me into college and I’ll train you.” The kid flexed, showing me biceps, triceps, rippling shoulder muscles. Granted, he was 22 and a boy—both advantages in terms of general fitness and strength—but he clearly trained. “I’ll train you.”

I extended my hand in the darkness to seal the deal. We shook.

“Deal.”

“Deal. You gotta problem with push-ups?”

“Nope.”

“Pull-ups?”

“Nope.”

“You gonna complain?”

“Nope. I am willing to work hard. You’ll see. I’ll work hard to build muscle and you work hard to get into college. And if we both put in the work, it might just go our way.”

“That’s right,” he said. “That’s right.”


Photo by Valéria M. Souza

[***FIRST DRAFT: THURSDAY, MAY 15th, 2014. 19:09H EDT***]

 


On Sexism in Urbex.

In a mission strategically planned to coincide with Chinese New Year, Russian rooftoppers Vadim Makhorov (aka Vadim Mahora) and Vitaly Raskalov (aka Vitaly Raskalovym) infiltrated and scaled the world’s second-tallest building, the 632-meter Shanghai Tower. Their brilliant ascent was documented in photos and video taken by both climbers.

But while Makhorov and Raskalov may have set a new bar in terms of both their daring and the sheer height they managed to achieve in February 2014, they are not alone in practicing their craft. “Rooftopping” or “skywalking,” which has grown in popularity over the past several years, originated in Moscow and has since spread to other major metropolitan areas including Toronto, London, and Sydney.

Eleven months ago, urbexer and parkour enthusiast James Kingston gained international recognition for a video he uploaded to YouTube entitled “POV Crane Climb in Southampton, UK with James Kingston – GoPro.”

Meanwhile, Bradley Garrett–who is currently on trial in the U.K. for his “place-hacking” exploits–has been profiled in a glossy GQ spread which also printed some photos taken during his ascents of cranes, bridges, and buildings worldwide.

As an urbexer myself, I respect and admire the work of these and many other, lesser-known explorers. I love urbex and consider all of these guys to be my colleagues–even the ones I haven’t (yet) met or worked directly with….

….which is why I am dismayed to report that urbex as a subculture is arguably as male-dominated and blatantly sexist as the video game industry.

I have always been aware of this somewhere, in the back of my head. It’s not like this is some new, earth-shattering revelation for me. It explains why, in online forums, I intentionally use “masculine”-sounding handles and don’t ever correct interlocutors who assume (based on my Internet persona) that I am a man. I intentionally seek to pass as male online because I know it benefits me; I’m taken seriously when I’m assumed to be a man. Today, though, I saw something that forced me to confront how sexist the urbex world can be. While flipping through rooftopping videos in order to study the performances of other urbexers and get a feel for their different climbing techniques, I encountered this:

Dated 2012, it depicts the then 20-year-old Marina Bezrukova walking precariously along the baby-blue ledge of an imposing Moscow high-rise. Immediately intrigued by the footage because of the rarity of female urbexers in general and rooftoppers in particular, I decided to do a bit of Googling and find out more about Marina. To say that the results of my Google search were depressing would be putting it mildly.

Newsweek author Michael Carroll’s account of Marina’s rooftop expedition reads:

In August 2012, Marina Bezrukova, a 20-year-old maths student, became the world’s best-known female skywalker after strapping on a head-mounted video camera and balancing 125-metres from a Moscow ledge. Her footage caught dramatic views of the city below, as well as her ample curves (bold emphasis mine).

The “best-known” female rooftopper is apparently known not just for the feat she performed–unlike, say, Makhorov, Raskalov, Kingston, and Garrett–but for her “ample curves.” Sadly, the Newsweek write-up is one of the classiest. Yes, I said “classiest.”

For comparison, here is the International Business Times‘ take on Bezrukova’s accomplishment:

The YouTube video of a Russian “skywalker” with a strategically placed video camera has erupted on the Internet. The video, which depicts a young woman balancing on a beam high in the air, gained so much notoriety because the camera is focused just as much on her cleavage as on the scary balancing act.

Marina Bezrukova, who now might be better known as the “breastwalker,” is a member of the Moscow-based rope jumping team MADS, according to the description under the video on YouTube. She’s shown walking several dozens of meters on a narrow beam outside a Moscow highrise apartment complex. The video is just the latest in the “skywalkers” series MADS has released, but certainly the one that has gained the most attention.

Beruzkova reportedly said that she joined the rope-jumping team because she wanted to overcome her fear of heights and learn how to stay in control.

The video was first posted on LiveLeak.com on Monday and has already been viewed almost 200,000 times and garnered more than 1,000 comments. Not surprisingly, the reaction around the Internet has been overwhelmingly positive.

“Nice aerial view, literally,” commented one YouTube user. “I never saw the ground,” added someone else (bold emphasis mine).

The title of that article from the International Business Times, by the way, is “Russian ‘Skywalker’ Films Cleavage As She Balances On Roof, Known As ‘Breastwalker.'”

A YouTube search of Marina’s full name reveals that she is consistently labeled, in multiple languages, as “the blond” or “the blond Russian.” One person who re-posted her video on another site bluntly re-named it “The View is Nice on More than One Level.”

Contrast descriptions of Marina with those of her male counterparts. In the Newsweek article, Makhorov, Raskalov, and other young men are identified first and foremost by the feats they perform:

The 24-year-old Russian [Makhorov] from Novosibirsk became the first person to reach the 632-meter top of one of the world’s tallest towers last February. His ascent was strictly unauthorised. He outwitted security guards to make the two-hour climb, then took out his smartphone and posted his video online from above the clouds (bold emphasis mine).

And:

Without safety equipment, they scale the summits of skyscrapers to perform high-adrenaline balancing acts. Using the stairs, they climb the last 50 metres or so on the outside of the structure (bold emphasis mine).

And:

Not surprisingly the online posts emphasise the risks they face in performing their stunts. One daredevil living in Moscow, Max Polazov, is now a professional photographer who made his name by capturing stomach-churning selfies of himself performing handstands hundreds of metres above the Moscow streets (bold emphasis mine).

When and if their attire is mentioned, it is to underscore their bravery and/or skill, as when Carroll of Newsweek writes: “Wearing ordinary clothing and trainers, they take nothing more than a smartphone or a head-mounted video camera. They often work in pairs, taking turns to pose in extraordinary positions at dizzying heights” (bold emphasis mine). The “ordinary” quality of their clothing belies their hidden abilities and sets them apart from professionally trained “stuntmen,” making their accomplishments all the more impressive.

In another instance, the Newsweek author uses the young men’s clothing choices as a way of illustrating their cunning and intelligence:

They see themselves as urban free-climbers who use clever ploys to outwit the authorities in their race to the top. One tactic is to pose as entrepreneurs and book a business meeting with staff on the upper levels of the targeted building. Once the meeting ends, they take the lift to the top, peel off their smart suits to reveal ordinary clothes underneath and head for the nearest window (bold emphasis mine).

Again, the function of “ordinary” in this sartorial context is to act as a counterpoint which emphasizes exactly what is not “ordinary”–but rather very much extraordinary–about the skywalkers. Nowhere in the article do we learn about any of the Russian men’s hair color, eye color, physical build, or potential sex appeal. [1]

While we do learn about Bradley Garrett’s appearance in the GQ article chronicling his exploits, Garrett is not overtly sexualized, nor is his credibility (as either an explorer or an academic) compromised by the manner in which he is portrayed:

Despite his scholarly bona fides—his doctoral work in geography at Royal Holloway, University of London had garnered wide acclaim—Garrett scarcely looks the part of an academic, neither tweedy nor fusty. Thirty-two years old, with a trimmed goatee and a mop of straight brown hair hanging over black plastic frames, he grew up in Southern California and ran a skate shop before deciding to pursue a doctorate. His face, which is frequently lit up in mischievous, eyebrow-raised delight, still bears the pocks of over a dozen piercings he dispensed with in the interests of maintaining some veneer of academic respectability (bold emphasis mine).

In fact, the effect is the opposite. Garrett not “look[ing] the part of an academic” is a plus, since (unlike most academics?) he isn’t “tweedy” or “fusty.” The details about his “trimmed goatee” and “mop” of hair as well as “black plastic frames” and “pocks of over a dozen piercings” combine to create an image of a very cool professor: this professor is punk rock. He’s a hipster. This is the professor every millennial everywhere would want to have. Ironic, cool. A brand unto himself. He’s got an interesting backstory, but also knows how to maintain “academic respectability.” He can simultaneously be all of these things–punk and professor, explorer and intellectual–because he is a well-educated white man. He can be all of these things and still command the respect of his academic colleagues and his fellow urbexers. He is the modern-day version of Luís Vaz de Camões’ intrepid Portuguese explorers forever glorified in The Lusiads. He is, in short, a perfect Renaissance balance of arms and letters.

The photos in GQ fit the hipster-prof image of Garrett so deftly constructed by Matthew Power, the piece’s author. There is Garrett, perched dynamically atop a crane. There he is again in an array of action shots: climbing, standing, looking down over cities. His male buddies stand atop things as well, their arms extended joyfully into the air. They act. They occupy space. They are men.

Though GQ is a bit more even-handed than Newsweek or the International Business Times–not going out of its way (seemingly, at first glance) to sexualize female explorers–I can’t help but notice that the only woman profiled, an urbexer going by the pseudonym of “Helen,” is singled out as:

[...] a strawberry-blonde 23-year-old photographer from northeast England, who goes by the nickname Urban Fox. Helen loved climbing bridges more than anything: Her website showed a nighttime self-portrait, taken high atop the Manhattan Bridge, posed au naturel. 

Because it’s evidently not enough to inform the readership of GQ once that Helen has posed naked, the article’s primary photograph of her also bears the caption: “Helen, who once posed in the buff atop the Manhattan Bridge, explores Paris’s catacombs.”

Got that, everyone? SHE WAS NAKED.

It’s also worthwhile to examine the characterization of Helen quoted above in its original context, which is as part of a segment designed to introduce the reader to the cast of characters with whom Garrett runs:

I was crammed into the backseat with several visiting explorers: A computer programmer from France named Marc who goes by the nom de Urbex Explo; Luca, a 28-year-old intensive-care doctor from Italy with a penchant for subterranean exploration; and Helen*, a strawberry-blonde 23-year-old photographer from northeast England, who goes by the nickname Urban Fox. Helen loved climbing bridges more than anything: Her website showed a nighttime self-portrait, taken high atop the Manhattan Bridge, posed au naturel. Given that our first adventure was subterranean, its only obvious omission was the group’s underground guru, Greg—nicknamed Otter after going headfirst into a sewer. Otter had an almost Aspergerian level of knowledge covering the hundreds of miles of sewer tunnels, storm drainages, and underground rivers that snake beneath London. The rest of the crew joke that he’s a “drainspotter.” He had been arrested in the sweep that nabbed Garrett and had a prior court order banning him from exploring in London.

Again, notice the manner in which the male explorers are depicted as compared to the lone female explorer. They are defined by their profession, country of origin, interests, knowledge, and skills. None except Helen is described physically (i.e. – hair color) or explicitly sexualized in any way. The author makes a point of including Helen’s nude self-portrait–a titillating detail he could just as easily have omitted.

The cherry on top of the sundae is that the aforementioned nude photo is buried pretty deeply in “Helen’s” website. Good luck finding it. (I did after around 30 minutes of looking.) The GQ author clearly had to search for and highlight it–deliberately favoring it over the vast range of non-nude alternatives on “Helen’s” site.

To more clearly explicate the differences between portrayals of male and female urbexers, behold one of the most famous stills from James Kingston’s “POV crane” video:

DCIM100GOPRO

Photo credit: James Kingston

Not to be crude, but what if as an experiment we framed this image in much the same manner as Marina Bezrukova’s video footage and stills have been framed over the past two years? Bezrukova made one video, but Kingston’s website actually contains many, many stills from this same perspective.

I’ll adapt the International Business Times passage I cited earlier, since it’s a convenient template:

The YouTube video of a British “skywalker” with a strategically placed video camera has erupted on the Internet. The video, which depicts a young man hanging by one arm from a crane high in the air, gained so much notoriety because the camera is focused just as much on his package as on the scary balancing act.

James Kingston, who now might be better known as the “dickwalker,” is an avid practitioner of parkour, according to the description under the video on YouTube. He’s shown hanging from a crane 50 meters in the air. The video is just the latest in the “skywalkers” series on YouTube but certainly the one that has gained the most attention.

Kingston reportedly began “rooftopping” because he wanted to overcome his fear of heights.

The video was first posted on YouTube in June 2013 and has already been viewed more than 2 million times and garnered more than 4,000 comments. Not surprisingly, the reaction around the Internet has been overwhelmingly positive.

“Nice aerial view, literally,” commented one YouTube user. “I never saw the ground,” added someone else.

Here’s my re-make of the Newsweek quotation as well: “His footage caught dramatic views of the city below, as well as his ample package.”

Finally, the GQ take on the situation might read something like: “James [was] a sandy brunette 22-year-old photographer from Southampton. He loved climbing cranes more than anything: His website showed numerous self-portraits, taken high atop cranes and bridges, focused squarely on his crotch area.”

Perhaps to sexualize Kingston’s clothing, we could incorporate a comment like: “His skinny jeans perfectly accentuated every aspect of his package and also clung tightly to his slender and shapely legs.”

I’ve made my point.

It would be bad enough if sexism in urbex were confined to the sport/hobby/practice being male-dominated to begin with and to the sexualized descriptions of and commentary on the few women urbexers, but unfortunately that’s not the case.

Many male urbexers seem to view women as props or accessories rather than as fellow explorers. Rolling Stone notes: “For safety, roofers climb in pairs and often with a girl in tow. ‘The best team to go up is two lads and a girl,’ says Vasilisa. ‘Two can help each other and the girl can soften the situation. We say, we have an anniversary. We think something up. It really helps.'”

The girl is “in tow” and her role is to “soften the situation” if the boys get caught. The sample excuse provided is also telling: “We have an anniversary”–of course, a romantic interlude. Naturally the (male) police or (male) security guard(s) will go easier on the boys (and the girl “in tow”) if they believe that the ascent is an attempt at a romantic gesture–something the boy is doing to impress “the girl.” A date, really. “The girl’s” role is passive almost to the extent of being non-participatory. She is, literally, nothing more than a “get-out-of-jail-free” card for the boys, who are the only true, active participants in rooftopping and the heroes of each and every climb.

Interestingly, there seems to be a tradition among male urbexers of taking nude or semi-nude photos of women in abandoned buildings and other “risky” locations. Googling “nude urbex” will bring up a starter kit of images for the curious.

When women and girls are not being asked to act as “models,” they are often photographed in remarkably docile postures and scenarios compared with their male counterparts. I leave you with these two sets of images of “rooftoppers,” both culled from Kirill Vselensky’s Instagram account:

Women “rooftoppers”:

http://instagram.com/p/hVwGTBoBf1/

http://instagram.com/p/j0LVXEoBf9/

http://instagram.com/p/mp7HKVoBav/

http://instagram.com/p/m52h43IBbh/

http://instagram.com/p/nQ1gIUoBbN/

Men “rooftoppers”:

http://instagram.com/p/mZbYMtIBeZ/

http://instagram.com/p/mQxNoVoBQI/

http://instagram.com/p/mSdk-6IBa5/

http://instagram.com/p/mA7ALFIBVL/

http://instagram.com/p/j_f2LMIBVd/

***FIRST DRAFT – MONDAY, MAY 13th, 2014. 00:16H EDT***

Notes

1 – I’m not counting the one brief instance in which Max Polazov is quoted in the first person as saying: “[...] I have been athletic my entire life, so I know my capabilities.” Polazov himself selects the adjective “athletic”; it’s not being applied to him by the author of the article or anyone else. Further, Polazov clearly prides himself on being “athletic,” in that he follows the description with an assertion of his intellectual and physical prowess: “[...] I know my capabilities.”


How I Was Recruited to (and Rejected from) the CIA.

During the two years that I was so sick with MS as to be functionally bedridden the majority of the time, I had a lot of opportunities to think. About my life, for instance. About potentially viable alternate lives. If I could do anything, what would I do? The answer was clear: I would do the opposite of what I had been doing for the past four years. I would leave academia. I would leave my house. I’d avoid pursuing a PhD. I’d experience anything whatsoever rather than the complete and utter stagnation my existence had become.

So, fuck it, I applied to the CIA. Most people, I suspect, apply directly for “Clandestine Service” (aka to be a spy), but knowing I had MS and would never under any circumstances pass the medical exam, I decided to low-ball it and apply as a Foreign Language Instructor. I was fluent in Portuguese–one of their “target languages”–and had plenty of experience teaching. I held a Master’s degree and had just entered my PhD program. So why not? I filled out an online application, similar to the one here.

I really didn’t expect anything to come of this. I knew that my chances were slim to none, since the CIA processes around 10,000 applications per month. But what did I have to lose? Nothing, really. I wasn’t going anywhere, wasn’t doing anything. I was in a bed. So, fuck it.

Some time later (weeks? months? — memory fails me) I received two consecutive voice mails from the same person who identified himself as “Jason, a Federal Recruiter” and indicated he’d “like to talk to [me].” I was given a phone number to call and I did so in public, from Harvard Square. “Jason” (I think it’s safe to assume that none of these names are ever real), asked me some interview questions, including one about why I thought I’d be a good fit for the Agency.

Apparently I passed muster, because “Jason” enthusiastically mentioned something about “taking [me] up in a helicopter and throwing [me] out of it,” which I correctly deduced was his way of both conveying his support for me as a candidate and informing me that he wanted to advance my application to the next stage.

I also recall, vaguely, that “Jason” had some very innocuous-looking Yahoo! email address (I’d post screen shots, but I deleted the Hotmail account I was using at the time and to which his emails were sent). We exchanged a couple of messages, and one of his messages expressed the Agency’s vigorous passion for The Economist. In fact, I distinctly remember reading the line: “The Economist. We love The Economist.”

“Jason” had wanted me to get a head start on my Agency-related reading, you see. He also indicated that I’d be receiving some materials in the mail and that I should follow the enclosed instructions. Soon enough, this nondescript-looking brown envelope arrived in my mailbox:

Large_Envelope_App_Materials

Inside were a cover letter, an assigned CIA reading list (no, really), and a bunch of forms for me to fill out:

Post_Phone_Interview_Letter

[Um. I advise against calling that number and asking for "Rhoda." The number likely isn't even active any more, and if it is you're just going to unnecessarily excite the Feds. Also, "Rhoda" is definitely not this person's real name.]

CIA_READING_LIST

I love that there is a reading list. The only thing better than there being a reading list is the fact that it looks so plausibly unofficial–as though hastily slapped together by an awkward 12-year-old boy with too much time on his hands and unauthorized access to the photocopier at his Mom’s office. There is absolutely no way anyone will ever believe this is the official CIA reading list given to prospective recruits. Then again, that’s probably the point. [1]

The “personal résumé” consisted of around 15 pages of information (when counting all of the pages containing my typed answers to essay questions, as well as the “writing skills” assessment). Some of the more compelling short answer questions included:

  • Why do you wish to work for this organization?
  • What is your principal asset?
  • What is your principal shortcoming?
  • How would you describe yourself? (One page or more)

I won’t grace you with the responses from my 28-year-old, Solumedrol-brained self. For “Writing Skills,” the following prompt appeared: So that we may assess your writing skills, write an essay (about 500 words) on a subject of major current international interest.

After casting about in various periodicals for promising topics, I settled on the (then) recent presidential election and political unrest in Zimbabwe and proceeded to dutifully compose my assigned 500 words detailing the situation as it stood in 2008 and interweaving some brief historical, cultural, economic, and political analyses of conditions I felt had contributed to Mugabe being able to maintain his long-term stranglehold over the country. (Mugabe is still in power as of the writing of this blog post.) Actually, the essay was not bad–and I say this now as a college professor re-reading my own work as a young grad student. Given the word-count constraints and the fact that I started with zero knowledge of Zimbabwe, really: not bad. I’m guessing the CIA wasn’t a big fan of my Endnotes and meticulously organized Works Cited list in MLA format, but then again I’ll never know. They don’t exactly give feedback.

Another fun part of the application was the online IQ/personality test:

Personality_IQ_Test

At the time they were using this website/company for testing. What I remember about this examination was that it was extremely long and quite boring. The personality portions of the test are easy enough to rig—mainly they try to trip you up by asking the same questions repeatedly using different wording (and obviously they will notice any inconsistencies in your answers), and some of the questions were clearly designed to weed out sociopaths or people with other types of personality disorders. Example: “I like torturing animals.” –> Strongly Agree / Agree / Neutral / Disagree / Strongly Disagree.
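(For the curious, here is a minimal sketch of how that sort of consistency screening might work. The paired items, scale values, and cutoff below are entirely hypothetical; they are invented for illustration and are not the actual test’s items or scoring rules.)

# Hypothetical sketch of a consistency check on a Likert-style personality test.
# Everything here (items, pairings, threshold) is invented for illustration.

SCALE = {"Strongly Agree": 2, "Agree": 1, "Neutral": 0,
         "Disagree": -1, "Strongly Disagree": -2}

# Pairs of items that ask the same thing in different wording.
# "reverse" means agreeing with one should imply disagreeing with the other.
ITEM_PAIRS = [
    ("I enjoy working alone.", "I prefer working with others.", "reverse"),
    ("I rarely lose my temper.", "I stay calm under pressure.", "same"),
]

def inconsistency_score(responses, pairs=ITEM_PAIRS):
    """Sum of mismatches across paired items; higher means less consistent."""
    total = 0
    for item_a, item_b, relation in pairs:
        a, b = SCALE[responses[item_a]], SCALE[responses[item_b]]
        if relation == "reverse":
            b = -b  # flip the reversed item so the two answers should match
        total += abs(a - b)
    return total

answers = {
    "I enjoy working alone.": "Strongly Agree",
    "I prefer working with others.": "Strongly Agree",  # contradicts the item above
    "I rarely lose my temper.": "Agree",
    "I stay calm under pressure.": "Agree",
}

if inconsistency_score(answers) > 3:  # arbitrary cutoff, purely illustrative
    print("Flag: answers to reworded duplicates don't line up.")

The real instrument presumably does something far more sophisticated, but the basic idea (reworded duplicates plus a mismatch score) is why “rigging” it mostly comes down to answering consistently.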

I’m pretty sure I bombed the IQ test, largely because I suck at math and especially at mentally manipulating 3-D objects in space, and the test seemed to contain a lot of that. Oops….

Once all the paperwork was done, I was to FedEx it back to a facility in Reston, Virginia using a prepaid mailing form also provided in the manila envelope. I did indeed complete the materials and send them back:

Materials_Mailback

I knew when I prepared this extensive application that I would never get a job offer from the CIA. I knew that an MS diagnosis was an automatic medical disqualifier. I knew this from the moment I applied online to be a Foreign Language Instructor (completely not expecting to get recruited as a spy!). I knew that even if I were deemed desirable in every other way (unlikely just because of how competitive these jobs are), I would ultimately be disqualified because I had a chronic, incurable neurological illness. Yet knowing this, I pressed on with the application.

Why?

Why the hell not? I wanted to see how far I could get. My goal was to make it to the on-site interview and polygraph examination in D.C., which would have been the next stage had I managed to advance beyond the contents of that fateful manila packet.

Alas: a couple of months later, I received my official-unofficial CIA rejection letter in a slim white envelope bearing the same return address as the application packet I’d received earlier:

ENVELOPE

CIA_REJECTION_LETTER

So close, yet so far away. Their loss: I would have made a most excellent spy, even with MS.

***1ST DRAFT: SATURDAY, MAY 10th, 2014. 17:23H EDT***

Notes

1 – Also, note that all my materials are from like 6 years ago. I assume the Agency has changed things up since then. In fact, some recent reports suggest they’ve dispensed with the return mailing addresses shown here and are now using “Yoder and Young” (a fake cover firm) for correspondence purposes. Either way, I always found it amusing that the zip code 20505–a zip code used exclusively by the CIA–is displayed prominently on all correspondence from my application process. Cover blown by the USPS much?


Toward a Pedagogy of Urbex

Urban exploration, or “urbexing” (sometimes abbreviated as “UE”), is the act of exploring structures in the built environment, particularly abandoned buildings, although it may also include sewers, storm drains, caves, and other man-made or underground spaces. For those who engage in it, it borders on an obsession. For those who don’t, it’s probably baffling: why on earth would people actively seek to enter decaying, and—in many cases—dangerous structures? Why go where it’s off-limits to go? What’s the appeal of trucking around in asbestos, dust, mold, animal feces, and God-knows-what-else?

In this post, I explore urbex as not merely a culture, hobby, or sport (though it is arguably all of these things and more), but as a radically different kind of pedagogy—one that offers an alternative to traditional forms of education upheld by (and increasingly embroiled in) late neoliberal capitalist economies.

I’ll approach what I’m calling a “pedagogy of urbex” by highlighting six areas of deep learning that I view as inextricable from the activity of urbexing, as well as how these processes are manifested within the learning and teaching framework of urban exploring:

1. CURATORSHIP:

A graffitied piano in the now demolished Bethlehem Lutheran Church in St. Louis, MO. // Photo by Valéria M. Souza

Urbexing is more than just entering structures, exploring them, and taking photographs or video. It is, properly speaking, a form of curatorship.

Most urbexers, in addition to constantly seeking out new venues for exploration, have a regular circuit of locations to which they return repeatedly over weeks, months, or even years. More than just “visiting” (although this is certainly one aspect of the appeal and I’ve heard some people report that they do develop a strong attachment to certain spaces), this practice of return allows explorers to periodically check on the buildings they most care about—to make sure all is in order or, if not, to at least document changes that have taken place. A prime example is St. Mary’s Infirmary, where urbexers have created photographic records of the removal of entire staircases by “scrappers.” [1] These explorers act as the living witnesses to the lives and deaths of buildings.

Many of us abide by the slogan: Take nothing but pictures, leave nothing but footprints, and this policy dictates how we care for the sites we visit. A solid urbexer does not cause any damage to a building and may even make minor repairs while present (returning to its shelf an item that has been knocked onto the floor, for instance).

What’s remarkable about the photo above is that the graffiti artist not only enhanced the aesthetic value of the piano by adding the carefully placed, spray-painted eyes, but also replaced the hymnbook where s/he had found it after completing the artwork. This kind of attention to detail and careful consideration of buildings and their contents perfectly encapsulates the spirit of curatorship shared by ethical explorers.

*

2. TEAMWORK, LEADERSHIP, PROBLEM-SOLVING & CRITICAL THINKING:

An explorer, encouraged by teammates both above and below, descends a ladder during a team expedition somewhere in the Rust Belt. // Photo by Valéria M. Souza

While some people choose to explore alone, others prefer to work in pairs or small groups. For those of us who do like company, urbexing offers unique opportunities to experiment in different roles—both as part of a team and as a leader.

Urbexing often requires teamwork. When half a staircase is missing, what do you do? Wait—where does it lead, anyway? How badly do you all want to get down there? Perhaps most importantly: what materials do you currently have at your disposal?

With a team, you not only have the manpower to lift, push, move, pull, and assemble heavy (or complex) pieces, you also have the collective brainpower to suss out a wide range of different solutions to any given problem. One person may suggest using pieces of wood as makeshift steps, while another may realize that the stack of ladders lying nearby, discarded, would be a perfect substitute for a staircase. You never know what might work. In this sense, urbexing is also “labwork”—it’s about puzzles, problem-solving, and creativity. Because every member of an urbex team is likely to have different strengths and weaknesses, in addition to bringing different skill sets to the table, everyone enjoys opportunities to lead (teach) and to follow (learn).

*

3. PLAY, RISK & DISCOVERY: 

Jeff (a fellow urbexer): “It’s like a playground for grown-ups!”
Me: “It is a playground for grown-ups!!”

I call this “IRL Chutes & Ladders.” // Photo by Valéria M. Souza

For me—having grown up in the 1980s—urbexing evokes fond memories of playing outside. Of the best parts of being a kid. Recently, The Atlantic ran a terrific article by Hanna Rosin entitled “The Overprotected Kid.” It was about a unique kind of playground called “The Land,” located in North Wales, where children are allowed unstructured, unsupervised play that involves elements of danger and risk-taking (including, yes, playing with fire).

The kind of idea that sends helicopter parents off into a tizzy, “The Land” is what used to just be called “childhood.” I grew up in New England with an Irish Catholic mother—one who worked me hard from a young age but also felt children should spend the majority of their free time outside playing, not inside annoying their parents. My mother did not interfere in sibling conflicts or supervise play dates. She did not schedule “activities” for my brother and me. She did not micromanage, thankfully.

My fondest memories of childhood are of being kicked out of the house after breakfast each summer day and being told sternly to only come home for lunch, and then again before dark. As kids, my brother and I—along with the rest of the children in our neighborhood—were expected to find ways to entertain ourselves, resolve conflict, and refrain from bothering adults. As a result, we had intimate knowledge of the woods that stretched for acres behind our homes. We built treehouses. We climbed trees, invented games. Sometimes we got bullied or fought or broke bones. We got up again. We learned. We played.

In fact, the excerpt of Rosin’s article that most resonates with me deals with Roger Hart, a researcher who sought to map a “geography of children” in rural England in the 1970s:

The children spent immense amounts of time on their own, creating imaginary landscapes their parents sometimes knew nothing about. The parents played no role in their coming together—”it is through cycling around that the older boys chance to fall into games with each other,” Hart observed. The forts they built were not praised and cooed over by their parents, because their parents almost never saw them.

[...]

The kids took special pride, Hart noted, in “knowing how to get places,” and in finding shortcuts that adults wouldn’t normally use.

This reads like the childhood I remember, and urbex is the only activity I’ve ever found that even comes close to taking me back there. To me and countless other explorers, urbex is delightful precisely because it allows opportunities for unstructured and “unsafe” play. There is no urbex environment that is “controlled” (in the sense of “specifically designed for purpose X”) or mundane (in the sense of being predictable). In a world that is increasingly over-gentrified, with buildings and neighborhoods rendered box-like and homogeneous and carefully surveilled by private security firms and ever-proliferating affixed cameras of all shapes and sizes, urbex architecture remains the one gorgeous, awe-inspiring wildcard of the urban landscape.

There is no “defensive architecture” in abandoned buildings. There are no anti-urination devices; no anti-pigeon devices; no anti-skating fixtures; no anti-sitting (aka “anti-homeless”) benches. It seems slightly bizarre to write this, but abandoned buildings—the buildings left to rot, the ones nobody wants anymore—are paradoxically more suited to unfettered, undisturbed human activity than virtually any other structure in the 21st-century built environment.

Even when visiting the same building multiple times, explorers will rarely find it in exactly the same condition in which they last observed it. Perhaps another urbexer has stopped by in the meantime and built a new makeshift staircase or slide. Perhaps someone has revealed another secret entrance. Perhaps there has been a lot of rain and suddenly the basement is flooded. You never know until you’re right there in it…and that’s half the fun.

Urbexing is the closest thing to being a character in a video game—and yet it’s physically challenging and active in a way that playing video games is not. Can you squeeze through this hole in the ground? How high of a ladder can you ascend? Is this makeshift staircase safe? How good is your balance? The decisions you make in urbex, unlike those you make while playing Super Meat Boy or Fez, have real-life consequences. There is always a chance you could get injured or die while urbexing, but at least you’ll have died while living.

*

4. SKILL-BUILDING:

…not scary at all, amirite? // Photo by Valéria M. Souza

Related to items 2 and 3 on this list, skill-building is a core feature of urbex education. Urbexing, while obviously risky, provides opportunities for civic participation and related forms of mastery that are lacking (or have been greatly diminished) within all spheres of neoliberal capitalist society. Within the culture of urbex—if one can say there is a single “culture”—participants still enjoy the ability to receive instructional scaffolding as well as active mentoring from more experienced explorers. Unlike in capitalist work environments, where individual workers may be ignored, overlooked, or abused by superiors, in urbex people’s unique assets are recognized and valued. Because urbexing is based around a culture of friendship and learning instead of one of hierarchy and competition for limited resources, more experienced participants are often willing to “invest” in less experienced ones (colloquially called “n00bs”)—provided the less experienced participants show sufficient promise and interest in the hobby/sport/culture. Urbex is an environment where people can interact unmediated by capitalist logics of spending, earning, and consumption. I’ll return to this point shortly….

*

5. AESTHETICS & THE SUBLIME:

Me to J. (another fellow explorer): “The only time I ever believe 100% that there is a God is when I urbex. That’s the only time I feel God.”

The now-demolished Bethlehem Lutheran Church in St. Louis, MO // Photo by Valéria M. Souza

There is such beauty here. There is beauty in finding a particularly elegant method of entry into a building. In the posture of a friend as he descends a ladder. In the play of light and shadow inside a hallway. In the position of a Bible, just so, on the floor of an abandoned church. The silence: the absolute, perfect silence that does not exist in the outside world. That world above, below, beside, all around us. That world from which we are temporarily hidden. I have seen beautiful things—things no one else (or hardly anyone else) will ever see.

That spark of fear when faced with a sudden, precipitous drop or an especially shaky floor: ah, the sublime of exploration. I’ve found such beauty here. The first time I stood inside Bethlehem Lutheran Church, I felt my jaw drop. I experience that feeling in virtually every abandoned place I enter. And ultimately, I would rather be in a decaying building than in an art museum.

The thing about urbex is that it’s not an aesthetics for or of the privileged—which is to say, there is no concept of Kantian “aesthetic disinterestedness” or “distance” in urbex. “Aesthetic disinterestedness” or “distance,” as I read it, implies passivity: it is the passivity of the art museum and the stagnant spectator who visits it. You are not in the painting; you merely gaze (disinterestedly, of course) at the painting. From several feet away.

Urbexian [2] aesthetics is both an aesthetics of immersion and one of profound interest: you are in the building. You occupy it. You are not separate from it. You are part of the building. From within, you observe. But you don’t merely observe. For this aesthetics is decidedly not ocularcentric. In urbex you smell, taste, hear, feel, see. You affect the building and it, in turn, affects you. It is all-encompassing and very much immediate. You are not sheltered from this beauty that could easily kill you: instead, you are intertwined with it.

I think Kant got it wrong—or at minimum, he really missed out.

*

6. HISTORY & POLITICS:

Parallel curbs (the older one is closer to the grass) running down the street near the location of the demolished Pruitt-Igoe, a housing project first occupied in 1954 in St. Louis, MO // Photo by Valéria M. Souza

Make no mistake, urbexing is a political act. The trial of Bradley Garrett, which began earlier this week in London, makes clear the extent to which urban exploration (Garrett prefers the term “place-hacking,” which I dislike) is a politically-charged activity that challenges accepted notions of public vs. private space. Most of the areas that urbexers infiltrate (whether in the U.S., the U.K., or elsewhere) are technically “private property,” with the irony being that such properties are designated “private” and then systematically neglected, ignored, and allowed to rot, sometimes without ever being officially demolished. [3]

In his astute analysis of the charges leveled against Garrett, published in the Evening Standard, Will Self notes:

[...] the aggressiveness of the authorities’ response reflects a deeper level of anxiety about the city and the way we all live in it. For the most part we behave ourselves — we walk this way and not that, we stand on the right and go up the stairs on the left. Our movements about London are closely circumscribed, and while we may imagine ourselves to be free, the truth is that the vast majority of our journeys are undertaken for commercial imperatives: we travel either to work or to spend.

All about us during our daily existence we are presented with buildings we cannot enter, fences we cannot climb and thoroughfares it would be foolhardy to cross. We are disbarred from some places because we don’t have the money — and from others because we don’t have the power. The city promises us everything, but it will deliver only a bit.

The place-hackers draw our attention to how physically and commercially circumscribed our urban existence really is. Some of the defendants were involved in a daring ascent of the Shard while it was still under construction; others have trespassed in the great Modernist ruin of Battersea Power Station. In all cases, whether going up, down, or around, the place-hackers demonstrate a willingness to truly experience the city as it is, rather than be satisfied with the London that only comes with a price tag.

Besides troubling and blurring the line between public and private, urbexing represents a fundamental refusal to engage in consumer behavior: it is an activity that does not involve shopping, spending, buying, selling—in short, participating in the neoliberal capitalist economy. The interesting thing for me is that, because within late neoliberal capitalism citizenship is reduced to consumption (that is, one is only constituted as a citizen by and through the act of consumption), urbexing is not merely a refusal to participate in this system but also a working blueprint for alternatives to it. Urbex, as one of the few remaining social and cultural activities not explicitly inserted within and sanctioned by the paradigm of neoliberal capitalism, introduces the radical idea of something other than neoliberal capitalism. In this sense, one could say that urban exploring is a mode of critically (re)thinking the status quo.

It is no coincidence, then, that urbexing could be called “the sport of the poor.” At least within the U.S. it is most popular in the Rust Belt and parts of the Deep South. It is both a symptom of and a response to poverty, for urbexing necessarily relies upon the architecture of poverty (an abundance of buildings and other structures that are neither inhabited, nor re-sold, nor kept up, nor demolished — all largely due to a lack of funding at individual, local, and state levels). People who engage in urbexing probably tend to come from lower-class backgrounds: can you picture the Kardashians doing this? What about the over-protected children of comfortably middle-class helicopter parents? No? Exactly.

Urbexing offers poor people (or lower-class people, or lower middle-class people) the opportunity to see “new places”—it opens up horizons of possibility to those who (like myself) would otherwise be unable to experience either novel places or the novelty of placeness that comes with entering into previously unknown territory. When you don’t have the money to travel to other states or countries, urbexing permits access to recreation and to wonder. It is travel without money—but it is not tourism because it is diametrically opposed to the patterns of consumption associated with the leisure class (and, of course, with the upper classes).

At the same time, urbexing affords a chance to “time travel,” in that every expedition serves as a potential history lesson. To visit the ruins of the Pruitt-Igoe housing project is far more enriching and meaningful when one visits with prior knowledge of its history (and, by extension, the histories of racism, segregation, and poverty in St. Louis and in the United States more broadly speaking). Many expert urbexers are veritable encyclopedias of historical, geographical, and cultural knowledge, and are often eager to pass this knowledge on to n00bs. While walking the city, concealed, one learns its history. And, most importantly of all, one participates in its democracy through the political activity of urbex.

Notes

1 – “Scrappers” are utter sleazebags (in my opinion) because they loot and pillage buildings for the sole purpose of profit.

2 – Juuuuust coined this. #nailedit

3 – In other words, this is accumulation and possession of property merely for the sake of accumulating and possessing it (i.e. – capitalism).

[***FIRST DRAFT: SATURDAY, MAY 3RD, 2014 @ 22:24H CDT***]

Photo by Valéria M. Souza
