“When I Walk,” by Jason DaSilva.

If you watch nothing else over the next month, watch this. It’s streaming online for free until 8/22/14.


Consider the Fig Tree: My Summer Reading the Bible

Image credit gci.org

This summer I’m reading the Bible.

Although I am using a number of translations for reference—including the King James Version (in English and in Portuguese), the New Oxford Annotated Bible, etc.—I’m relying primarily on the English Standard Version (ESV), a revamped edition of the 1971 Revised Standard Version that enjoys massive popularity among evangelical Christians. [1]

I’ve chosen the ESV because the translation is interesting and because it seems to be so very beloved by a wide cross-section of readers, and I continue to pore over it despite the fact that I sometimes come across editorial additions with which I profoundly disagree, such as the Student Study Bible‘s appendix commentary on homosexuality, a form of alleged “sexual immorality” which—you guessed it—the edition’s editorial board roundly condemns. [2]

My approach to the text thus far has been non-linear. I read the Psalms and Proverbs, then skipped into Job, Mark, Matthew, and now Luke. After Luke, I’ll continue on to whichever sections look appealing, and I’ll do this repeatedly until I’ve plowed through, pondered over, puzzled about, highlighted, dog-eared, and copied notes into the entire book.

What I’ve found fascinating so far, though, are people’s reactions to my project: the simple, unassuming task of reading one very large, very old book. Devout Christians, many of whom are so well-versed in the Bible that they can quote and cross-reference it from memory—a truly impressive feat—tend to disapprove of my natural inclination to question the meaning(s) of passages and my refusal to read the text literally. If I ask what a particular passage “means,” I’m provided “the meaning” but with the caveat that this meaning is not open to interpretation, probing, or discussion. Critical reading is seen as blasphemous or disrespectful to the text, even to Christianity itself.

For me, coming from an academic background, the practice of “close reading”—the intense, critical, and deeply scrutinizing study of a text—is the highest compliment a reader can pay to any given work. I’m never satisfied with “the” (singular) meaning because the Bible, like all truly great works, is a richly layered, intricately complex masterpiece that lends itself to innumerable interpretations. My sustained reading of the Bible, in conjunction with my insistence on reading it critically, is in fact how I demonstrate my profound respect for the text.

Non-believers react just as strongly to my project as devout Christians do. I have been asked, by academic non-Christians and non-academic non-Christians alike, why I would want to read the Bible. I have heard otherwise well-educated people proudly proclaim that they have never read (and never will read) the Bible. I’ve invited friends I know to be avid readers to join me in studying the text—in hopes of discussing it with them—only to receive the astonishingly defensive response: “If I do read it, I will not accept it as doctrine.” Then there are the people who give me prolonged side-eye while muttering: “…just don’t drink the Kool-Aid!” or similar.

What non-believers fail to recognize is that their replies, no less than those of devout Christians, implicitly frame the Bible as a “magical” text. What I mean by this is that members of both groups, regardless of their respective ideological positions, respond in a manner that unmasks their conviction that the text possesses supernatural powers. For Christians, the Bible is a magical text. It is the direct word of God and meant to be followed as doctrine. Among non-believers, though, the issue appears more complicated—they simultaneously disavow the Bible, minimizing its importance, and take great pains to articulate their disbelief as well as their hope that I will not come to believe. If they did not view the text as somehow “magical,” it would not be necessary to stress their unbelief so vehemently, nor to “warn” me to “be careful” when reading, lest I “drink the Kool-Aid.”

Other people’s reactions aside, my decision to read the Bible was not at all fraught with conflict. I am trained as a scholar of literature and have always had a penchant for extraordinarily laborious texts: my M.A. thesis was on The Lusiads, a sixteenth-century Portuguese epic poem similar in difficulty to the works of Homer, Virgil, Shakespeare, and Milton, and my doctoral dissertation examined representations and aesthetics of disability in Grande Sertão: Veredas, a twentieth-century Brazilian novel by “the Lusophone James Joyce,” João Guimarães Rosa—arguably the most challenging book ever written in Portuguese. [3] Having already tackled (in a language other than my native English) two of the trickiest works ever published, I saw the Bible as a perfectly logical next step.

And make no mistake, the Bible is an utterly captivating text. Not only is it the main influence on pretty much all of Western literature (including my cherished Camões and Guimarães Rosa), it is also an impressive repository of traditions and genres, featuring everything from poetry and law to history and epistles. Like any true masterpiece, it never goes out of date; I find it amazing that parts of a book written thousands of years ago are still relevant today—no less so than sections of The Lusiads (published over 400 years ago) and Grande Sertão: Veredas (published nearly 60 years ago).

I doubt that I’ll ever quite fit in with either devout Christians or “non-believers” (except perhaps with those who study “the Bible as literature”). On the one hand, I respect the Bible too much to read it literally or to just passively accept any one person’s (or pastor’s, or congregation’s, or denomination’s) given interpretation. On the other, I refuse—like many non-Christians—to simply avoid and ignore the text because it is controversial or because it has been and continues to be (ab)used by many people for the purpose of promoting their own ideological and political agendas.

If the Bible is indeed a divine work, then the being responsible for its creation is probably far more intelligent than any mere human. What this means is that there could not possibly be “only one correct” interpretation of the Bible. I am deeply suspicious of anyone who claims this to be the case, for the mark of a brilliant text—any brilliant text—is its sheer depth and richness. Brilliant, classic texts do not produce “one correct reading.” They generate hundreds or thousands of years of overlapping, often conflicting readings. They are responsible for passionate disagreement and furious bouts of intellectual slap-boxing. Jesus himself—(part?) human—spoke in parables and constantly encouraged his disciples to question their preconceived notions, so why would any sacred and omnipotent being limit itself to simplistic explanations for humankind and for the universe? The attempt to impose “one correct reading” strikes me as an indication of human limitation—not some “fault” inherent in the Bible proper.

If the Bible is not a divine work, then it is most likely the result of dozens of people collaborating over time to weave together a collection of stories, trivia, legal standards, religious traditions, geography, poetry, and so forth. If this is the case, the material contained within probably originated in oral form and was then passed down for generations until finally being transcribed. This would make it, essentially, some of the oldest literature of humankind. For that reason alone it is worth reading.

[***FIRST DRAFT: Tuesday, July 8, 2014. 20:51H EDT***]

Notes

1 – Search for #ESV and #BibleStudy on Twitter and Instagram to get an idea of what I mean.

2 – The passage in the text actually leans more towards a “love the ‘sinner,’ hate the ‘sin’” approach, but this ideology is inherently problematic and generally ends up boiling down to “hate the ‘sinner’ while pretending not to hate the ‘sinner’ and conveniently substituting some other word for ‘hate,’” as devout Christian Micah J. Murray convincingly maintains in this moving blog post about his relationship with his brother.

3 – Part of what I argue in my dissertation is that Grande Sertão: Veredas is not actually “in Portuguese,” but rather in an artificial language similar to Homeric Greek; it employs Brazilian Portuguese as a base but departs sufficiently from this base—incorporating elements of dozens of other languages and freely mixing archaic and colloquial registers—to warrant being classified as something other than Portuguese. It is telling that even native speakers of Brazilian Portuguese have been known to bemoan the novel as “written in Hungarian” due to its “foreign” syntactical and semantic patterns (Hansen 119, trans. mine).

Works Cited

Hansen, João Adolfo. O o: a ficção da literatura em Grande Sertão: Veredas. São Paulo: Hedra, 2000. Print.


Dancing with Doctors: The Humanity of Physicians

I. Null set


The first time I saw fear on a doctor’s face was also the first time I met with the physician who would become my permanent primary care physician (PCP). I’d come into the office—a neighborhood health center affiliated with a public hospital near Boston, Massachusetts—complaining of widespread numbness and some muscle weakness lasting approximately three weeks.

During those three weeks I had been turned away from a local private hospital twice, with firm instructions from the attending E.R. physician “not to come back unless [I lost] control of [my] bladder or bowels.”

I had in fact now “lost control” of my bowels—although I didn’t see it that way because my problem was not incontinence. I have not shit my pants, I reasoned, and therefore I will not return to the hospital.

This primary care doctor was the only one available at the health center the day I went in, and because I was approaching a nuclear level of anger I told the receptionist: “I really do not fucking care. I am seeing a doctor today. I’m not leaving until I see a doctor.”

I remember thinking: It doesn’t matter anyway. Whoever this fucking doctor is, I am going to fucking hate her. I don’t care.

In the examination room I prepared myself psychologically for a giant, screaming fight. I will probably hate this doctor, but she is going to fucking listen to me, I decided. My body was wound up, muscular, tight as a cat’s. Everything bristled. One way or another, this doctor was getting punched in the face.

In she walked, introduced herself (Hi—I’m Dr. B.), shook my hand—and did something astonishing.

She listened.

She listened.

Not because I “made” her listen—as I’d imagined I’d have to—but because she is apparently the kind of doctor who listens.

I told her that I could not feel my left leg or my right foot. I told her that I had been manually disimpacting myself for over a week.

In case you don’t know what “manually disimpacting” means: I had been using a latex-gloved hand to manually remove shit from my own ass for over a week. Yeah—that happened, in real life. The problem, I explained, was that I could feel when I had to go to the bathroom, but the muscle—whatever muscle it was—that would usually push the shit out was not working. Like trying to clench a fist and nothing happening. Like when you sleep funny on your arm and cut off the circulation. I was smart enough to understand that you cannot just leave shit in your bowels for days on end (having an R.N. for a mother means that you passively absorb some basic medical knowledge as you grow up), so I’d been strapping on gloves and, in my best show of clinical detachment, methodically extracting the shit with my right hand. Every day.

“I didn’t go back to the E.R. because the Attending was pretty stern and I don’t think, technically speaking, that this counts as ‘losing control of [my] bowels’ since I am not, like, shitting my pants…”

Dr. B. furrowed her brow in what I read as a silent disapproval of the anonymous E.R. colleague—the most exquisitely fleeting micro-expression of Wow, what the fuck?—before resuming the Neutral Doctor Face™.

“OK,” she said, “I’m going to need to do a rectal exam.”

She explained how this would work and asked that I lie on my side in a fetal position on the exam table. I did.

“OK, I’d like you to squeeze like you’re trying to hold in a bowel movement.”

I squeezed, knowing that I was squeezing but unable to actually feel anything, since that region of my body was completely numb.

“OK, you can start squeezing now.”

“Oh. Um. I am squeezing. I mean I have been squeezing for, like, 5 minutes. Since the first time you asked.”

This concluded the exam. I sat up and got dressed while she stepped out of the room to clean up.

When she came back, her face was the color of a starched bed sheet. I studied it carefully, searching. Professional still, but with a glimmer of something else in her eyes. Fear? Fear.

“We need to get you to a neurologist,” she said. “Now.”

The referral sheet read: ∅ rectal tone.

***

II. Puncture


The neurologist was named Dr. K.—an Irishman who went on to become an epileptologist at MGH. He asked my permission to include an intern in the exam, and I gave it.

The intern and I were exactly the same age: 27. She was going to become a psychiatrist and, if memory serves, was in the last week or so of her internship.

Much of this exam remains a blur, but a couple of moments in particular are unforgettable.

When the neurologist tested my reflexes with a hammer, something strange happened. My left leg kicked up so high that it startled me, extending in front of my body at a 90-degree angle.

“Whoa,” I said, caught off guard.

The intern and neurologist exchanged glances. Neutral Doctor Faces™, but their eyes communicated something I couldn’t quite parse. I studied them.

“Is that….” the intern started.

The neurologist’s response was understated but clear. He raised his right hand—palm flat—at her while dipping his chin in the briefest of nods: Yes—that is what we both know it is, but don’t say anything out loud right now.

This was a conversation made up of silences, pauses, beats. My eyes met theirs, quiet. We all knew something that no one was verbalizing.

I gave permission for the intern to try her hand at a lumbar puncture. On me.

Again I found myself on an exam table in a fetal position. Knees tight to chest. The intern and the neurologist sat side by side. I don’t remember who prepared the needle or inserted it. I don’t remember any pain or fear. I remember repeating in my head: knees to chest, knees to chest, knees to chest, knees to chest….stay still, stay still, knees to chest, knees to chest, knees to chest….stay still….

The intern was struggling to get the needle positioned correctly, and in the process it was brushing lightly against a bundle of nearby nerves known as the cauda equina, or “horse’s tail”—so named for its equine appearance.

I felt my body sizzle, every limb hot oil. Stay still-stay still-stay still-stay still….

I screamed. Not some girly scream. Not a whimper. A deep-bellied, powerful scream that expanded, projecting outward for rooms and rooms.

I felt electricity—some kind of pain I’d never felt before. It wasn’t stabbing or deep, throbbing or ripping, hot or sharp. It felt electric, frayed, shocked. I hallucinated wires, motherboards, joysticks. The pain was somehow everywhere and nowhere; all-consuming and non-specific. Again, I screamed.

The neurologist was calm and mentioned something to the intern about asking me where I felt the pain.

“Can you tell me,” she asked, her voice trembling, “where the pain is?”

I tried to assess where the pain “was.” There was no “was.” There were wires, cords, power sockets. Finally I shouted: “IT’S EVERYWHERE!” because it was.

I sensed that the intern was shaken.

The neurologist took over and retrieved the spinal fluid. I would be admitted to the hospital, given an MRI, and diagnosed with MS. For two days after the lumbar puncture I had a headache so bad it felt as though my skull had been split down the middle with a croquet mallet, and I had to use a bedpan to piss, since sitting up made the headache worse and the headache made standing impossible.

But in that interstitial space between the lumbar puncture and my admittance to the hospital, there was an hour of perfect stillness. Alone in the exam room, I lay flat on the bed, covered in blankets. “Lie flat on your back and you won’t feel the headache,” a nurse advised.

Lying flat on my back, I heard the movement of wood across tile. It was the intern. Pushing a chair.

“Would it be OK if I sat with you?” she asked.

“Of course,” I replied.

The intern sat, her expression somber.

“Hey,” I said.

“Yeah?”

“Hey—you did a good job with the lumbar puncture. I know it was your first time and I could tell you were scared.”

“I’m sorry I couldn’t get the needle positioned. I know you were in pain.”

“It’s OK. I am glad you got to try. I actually think that was harder on you than it was on me.”

“Why’s that?”

“Because I felt physical pain, sure, but you had to perform the procedure knowing that you were causing me pain. That’s more difficult.”

We talked some more about things other than lumbar punctures, and she stayed with me until I was admitted.

We were both 27 years old.

***

III. Dancing


The neurological exam is like a dance. Elegant in its simplicity, artful in its expressiveness. 

I met Dr. M. when she was still a resident at a hospital in Brookline. I made an appointment at the clinic where she worked because the neurologists there specialized in MS, and I was told that if anyone were qualified to vet a particular treatment, it was them. At the time I was interested in exploring a pioneering treatment at Johns Hopkins, and I arrived in Brookline carrying stacks of printed articles, information, and extensive research stuffed into my North Face backpack.

During the appointment, I posed a question to which Dr. M. did not know the answer and was struck by the way she handled it: “It may seem a bit odd to hear this from a doctor, but I actually don’t know. Would you mind if I brought in one of my supervisors? I think she’ll be able to answer your question.”

I thought: This is definitely who I want to be my neurologist from now on.

Dr. M.—no longer a resident and now wrangling med students of her own—is still my neurologist.

What makes the neurological exam unique is that the primary “instruments” involved are human. They are bodies—that of the physician, and that of the patient.

The neurologist will ask the patient to shrug their shoulders and then say something like: “Don’t let me push down.” The doctor then attempts to push the patient’s shoulders down, while the patient actively resists. This type of assessment is used for varying muscle groups and nerves. In testing the eleventh (XI) cranial nerve, the physician places an open palm along the patient’s jawline and asks the patient to push against it; as always, the neurologist provides resistance, making it difficult for the patient to turn their head and in the process testing the patient’s nerve function and muscle strength.

In examining motor function, the doctor repeats similar procedures—attempting to push and pull various muscle groups in the arms and legs, always using the force and resistance of his or her own body as a counterweight. [1]

The exam is a kind of conversation: you push, I push back. You pull, I pull back.

“Palms down. Open your fingers. OK—don’t let me close them. Strong arms—OK, don’t let me pull them apart.”

There is, of course, another conversation that takes place, interwoven with the corporeal dialogue of the exam.

Years ago during a Romberg test I jokingly said something about Dr. M. “letting [me] fall.” It wasn’t so much that I thought she would let me fall (I didn’t) as that I don’t trust people in general. I never said that outright though. I just joshed that she was going to let me fall. And Dr. M., spotting me, kidded back: “Oh, you don’t trust anyone, do you? No, you don’t.”

She was right—but I hadn’t told her that. She had intuited it based on our interactions.

Over time, slowly, I did come to trust her.

The few true “tools” that are used in the neurological exam are both simple and quirky. My favorite is the tuning fork, which is used to test sensation, especially on the hands and feet. The neurologist will strike the fork, causing it to vibrate, and then place the base of the instrument against the skin of the patient’s big toe (or foot/hands/fingers).

“Tell me when you can’t feel it anymore.”

“OK, now.”

Here, too, this is primarily an interaction between two human bodies—for the only way the doctor knows if the fork has stopped vibrating or not is by placing the base of it against his or her own skin (typically also the top of the foot or hand) in order to determine the patient’s range of sensation.

“OK, now. I can’t feel it anymore.”

Pause, test.

“Is it still going?”

“A little bit, but barely.”

“Am I secondary progressive yet? OK—I can still feel that.”

“No, you’re definitely not secondary progressive yet. That I know for sure. Tell me when it stops.”

“Good—that’s a relief. OK: it just stopped.”

This is how we dance, in layers of conversation. You push, I push back. You talk, I listen; I talk, you listen. Back and forth it goes: you read me, I read you.

We are beautifully, messily, perfectly human.


[***VERY ROUGH FIRST DRAFT: SATURDAY, JUNE 21st, 2014. 00:45H EDT***]

***

Notes

1 – If you’re interested in viewing some examples of neurological exams, a longer one can be seen here; a “quick” version of an exam here; and a series of descriptions and photographs here (scroll to bottom for subsequent parts of the exam in the same format). The precise tests performed and objects used (or not) will vary from physician to physician. This is partially a question of “style”: just as no two professors deliver a lecture in the same way, no two doctors perform a neurological exam identically, for there are a variety of different methods and tests that can be performed in order to obtain the same information. It also depends on which areas a neurologist is focusing on during any given appointment. For instance, a neurologist may only test olfactory function if the patient has complained of a diminished ability to smell, or may only do a Romberg test if the patient has a history of issues with proprioception.


Complicating Disability Studies’ Relationship to Medicine

One of Disability Studies’ major hang-ups is its default position with respect to the field of medicine and—by extension—to medical practitioners. The adversarial stance of DS toward medicine (and doctors) stems largely from the former’s repudiation of the medical model of disability, according to which—as defined by Disability Studies scholars—individual disabled people are identified as “problems” to be “fixed” or “cured.” [1]

The graphic below, borrowed from the website of the Democracy Disability and Society Group, nicely illustrates the medical model of disability as theorized by DS scholars and activists:

[Graphic: the medical model of disability]

Image credit Democracy Disability and Society Group (ddsg.org).

Before I dive into my discussion of DS’s positions vis-à-vis “the medical model,” I’d like to clarify that in my own work I make no distinction between “impairment” and “disability,” preferring instead to use “disability” to designate the complex matrix of physical/material and socio-cultural phenomena that together produce conditions of disablement for some people. [2]

The main issue that I have with Disability Studies’ framing of “the medical model” in its current incarnation is that it presumes the following:

  • Medicine and physicians are always paternalistic.
  • Recipients of “medical care” are always “passive” and “disempowered.”
  • There is no gray area between the extremes of “cure” and “do nothing” when it comes to medicine.
  • All “medical” care is bad.

It is worth noting that the definition of the “medical model”—a term coined by psychiatrist R.D. Laing to describe the working model for training physicians, and from which the related “medical model of disability” derives—was articulated in the 1970s. It therefore bears little resemblance to the working models employed by physicians in the 21st century, especially newer generations of doctors who have moved away from paternalistic attitudes and tend to view them as outdated and ineffective. [3]

The Democracy Disability and Society Group graphic includes both “impairments” (aka “disabilities”) and “chronic illness,” but I’m puzzled as to why they occupy separate categories considering that chronic illnesses are in fact disabilities. A disability (again, the graphic uses “impairment” to denote what I call “disability”) is quite simply a mode of functioning that differs from that of the majority of people. For instance: if the majority of people have 2 legs, then having only 1 leg is a “disability” because it involves a physical form (and consequently a mode of ambulation) that differs from that of the majority of the population. If most people do not perceive sights and sounds as overstimulating but someone with Autism does, then Autism is a disability because it involves sensory/cognitive processing modes that differ from those of the majority. It logically follows that if most people have immune systems characterized by a common baseline level of inflammation, people with immune systems characterized by higher-than-average inflammation levels (manifesting in a variety of conditions with names like MS, Rheumatoid Arthritis, Crohn’s, etc.) are configured immunologically in a way that differs from the majority of the population and consequently must operate differently from their immunologically “standard” counterparts. In other words: yes, chronic illness (defined as “ongoing immunological inflammation that differs from that found in the majority of the population”) is a disability.

A couple of factors contribute to the “classical” separation within DS between “chronic illness” and “disability.” As shown in the graphic, disability is traditionally viewed as a “physical, mental, [or] sensory” difference, but overwhelmingly “mere” physical differences are prized, with the “ideal” disabled person being an “otherwise healthy” individual with a motor impairment (e.g., a missing limb, spinal cord injury, or war trauma) necessitating either a wheelchair or prosthesis. Within the hierarchy of disability—yes, there is a hierarchy—Deaf and blind people are also prized, since they are “otherwise healthy.” [4] A quick Google image search of the keyword “disability,” while admittedly not scientifically rigorous, provides a terrific example of the hierarchy of disability at play.

My proposal is that this emphasis on “health” as the standard by which people are included or excluded as “disabled” is as outdated as the paternalistic style of medical practice. By emphasizing the image of disability as “mere” physical variation in “otherwise healthy” individuals, Disability Studies is very problematically helping to enshrine the ideal of “health” as well as colluding in the over-arching cultural rhetoric of “health as morality,” wherein immunological variation is code for “immorality” and even “inferiority.” By clinging to mainstream ideals of “health,” Disability Studies works to achieve greater equality for some disabled people by actively oppressing others. For a field allegedly committed to social justice and equality, upholding this kind of hierarchy of oppression is unacceptable.

Because chronic illnesses are often imperceptible [5], they tend to be overlooked by the general public (including the DS community), and this lack of perception seems to be the second key determinant—besides the prevailing rhetoric of “health”—in their exclusion from disability and Disability Studies. Everyone knows when a paraplegic person enters the room: he’s using a wheelchair. The Deaf person, in signing, not only communicates but also performs his or her Deafness. The blind person with a cane or dark glasses is identifiable as blind. Being identifiable, even by laypeople, as disabled is important to the validation of “disability identity” precisely because of DS’s internalization of cultural ideals of “health.” Disability Studies’ idealization of “health” and its emphasis on perceptible forms of disability are inextricably intertwined.

In contrast with “classically” acknowledged forms of disability like Deafness, blindness, using a wheelchair or prosthesis, etc., chronic illnesses are often not perceptible to the general public. The crucial point here is that chronic illnesses are frequently only perceived (and perceptible) by *medical* professionals—and even then indirectly, via analysis of complex physical exams, blood work, and so forth. They are thus prone to being reflexively (if incorrectly) “medicalized” by default and rejected by DS scholars and activists as “something other than disability.”

It is both poignant and ironic that, while people with perceptible disabilities are more likely to suffer discrimination and exclusion by the non-disabled public by virtue of their disabilities being perceptible, people with imperceptible disabilities (such as chronic illnesses) are routinely excluded from Disability Studies as “other-than-disabled” or “non-disabled” for (in part) the opposite reason. [6]

Disability Studies’ rejection of “the medical model,” combined with immunologically disabled people’s configuration or placement within that model, contributes to conditions that foster the exclusion of chronically ill people from disability and from DS. DS “needs” to reject chronically ill people because it “needs” to reject “the medical model,” and chronically ill people are stubbornly enmeshed within that model. Chronically ill people are treated by the field as “the problem” in need of “cure” or “fixing”—and this “cure” or “fix” is accomplished through segregation, which takes the form of exclusion from the category of “disability.” Oh what a tangled web we weave when nearly an entire field uses the very same working model it claims to loathe as a virtual blueprint for casting off certain members of its own group! [7]

Instead of rejecting chronic illness as “not disability” simply because it doesn’t fit into the established paradigm of “the medical model of disability” as formulated by Disability Studies scholars and activists, what if we flipped the lens? What if we asked what recognizing chronic illness as a disability could potentially do for our existing understanding of “the medical model of disability”?

One of the first shifts that would occur would pertain to our views on medicine, medical care, and physician-patient relationships. The experiences of people with chronic illnesses (aka “immunological disabilities”) in the realm of medicine often bear little resemblance to the invariably negative and fatalistic views of medicine propagated by leading DS scholars. For starters, since chronic illnesses are not “curable,” there tends to be minimal—if any—fixation on the notion of “cure” on the part of the physician. When and if an insistence on “cure” does occur, it is generally on the part of the chronically ill person, and my argument would be that it is because that particular person has been indoctrinated into the rhetoric of “cure” by organizations like the National MS Society, the Arthritis Foundation, etc. (and on a larger scale, by contemporary society’s worship of “health”). This is no different from an individual paraplegic person expressing his/her desire to not be paraplegic, or an individual blind person maintaining that they would prefer to be sighted. What is different is that chronically ill people receive far less support from the general public should they choose to resist the rhetoric of “cure,” coupled with far more (organizational and social) pressure to adhere to this harmful rhetoric. If charities and organizations such as the NMSS and the AF continue to foster the idea that chronic illness is an “evil” and that “cure” is the only solution, then many chronically ill people will continue to succumb to pressure to internalize these views, even if doing so proves disempowering and unproductive.

The relationships between chronically ill (aka “immunologically disabled”) people and their physicians are typically long-term ones that emphasize continuity of care, partnership, interdependence, and support. Far from being “passive recipients” of care, we are engaged participants in a dynamic that contributes to our own care and that of others. Far from having “cure” (or even “treatment”) imposed on us, we are empowered to provide input regarding how we would like to approach our disability (and how we would like others, including our doctors, to approach it). Notice that I deliberately use terms like “care” and “approach to” instead of “cure” or “fix.” The latter terms simply fail to describe my experience within the context of medicine, and so I avoid them.

An immunomodulatory drug—the type of drug most people with immunological disabilities use—is best viewed as a prosthesis. In The End of Normal: Identity in a Biocultural Era, Lennard Davis affirms: “A drug would be a prosthesis if it restored or imitated some primary state that appears to be natural and useful” (64). Davis makes this statement in the context of his argument that SSRIs are not “chemical prostheses” for depression, since happiness is not a “primary state” of being and since there is compelling evidence to suggest that SSRIs do not actually work (Davis 55-60). His assertion is relevant to my position in this blog post since, unlike SSRIs, immunomodulatory drugs do “restor[e] or imitat[e] some primary state” (levels of immunological inflammation and patterns of immunological behavior more consistent with those of people without autoimmune conditions) that “appears to be natural and useful” (“natural” in the sense that these altered levels and patterns are consistent with those of people without autoimmune conditions, and “useful” in that they restore—to one extent or another—“normal” immunological function in individuals with altered patterns of immune activity). Like a paraplegic deciding which model of wheelchair to use or an amputee picking the perfect prosthesis, those of us with chronic immunological conditions have input into which (if any) immunomodulator to use. If the chosen prosthesis (wheelchair, artificial limb, chemical compound) turns out to be ineffective or uncomfortable, we can choose a different one.

Interestingly, because specialists who care for patients with a particular condition (like Multiple Sclerosis or Crohn’s) often maintain active research agendas that focus on the condition in which they specialize, their relationships with patients are best characterized as mutually interdependent. The physician needs the patient (or at least some patients) to consent to participating in clinical trials and providing data that will facilitate the physician’s own research, while the patient needs the physician to not only periodically assess his or her function, but also to prescribe (or provide access to) what are in effect chemical prosthetics that enable “normal” function.

The fact that these chemical prostheses are not accessible without recourse to a physician is arbitrary. By this I mean that it is not difficult to imagine an alternate capitalist universe in which 3D printers (with which wheelchair users can now print portable ramps) or even Braille are made for “limited use only” and controlled as tightly as immunomodulatory drugs are now. Wheelchair users got lucky in that they don’t require a new prescription every 30 days and a “co-pay” (imagine a monthly “user’s fee” for a wheelchair) to access the adaptive technology that is their wheelchair or 3D printer. Blind people got lucky in that they don’t require “prior authorization” to use Braille. There is nothing “special” about immunomodulatory drugs—meaning, nothing inherent in the drugs themselves or even the delivery system—that somehow makes them “medical” in contrast to so-called “non-medical” tech like 3D printers, Braille, and wheelchairs. It just worked out that groups of people figured out how to manufacture, control, and ultimately profit off of immunomodulatory drugs before they figured out how to do the same with Braille or 3D printers. Or maybe they figured out ways to make immunomodulatory drugs more profitable than Braille or 3D printers. It doesn’t matter. My point is that immunological prostheses are no more “inherently medical” than any other prostheses. They became medicalized because certain people figured out how to profit off of them by tying them into the established medical system. This is utterly random.

Given the randomness of the system in place; the evolving role of physicians (with shifts toward “patient-centered care” instead of “paternalistic medicine” and relationships of mutual interdependence between both parties rather than unilateral dependence running from patient to physician only); and medicine’s accepted position as an intermediary which, for some disabled people, controls access to certain types of chemical prostheses that have been arbitrarily classified as “medical,” it seems to me that it might be high time to question and, indeed, to complicate Disability Studies’ relationship to medicine. To move forward with such a paradigm shift, the field needs to stop medicalizing chronic illness. It needs to stop labeling people with chronic illnesses (immunological disabilities) as a “problem” in need of “curing” or “fixing” through exclusion from the category of “disability.” It needs to take another look at the so-called “medical model”—one it mimics in its treatment of the chronically ill while simultaneously decrying as “undesirable” for all other disabled people. To do this, the field will need to confront its existing hierarchy of disability and seek to trouble the notion that a disability must be perceptible to laypeople in order to “count.” But most importantly, Disability Studies will need to acknowledge that its “medical model of disability” no longer corresponds to the outdated “medical model” of medicine on which it is based—and that the widening gap between the two threatens to quash the growth of the field.

[***FIRST DRAFT: WEDNESDAY, JUNE 11th, 2014. 23:01H EDT***]

Notes

1 – I specifically add the clumsy verbiage “as defined by Disability Studies scholars” to emphasize that medical professionals themselves would be unlikely to identify with this view of their own profession. As such, “the medical model of disability” needs to be understood within the context of its formulation by DS scholars and activists. The “model” is not neutral or objective; it is a specific framing of the field of medicine and of medical professionals by people with disabilities and/or their allies, many of whom aggressively oppose any kind of “medical” intervention.

For further reading and some helpful diagrams illustrating differences between “medical” and “social” models of disability, please consult the following pages:

http://ddsg.org.uk/taxi/medical-model.html

http://ddsg.org.uk/taxi/social-model.html

http://ukdisabilityhistorymonth.com/the-social-model/2011/9/13/understanding-the-social-model-of-disability-the-medical-mod.html

2 – For an expanded discussion of my views on the “impairment/disability binary,” see this thread and this document (especially pages 2-3 and notes on page 20).

3 – The NY Times piece is by a cardiologist who discusses grappling with tensions between paternalism and autonomy, and the Forbes article is by a physician criticizing what she refers to as “dinosaur physicians”—that is, “old guard” M.D.s who still practice rigidly paternalistic medicine.

4 – Many Deaf people do not view themselves as disabled, since Deafness can also be conceptualized as a cultural and linguistic difference rather than a “disability” per se.

5 – “(Im)perceptible disabilities” is a phrase coined by Stephanie Kerschbaum as a preferable alternative to the ocularcentric “(in)visible disabilities.”

6 – “In part” because DS’s enshrinement of “health” should not be underestimated as a motivating factor in the exclusion of chronically ill people, either.

7 – When scholars within DS do write about medicine, they tend to focus on eugenics, end-of-life care, and assisted suicide, thereby perpetuating the stereotype that medicine equals “sickness and death only.” See recent work by Lennard Davis (The End of Normal: Identity in a Biocultural Era, 2013), especially Chapter 7, and Tom Shakespeare (Disability Rights and Wrongs, 2006), especially Part II.


Triggernometry Redux: The “Trigger Warning” as Speech Act


An addendum to my earlier post on “trigger warnings,” inspired by a very late night discussion on Facebook:

**

The “trigger warning” can be viewed as a speech act. Considered as such, the act it performs is indirectly declarative; it (pro)claims for oneself and/or others the identity of “victim.” Because in the United States, in particular, the identity of “victim” is culturally enshrined, the deployment of the “trigger warning” is in essence an assertion of “moral superiority.” (It functions much like “not having privilege,” as described in Gawker’s playful online series “The Privilege Tournament”).

The (paradoxically privileged) status of “victim” confers upon its owner(s) the (unquestioned and unquestionable, because “sacred”) right to exert control over narratives (including the speech of other people, especially “non-victims”)—a right understood as unimpeachable owing to the (pro)claimed, privileged status of “victim” and the authority this status bestows.

This is what George Will meant when he stated that victimhood is a privileged status, and it is just about the only thing he got right in his op-ed. He didn’t mean (or say) that it was a privilege to be raped; he meant that the status of “victim” comes with certain privileges. His real transgression, of course, was exposing the culture of victimhood as one of power and pointing out that the position of “victim”—at least in contemporary U.S. society—is one of power.

In other words, Will’s “transgression” consists of naming the power that the label “victim” intends to occlude, and upon whose occlusion the maintenance of that power depends. In exposing both the underlying mechanisms of power at play and their occlusion, Will’s op-ed threatens to subvert the authority of “victimhood.” It is primarily for this reason that he is currently being skewered online, although no one skewering him is openly admitting that this is the reason—for doing so would force his critics to even more clearly detail the power structures underpinning the culture of “victimhood.”

[***DRAFT: WEDNESDAY, JUNE 11th, 2014. 17:44 EDT***]


Triggernometry


So “trigger warnings” are back in the news again.

I’ve been reading along for months while deliberately refraining from making any sort of public comment on the discussion, but now I feel kind of obligated, since the debate keeps raging on and I’ve already participated with a certain degree of vigor in closed forums and on Facebook feeds.

This post will consist of two parts: the first in which I express my personal views on “trigger warnings,” and the second in which I offer a brief cultural analysis of the “trigger warning” in hopes of shifting the collective conversation in a new direction.

PART I.

One of the people who spearheaded the resurrection of “trigger warnings”—specifically their use on college campuses—is Bailey Loverin, a sophomore at UC Santa Barbara. Loverin has articulated her arguments in favor of implementing campus-wide “trigger warning” policies on such national platforms as the NY Times and USA Today:

From music to movies, content and trigger warnings are everywhere. We accept them as a societal standard. 

With these introductory sentences, the author concedes that the impetus behind her support of “trigger warnings” on syllabi stems, at least in part, from having grown up in a society in which “warning labels” appear before films, on music albums, on food, and so on. Ms. Loverin is so used to the ubiquitous presence of warning labels that extending the presence of these labels even further seems not only “natural,” but positive.

“Warning labels” in the United States are a relatively recent trend, one that began in 1938 under the Federal Food, Drug, and Cosmetic Act. Although they started with food, they quickly spread to tobacco, alcohol, and then finally to music in the late eighties and early nineties. The current ratings system employed for movies has its roots in the late 1960s.

What Loverin does not acknowledge in her opening paragraphs is that the reason these content warnings began to proliferate was the uptick in frivolous lawsuits in the U.S. and the desire of companies to engage in what is essentially “defensive advertising”—strategically warning “consumers” about any and all possible risks associated with their products or services beforehand so that said “consumers” cannot sue companies for millions of dollars, claiming the companies “failed to warn” them of any particular risk factor.

A recent frivolous lawsuit provides a classic example of this phenomenon (and makes me wonder if we’ll soon see a new set of “warning labels” on sneakers): a Portland pimp, Sirgiorgio Clardy, sued Nike for 100 million dollars after being convicted and sentenced to 100 years in prison for beating to death a john who had refused to pay him for the services of one of his prostitutes. Clardy’s argument?

[...] Nike, Chairman Phil Knight and other executives failed to warn consumers that the shoes could be used as a weapon to cause serious injury or death.

Clardy’s lawsuit against Nike is pending.

Regarding this aspect of Loverin’s apology for the “trigger warning,” I am inclined to agree with Tressie McMillan Cottom, who writes:

[...] the “student-customer” movement is the soft power arm of the neo-liberal corporatization of higher education. No one should ever be uncomfortable because students do not pay to feel things like confusion or anger. That sounds very rational until we consider how the student-customer model doesn’t silence power so much as it stifles any discourse about how power acts on people.

You can read McMillan Cottom’s full post on the subject here.

What bothers me about the “trigger warning” is this: it implies that it is my responsibility, as a speaker and writer, to preemptively modulate the emotional and psychological responses of anyone who might hear or read my words—rather than the responsibility of those individuals to learn how to modulate and/or regulate their own emotional responses to my words (and to the world in general).

More importantly, though, it seems to me that the mass deployment of the “trigger warning” threatens to perpetuate a cycle of victimization and helplessness: people are allowed to bypass material that might disturb them emotionally or psychologically, and thus potentially avoid ever learning how to modulate their own thoughts, reactions, and emotions when confronted unexpectedly with disturbing stimuli.  In this sense, “trigger warnings” are the helicopter parents of language: in seeking to protect, they inadvertently enable large numbers of people to remain walking wounds of unhealed trauma.

In fact, much of the available literature on trauma and PTSD advocates against the kind of maladaptive coping mechanism to which the “trigger warning” caters. One particularly apt passage of the Handbook of PTSD: Science and Practice (2010) flatly states:

Negative reinforcement of fear through behavioral avoidance is the primary process that is postulated to sustain, and even promote, the maladaptive fear response. Typical behavior avoidance manifested by traumatized individuals includes avoidance of stimuli associated with the traumatized event, not disclosing or discussing the traumatic event with others, social isolation, and dissociation. (41)

Translated into plain English, this quotation says: “Avoiding stimuli associated with a trauma as a result of fear leads to the perpetuation of both the avoidant response and the fear.” Or, even simpler: “Avoiding triggers perpetuates trauma and the ugly feelings associated with it.”

So much for the declarations of Loverin and others that “trigger warnings” “avert trauma.” Not only do they not “avert trauma,” they may actually serve to perpetuate the trauma and associated feelings of panic, in addition to stalling the healing process, which can only be initiated and sustained by confronting the trauma.

Much like well-meaning but overbearing parents who think they are doing right by their children when they refuse to let them play outside or intrusively moderate their children’s fights, “trigger warnings” do more harm than good to the very population they aim to “protect.”

And while Loverin alleges that “[trigger warnings are not] an excuse to avoid challenging subjects; instead, they offer students with post-traumatic stress disorder control over the situation so that they can interact with difficult material,” it is difficult for me to see how the function of a “trigger warning” is anything but an invitation to do precisely that—avoid the subject matter, leave the classroom, and engage in other maladaptive coping strategies.

Exploiting the trope of the “mad student” so familiar from recent media reports and capably analyzed by scholar Margaret Price in her monograph Mad at School: Rhetorics of Mental Disability and Academic Life (2011), Loverin then goes on, in her USA Today op-ed, to paint the following grim picture of the “traumatized student”:

If students are suddenly confronted by material that makes them ill, black out or react violently, they are effectively prevented from learning. If their reaction happens in the classroom, they’ve halted the learning environment. No professor is going to teach over the rape victim who stumbles out in hysterics or the veteran who drops under a chair shouting.

Furthermore, seeing these reactions will leave other students shaken and hesitant to engage. With a trigger warning, a student can prepare to deal with the content. (bold emphasis mine)

Here, again, it is possible to see how proponents of the “trigger warning” are advocating for strategies of trauma avoidance—on the part of students with PTSD, of faculty and staff, and of students without PTSD who share classroom space with those who have PTSD. “Trigger warnings,” according to Loverin, will cut down on classroom outbursts and avoid “disturbing” everyone involved. It is not at all difficult to see the specters of Eric Harris and Dylan Klebold or Kip Kinkel or Seung-Hui Cho lurking between the lines of Loverin’s text.

It is as though Loverin is suggesting that one kind of “trigger warning” will help prevent another, more gruesome “trigger warning”—that of the school shooting. While this type of neat and tidy logic may be very appealing to administrators, it is largely fallacious since the reasons for school shootings have very little to do with PTSD and “trigger warnings” and a lot to do with, basically, the availability of guns and our enshrinement of a culture of violence in the United States.

A claim I’ve heard repeated in various blog posts and op-eds by those in favor of the “trigger warning” stands out: namely, that post-traumatic or distressed reactions by students “hinder” or “prevent” learning. (Loverin takes it a step further, citing “halted learning environments” for both the student experiencing PTSD and others present in the classroom. Interestingly, her description flirts with the idea that witnessing another’s trauma is in and of itself a form of trauma—an argument parallel to the one which asserts that, for any victim of a past trauma, witnessing evidence of similar trauma in the present is always already traumatic.) When I read the passage above, though, I see something quite different: I see an opportunity to engage with the classroom (students and events) in real-time and to use that engagement to promote learning. I believe, in short, that pain can be a site of learning both for those who experience it and those who bear witness to it.

I am not in favor, obviously, of inflicting pain for the sake of inflicting it—that would be sadism. What I am suggesting is that it’s OK for classrooms to be messy, human places where messy, human reactions occur, and that I think it’s better for us to engage with them as they transpire than attempt to curtail them before they can take place. I do not buy the assertion that incidents such as those Loverin describes “prevent learning.”

One aspect of Loverin’s piece which I find compelling is her focus on the concept of “control.” She reiterates a couple of times that trauma victims need to feel “control”—indeed, mastery of trauma entails regaining this feeling. Where we disagree is about how that mastery should unfold and over what—or whom—that control should be exerted. My position is that mastery of trauma is best achieved by confronting trauma rather than seeking to avoid it and that learning to modulate one’s own emotions in a diverse array of settings and when faced with a wide range of subject matter is a good way to regain a sense of “control.” Seeking to exert control over course content or classroom discussions (or other people) for the sake of (unhealthily) avoiding one’s trauma is not.

Which brings me to another observation: whenever I have seen demands for “trigger warnings,” they seem to be made by whoever wishes to regulate either a conversational topic or the manner in which it is being articulated. That is, I see “trigger warnings” being used strategically to silence some voices. I’m reminded again of Tressie McMillan Cottom’s “student-customer” model, since the question of who is attempting to exert control over the discourse has a lot to do with social class (and probably race as well).

I once read somewhere: “Being rich means being able to choose what one does and does not experience in life.” We could modify this statement to read: “The richer you are, the more control you have over what you do and do not experience in life.” It is reasonable to assume that places like Oberlin College, UC Santa Barbara, and Rutgers—three institutions of higher learning embroiled in debates about “trigger warnings”—are by and large populated by students from comfortably upper-middle-class families (or above). [1]

These students—more so than poor students—see themselves as “consumers,” which makes sense since the more disposable income you and your family have, the more you engage in patterns of consumption and, more importantly, the more you experience consumer choice. To give a quick, concrete example of this phenomenon at work: if you’re poor and going food shopping, you typically go to the cheapest grocery store around and look for the least expensive food item available (like ramen). Your range of “choice” becomes limited to whatever is cheapest or—on a good day—to several equally cheap items. Conversely, if you’re upper-middle-class or wealthy, you have the ability to exercise choice over which supermarket you will shop at and then, once there, over which products you will purchase and, within any given food category, which brands you will select. Your horizon of choice is noticeably greater than that of someone with a fraction of your income, so you experience “choice” at every level of your shopping process. You grow accustomed to “choice.”

With “trigger warnings,” students are applying “consumer choice” models to education. This is not necessarily problematic in and of itself and, as some have pointed out, may even be beneficial in empowering students to participate actively in shaping their own learning. The quandary arises when one begins to consider who exactly is exerting their “right” to “consumer choice” through the arm of “trigger warnings.”

In the real lives of people not privileged enough to selectively choose what they will and will not be exposed to, “trigger warnings” do not exist. And it seems to me that we are currently more interested in protecting some students from mention of trauma than we are in protecting others from actual trauma. In a climate where, just yesterday, Johns Hopkins University suspended an entire fraternity for, among other crimes, gang rape, we appear more invested in “protecting” students with PTSD from reminders of past trauma than we are in protecting all students from lived experiences of trauma. In the process, we may also be discouraging students who do experience trauma on campus or while enrolled in our institutions from speaking or writing about their experiences, for fear of “triggering” their peers.

We are creating an environment where speaking, naming, or showing trauma is becoming more taboo than actually traumatizing another human being through an act of violence—and this is a problem, particularly for students from less-privileged socio-economic backgrounds who may leave our classrooms and encounter repeated, ongoing violence at home and in their communities. These students often cannot “choose to avoid” or even “prepare themselves beforehand” for repeated encounters with trauma, for it is happening all around them—to them—on a daily basis. We are coming dangerously close to fostering a culture of silence around trauma that threatens to perhaps “protect”—temporarily, for avoidance is not an effective long-term strategy for dealing with trauma—more privileged students while both failing to protect and silencing less privileged ones. Only if you are privileged enough to experience an end to your lived trauma do you have the time—the luxury, the choice—of insisting that literary and cultural objects reminiscent of your original trauma bear “warning labels.” Only if your lived trauma is not relentless does it even occur to you that you might be able to avoid confronting it (despite the fact that all evidence shows that failure to confront trauma is detrimental to recovery).

Unless you are fortunate enough to exert the kind of control over the rest of your life that you would propose to exert over potentially “triggering” material, avoiding that material in the (more or less) safe space of a classroom will in no way prepare you for what you will encounter after you graduate. On the contrary, you will likely be forced to deal with unanticipated “triggers” on a regular basis—at your job, in your neighborhood, when you travel. The question of “trigger warnings” then evolves into one about whether you’d rather learn how to modulate a panic attack in class or in a boardroom, at the university or the next time you’re deployed for military duty. My take on this is that the classroom and university—where stakes are still relatively low and support is available—would be preferable training grounds for learning how to successfully process trauma.

PART II.

[Image: disability symbols]

I’d like to contemplate the possibility that demands for “trigger warnings” may not be what they seem, at face value, to be. Up to this point, I’ve dissected Bailey Loverin’s op-ed about these “warnings” and formulated some of my personal objections and challenges to the concept of “trigger warnings” as they intersect with issues of disability and class.

From a Disability Studies perspective, it is reasonable to ask not only whether “trigger warnings” do more harm than good (as I did above, in Part I), but also what it is that we do when we maintain, as David Perry does in “Should Shakespeare come with a warning label?,” that:

The classroom is not a therapist’s clinic [...] Moreover, it’s a decision for a patient and a therapist or doctor to decide and advise a university, rather than for faculty or administrators to decide for themselves.

I’m not really sure that we can have it both ways. If “the classroom is not a therapist’s clinic” and the decision about when, how, and where a student should or should not be exposed to subject matter is “for [...] a therapist and doctor to decide and advise a university,” then why are we even talking about implementing blanket policies on “trigger warnings” in university environments? (Perry himself is not arguing in favor of these blanket policies, but instead indicating that our existing systems of ADA accommodations policies can and should adequately address the needs of students with PTSD, and I am generally inclined to agree with him.)

I quote Perry at this juncture because I have read similar sentiments in tweets and Facebook posts by academics over the past several months—minus Perry’s astute qualification that our existing disability policies can and should sufficiently address the concerns of students like Loverin. For those academics who clamor “we are not therapists” but also support blanket “trigger warning” policies: your position appears internally contradictory.

Also from a Disability Studies perspective, it is worth pondering the advantages and/or drawbacks of such blanket policies. Does a failure to implement them effectively “medicalize” PTSD in a way that would be considered undesirable within the larger framework of Disability Studies? In other words, when we reject blanket policies on “trigger warnings” and instead direct students towards individualized solutions (via therapists and doctors, medication, and ADA accommodations), are we in essence “medicalizing” PTSD–and by extension disability in general? What might this question reveal to us about relationships between (mental) illness and disability as perceived by DS scholars? By the public?

What fascinates me about the idea of over-arching “trigger warning” policies is that, whereas ADA accommodations are tailored towards individual students—with all students enrolled in a given school presumed non-disabled until and unless they declare themselves disabled by requesting accommodations [2]—“trigger warning” policies operate via the inverse principle. They preemptively assume all students are in fact traumatized (or vulnerable to the effects of PTSD). Thus, from a purely theoretical point of view, blanket “trigger warning” policies are quite progressive since they assume disability, not able-bodied/mindedness, as the default state. In so doing, the policies fall more in line with “social model” approaches to disability; they identify the problem as residing in society instead of in the bodies/minds of disabled individuals, with these blanket policies acting as the ideological equivalent of an adaptive or assistive technology. If all this is true, then what we’re witnessing is a potentially revolutionary paradigm shift in the way we view mental/psychological disability.

The two types of trauma victims that blanket “trigger warning” policies are said to “protect” are soldiers and rape victims. I question why we would be engaged in a discussion now, as a society, about whether or not we wish to move forward with the paradigm shift I’ve just described. Temporarily putting aside my arguments about the “student-consumer,” etc. — why now? I wonder if the desire for “trigger warnings” communicates something about us on a macro level, as a culture. For if, as I have insisted, we as a culture tend to avoid facing trauma—we suppress it, silence it—and if “trigger warnings” are about exerting control (however maladaptive the strategy may be), then perhaps we as a culture are struggling to modulate and control our own large-scale trauma: our nation’s legacy of violence.

When I re-read Ms. Loverin’s stereotypes of the “hysterical” rape victim and the “shouting” soldier, along with that of the student-witnesses who become “shaken and hesitant to engage,” my mind pans reflexively through a Rolodex of events: 9/11; the wars in Iraq and Afghanistan; the financial crisis of 2008; years of gun violence in schools; the Marathon bombings; mass incarceration of U.S. citizens; natural disasters; rape on college campuses.

I remember that students of Ms. Loverin’s age have, for all intents and purposes, never known a world without war, natural disaster, gun violence, terrorism. And I wonder if the ongoing debate surrounding “trigger warnings” might actually be about something far greater, albeit unspoken—an expression of our students’ desire to try and mitigate collective cultural traumas. An attempt, if you will, to exert some control.

[***FIRST DRAFT: TUESDAY, MAY 20th, 2014. 23:45H EDT***]

**

Notes

1 – A complete breakdown of data (including reported family income) for UC Santa Barbara students is accessible here, in .PDF format. If anyone can find data on Oberlin, please do contact me; I did some fishing but was unable to find anything like “average family income” for students enrolled. Here (also in .PDF format) is some info. on demographics at Rutgers, with a breakdown by campus within the Rutgers system as well. Apparently (thanks, David!) one indirect measure of student/parent income is the percentage of students at a given institution who receive Pell Grants. Information for any institution about the percentage of its students who receive Pell Grants can be accessed here. In 2012, 31% of Rutgers students received Pell Grants. According to the figures posted in the U.S. News report, this would place Rutgers somewhere in the middle socioeconomically; far more students at Rutgers receive Pell Grants than at Oberlin, yet more students at UC Santa Barbara (whose overall student body is far from impoverished) receive Pell Grants than at Rutgers.

2 – That is, the very framework of “accommodations” presumes a “default” of able-bodiedness.

"The Falling Man," by Richard Drew.

“The Falling Man,” by Richard Drew.

 

Posted in Uncategorized | Tagged , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , | 29 Comments

“Can You Get Me Into College?” – Midnight in Southie

Photo by Valéria M. Souza

It was midnight and we sat on the jungle gym of a South Boston playground designated as being “for ages 8-12” and “requiring upper body strength and coordination.”

We both had some degree of “upper body strength and coordination,” but neither of us was 8-12.

The young man, who had abandoned his skateboard nearby to come talk to me, interrupted my vaguely clumsy acrobatics on the monkey bars to ask: “Yo, what are you doing? Like, why are you on here?”

I dropped to the ground.

“I saw you skateboarding,” I said.

“Yeah—so?”

The retort was a bit defensive, challenging. Did he think I was a cop or something? “No, I mean—I don’t care. I just wanted to ask you: do you skate here at night? Do people bother you? Like: tell you to leave? Or is this place chill? That’s all….”

Instantly he relaxed. His shoulders dropped as he shrugged, open-palmed. “Oh, no—it’s cool. Nobody ever bothers us. They’ll tell us to leave during the day, but at night nobody cares.”

“So, like, you think I could come here a few times a week and climb and nobody would bother me?”

“Yeah, for sure. No one’s going to give you a hard time.”

“Cool—thanks.”

“But why are you climbing?”

“I’m training. Practicing.”

“For what?”

I smiled. Silence.

“C’mon—you’re not gonna tell me?”

“I can’t,” I replied.

“Are you gonna climb a mountain?”

“Maybe. Maybe I am.”

“You’re not gonna climb a mountain, I can tell. Are you like sponsored by Red Bull or something?”

“Haha—no. I am most definitely not sponsored by Red Bull or anyone else.”

We faced each other on one of the metal platforms in the playground.

“Do you mind if I smoke a bowl?” he asked.

“I’d rather you not.”

“OK—I won’t then. How old are you?”

“How old do you think I am?”

“Like 20-something.”

“I’m 34. What about you?”

“22. Listen—OK—can I ask you a question then?”

“Sure.”

“How do you feel about, like, dating younger people? Like would you date someone my age?”

“I would not,” I answered calmly. “To me that’s waaay too young. I’m a college professor. My students are 18-22. That would be like dating a student. That’s really weird, and I would never do it.”

Suddenly he stood up, his body a lightning bolt striking the air between us. Gone was the casual, off-hand questioning. Gone was the interest in smoking a bowl. “Wait. You’re a college professor?”

“Yeah—here: give me your phone.” I Googled myself, then loaded the faculty page from the university where I worked. “Here, that’s me. Read.”

He read. He looked at my faculty picture, then at me. Again at my faculty picture, then back at me.

“I need to talk to you,” he insisted, handing the phone back to his friend with terse instructions to bookmark that page, yo—the one she’s on. “How do you like….get into college?”

I squinted, unsure of what he meant. A specific college? College in general? Which aspect of “getting in”? This was a far cry from some of the elite universities at which I’d taught—places where students were already richer, savvier, and better-traveled at 18 than I’d be at 80. Those kids attended Milton Academy and Phillips Exeter and had schedules of meticulously planned extracurricular activities and spoke fluent Mandarin. Or fluent French. Those kids had SAT prep and could afford to do unpaid internships because their parents were rich and they didn’t need to work for money. Those kids—so smart and cosmopolitan and sure of themselves—were so different from me. From us.

“What do you mean?”

“I mean like….the whole process. Look. No one in my family has ever gone to college. Nobody knows what to do. The counselors at my high school didn’t help us. I try to research and I know which schools I want to get into, but I don’t know the process.”

“Wow. OK—well, you’re right. It is a process. There are a lot of steps involved. Hmmm. OK. We’ve got to fill out applications and financial aid stuff and…”

He interrupted, rattling off a list of four or five elite out-of-state schools he dreamed of attending and asking if we would have to complete a FAFSA. I blinked. This kid was obviously intelligent and had done his homework. He had a short list of schools. He could list the characteristics of each one that he found especially attractive. He knew the FAFSA existed. He was doing the best he could with what he had—and what he had was very little.

Photo by Valéria M. Souza

“OK,” I probed, “what’s your GPA?”

“Like 2-point-something.”

I sighed. “OK—that’s not high enough for the schools you’ve listed. So we’re going to have to do something a little bit strategic. Let me know what you think: first we get you into a lower-tier public school or community college here in Mass. I know you want to go out of state, but your GPA is not high enough yet. So you do a year at one of those lower-tier schools and you get straight As, and then we rig it so you can transfer out to one of your dream schools.”

“Straight As?”

“Straight As. You can be poor and brilliant or rich and mediocre, but you can’t be poor and mediocre. It just doesn’t work that way.”

He nodded in agreement. “I feel you. Straight As.”

“You’re going to have to work hard.”

There was a long pause. He fiddled with his marijuana and looked down. I felt my heart twisting. Not out of pity. Out of deep sadness because of all the people who had failed this kid. This bright, driven, earnest kid.

“Will you help me get into college?” he asked.

The request was so simple. A hand reaching across a divide, grasping. Hoping for someone to grab it and not let go. I remembered my own trajectory, long and far. I felt another twist in my chest for this boy who was just like I had been, once upon a time. I remembered filling out the FAFSA by myself at the kitchen counter in my Mom’s condo. I remembered trying to write a persuasive letter to the Financial Aid Office that included the phrase “onerous mortgage payments.” I remembered taking the SAT twice and with zero preparation beforehand. I remembered applying to only one school—NYU—because I wanted to go there and because nobody had introduced me to the concept of “the safety school.”

I placed my hands on two horizontal, parallel bars and pushed, lifting myself upwards ever so slightly, my feet maybe 3 inches off the ground. I still had a lot of work to do; my upper body strength was total shit. Need to build muscle, I thought, and lowered my body back down to the ground: “Yes. I will help you get into college.”

With those words, he was like a child in front of whom I’d just set a birthday cake. His eyes burned, two lit candles.

“You’ve done this before, haven’t you?”

“It’s my job.”

“You’ve gotten other people into college before.”

“There’s a name for this,” I said. “It’s called ‘being an advisor.’”

“You’re my advisor now?”

“I am your advisor.”

It was spontaneous. He threw his arms around me. He hugged me tight, pressing his fingertips into my vertebrae. I hugged back.

He didn’t want to let go. We had to exchange emails and cell numbers. He had to make sure he had the right information. He could not lose track of me.

“I promise you, I’m not going anywhere.”

Still, he had to make sure.

“I’ve wanted to go to college since I was in high school and I tried—I tried—but nobody could ever explain it to me. My family, they’re good people but they just don’t know anything about it. They never went to college. I tried asking people for help and nobody could ever help me. You’re the first person who has ever known how to help me get into college. I can’t lose you.”

“I know what that’s like. It’s hard. But I promise you, I’m not going to disappear. So let’s do this. Let’s get you into college.”

Grinning.

“Tell you what: you get me into college and I’ll train you.” The kid flexed, showing me biceps, triceps, rippling shoulder muscles. Granted, he was 22 and a boy—both advantages in terms of general fitness and strength—but he clearly trained. “I’ll train you.”

I extended my hand in the darkness to seal the deal. We shook.

“Deal.”

“Deal. You gotta problem with push-ups?”

“Nope.”

“Pull-ups?”

“Nope.”

“You gonna complain?”

“Nope. I am willing to work hard. You’ll see. I’ll work hard to build muscle and you work hard to get into college. And if we both put in the work, it might just go our way.”

“That’s right,” he said. “That’s right.”

Photo by Valéria M. Souza

[***FIRST DRAFT: THURSDAY, MAY 15th, 2014. 19:09H EDT***]

 

Posted in Uncategorized | Tagged , , , , , , , , , , , , , , , , , , , , , , | 9 Comments