Making Contact: Touch Screens and Antimaterialism

Below is the text of the presentation I gave at the Conference on College Composition and Communication in Indianapolis. It’s part of a larger project, so I’m hoping this little bit makes some sense. Basically, I’m arguing that touch is never knowledge; it is always distant contact, an undoing and unfinishing. Digital touch screen interfaces want us to forget that, but we shouldn’t. I take my cue here mainly from Jane Bennett, but also from Karen Barad, Henry David Thoreau, and Kathleen Stewart.

I want to begin with this passage, but I am not going to tell you where it’s from until after I read it.

Clear your mind.

Imagine gazing into a pond of crystal clear water.
Picture bright, playful koi swimming through its shallow depths.
So close… Can you touch them?

You run your fingers across the cool surface of the pond.
Water ripples away from your touch.
The koi, disturbed, dart away.
Only to quickly forget and swim close to you once more…

This passage is not an invitation to meditate, though a bit of meditation might do us all good at this point in the conference. It’s also not some amateur poetry I whipped up as an evocative opener. This is really a description of the Koi Pond app for iPhone, iPod touch, and iPad.

This is an image depicting the Koi Pond app during use.

Koi Pond is a simple but beautiful work of interactive media, created by a group of professional game designers in their spare time. (They call themselves the Blimp Pilots. Not a ’90s alt-rock band even though it really sounds like it.) Between the app’s name and the description I just read, you probably get the basic idea. I can customize my pond and select weather effects such as rain and wind. And of course I can interact with the koi by touching “the cool surface of the pond.” The fish skitter away when I make abrupt swipes, but they gather around my touch when I patiently rest my fingertip on the screen.
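Though we don’t have access to the Blimp Pilots’ source code, the interaction rule is easy to imagine. Here is a minimal sketch, in Python, of the behavior I just described: fish flee a fast swipe and gather around a resting fingertip. Every name and threshold here is my own assumption, not the app’s actual implementation.

```python
import math

# A minimal sketch (my reconstruction, not the Blimp Pilots' actual code) of
# Koi Pond's interaction rule: fish flee an abrupt swipe and drift toward a
# patient, resting touch. The function name and speed threshold are assumed.

FLEE_SPEED = 300.0  # touch speed (px/s) above which the koi scatter; assumed value

def koi_response(fish_pos, touch_pos, touch_speed):
    """Return a unit direction vector for one koi reacting to a touch.

    A fast-moving touch pushes the fish away from the contact point;
    a slow or resting touch draws it closer.
    """
    dx = fish_pos[0] - touch_pos[0]
    dy = fish_pos[1] - touch_pos[1]
    dist = math.hypot(dx, dy) or 1e-9  # avoid division by zero at the contact point
    away = (dx / dist, dy / dist)
    if touch_speed > FLEE_SPEED:
        return away                      # skitter away from an abrupt swipe
    return (-away[0], -away[1])          # gather around a patient fingertip
```

The point of the sketch is only that the app simulates responsiveness: the same touch produces opposite movements depending on how patiently it is delivered.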

In the brief time I have today, I am hoping that Koi Pond can serve as a quick but concrete introduction to the account of materiality we find in Vibrant Matter (2010), a book by the political philosopher Jane Bennett. [Quick note: even though Vibrant Matter was published in 2010, I will also be referring to an earlier 2004 article in which she gives an early articulation of her argument.] Bennett’s project, in brief, is this:

Preferring anthropomorphism over anthropocentrism, the vital materialist does not give much credence to hierarchical “soul vitalism,” the belief that “life is special, but we humans are the most special” (2010, 88). Her term thing-power functions as a reminder to watch for soulness or liveliness apart from human intervention or activity, and to appreciate that liveliness rather than try to fully understand or demystify it. On their own, and on their own terms, things “command attention” and exceed comprehensive explanation (4).

Interacting with Koi Pond is not unlike the way Bennett wants us to encounter things. As we try to touch the fish in Koi Pond, they “dart away.” We are “so close” to them, yet they respond in surprising ways. Likewise, for Bennett and other new materialists, to “touch” the thing is never an appropriation or possession. It’s more like the distant “contact” of picking up the signal of a ship in the dark. A fragile awareness or sensing—a proximity that produces not knowledge but rather the ability to be responsive to things. Barad’s word for this ethics is response-ability (2012). Contact with things leaves room for surprise and for a “cultivated, patient, sensory attentiveness to nonhuman forces” (Bennett 2010, xiv). Thoreau is one of Bennett’s philosophical models (or touch-stones, har har) for this attentive contact which leads to a sense of wonder, mystery, and self-other-inquiry.

“Who are we? where are we?” Thoreau asks after making contact with “rocks, trees, wind on our cheeks! the solid earth! the actual world!”

But Koi Pond, as the leading example in my “vital materialism 101” course, has its limits, since in many ways the app works against thorough sensory attentiveness. There is a different kind of materiality at work in Koi Pond and many similar apps, such as Zen Garden and painting apps like Brushes or Bamboo Paper for the iPad and Android tablets, and for this reason, I am not certain that Bennett’s thing-power is capacious enough to approach the specific materiality of digital composing spaces.

Recall that Koi Pond first asks us to “imagine” and “picture” the experience of touching koi. Unintentionally or not, the app’s developers hint at the fact that our contact with these fish is metaphorical or speculative. In the story-world of the app, I extend my finger towards water. But if I reverse my perspective and look out from the other side of the screen—from the koi’s point of view on the technical side—I realize that I am actually touching a sheet of glass about a millimeter thick, and I am not disturbing pond water but electrons, as my body’s conductivity alters the electrical field of the device’s display. Interacting with the koi pond is still touching, but it’s a different kind of touch than what Thoreau experienced in the Maine Woods, or what I experienced during my walk near Lake Michigan last week, which is where I took these photographs. There is also a displacement and distancing at work in the koi pond. I change the electrical properties of the screen every time I touch it: “The [capacitive touch screen] records this change as data, and it uses mathematical algorithms to translate the data into an understanding of where your fingers are” (Wilson). Yet I am distant from that algorithm. I am at a remove from an essential facet of my device’s materiality. In desiring intimacy with screens, we simultaneously desire distance from them as things in Bennett’s sense of the word.
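To make that translation concrete, here is a toy sketch, in Python, of the kind of computation Wilson gestures at: the touch controller reads a grid of capacitance changes and estimates the finger’s position from them. Real controllers do far more sophisticated filtering and tracking; the weighted centroid below, and every name in it, is my own illustrative assumption.

```python
# A toy sketch of the translation Wilson describes: a capacitive screen reads
# a grid of capacitance deltas, and an algorithm turns them into finger
# coordinates. This weighted centroid is only an illustration; real touch
# controllers are far more sophisticated, and all names here are my own.

def locate_touch(grid):
    """Estimate (row, col) of a finger from a 2-D grid of capacitance deltas."""
    total = weighted_r = weighted_c = 0.0
    for r, row in enumerate(grid):
        for c, delta in enumerate(row):
            total += delta
            weighted_r += r * delta
            weighted_c += c * delta
    if total == 0:
        return None  # no touch detected
    return (weighted_r / total, weighted_c / total)

# A finger resting near the middle of a 3x3 sensor patch:
readings = [
    [0.0, 0.1, 0.0],
    [0.1, 0.6, 0.1],
    [0.0, 0.1, 0.0],
]
```

The sketch centers the estimated touch on the cell with the strongest reading, which is all I mean by “an understanding of where your fingers are”: a statistical inference, not a caress.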
intimacy with screens, distance from things
This two-sided desire is what Bill Buxton pinpointed when, in a recent interview, he spoke for his team of user-centered touch screen designers at Microsoft Research (the people behind the Surface tablet). He said,
If you're aware there's a computer there, we've failed.
This is a striking statement that reveals the potential of touch screen interfaces to become not only immaterial, as they already are to most users, but, worse, antimaterial. Stronger than immateriality, antimateriality actively devalues matter. For the vital materialist, antimaterialism is not simply ignorance of matter but an accelerated sweeping-aside or aggressive disposal of matter and its messy complexities. If, as Matt Kirschenbaum (2008) writes, “Computers have been intentionally and deliberately engineered to produce the illusion of immateriality” (135), touch screens amplify this illusion.
image of my shadow on the ground
While Bennett uses the term antimaterial to describe wasteful consumer culture, I’d like to suggest that we carry the concept over to better understand our writing technologies from both sides of the screen. As touch screens become more common and practical amongst students, will we be ready to counter antimaterialism when fingers touch language rather than fish? We do have a precedent for answering this question: a rich and substantial body of scholarship has already attended to the materiality of composing practices. Jody Shipka, Christina Haas, T. Hugh Crawford, Anne Wysocki, Jamie Bianco, and others, including the people here with me today, have explored the role that objects—not always or even usually alphabetic writing technologies—play in the composing process, as companions and guides. Materialist composition theory tries to account for the ways that paper, computer code, keyboards, or even ballet shoes shape communicative or expressive acts without defining them. A vital materialist theory of composing may further help us see that the trajectory of effortless interface design, in concert with revenue-generating innovations like Facebook’s “frictionless sharing,” is detrimental to digital writing instruction and invention.

Compared to typing or using a mouse, both of which take some degree of learned skill, there is something natural or innate about touch as a mode of interaction. This app simply wouldn’t work on a desktop computer. The mediation of a mouse and keyboard would eliminate the sensation of touching the interface itself, and this sensation of contact is precisely the content of Koi Pond. “Can you click them?” doesn’t have quite the same ring to it as “… Can you touch them?”
can you click them vs. can you touch them
Psychologists have called touch “a first language,” and studies have shown that touch is the fastest, most accurate, and most instinctive form of communication available to humans—more versatile than both voice and gesture (Chillot). For infants, touch is an essential source of information, and even for adults, touching has proven benefits. One study tracked touching amongst NBA players, finding that teammates’ on-court touching, such as chest bumps, high fives, and back slaps, correlated with wins (Chillot).

Perhaps then it comes as no surprise that not only are touch screens everywhere, but that we also expect them to be everywhere. Some of you might be familiar with this video that went viral in 2011:

In the video, a baby effortlessly operates an iPad while appearing to get frustrated or confused by the static images in a glossy magazine. Similarly, it’s not uncommon to see fellow air travelers attempt a swipe at the TV screens inset into the seat backs. Heck, even a cat will try to bat an iPad—especially when koi are involved. In the koi pond, there is a closeness with the objects of attention, a closeness that we don’t have with the keyboard/mouse. Perhaps there is something in all of us that desires that closeness. If so, the desire is highly marketable. Steve Jobs, for instance, introduced the iPad in January 2010 by saying
it's so much more intimate than a laptop (Steve Jobs quote from 2010 iPad launch)
and the Nabi 2 tablet for children is currently being advertised with the plug:
It's more than a tablet. It's a friend (Nabi 2 tablet TV commercial slogan)
But touch is not only intimate and friendly. It is also threatening, clumsy, and… well, icky. There’s a reason Lysol wipes come in easy-dispensing containers, but there’s also a reason that so few research studies exist to illustrate the effects of touch screen tablets on the writing process. Not reading: I am talking about the writing process specifically. For written communication, it seems, touch screens just aren’t suited to the job. A study conducted at IUPUI in 2011 explicitly discusses the difficulty many students face writing on a touch screen. One participant reported that “The [iPad’s] interface and keyboard would slow me down significantly during writing” (Morrone, Gosney, and Engel).

Recently, I exchanged a few emails with Kezia Ruiz, who teaches writing at a community college with a Bring Your Own Device (BYOD) policy. Kezia reports that her students compose papers on iPhones and tablets because these devices are cheaper than laptops. Yet, as you can imagine, they struggle with logistical problems like formatting. In this case, the touch screen hardly seems “more intimate” than the laptop. It adds a layer of material roughness to what is supposed to be a smooth, user-friendly device. Divides of class and access continue to trouble optimistic accounts of digital pedagogy, and Kezia’s observations about her students point to the differences between using a tablet for writing and using a tablet for consuming media and playing games. As a side note, these are not unlike the concerns that Audre Lorde raised in the 1980s, when she wrote of poetry:

“It is the most secret, requires the least physical labor, the least material, and the one which can be done between shifts, in the hospital pantry, on the subway, and on scraps of surplus paper” (Lorde 116). Similarly, Kezia describes material circumstances that make a tablet or smart phone a necessary site of composition, despite the drawbacks of these devices for composing.[1]

So on one hand, the touch screen offers us intimacy and closeness, which is what we likely wish to achieve in our relationship to readers. But on the other hand, the touch screen distances us from the writing process and prevents students, like those Kezia describes, from “getting serious about writing.” There is indeed something toy-like about the touch screen, and the iPad’s release event in 2010 actually trumpets this as a feature: “You can go through huge quantities of email and it’s just fun because you’re doing it all with your hands,” declares Apple’s VP Phil Schiller during a demo. Like touch itself, interfaces are both intimate and violent. That is the source of their power. Open arms and friendly faces are really compressions of material variance. Somewhat counter-intuitively, the more intimate we become with touch screen tablets and phones, the harder it is to perceive the range of their materiality. The further we drift from recognizing these devices as actants with “trajectories, propensities, or tendencies of their own” (Bennett viii), the harder it is to realize that, when it comes to touch screens,
they feel us as much as we feel them

Works Cited
Barad, Karen. “On Touching—the Inhuman That Therefore I Am.” The Politics of Materiality. Ed. Susanne Witzgall.

Bennett, Jane. “The Force of Things: Steps toward an Ecology of Matter.” Political Theory 32.3 (2004): 347-372. Web. 19 Mar. 2014.

—. Vibrant Matter: A Political Ecology of Things. Durham: Duke University Press, 2010. Print.

Chillot, Rick. “The Power of Touch.” Psychology Today. 11 Mar. 2013. Web. 19 Mar. 2014.

Ion, Florence. “Finger-free phones, full body gesturing, and our ‘touchscreen’ future.” Ars Technica. 1 May 2013. Web. 9 Mar. 2014.

Kirschenbaum, Matthew G. Mechanisms: New Media and the Forensic Imagination. Cambridge: MIT Press, 2008. Print.

Lorde, Audre. “Age, Race, Class, and Sex: Women Redefining Difference.” Sister Outsider: Essays and Speeches. Watsonville: Crossing Press, 1984. 114-123. Print.

“An iPad Is a Magazine that Does Not Work.” YouTube. 6 Oct. 2011. Web. 14 Mar. 2014.

Morrone, Anastasia, John Gosney, and Sarah Engel. “Empowering Students and Instructors: Reflections on the Effectiveness of iPads for Teaching and Learning.” EDUCAUSE, 2012. Web.

Ruiz, Kezia. Email communication.

“iPad Introduction – Apple Special Event January 27th, 2010.” YouTube Playlist, Part 1 of 10. Web. 22 Mar. 2014.

Thoreau, Henry David. The Portable Thoreau. Penguin, 2012. Print.

Wilson, Tracy. “HowStuffWorks ‘The iPod Touch Screen.’” HowStuffWorks. 11 Sept. 2007. Web. 16 Mar. 2014.

[1] This point is definitely debatable. During Randall Monty’s CCCC presentation this year, on transnational and border students’ use of mobile technology, members of the audience engaged his class in real time, through the hashtag #RC2PA. I learned that many of his students don’t mind composing on smart phones, and some prefer it. A particularly compelling example is the student who gets stuck on the bridge one day during his cross-border commute from Mexico. The mobile phone then becomes as convenient and important as the scraps of paper Lorde mentions.


Reasserting Thing-Power: Roughness as a Response to Antimaterialism

Here’s the talk I gave today at the 2013 DH Forum at the University of Kansas. It was a well-organized conference. Many thanks to all who attended, tweeted, and asked such great questions. Here are my slides, if you’re into that kind of thing.

In a typical scene of multitasking—i.e. procrastinating one thing by half-working on another thing simultaneously—I was drafting this presentation while prepping for a class I’m teaching this semester. In an unexpected moment of clarity, the two came together in this passage from Walt Whitman’s “A Backward Glance O’er Travel’d Roads,” first published in 1888:

My Book and I—what a period we have presumed to span! those thirty years from 1850 to ‘80—and America in them! Proud, proud indeed may we be, if we have cull’d enough of that period in its own spirit to worthily waft a few breaths of it to the future! (382)

The line “My Book and I” really caught my attention. I couldn’t get over the question: how would we have to revise this line if we were to imagine Whitman—or a student in one of our classes—speaking it today? My Blog and I? My E-portfolio and I? Or—and this sounds dreadful to me—my Facebook and I? Whitman’s passage makes me wonder about issues like digital longevity, preservation, and any medium’s capacity to bear history forward. But mostly I am struck by the way the passage expresses such a complete identification of a writer with his text as a living, breathing, independent, at times gnarly and surprising thing. The book is not the vehicle; rather, it’s the companion sitting in the passenger seat, sometimes taking a turn at the wheel. Moreover, the passage makes me wonder—and worry—about the current state of composing in and for digital media.

If Whitman had grown up with a Commodore 64 instead of typesetting and ink, he surely would have been at the center of trends and creative practice in new media. Whitman self-published and literally produced his books, maintaining involvement with them from start to finish. His conception of the composing process is material, through and through. In Re-Scripting Walt Whitman, Ed Folsom and Kenneth Price capture this claim in precise terms: “Ideas, Whitman’s poems insist, pass from one person to another not in some ethereal process, but through the bodies of texts, through the muscular operations of tongues and hands and eyes, through the material objects of books.” This thoroughly materialist approach to writing fits Katherine Hayles’ criteria for media-specific literary practice. If we take into account the broader scope of Whitman’s lifelong writing project, what Folsom and Price call an “extraordinary conflation of book and identity,” Hayles’ term “technotext” seems apt. Composer, composing medium, and composition: a “dynamically interacting whole” (Hayles “Print Is Flat” 86).

But what would it take for Whitman to achieve that technotextual conflation and make the same statement today—“My Book and I”—with a digital medium taking the place of the capital-B Book?[1] It would take knowing a markup language and maybe one or two programming languages. And/or it would take familiarity with a software platform and an operating system to run it. And/or it might also take a devotion to staying up-to-date with changing web standards and new mobile devices. I could envision Whitman being quite fond of commenting his code—he would mark up his HTML body as if it were his own. Yet—and imagine he continuously revised and updated the same digital text over and over for thirty years—it would take an incredible amount of technical skill, support, and ambition.[2] For Whitman—or here I could even say students in any writing-intensive class—getting the digital material apparatus to embody the process of its own creation is, and would have been, difficult. It’s hard work made even harder not only by (1) institutions and programs that aren’t structured to afford the time or resources to teach coding or new media practice in depth, but also by (2) a trend in digital culture towards preferring a smooth user experience, even if that means making the back-end functionality hostile to users who want to understand how it works. The first factor is important, but so is the second, and that’s the one I want to focus on today.

The receding presence of digital materiality, specifically in web applications that see users as consumers (Amazon, but also Facebook), is largely a result of antimaterial sentiment intertwined with digital devices that are explicitly designed and marketed not to be noticed. I take the term “antimateriality” from the political philosopher Jane Bennett, who argues convincingly that the separation of inert, passive object from purposeful, active human subject is neither ethical nor accurate. Her 2010 book Vibrant Matter champions “a vitality intrinsic to matter itself” (10). For the vital materialist, antimaterialism is not simply ignorance of matter but an accelerated sweeping-aside or aggressive disposal of matter and its messy complexities. She invokes antimaterialism in the context of consumer culture: “It is hard to discern, much less acknowledge the material dignity of the thing when your [...] thoughts are scrambled by the miles of shelving at a superstore. [...] Too much stuff in too quick succession equals the fast ride from object to trash” (351). Through the lens of what she calls “thing-power,” objects or debris (“stuff to ignore” [4]) become lively things (“stuff that [commands] attention” [4] or fear).

While Bennett’s subject matter is metal crystals, fatty acids, and electrical power grids, it’s easy to see how the concept of antimaterialism can translate to digital culture. Think of the app store, where for 99 cents or $1.99 or even for nothing, you could have a little program that does its job and nothing more or less. If you don’t like the app or it doesn’t do what you want, you can just delete it or forget it and choose a different app from the vast inventory. For a number of years, I’ve been in the habit of jotting down striking quotes from tech industry TV commercials, and here’s one I wrote down during an iPhone ad in 2008: “iPhone: solving life’s dilemmas one app at a time.” This appification of computers (i.e. the process of turning a complex and error-prone technology into a Swiss Army knife) is a marketing sensibility that sells readymade fixes for problems, trumpets ease over struggle, and speeds along the recession of digital materiality in the process.[3]

Ease and forgettability are, somewhat paradoxically, the clearest beacons of complication and obfuscation. Latour is my touchstone for this point, especially when he describes the automatic door-closer as a technology that disappears until it fails to work on a cold February day. That’s when someone must affix a note reminding new arrivals to close the door behind them because the trusty device “is on strike” (Latour, “Missing Masses” 153). As Jentery Sayers paraphrases the general argument of Latour and other theorists concerned with nonhuman agency and critical media awareness, “with imperceptibility comes the naturalization of ideologies, where only input and output matter.” Imperceptibility, which often qualifies as ease in our use of interfaces, is an enemy of creative practice and pedagogy. Ease conceals opportunities for intervention and resistance. Such opportunities are actually easier to identify when a particular medium has rough edges and takes something other than ease-of-use as its primary goal. In this way, then, we might think of rough media as actually being more teachable than easy, squeaky-clean, well-functioning media. Contours, bugs, and surprises yield a deeper and less hierarchical critical engagement than we might otherwise discover in things that “just work” seamlessly and solve dilemmas. Thinking with thing-power helps us remember that all of our writing technologies have a little self-aware Roomba in them.

Let me put this another way—and this might be stating the obvious: it’s easier to see the materiality of paper. Not just to see it, but to experience it—to smell it, stain it, live through it. If you’ve ever gotten a deep paper cut, that slice is given, but a tiny red measure of you is also received by the paper. Think of your oldest book. The book you took with you on your semester abroad, the book you packed and unpacked with each new apartment lease, the book you’ve repaired or let fall apart. I know this is starting to sound nostalgic, and I tend to be a nostalgic person, but I want to insist that this is more than nostalgia. It is a radical identification of the agency of writing materials, and it is the claim that Whitman’s degree of identification with a text is simply harder to achieve in digital code than it is in script. Because “most users will simply not have entrée to the mechanisms governing their interactions with [...] electronic information” (Kirschenbaum 58), the access point for the functionality of paper is just a much bigger target than it is for computers.[4]

It wasn’t always that way. Prior to the 1970s, the general public would have almost immediately associated computers or “electronic brains” with heaving, noisy, hot physical matter. The word “digital” went through a gradual process of becoming tied to a sense of fluidity and ease more than to a sense of roughness and presence. To make this point, I’ll briefly trace the word “digital” as it unfolded from hands to computers in journalistic accounts.

Mentions in 1920s and ’30s American newspapers hew close to the Latin etymology of “digital,” referring to the dexterity of fingers: a pianist’s maneuvers during a successful (or abysmal) recital, for example. A 1937 article features a photo of a bridal party guest giving “digital attention to her coiffure,” according to the caption. In the 1940s, the military began publicly revealing bits and pieces of the computing technology it had used during the Second World War, and computing pioneers emerged from behind the scenes to be interviewed by The New York Times and other papers. At this point, “digital” was beginning to take root in the public imaginary as binary data—fast, efficient, easier than adding machines and paper. Yet, importantly, it was still tough to ignore the materiality of computing. In a 1946 article, the ENIAC, a tremendously large early computer that occupied an entire room, was depicted alongside human “programmers” or “attendants” adjusting wires and controls. The article reports on speed right alongside brooding complexity and architectural, creature-like features: “The Eniac has some 40 panels nine feet high, which bristle with control and indicating material” (Kennedy).

However, as computers got smaller and as the field of interface design established itself as a profession and area of study, the principle of immateriality as a user experience emerged. Most historians locate the beginning of this emergence in the mid-1960s, when the focus of computer scientists expanded from military applications to encompass “affordable interactive computing” (Abbate 38). Already in the 1950s, “the interactive style of computing made possible by random access disk memory would force IBM, as well as the rest of the computer industry, to redefine itself” (Ceruzzi qtd. in Kirschenbaum 77). In the mid-60s, interactive computing meant time-sharing, the technique of distributing a single computer’s processing power across multiple users. Time-sharing was more efficient than batch processing, which required punched paper cards and magnetic tape and which distanced programmers from the actual computer. According to Janet Abbate, “time sharing was seen by its proponents as the innovation that would liberate computers from their punched cards and allow direct and easy interaction with the machine” (35).

Donald Davies’ notion of a national packet-switching network in the UK furthered the goal of interactivity and user friendliness. Davies wanted his network to appeal to business workers and recreational users, and he “was one of many researchers who hoped to improve the user friendliness of computers” (Abbate 34).[5] This meant that communication, to be as easy and immediate as possible, had to allow “the user [to be able to] ignore the complexities” of the computer’s operation, as Davies wrote in 1966 (qtd. in Abbate 38). The operating system further “[removed] the inscriptive act from the direct oversight of the human user, screening it first by the command line and then by a graphical user interface,” according to Kirschenbaum (84). As he and other historically minded media scholars have illustrated, the deliberate engineering of immateriality resulted in the user growing apart from operations (a phenomenon often called “black-boxing”).

What we now refer to as “digital culture” bears few meaningful reminders of the paper, wood, tangles of wires, “18,000 vacuum tubes [and] thirty tons” (Kennedy) which were all once unforgettably, irrepressibly, and even impressively in the picture. While paper insists on its materiality (it’s not good at hiding its age or covering over mistreatment), most modern digital applications are good at the opposite: they are designed so that the user doesn’t think about materiality. Indeed, proprietary software makes it difficult or impossible to feel close to the code. Facebook’s code is intentionally distancing; it uses confounding class names that combine strings of numbers and letters that make no sense to outsiders. Networked digital applications are also designed to be resilient: the practice of “patching” hints at this self-healing process.

In saying that we can more readily approach the materiality of paper than we can the materiality of digital composition, that’s not to say that paper is a more natural or more human medium than a website. In the first edition of Leaves of Grass, in fact, we find a sentiment that’s utterly opposed to the “conflation of book and identity” which Folsom and Price describe as a central quality of Whitman’s work:

I was chilled with the cold types and cylinder and wet paper between us.
I pass so poorly with paper and types . . . . I must pass with the contact of bodies and souls.

These lines stayed put more or less unaltered through four revisions of what would eventually become “A Song for Occupations” until 1881, when the poet took the lines out entirely (although the “chilled” line did acquire parentheses in 1867). These lines betray a desire for a warmth of immediate nearness that could somehow pierce the cold interface. The paper book, as a support for writing, is indeed a contrivance, a screen, “a little machine for two hands,” to use Derrida’s great phrase (PM 50). Whitman’s authorial persona in these lines acknowledges that. But somehow, with time and age perhaps, things changed for him. Plainly, this is not the same attitude that confronts us in the 1888 essay “A Backward Glance.”

The Whitman chilled by the mediating presence of “cylinder and wet paper” is a Whitman who’s stumbling over friction in the means. In her MLA address last year, Bethany Nowviskie made a distinction between friction in the means and friction in the materials. Friction in the means is “disenfranchising resistance [...] unhealthily located in a toolset.” Friction in the materials, in contrast, is “positive resistance”—the kind “that makes art.” Those who encounter friction in the means are mere users, detached or somehow removed from full engagement with the creative process. Those who encounter friction in the materials have an active, generative capacity to get their hands dirty with the labor of production. For Nowviskie, these two types of resistance translate to two potential futures for the digital humanities: (1) DH as “a generative research activity in its own right,” and (2) “commodity tool-use for the classroom.”

What does this latter future look like? Last October, a DMCA violation notice from Pearson over a single piece of content turned into the equivalent of a kill-switch for the Edublogs network. After receiving Pearson’s take-down notice, ServerBeach, the web hosting firm that hosts Edublogs, pulled the plug on the entire network. The whole blogging network was down for about an hour during the day in the U.S., or 3 AM in Australia, where Edublogs is based. As the system admin and CTO at Edublogs told Ars Technica, “[we] watched, in horror, live as our Web servers were shut down one-by-one.” Clearly, when 1.45 million blogs become 404 error pages, we’ve got friction in the means. The source of resistance, for Edublogs employees and users, was literally inaccessible. “It’s pretty hard to believe that a hosting provider would [...] take an entire network offline over one piece of content,” said intellectual property attorney Evan Brown. It might be hard to believe, but it’s also hard to understand unless you’re familiar with copyright law, web server caches, database lookups, the difficulty of human communication via email, and other factors that probably have nothing to do with the post you were writing… which just vanished into thin air.

So yes, it can help to learn to program. It can help to roll your own website. It can help to have your own server. All these things are full of DIY goodness. But it’s also important to acknowledge and actively foster learning situations that nurture risk-taking and are hostile to material forgetfulness and uncritical tool use. Because you never D.I. 100% Y. If we take seriously the tenets of thing-power, getting closer to materiality is not about reclaiming control and enforcing one’s subjectivity in the midst of technological struggle. Rather, “the ethical aim becomes to distribute value more generously” (Bennett 13) and to acknowledge the agency inherent in our so-called tools. Different from friction in the materials and friction in the means, what Bennett calls “thing-powers of resistance” (13) bypass the user/maker divide and reimbricate human and nonhuman actors in a network of relations. The resistance of thing-power shines a certain slant of light on digital materiality, and the lesson I learn is that whether we assume a role of maker or user, whether we program or are programmed, there’s no escaping the invasion of various actants that we can neither control nor predict. If it hadn’t been a DMCA take-down notice, it could have been a power loss and failed backup generator (which wiped out UWM’s website for almost an entire day last spring) or “the site ran out of memory” (the explanation given for UWM’s CMS landing page triggering a 404 error for an hour on the first day of classes this semester).

To wrap up, I want to ask: why did Whitman remove the “wet paper” lines in 1881, at around the same time he composed the richly materialist “A Backward Glance”? Although I don’t claim to be a Whitman scholar and I haven’t pored over his manuscripts, I can speculate that as his text endured and accrued a life of its own over the drafts and decades, Whitman began to acknowledge “contact” not just between writers and readers (“I must pass with the contact of bodies”), but also between writers and writing technologies. I believe the stumbling block of means—that is, the wet paper which is akin to the flawed interface or rough user experience—became an allowance or portal for agency in the materials. What he experienced is not a transfer of control or conversion from “passive tool-user” to “capable artisan” (Nowviskie). The removal of those lines doesn’t signify, I don’t think, his discovery of a maker’s agency or a reclamation of control. Whitman had always been a maker. Rather, what I see is a concession—near the end of his life, a coming to terms with the lively agency present in his writing materials: “thing-powers of resistance” (Bennett 13). The question “How do we extricate ourselves from Edublogs and commodity tool-use?” quickly becomes “How do we humanize ourselves in the face of cold types and wet paper?” Yet, both questions make little sense in a world where humans are humans because they interact with objects.

I am not saying be suspicious of easy interfaces. I am saying be captivated by them, in the way that a child’s first inclination upon receiving a multi-part toy for Christmas is to take it apart. (At least, that was the first thing I always did, to my grandmother’s dismay.) Respond with surprise, engagement, and a touch of abandon—not distance and suspicion.

[1] In making such a big deal about Whitman’s capital-B book, I know I am treading dangerously close to a print-centric ideology of the divine Book of Nature—the ideology of gathering and boundedness—of a linear, ordered, and planned universe. Here Book means law. The absolute, ubiquitous book. Or a dialectic circling toward resolution: “the onto-encyclopedic or neo-Hegelian model of the great total book, the book of absolute knowledge linking its own infinite dispersion to itself, in a circle” (PM 15). Elsewhere, Derrida has identified the distinction between capital-B Book and little-b book as a contrast between “divine or natural writing and the human and laborious, finite and artificial inscription” (G 15). Writing, Derrida points out, here is a metaphor: “that is to say, a natural, eternal, and universal writing, the system of signified truth, which is recognized in its dignity” (G 15). That doesn’t really characterize the capital-B Book that Whitman’s talking about. His book is not about order, resolution, totality, or dignity. The “human and laborious” is a more fitting description. His poems continuously revel in disorder, incoherence, and—famously—contradiction.

[2] A conversation about the New York Times webtext “Snow Fall” on the TechRhet listserv (2012) captures some of the technical demands of truly innovative and boundary-pushing digital production. My point is that I’m guessing Whitman would not have been content to publish Leaves in a traditional way, digital or otherwise, if he were at the forefront of experimentation in today’s culture. And in trading book for computer, he would have encountered a host of logistical difficulties. As Hayles (2004) notes, “Whereas computers struggle to remain viable for a decade, books maintain backward compatibility for hundreds of years” (84).

[3] I owe this point to Jonathan Zittrain. In The Future of the Internet and How to Stop It he makes a distinction between sterile media, which includes computers that are treated as appliances, and generative media that invite tinkering.

[4] For example, it takes Kirschenbaum nearly 50 pages of close reading and deep, sleuth-like research to root out the forensic materiality of the early computer game Mystery House. He draws on specialized knowledge and a willingness to participate in a discourse that might be unfamiliar to humanist readers. Such effort is not readily encouraged—indeed, it is increasingly discouraged—both in everyday digital media use and in many humanities classrooms.

[5] This attitude marked a departure from previous experiments with packet switching in the U.S. When he was introduced to Paul Baran’s work on packet switching (Baran called this the “Distributed Adaptive Message Block Network” [Abbate 29]), Davies felt the Cold War logic of “highly connected networks” was unnecessary: “If the watchword for Baran was survivability, the priority for Davies was interactive computing” (Abbate 34). Davies accordingly spent a great deal of time designing the interface for his communications network: “The NPL designers therefore focused mainly on providing an easy-to-use terminal interface to the network” (Abbate 42).

Works Cited
Abbate, Janet. Inventing the Internet. Cambridge: MIT Press, 2000. Print.

Bennett, Jane. “The Force of Things: Steps toward an Ecology of Matter.” Political Theory 32.3 (2004): 347–372. JSTOR. Web. 10 Sep. 2013.

---. Vibrant Matter: A Political Ecology of Things. Durham: Duke UP, 2010. Print.

Brodkin, Jon. “How a Single DMCA Notice Took Down 1.45 Million Education Blogs.” Ars Technica. 15 Oct. 2012. Web. 10 Sep. 2013.

Folsom, Ed, and Kenneth Price. “Re-Scripting Walt Whitman.” The Walt Whitman Archive. n.d. Web. 10 Sep. 2013.

Hayles, N. Katherine. Writing Machines. Cambridge: MIT Press, 2002. Print.

Kirschenbaum, Matthew. Mechanisms. Cambridge: MIT Press, 2008. Print.

Latour, Bruno. “Where Are the Missing Masses? The Sociology of a Few Mundane Artifacts.” Shaping Technology/Building Society: Studies in Sociotechnical Change. Cambridge: MIT Press, 1992. 225–258. PDF file.

Nowviskie, Bethany. “Resistance in the Materials.” Modern Language Association Convention. Boston. 4 Jan. 2013. Web. 10 Sep. 2013.

Sayers, Jentery. Dissertation abstract. Dec. 2010. Web. 10 Sep. 2013.

University of Pennsylvania. “John W. Mauchly and the Development of the ENIAC Computer: Technical Description of the ENIAC.” Van Pelt Library. n.d. Web. 10 Sep. 2013.

Whitman, Walt. The Portable Walt Whitman. Ed. Michael Warner. New York: Penguin, 2003. Print.

Posted in commentary, professional stuff

Lit Misbehaving: a special session at #MLA14

I’m really happy that a session I (well, WE) proposed for the 2014 MLA convention has been accepted. It was a decent amount of work, so it’s a relief to see that the idea will materialize! Keep an eye out for “Lit Misbehaving: Responding to New and Changing Modes of Creative Production” if you’re at MLA this year. I’d love to see you there. Here’s the full proposal:

Through engaging and performing with new modes of digital creativity and authorship, the speakers on this panel seek to continue the ongoing work of venturing into potentially unfamiliar realms of textual production. Hybrid print/video essays, games created in online communities, and algorithmic Twitter updates: these are undisciplined texts. Their misbehavior asks us to reevaluate assumptions about what it means to have agency and a voice in nascent and vulnerable digital authoring spaces. Panels at last year’s MLA Convention and recent digital humanities discussions have attended to online publishing and computer-based methods of analysis and research. However, the evolution of creative expression takes many forms that (because they might feel different or discomforting) risk being overlooked by scholars of literature and media. Furthermore, theorizing about emerging digital modes rarely happens in these very modes. By bringing new media artistic and literary practice together with alternative scholarship, this session hopes to disturb traditional print-centric notions of how texts and their authors can and should express themselves. In turn, such disturbance may unsettle genre conventions, publishing processes, readerly expectations, and gender norms, and so give credence to much-debated principles of the digital humanities: DIY making, hacking, play, and experimentation.

Tensions in discussions of the digital humanities reflect the shifting boundaries evoked by and materialized through new modes of expression. In an opinion piece published in The New York Times in 2012, Stanley Fish traces a rigid line between traditional critical work and digital humanities work, concluding with a concatenation of oppositions “between what is relevant and what is noise, between what is serious and what is mere play.” Fish’s distinctions, though contentious, are useful because they articulate resonant and often unspoken biases that leak into thinking about texts. An important goal of this session is to bring marginalized and underrepresented voices (human or otherwise) into focus, not by disciplining them, but by celebrating their misbehavior and putting them in dialogue with past literary and artistic experiments.

The distinction between “what is serious and what is mere play” quakes when we examine the Twine platform: a tool for building choice-driven stories easily publishable to the web. Twine’s code-free interface, combined with its flexibility of extension by more experienced coders, makes it a novice-friendly tool, and its free open-source status removes it from some of the associations of privilege that more traditional and expensive game development tools bear. As Anastasia Salter argues, works built in Twine hearken back to early electronic literature, such as HyperCard and Eastgate hypertext novels, but their relationship with these established digital forms is not straightforward. Twine creations have emerged into a communal space outside of both “indie” gaming and electronic literature. Several of these works (particularly Nora Last’s “Here’s Your Rape” and Anna Anthropy’s “Escape from the lesbian gaze”) invite a feminist reading, both in the marginalized voices and experiences they bring to play and in the very agency expressed through their creation. The “Twine revolution” rejects any limitations about appropriate subjects for play and games, fusing forms and techniques from electronic literature with narratives that reconstruct the female game character through text, shifting agency to new and previously unheard voices.

Daniel Anderson turns to another expressive form with an uncertain status: the hybrid essay. Continuing his work as a researcher and practitioner of digital and cross-modal composition, Anderson recently has experimented with essays that feature videos composed in concert with text intended for print publication. The experimentation tests the boundaries between print and video modes, highlighting a challenge associated with transformative digital texts: the implicit nature of their arguments. Texts featuring digital affect (Murray), resonating with ambience (Rickert; Morton), or representing digital ontologies (Bogost) may strike us as odd simply because they are cast through unfamiliar materials. Discomfort with digital texts, Anderson suggests, marks less a failure to deliver a message and more a successful intervention in the unspoken academic approaches codified through the communicative ease of print. As a case study, Anderson takes a forthcoming, published hybrid print/video essay to bring to light tensions between implicit and explicit registers of expression across print and digital modes. His presentation will feature scholarly/poetic video artifacts, ultimately proposing (and enacting) an alt-scholarship conveyed through visual, sonic, and verbal registers.

Extending the session’s focus on marginalized voices and contentious or peripheral modes of text production, Zach Whalen takes up a topic that has received almost no critical attention: Twitter-based literary creativity. Returning to Fish’s distinction “between what is relevant and what is noise,” Whalen’s treatment of tweetbots and text art shows that the noise itself is often composed and reflects networked instantiations that resist typical hierarchies and binaries. From @horse_ebooks to @stealth_mountain, many of the voices filling timelines are produced by algorithms, generating text in complex collaboration with datasets and distant authors. Complementary projects like @crashtxt explore the boundaries of expression in 140 characters—as well as some unstated ideological boundaries—by inviting anyone to create tweets from a restricted palette of non-Western and non-alphanumeric glyphs. These fragile, visual texts rely on Unicode characters that may not be supported by all browsers and devices, and such fragility underscores their aesthetics with the same machine-dependence inherent to _ebooks-style bots. From constraint-based production to Oulipian and Dadaist expressions of machine-driven aesthetics, Whalen excavates Twitter as a textual and typographic performance space, tracing antecedents in the resistive literary epistemology of electronic literature.

Stuart Moulthrop, the session respondent, will facilitate audience discussion. Moulthrop writes and teaches about gaming and digital media, and he has been a central figure in electronic literature for more than two decades. His perspective will help us think through the question of how new and changing modes of creative practice and scholarship are unsettling the boundary lines that Fish and others reference when determining worth. This session is committed to acts of opening, to interdisciplinary experimentation, to validating fresh objects of study, to alternative modes of scholarly production, and (perhaps necessarily) even to unruliness.

Panelists and working titles of their presentations:

  • Anastasia Salter (Assistant Professor of Science, Information Arts and Technologies, University of Baltimore) “Bonfires, Lesbians, Depression and Rape: Twine, Feminist Voices and Agency in Game Narratives”
  • Daniel Anderson (Professor of English and Comparative Literature, Director of the Studio for Instructional Technology and English Studies, University of North Carolina) “Turn up the Opacity: Discussing Discomfort with Digital Modes”
  • Zach Whalen (Assistant Professor of English, Linguistics and Communication, University of Mary Washington) “_ebooks, Typography and Twitter Art”
Posted in professional stuff

Between WordPress and a Hard Place

I’m sharing the text of the presentation I gave today at CCCC. The title is “Between WordPress and a Hard Place,” and I was part of a panel offering critical perspectives on the Course Management System (CMS) in higher education. I wrote this to be spoken, so please forgive any coarse generalizing or informal prose! I’d love to know your thoughts.

The title of my talk implies a dilemma. A difficult situation. A tough call. In fact, my partial migration away from the university-owned Course Management System (CMS) has indeed been a tough call, and I’ve paid the price of many hours spent searching for and familiarizing myself with alternatives. My students have paid the price of test-driving the Frankenstein-ish CMS set-ups I’ve decided on each semester. In the first-year writing and introductory literature courses I teach, both online and face-to-face, I’ve been willing to experiment with different technologies and juggle commitments to my employers, my students, and myself — yet I continue to feel caught “in between” these commitments.

These two tweets I composed while planning my courses last winter reflect one of the many quandaries I’ve encountered over the years: take a risk, or play it safe? More control, or less control? Pledge to open source, or pledge to making my life easier? I feel torn between WordPress (or WP) and the university CMS (usually proprietary), caught between these two value-laden technologies. The edupunk and “hacking the academy” movements have tried to rally educators to choose the values associated with WordPress and essentially abandon those associated with the university CMS. But are those choices really so clear-cut? And how do college instructors–especially those with contingent status–negotiate conundrums and conflicting commitments amidst calls to “edupunk your CMS” and, more broadly, calls to be tech-savvy trailblazers?

Educational technology guru Jim Groom, who coined the term “edupunk,” has been notoriously slippery about pinning down a definition, but he does compare edupunk to the DIY movement and the home gardening/farming movement. [1] The New York Times defines edupunks as “high-tech do-it-yourself educators who skirt traditional structures,” and Wired magazine recently defined edupunk as “avoiding mainstream teaching tools like Powerpoint and Blackboard.” The edupunk movement has been celebrated as a form of activism, but I argue that it actually works on some level to reify the tensions I feel and oversimplify the issues.

If there’s one thing I’ve learned, and if there’s one thing I hope you’ll take away from my talk, it’s the perhaps less-than-profound realization that any CMS is both a gateway and a gatekeeper. A rock isn’t desirable or fun, unless you find yourself in a heated match of paper-scissors-rock, during which your opponent opts for scissors. A hard place is equally un-fun, unless you’ve just completed a terrifying skydive from 14,000 feet. In other words, both rocks and hard places have problems. Of course there are more than just two alternatives. But for the purpose of narrowing down my discussion, I want to focus on what I’ll call the WordPress vs. “hard place” debate, which is what I have the most experience with. I broadly distinguish two types of course management systems: proprietary and open source, with the former rhetorically framed as promoting closed, cookie-cutter learning and the latter as promoting open, generative learning. On the side of WordPress, I locate systems like Moodle, Sakai, and Drupal, and on “the hard place” side I locate the “university CMS,” which for many people might be Blackboard but for me is Desire2Learn (or D2L), Blackboard’s major competitor. Allow me to bracket Ning and PBworks here, since they are neither open source nor university-controlled–and, as I’ll discuss in a minute, the two CMS types I’ve just delineated don’t actually bifurcate so neatly anyway.

I want to stress that the redundancy or synonymy of the two options implied in my title (a rock and a hard place) says something more than simply “no CMS is perfect.” Both WordPress and the university CMS are two breeds of the same species: technology that both creates and contains possibilities. In this way, then, they are comparable to any learning technology that has come before. They both mediate knowledge and shape the process through which individuals collect and construct and co-construct knowledge.

The claim I’m making here is more than theoretical flag waving; it’s a critical realization that the relationship I have with my course’s web space is a choice. I am not choosing a channel or a utility, like selecting a cell phone service provider; rather, I am declaring my pedagogical relationship to knowledge. When I began teaching college writing, I treated the CMS as a mere utility or information channel. To set the scene: it’s 2004. The Blackboard precursor WebCT (web course tool) is en vogue. I jump right in, awkwardly manipulating and customizing navigation options, swooning over WebCT’s cumbersome easiness, and feeling proud of my adventuresome spirit all the while. I am seriously amazed at the convenience of having an online space to assist me with the administrative duties of teaching. But gradually I begin to realize through my own observations, and also through becoming attuned to a surge of discussion on Twitter and academic blogs around 2008, that my course site is not just a supplement to “real” classroom activities. I begin to intuit McLuhan’s dictum that “The use of any kind of medium or extension of man alters the patterns of interdependence among people, as it alters the ratios among our senses” (McLuhan 90). Both WordPress and the university CMS are tools that foreshadow and forestall possibilities for writing online, whether we are conscious of it or not. And in this way, both WP and the university CMS are much more than tools.

Here are some of the characteristics or values that we could associate with WordPress in opposition to hard places.

1. Proprietary vs. open source (free), which leads to another “open”…

There is a lot to be said here about the legal and ethical tangles that hazily differentiate proprietary and open source software. One problem is the common pairing of “free” and “open source.” These are actually different software movements with different histories, and neither one means that the software is free in the sense of no price. The free software movement is at root a code of ethics, whereas open source is a license and a development model. [2] Jim Groom and the founder of the free software movement, Richard Stallman, have both convincingly shown that to presuppose any affinity between “open source” and ethical software or effective pedagogy is a mistake. Stallman asserts that a given program or web platform “might be open source and use the open source development model, but it won’t be free software [unless it] respects the freedom of the users that actually run it.”  Groom echoes Stallman’s distinction by pointing out that although Sakai and Moodle adopt “an open source model,” still “they smack of an outdated model of ownership, control, and management—which makes them administrative tools, not learning tools.” Like a wolf in sheep’s clothing, open source platforms like Moodle and Sakai can be deceptively good at embodying icky values.

Thus, it becomes important to know what brand of mediation we’re sponsoring when we choose a supposedly open source CMS. I’ve talked to many people who assume that open source means zero cost, even though the labors of volunteers on the open source Debian project, for example, would total about 19 billion dollars by one recent (Feb. 2012) estimate. I’ve also talked to people who can’t say for sure whether Ning and PBworks are open source or not, even though they use these platforms for many of the reasons that lie behind the free and open source software movements.

2. closed (private) vs. open (public) learning, supported by…

One critique of the university CMS is that it locks up student writing in an artificial space. I think that’s true, but I would add (and Jenn Marlow mentioned this too in her presentation) that it also locks up the instructor’s writing: my discussion comments, prompts, assignment sheets, and resources. This aspect of choosing a CMS is particularly relevant to the transitional status of graduate students. As Dave Parry points out, “If you are hosting your own [CMS], i.e. not on the university’s servers, you own your course material, making it easier to take with you when you go.” Moreover, since I’ll be looking for jobs with a digital-humanities-type focus, I want to demonstrate my competency with web sites other than D2L and Blackboard. If edupunk is “about a culture, a way of thinking, a philosophy” [3], then one reason to keep my materials public on my own website, where anyone can access them, is not only to create an archive of the courses I’ve taught but also to perform the values I express in my teaching statement.

3. commercial vs. community led development, resulting in projects that are…

The commercial vs. community led development binary problematically grafts onto other pairings from my list here. The open source CMS then becomes generalized to mean participatory and inclined toward student engagement and collaboration, while the proprietary platform becomes a silent, teacher-centric study hall. That value transfer is reductive and not universally true.

4. standardized vs. tailored / custom / DIY, often appealing to the…

Darin Payne captures the standardized vs. tailored binary in his 2005 College English article when he writes, “Because Blackboard is a one-size-fits-all product for mass consumption, the assessment practices it enables are not tailored for writing classes” (499). But, more broadly than assessment practices, many feel that the ontology of the whole CMS package–its DNA–is contrary to the critical thinking and creativity that many humanities courses value. Yet, to some extent, writing classes have always had one-size-fits-all elements: the technology of a curriculum itself, for example. The goals and outcomes we use to assess final portfolios at my institution are arguably “a one-size-fits-all product for mass consumption,” as all instructors and students across hundreds of English 101 and 102 sections must shape their classes around these goals. What would it mean for a graduate teaching assistant to commit to a tailored / custom / DIY CMS approach within a rather standardized writing program like UWM’s? Again, there is a tension here that often goes unacknowledged. In this respect, the edupunk and hacking the academy movements have been somewhat contradictory in attesting that “it’s not the technology but what we do with it that matters” while at the same time issuing a full-on assault against any proprietary CMS.

5. web novice vs. web expert

In this last binary of web novice vs. web expert, which we might align with student vs. teacher, I see myself caught in between, as a student/teacher hybrid. I am not in the novice category of instructors who, as Lisa Lane puts it, “happily use the high-tech CMS as a glorified copy machine.” But I am also not in a position (at least, without giving myself serious stress) to take big risks like debugging quirky WordPress plug-ins, as Joseph Ugoretz describes doing in Part 2 of a great series of blog posts on using WordPress (and only WordPress) for an online-only course he taught.

The ideological trope of the risk-taking hacker and tech-savvy academic gains a great deal of power when it’s mapped onto other values and characteristics on my list here. In her 2011 article in Computers and Composition, Virginia Anderson clearly defines the problems with this trope of the tech-savvy academic who “hacks through the wilderness toward grand vistas,” the trailblazer whose “primary duty [is to open] new territory with the goal of seeing what’s there; after all, the explorer’s task is not to hold the hands of those who follow but to give them somewhere to follow to” (136). She argues that class-based benefits of time, experience, and (to some extent) financial resources determine who is in a position to be a web expert and “edupunk their CMS” in the first place.

Her critique of instructors on the “bright side of the [digital] divide” (125) adds another layer of complexity to my schema: class. The very predicament of being caught between WordPress and a hard place seems like a luxury. I think of Anderson’s concept of taxation when I read Parry’s promotion of WordPress as a CMS. While WordPress is not exactly difficult, it does take some level of expertise to install the plug-ins and make the structural changes that will adapt the platform to be a CMS. In his well-circulated ProfHacker post, which I mentioned earlier, Parry concedes that while “WordPress has a learning curve, once you invest a little bit of time it is actually exponentially easier to use than Blackboard.” Anderson might be wary of the implications behind this encouragement to just invest a little time. She calls this a “tax”: a toll, “however small, imposed by technological culture as it asks users to invest more and more time and effort to perform necessary tasks. Each change, with the commitment it demands, carries a message about the value of a user’s time” (132).

The complex issues I’ve sketched here ought to reveal the troublesome shadow cast by any rationale that rules out one type of technology only because it does not seem to exhibit a certain set of values. In affiliating the university CMS (what I’ve been calling the “hard place”) with values on the left side, it becomes easy–too easy–to associate WordPress (or similar platforms) with values on the right side.  But both options manage learning, despite Matt Gold’s overly broad claim that anything called a Learning Management System starts off on the wrong foot because “Learning is not something that can be ‘managed’ via a ‘system.’” After all, even if the instructor has designed and customized a system–even if he/she built it from the ground up–it is still a system that is managing learning in some way,

whether we’re talking about this paper or that paper,
this wordpress or that wordpress,
this blackboard or that blackboard.

Even a non-alphabetic oral culture deploys learning technologies: story-lines, clichés, rhymes, and rhetorical devices are, in a sense, oral learning management systems. [4] They are not by default less intrusive or less interfering because they are non-digital. [5] One of the consequences of the backlash against the corporatized, mass-marketed CMS is a radical and, in many cases, knee-jerk response to anything affiliated with values on the left side of the binary schema I laid out in the previous slides.

So, where does that leave me? Recently, I have settled on a hybrid system for both my online and face-to-face classes: I use a self-hosted WordPress installation or a PBworks wiki for weekly announcements, mini-lectures, assignment guidelines, a hyperlinked version of my syllabus, and miscellaneous tidbits, jokes, or photos. I use the proprietary university CMS for things that are not exactly my property: recording student grades and attendance, and uploading copyrighted readings. In my online classes, I have not yet found a suitable open-source “web two point oh-ey” alternative for discussions, so last semester those happened behind closed doors. This semester, I am using Ning for discussions. It’s not true that a CMS divided against itself cannot stand. But it is harder to support. And it is harder for students to figure out.

Ultimately, I believe this sort of hybrid learning management system satisfies a number of my goals and puts me in a position of straddling some of these ideologies as I and my students work betwixt and between WordPress and the proprietary CMS.

[1] This is according to Groom’s now-classic posts, “The Glass Bees” and “Permapunk.” It’s important to note that, more recently (Feb. 2011), the popular adoption of the term “edupunk” and its activist tone have driven Groom to announce his break-up with the word.

[2] This is certainly up for debate, and I know there is disagreement between members of each community regarding how to define the difference between free software and open source software (or whether it’s worth defining a difference). The statement here is my cursory summary of Stallman’s somewhat vague definition in “Why Open Source Misses the Point of Free Software.”

[3] I take this phrasing from D’Arcy Norman’s post on edupunk.

[4] See Ong, Walter J. Orality and Literacy: The Technologizing of the Word. New York: Methuen, 1982. Print.

[5] This idea is an off-shoot of Carlo Scannella’s comment on Gold’s post. Scannella writes: “It seems to me, way back, before the days of computers, we always had a “learning managing system.” It was a classroom, and the lesson planner, and the folders my teachers kept all our essays in, and all the other artifacts that made up the learning experience.” Gold does address this comment, but I don’t feel he gives it the attention it deserves; the issue Scannella raises is really key.

Posted in commentary

Final Paper

Here is my final paper, entitled “Mediating Memories” (PDF 1.9MB). You can also read through this Scribd document if you don’t want to download the PDF.

Posted in ENG742

Final Project: A Re-mediation (and Re-vision) of my Term Paper

Hi classmates. Here is the link to my web-text. If possible, view the site in Firefox, Safari, or Chrome; I have not tested it in Internet Explorer. Also, if you see a warning that a pop-up window was blocked, please temporarily disable your pop-up blocker.

Mediated Memories: Final Project

Posted in ENG742

An Imaginary Lecture on Media Culture

Week 13: Post 2 of 3
Imagine that you were to give a lecture on media culture to an undergrad class. To work toward such a lecture, identify two or three repeated themes that run through some majority of our class readings: in your writing, identify the themes and trace each through the readings, chronologically, identifying shifts and changes and contemplating why there would be such shifts. Discuss why you chose the themes you did: why does each theme stand out for you such that you think it should be emphasized in a lecture to undergrads, and how should the theme shape their thinking about (their engagements with) media?

I would focus on the issue of media, influence, and behavior. I see three different levels of influence: [1] media determine human behavior, [2] media influence behavior but don’t determine it, and [3] media simply support behavior with minimal influence. I would probably take Facebook as a specific example, since this is a platform almost all undergraduates use, and I think all the theories we have read could apply to Facebook in some way. I would want the audience to understand how a medium like Facebook is all three things at once: it does in some ways totally limit or determine our behavior; it does in some ways influence our behavior, while still giving choices; it does in some ways act as a tool we use and control. We could look at Facebook through all three overlapping lenses, like a Venn diagram.

The earliest work we read was Walter Benjamin, and he is a great example for the theme I chose. He certainly recognizes changes in behavior due to photography and cinema as technologies of mass reproduction. His main question is about the spectator and how the desire for an “equipment-free aspect of reality” (35) changes art and our relationship with tradition. Once the nature of art changes in response to mass reproducibility, I think Benjamin is saying that other things (war, attention, space) must also be changing. He describes new behavior patterns of the masses as “symptoms” (41), as though the technology is an ailment. However, Benjamin is not a clear determinist, since he does imply that watching movies is a necessary part of being trained to live in a capitalist society. Obviously, he worries when war becomes aestheticized, and when aura is abolished not just in original artworks but also in human life.

Adorno and Horkheimer come next, and they seem more deterministic in their theory of the culture industry. As Ingrid and I explored in our collaborative analysis, they seem very disappointed in the way that media shape human behavior, and they don’t see many bright spots. Debord contends with a similar theme in discussing the “society of the spectacle” and how humans succumb to representations. In many ways, I see connections here with Barthes and mythology. The intersection between Debord and Barthes is language. When human language is leveraged as a distorting technology, and humans no longer have control over their language as it relates to specific practices and actions, then their behavior becomes totally controlled by media. This seems to be the argument in both Debord and Barthes, though I find Barthes more convincing since he gives more concrete examples and diagrams the break-down (or appropriation) of Saussure’s sign. We can literally see how the influence happens, and how behavior might be constructed in and through media representations.

Then, as the chronology continues, there are more openings for the idea that media influence but don’t exactly determine human behavior. I think Enzensberger and Baudrillard fit in this category. Raymond Williams explicitly works against techno-determinism; he sees media as “applied technology or a set of emphases and responses within the determining limits and pressures of industrial capitalist society” (299). This is a complex statement that delicately walks a line between the two extremes [1] and [2] I laid out at the beginning of my post. Williams provides a nice counterweight to Kittler, who certainly believes that media determine behavior. Kittler’s analysis of typewriter literacy clearly places the agency with the machine, and human cognition and behavior collapse into the mold of the technological structure.

McLuhan would be an in-between theorist. Absolutely, he has shades of determinism, but I disagree with clean-cut claims that assign him to the techno-determinist camp. According to him, the media that we use—“extensions” of ourselves—are powerful influences, but we can change the course. He warns that we live in a state of “inattention and unawareness of the situation” (92) of media. Technological prostheses alter sense perceptions, restructure human consciousness, shape social norms, and disrupt political and economic patterns. McLuhan encourages us to start seeing again and to understand media in order to gain some control over the way technology arranges human activity and (re)defines what it means to be human. McLuhan’s apparent technological determinism is tempered by the hope of breaking the spell of numbness and redeeming human autonomy. Media change thus amounts to a power struggle between humans and their technologies. But humans can participate in this struggle.

Posted in ENG742

Media and Mediation

Week 13: Post 1 of 3
When you use “media” now, what do you understand by the term? What do you understand by “mediation”? How do you understand technology relative to media?

My response to this prompt will necessarily be shaped by some of the readings I have done for Richard Grusin’s “Theories of Mediation” class this semester, so I should say that first. It has been useful to take both classes (“Media Culture” and “Theories of Mediation”) at the same time, since the former focuses more on the cultural implications of media, and the latter focuses on different theories of the actual process of mediation. In other words, media culture seems to be about how people use media and are also influenced by media. In this class, we have examined the disruptive effects of media when they are first introduced into society, and then how society absorbs or copes with those effects. Looking back at the Frankfurt school theorists, they were trying to understand how mass production and capitalism were becoming a principle or invisible pattern that mediated human relationships. In the readings on mass media such as television and radio, again we looked at how the medium changed a pattern of interaction (directing it towards more passivity and isolation) and in many ways restructured the individual’s relation to reality. However, some of the readings tried to theorize how the process of mediation actually happens. How is it that media shape reality, and how do they actually come to affect us? McLuhan and Barthes really tried to understand the mediation process itself, rather than observing cultural phenomena and drawing conclusions. (I know this is broad, but I do see some distinction here.) I use the word “media” now to describe mass media such as television and networked media such as the Internet.

Technology, on the other hand, is more about machines – devices with working parts. I see the Internet as media and the computer as technology. Also, I think “media” implies cultural circulation and ripples amongst humans. “Technology,” by contrast, could be something animals use, so for me the term implies more of a tool than “media” does, though just as media can mediate, I certainly believe that technology can technologize. With “media” (compared to “technology”), human relationships come to mind as soon as I think of the word.

What about the word “mediation”? In the “Theories of Mediation” class with Grusin, all the readings were in the vein of Barthes and McLuhan (and we read Benjamin as well, who is also trying to understand how mediation happens, I think). In other words, all the readings were specifically trying to theorize the process or activity that happens between humans and reality, as humans try to come to terms with that reality, even to the point of a collapsed distinction between human/non-human. Many of the readings were specifically concerned with theorizing/defining reality itself, accounting for media in the definition. In this class, Latour’s theory really struck me and helped form my current sense of “mediation.” Latour tries to show that, once we get in the habit of using technology to help us do things, we stop paying attention to the mediation process and we stop caring about how the technology works and how it exerts agency. Latour shifts mediation away from the idea of a tool or instrument we use and shows that mediation is “translation”: a process that modifies, that does not leave things as they were. Mediation is not “business as usual.” He tries to map the complexity of mediation and all the foldings that cross between human and non-human actors on a daily basis. When we open and close a desk drawer, start the car, drive over a speed bump, or turn on a computer, we interact with an autonomous mediator. With Latour in mind, I would say mediation is the process of living with ourselves and with other individuals, but under the specific qualification that mediation is a transaction between human and non-human points. Neither point determines the other, but both are influenced (or even blended) as they contact each other every single day. I keep coming back to a quote in the introduction to CTMS. Hansen and Mitchell, in making a departure from Kittler’s techno-determinism, write:

[...] the shift from media as an empirical collection of artifacts and technologies to media as a perspective for understanding allows us to reassert the crucial and highly dynamic role of mediation — social, aesthetic, technical, and (not least) critical — that appears to be suspended by Kittler. (xxii)

The situation of media (a situation that would enclose humans and technologies — a situation that would account for the “dynamic role of mediation”) is addressed in different terms by the theorists I have read in both of my media studies classes this spring. Latour calls it a “labyrinth.” Patrick Crogan calls it a “contemporary techno-culture” (168). Adorno and Horkheimer call it “the system of the culture industry.” Raymond Williams calls it “a complex of technical possibilities” (294) or just a “social complex” (300). Enzensberger calls it “the media industry” or “a universal system” (261). Barthes calls it “the mythical system” (4). Kittler calls it a “mediascape” (13) or “the system of media” (216). In all these terms, the authors are trying to find language for something so big, it’s really hard to think about, let alone talk about: media culture.

Posted in ENG742

iMovie 09 Workshop

On April 1, I hosted an iMovie workshop for English faculty and grad students at UWM. As it turns out, the Mac lab has iMovie 08, and I planned the workshop for iMovie 09… details, details! At any rate, I think the workshop went well overall. Participants learned basic skills that could transfer to all versions of iMovie. Here are the topics we covered:

  • The iMovie interface
  • Importing video
  • Editing video
    • Using the yellow selector box
    • Using the Edit tool
    • Clip Trimmer (called Trim Clip in iMovie 08)
    • Precision Editor (iMovie 09 only)
  • Splitting clips
  • Adding transitions
  • Changing the speed of a clip (iMovie 09 only)
  • Adding still images
  • Adding text
  • Adding and editing audio
  • Crediting sources and choosing a license
  • Exporting the video
  • iMovie 09 advanced features
  • iMovie resources

An instructional handout is uploaded here [PDF file, 1.2 MB]. It has steps and lots of screen captures that any first-time iMovie user could follow to create their first video—though remember, the handout is written for iMovie 09. Feel free to distribute it to students, or you could email me for the editable file. (sullivan.rachael [at] Thanks to everyone who attended the workshop!

(Below, I’ll include some of the resources/links I’ve gathered over the years. Also: credit goes to Ken Stone for his excellent iMovie guide, from which I adapted a few screen captures and some instructions for my own handout.)

Material Licensed for Remixing:
Video footage
Prelinger Archive
FreePlay Music
Free Sound (just sound, no music)
Stock Music for Educators
Flickr (go to Search > Advanced Search > Creative Commons)

Possible Assignments, Student Samples, etc.
“Multimodality in 60” assignment [PDF file] (from Ohio State University, this assignment is harder than it seems)
Bill Wolffʼs gallery of student videos
Bill Wolffʼs “the one” assignment
iMovie Public Service Announcement (PSA) assignment
Using iMovie to Talk about Tragedy
Problems assessing multimodal student work
Fair Use regulations for educators

Know about more resources? Put them in the comments!

Posted in pedagogy

McLuhan on Choice and Media Hybrids

According to Marshall McLuhan, the media that we use—“extensions” of ourselves—are powerful influences. The contemporary trend is to see these prostheses as mere tools. As a consequence, we live in a state of “inattention and unawareness of the situation” (92). Technological prostheses alter sense perceptions, restructure human consciousness, shape social norms, and disrupt political and economic patterns. McLuhan encourages us to start seeing again and to understand media in order to gain some control over the way technology arranges human activity and (re)defines what it means to be human. Understanding begins by studying “media hybrids,” or a medium’s absorption or refashioning of another medium:

The hybrid or the meeting of two media is a moment of truth and revelation from which new form is born. For the parallel between two media holds us on the frontiers between forms that snap us out of the Narcissus-narcosis. The moment of the meeting of media is a moment of freedom and release from the ordinary trance and numbness imposed by them on our senses. (55)

In this passage, McLuhan’s apparent technological determinism is tempered by the hope of breaking the spell of numbness and redeeming human autonomy. Media change thus amounts to a power struggle between humans and their technologies.

The power of media rests in their effects, not in their content or “message.” This point is crucial for anyone who hopes to avert the numbing symptoms of media use, which McLuhan calls “Narcissus-narcosis” in the passage above. In the myth of Narcissus, a youth could only stare, utterly captivated, at his image in the water; he never realized that he had become a “servomechanism” (41) of his own image. According to McLuhan, all media operate in the same way. Once humans adapt to the medium and it becomes transparent (just as Narcissus could only see his image and not the water’s surface), technology has won the reins of power over man. Because of the numbing effects of media, this power is never about content. In fact, it is in the appropriation of one medium by another medium that we realize “the ‘content’ of any medium is always another medium” (8). As humans adopt and adapt to new technologies, they open themselves up to change, and change is the message McLuhan wants to highlight, rather than any content communicated by media. Fully aware that “it is the medium that shapes and controls the scale and form of human association and action” (9), he exposes critical points on the timeline of technological evolution.

McLuhan views the transition from orality to literacy as a particularly sensitive and important moment of change. He examines “the parallel between two media” (55)—speech and writing—to pick apart “their structural components and properties” (49). A key conclusion from this examination is that the literate Western man experiences “fragmentation” or a splintered form of “psychic energy” that causes him to see his cognition and identity as separate from other people and to espouse a world view that mimics the visual uniformity and lineality of alphabetic writing (50). Unlike the “eye man,” the “ear man” of oral cultures views the past and present as a discontinuous mosaic of experience and complex emotion. The ear man would find the sequential cause-and-effect structure of Western logic “quite ridiculous” (86), and he perceives his body as “[including] the whole universe” (124).

In analyzing this “moment of the meeting of media,” McLuhan effectively denaturalizes the familiar medium of writing and creates a “moment of freedom” for us to see (or attempt to see) the conflict that is happening within our selves (55). “Since understanding stops action,” McLuhan writes, this moment is an opportunity to pause at the crossroads of two media and observe their effects. The transformative meeting of orality and writing overhauled human activity. McLuhan believes that if we can understand all media as extensions of ourselves, we could finally be attuned to the fact that “they depend upon us for their interplay and their evolution” (49). Granting human agency, McLuhan encourages readers to intervene and make the choice to understand media.

McLuhan, Marshall. Understanding Media. Cambridge: MIT Press, 1994.

Posted in commentary