Media Theory for the 21st Century

March 11, 2008

Items of Interest

Filed under: General — kknight08 @ 6:49 pm

Hello Everyone,

Scheduling conflicts prevent me from attending class today and talking about these in person, so please forgive the impersonal post. These are a couple of things (one web directory and one event) in which I thought the class participants might be interested.


Phones, Bubbles, Go

I found this week's readings particularly interesting because of the way they resonated with my current areas of research; in particular, Greenfield and Shepard's text on urban ubiquitous computing stood out to me. There are so many great moments in that text that tie together a lot of what I've been studying on my own (the Situationist dérive, personal space metaphors, games, urban screens), as well as what we've been discussing in class (getting lost, pervasive gaming, reactive architecture, among other things). I'm going to draw a lot of loose connections and personal anecdotes now, starting with how Bruce Sterling's discussion of "arphids" and coding resonates strongly with a recent lecture in our department by artist Beatriz da Costa. Her work critically examines many of these same ideas of privacy, consumerism, and mediation in urban contexts. She is even referenced in Greenfield and Shepard's text.
The Michael Bull text (also referenced in "Urban Computing and its Discontents") delves heavily into how we mentally construct private space in public settings. The "personal bubble" metaphor referenced here introduces the concept of alienation, which Bull expands upon in his examination of the "dialectic relationship" between mediating communication technologies and the non-spaces of urban culture. In his discussion of mediated proximity and intimacy, he relies heavily on mobile phones, automobiles, and personal music players to construct his argument. Throughout the text, I kept returning to the thought of social networking websites as an equally perfect metaphor for alienation. Bull never introduces this idea, because his approach to the problem is explicitly from the perspective of aural perception, but I think his thesis works just as well for the social networking paradigm. Our need for intimacy and connection drives our curiosity and willingness to engage in social networks, but ironically causes us to be alienated and alone, in a room at a computer rather than actually doing anything social. To return to mobile phones, I was interested in the idea of privatizing or "colonizing" space that he introduces: "[…] users of mobile phones in the street transform representational space into their own privatized space as they converse with absent others". I had to laugh out loud at this concept, as it reminded me all too well of my new favorite game: "Crazy, or Bluetooth?" Playing this game in Los Angeles is more fun than anywhere else I've ever lived. I'm sure you can infer the rules of play. This idea of "bubbles" also reminded me of an image from one of my all-time favorite animations, René Laloux's 1973 classic, La Planète Sauvage (Fantastic Planet).
[Image: still from La Planète Sauvage showing the aliens' meditation bubbles]
The metaphor of personal bubbles is key to Laloux's story, about a race of humanoid aliens who keep Homo sapiens as pets. The aliens foster a cultural obsession with meditation, visualized above. In meditation, their spirits drift off in bubbles to a forbidden planet to gain "vital energy" through metaphysical intimacy with one another. The film was made well before the pervasiveness of cellular phones, but the similarity to this idea of remote intimacy is really interesting.
There's one other connection I'd like to make, involving Kenichi Fujimoto's concept of the mobile phone as territory machine: "capable of transforming any space—a subway train seat, a grocery store aisle, a street corner—into one's own room and personal paradise" (37). Similar to Bull's text, Mark Shepard quotes Fujimoto in order to discuss the concept of privatizing urban space. I'll spare the diversion into Shepard and Greenfield's criticisms of architectural and media-art attempts to address the potential of pervasive media technologies; that's a whole other discussion. Instead, I'd just like to focus on this idea of "territory", because it's a simple and conceptually beautiful idea that I've been exploring in a lot of my own work. I'd like to draw a connection here to the ancient game of Go as a similar metaphor for territorial procurement, personal space, and ephemerality. In Go, black and white stones are played alternately on a grid. Each player attempts to carve out a certain amount of "territory" for him/herself while simultaneously surrounding and alienating the opponent (occasionally even surrounding opponents' stones and removing them, as sketched in the code below). Go is often used as a metaphor for conflict, or simply appreciated as the visualization of a contest between two minds. For some reason, reading this text kept reminding me of my personal interest in the game: "the iPod becomes a tool for organizing space, time, and the boundaries around the body in public space" (36).
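Since I keep leaning on Go as a spatial metaphor, here is a minimal sketch (my own illustration, nothing from the readings; the names are invented) of how the game's territorial logic can be expressed computationally: a group of stones survives only while it touches at least one empty point, and a group with no such "liberties" is surrounded and removed.

```python
SIZE = 19  # standard Go board dimensions

def group_and_liberties(board, row, col):
    """Flood-fill the group of stones containing (row, col).

    `board` maps (row, col) -> "black" or "white"; empty points are
    simply absent from the dict. Returns the group's stones and the
    empty points adjacent to it (its liberties).
    """
    color = board[(row, col)]
    group, liberties, frontier = set(), set(), [(row, col)]
    while frontier:
        point = frontier.pop()
        if point in group:
            continue
        group.add(point)
        r, c = point
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nr < SIZE and 0 <= nc < SIZE):
                continue
            neighbor = board.get((nr, nc))
            if neighbor is None:
                liberties.add((nr, nc))    # empty point: a liberty
            elif neighbor == color:
                frontier.append((nr, nc))  # same color: same group
    return group, liberties

def capture_if_surrounded(board, row, col):
    """Remove the group at (row, col) if it has no liberties left."""
    group, liberties = group_and_liberties(board, row, col)
    if not liberties:
        for point in group:
            del board[point]
    return not liberties
```

The alienation Bull describes maps neatly onto the capture rule: a group cut off from all open space is literally removed from the board.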
The connection isn't terribly strong, I know, but in my mind I'm drawing a lot of loose relationships between Go, cellular automata and complexity science, movement patterns of people in public spaces, and the readings' discussion of alienating technologies and private space in urban settings. Hopefully we can expand on this more tomorrow.

March 1, 2008

Wendy Chun on Memory

Filed under: Discussion,General — mkontopoulos @ 7:43 pm

http://eda.ucla.edu/?id=506

Our department has put this lecture online. I'm not sure if any of you got to go to it (I was sick that day), but if you have RealPlayer you can stream it. VLC will play the audio, but not the video. I highly recommend it!

If you think about it, the media we use in the present is changing and evolving at a rate much faster than any kind of critical discourse can keep up with. This becomes evident in the increasing importance of documentation, to the point where it often usurps the idea of the new media object itself. Chun quotes Lovink's and McKenzie Wark's ideas of "theory as event" here, in order to argue that new media theory itself must change to a model that better accommodates the speed of change. As an example, she challenges anybody to write a legitimate analysis of a Facebook entry at any given moment.
This idea of speed is complicated by memory. If much new media theory is about media that no longer exists, then memory, not speed, is a better model for considering a change in new media theory. We have to "get past Paul Virilio". Vannevar Bush's Memex introduced this idea long ago, in his essay "As We May Think". It's a bit too naïve to assume that the internet truly represents the notion of "always there-ness" that we apply to it: nothing is lost, everything is archived. Because we believe that everything is retrievable and accessible, we grant ourselves the privilege of forgetting. According to Chun, the problem of forgetting becomes complicated as one medium becomes the memory for another. This leads her to a discussion of "moving memory"; here, Chun draws a careful distinction between storage and memory, storage signifying a physical archive (this is where she draws on Bush's Memex) and digital memory being ephemeral. So a project like the Internet Archive's Wayback Machine introduces the idea of an archive of the internet, but again in terms of memory and not storage. Links break and files disappear; the archive is full of 'skeletons' of web pages. Are we in a "digital dark age"? This question totally collapses my understanding of memory. The internet, in which we can find any piece of information, in which all actions – positive and negative – are recorded, in which elections are basically decided, discourse is cataloged on blogs, and identities are constructed, has no cultural memory in itself. So I'm thinking: when all the world's nuclear weapons detonate or we plunge into global climate change on an apocalyptic scale (whichever happens first, really), the next civilization that springs out of the ashes will locate little to none of our cultural memory, because those memories were, in a sense, ephemeral.
Chun's basic thesis is that we need to place more emphasis on grasping a present which is constantly degenerating. Our notions of storage and memory are linked to repetition and retrieval, which are simultaneously linked to forgetting, but also to disseminating knowledge. It is this problematic dichotomy that she introduces in order to get us thinking and asking questions about the way we analyze the present and attempt to archive it. By problematizing the notion of memory, Chun sets the stage for a necessary shift in the way we think about new media theory.

February 20, 2008

Database and Recommendation

Filed under: Discussion,General — mkontopoulos @ 8:40 am

This doesn't have much to do with aesthetics, but last night I started playing around with Pandora, a website for creating custom radio stations based on personal taste. Since I've been really into Curtis Mayfield lately, I started by typing in his name. Pandora pulled up a song by Curtis, and then proceeded to select songs that were similar to the first, based on several hundred predetermined criteria. By rating several songs in a row, Pandora was able, fairly quickly and accurately, to configure a radio station based on my impulsive desire to listen to soul music.
If you read a bit deeper into the Pandora website, you'll find that their archive is based on what is called the "Music Genome Project". Launched in 2000 by Will Glaser, Jon Kraft, and Tim Westergren, the project's intent was to "capture the essence of music at the fundamental level". Ultimately, these three music aficionados developed over 400 attributes with which to algorithmically analyze a database of songs. I am aware that the algorithmic prediction of taste is not new, as seen in Amazon's or Netflix's recommendation engines. What is striking about Pandora, however, is that it bases its selection criteria not only on user feedback, but also largely on analysis of the song itself for things like "bebop qualities", "heavy use of funk samples", or something as specific as "subtle use of harmonica". I find its success in pleasing my musical needs to be equal parts advantageous and creepy.
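To make the mechanism concrete, here is a rough sketch of how attribute-based recommendation of this kind might work. Pandora's actual algorithm and attribute list are proprietary, so the attribute names, scores, and songs below are invented for illustration: each song is a vector of scored attributes, and the station plays whatever scores closest to the seed.

```python
import math

def similarity(a, b):
    """Cosine similarity between two songs' attribute vectors (dicts)."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented attribute scores, standing in for the Genome's ~400 criteria.
seed = {"funk_samples": 0.9, "bebop_quality": 0.2, "subtle_harmonica": 0.1}
catalog = {
    "Soul Track": {"funk_samples": 0.8, "bebop_quality": 0.3, "subtle_harmonica": 0.2},
    "Jazz Track": {"funk_samples": 0.1, "bebop_quality": 0.9, "subtle_harmonica": 0.0},
}

# Play the catalog song most similar to the seed; thumbs-up/down ratings
# could then nudge the seed vector toward or away from what was played.
best = max(catalog, key=lambda name: similarity(seed, catalog[name]))
print(best)  # -> "Soul Track"
```

The point of the sketch is simply that once songs live in a database as attribute vectors, "taste" becomes a distance measure that feedback can tune over time.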
The obvious connection point here is to the Human Genome Project. In considering human DNA as a database, as Vesna points out early in her text, we open up a whole new way of thinking about our bodies, especially in learning that we are 99.9% genetically similar. I'm similarly drawn to the Music Genome Project for the way it makes a database out of a form of expression as ancient and primal as music, breaking down the differences between styles into such a (relatively) small number of criteria; not to mention the questions this raises about disembodiment and the removal of subjectivity in an age of algorithms predicting our desires…
Christiane Paul writes: "What distinguishes digital databases from their analog predecessors is their inherent possibility for the retrieval and filtering of data in multiple ways" (96). With what Zach so eloquently described as "utopian flair", Paul seems to praise the digital database's "limitless possibilities". To parrot Zach's discussion of exclusion, I'd like to suggest that the exclusion of certain songs in Pandora versus the positive rating of others does, in fact, construct a narrative of personal taste, based on an enormous range of choices that is algorithmically narrowed over time, per station. Here, I identify Pandora very strongly with Lindsay's discussion of machinima, in which the range of gaming options constitutes a sort of database while individual play represents a type of narrative.
What is excluded from many of these texts, however, is consideration of the issues raised by narrative/database and the politics of algorithmic recommendation and prediction. I find disturbing the extent to which software now attempts to finish my thoughts and predict my desires. In some cases, like Pandora, this can be useful, but in most others it takes on a more political agenda, like Gmail pushing sponsored links based on the text of my personal email, or Amazon attempting to sell me products based on my casual searches. To what extent do we forfeit agency when we cede more control to the predictions of algorithms?

February 15, 2008

Embodiment in Performance Art Systems

Filed under: Discussion,General,New media art — mkontopoulos @ 5:46 am

Ok, now that all has been fixed with my author status, I can re-post my entry from last week. Sorry it wasn’t up on time:

This week’s readings presented multiple ways of considering issues of embodiment; physical and virtual spaces, avatars, notions of absence or mediated removal, the physicality of pre-cinematic devices are a few that come to mind. I found it difficult to generate an overarching thesis that was any more focused than generally agreeing that new media and digital technologies change the way we perceive our bodies and our roles and relationships to space and one another (not to mention, art).

I'd like to present two new media art works that, I believe, serve as interesting complements to one another, and will probably generate some interesting class discussion as a result. The first piece, Very Nervous System, is a performance system developed in the early 90s by the celebrated media artist David Rokeby. Our classmates who study dance will no doubt find this interesting, assuming they haven't seen it already. In VNS, the bodily gestures of a participant are observed by a camera and translated in real time into a generative musical composition with a slight amount of randomness. On his website, Rokeby cites a variety of inspirations for developing this system: "Because the computer is purely logical, the language of interaction should strive to be intuitive. Because the computer removes you from your body, the body should be strongly engaged. Because the computer's activity takes place on the tiny playing fields of integrated circuits, the encounter with the computer should take place in human-scaled physical space."
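As a rough illustration of the kind of mapping VNS performs (this is my own sketch, not Rokeby's code, and the thresholds and scale are invented), the system can be reduced to two steps: measure how much the camera image changes between frames, then turn that amount of movement into note choices seasoned with a little randomness.

```python
import random

def motion_amount(prev_frame, curr_frame):
    """Sum of pixel differences between two grayscale frames (2D lists)."""
    return sum(
        abs(curr - prev)
        for prev_row, curr_row in zip(prev_frame, curr_frame)
        for prev, curr in zip(prev_row, curr_row)
    )

# Invented mapping: more movement pushes toward higher notes in a
# pentatonic scale (MIDI note numbers), with jitter so the output
# never settles into a fixed, predictable pattern.
SCALE = (60, 62, 64, 67, 69)

def next_note(motion, sensitivity=1000):
    index = min(motion // sensitivity, len(SCALE) - 1)
    jitter = random.choice((-1, 0, 1))
    return SCALE[max(0, min(len(SCALE) - 1, index + jitter))]
```

Even this toy version shows the design choice Rokeby describes: the body's organic, unrepeatable movement is the input, and the slight randomness keeps the output from ever collapsing into a recognizable pattern.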

The second and arguably more provocative piece is the performance Ping Body (1996), by Australian performance artist Stelarc. The performance makes clever use of a system built by Stelarc that electrically actuates the muscles of the performer (Stelarc himself) based on impulses from a remote audience. In the Ping Body performances, the input is supplied not by a remote audience but by the flow of data itself: internet traffic. In her book Digital Art, Christiane Paul writes that "allowing the body to be controlled by the machine, Stelarc's work operates on the threshold between embodiment and disembodiment, a central aspect of discussions about the changes that digital technologies have brought about for our sense of self" (167).
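The input chain of Ping Body can be sketched in a few lines (again my own illustration: the hosts, the voltage range, and the simulated ping are assumptions, not Stelarc's published specification): round-trip times to remote hosts are rescaled into stimulation levels for the electrodes on the performer's body.

```python
import random

def ping_ms(host):
    """Stand-in for a real ICMP ping: a simulated round-trip time in ms."""
    return random.uniform(20.0, 2000.0)

def stimulation_volts(rtt_ms, max_volts=60.0, max_rtt_ms=2000.0):
    """Rescale a round-trip time into a clamped stimulation voltage."""
    return max_volts * min(rtt_ms, max_rtt_ms) / max_rtt_ms

# Each remote host's latency becomes an involuntary muscle impulse:
for host in ("example.org", "example.net", "example.com"):
    print(host, round(stimulation_volts(ping_ms(host)), 1), "volts")
```

The sketch makes the inversion visible: where VNS turns the body into data, here network data, with no body behind it at all, is turned back into bodily movement.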

I'm interested in the relationship between these two very different works. Rokeby's piece uses the body as an input device – an organic, impulsive, and completely unique physical presence whose movements are output as pure information and pattern, but in a way that defies recognizable pattern and assumes an organic appearance. Conversely, Stelarc's performance uses pure data as its input, acting upon and subverting the agency of a once unique physical body. In doing so, the data transforms Stelarc's actions into a programmed and therefore recordable and repeatable format.

As a class, I think it would be great to discuss Mark Hansen's proposal that framing new media in terms of cinema (Manovich) denies the polymorphous potential of digital data. He doesn't offer many examples of alternatives in this particular chapter, aside from noting that many digital art projects move toward the traditions of pre-cinematic devices in their reliance on physical participation and interaction. According to Hansen, "with the flexibility brought by digitization, there occurs a displacement of the framing function of medial interfaces back onto the body from which they themselves originally sprang".

I'm also interested in discussing Kate's theory of pattern/randomness vs. presence/absence, and how an emphasis on information technology privileges the pattern/randomness dialectic. This could be an interesting context in which to discuss and critique performance-related works like these two. "The pattern/randomness dialectic does not erase the material world; information in fact derives its efficacy from the material infrastructures it appears to obscure. This illusion of erasure should be the subject of inquiry, not a presupposition that inquiry takes for granted" (28).

Very Nervous System:
http://homepage.mac.com/davidrokeby/vns.html

Ping Body (the website is hideous, sorry)
http://www.stelarc.va.com.au/pingbody/index.html
http://www.stelarc.va.com.au/pingbody/layout.html

January 21, 2008

Items of Interest

Filed under: General,Tools — kknight08 @ 4:54 pm

I thought others in the class might be interested in these:

1. The Succession of Simulacra: The Legacy of Jean Baudrillard (1929-2007).  This graduate conference will be held at UCSB and features Douglas Kellner as the keynote speaker.  Submissions are due Monday, February 4th.  If anyone is interested in attending from UCLA, I can help arrange carpools.

2. Alan Liu’s Toy Chest.   Part of the UCSB English Department Knowledge Base, the toy chest lists several free or inexpensive tools for online creation.  The list contains everything from mapping, to text analysis, to machinima creation tools.

– Kim

January 20, 2008

Japanese Cellphone Novels

Filed under: General — jjpulizzi @ 7:02 pm

A friend recently sent me this link to an article in the New York Times about the increasing popularity of cellphone novels in Japan. According to the article, they're written mostly by young women in a text-messaging style, though publishers have recently begun printing them.

-James

January 17, 2008

Citations for “Hyper Deep”

Filed under: General — nkhayles @ 12:32 am

Works Cited

Angold, A., A. Erkanli, H. L. Egger, and E. J. Costello. "Stimulant Treatment for Children: A Community Perspective." Journal of the American Academy of Child and Adolescent Psychiatry 39 (2000): 975-984.
Bear, Mark F., Barry W. Connors, and Michael A. Paradiso. Neuroscience: Exploring the Brain. Hagerstown, MD: Lippincott Williams & Wilkins, 2006.
Bruer, John T. "Education and the Brain: A Bridge Too Far." Educational Researcher 26.8 (1997): 4-26.
———. "In Search of . . . Brain-Based Education." Phi Delta Kappan 80.9 (1999): 9-19.
Bukatman, Scott. Terminal Identity: The Virtual Subject in Postmodern Science Fiction. Durham: Duke University Press, 1993.
Diagnostic and Statistical Manual of Mental Disorders, DSM-IV-TR, Fourth Edition. New York: American Psychiatric Publishing, 2000.
———. DSM-III, Third Edition. New York: American Psychiatric Publishing, 1980.
Federation of American Scientists. "Summit on Educational Games: Harnessing the Power of Video Games for Learning." 2006. <http://www.fas.org/gamesummit/>.
Fisher, Scott. "Archives." <http://interactive.usc.edu/members/jhall/archives/backchannel>.
Gee, James Paul. What Video Games Have to Teach Us About Learning and Literacy. New York: Palgrave Macmillan, 2004.
Hall, Justin A., and Scott S. Fisher. "Experiments in Backchannel: Collaborative Presentations Using Social Software, Google Jockeys, and Immersive Environments." <http://nvac.pnl.gov/ivitcmd_chi06/papers/sub22.pdf>.
Hallahan, Dan P., and James M. Kauffman. Exceptional Learners: Introduction to Special Education. 10th ed. New York: Allyn & Bacon, 2005.
Johnson, Steven. Everything Bad Is Good for You: How Today's Popular Culture Is Actually Making Us Smarter. New York: Riverhead Hardcover, 2005.
Kaiser Family Foundation. Generation M: Media in the Lives of 8-18 Year-Olds. <http://www.kff.org/entmedia/7251.cfm>.
LeFever, Gretchen B., Andrea P. Arcona, and David O. Antonuccio. "ADHD Among American Schoolchildren: Evidence of Overdiagnosis and Overuse of Medication." The Scientific Review of Mental Health Practice 2.1 (Spring/Summer 2003). <http://www.srmhp.org/0201/adhd.html>.
Linet, Les. "The Search for Stimulation: Understanding Attention Deficit/Hyperactivity Disorder." Science Daily (March 31, 2006). <http://sciencedaily.healthology.com/add-adhd/article6.htm>.
Marshall, Eliot. "Epidemiology: Duke Study Faults Overuse of Stimulants for Children." Science 289.5480 (August 4, 2000): 721.
National Institute of Mental Health. "Attention Deficit Hyperactivity Disorder." <http://www.nimh.nih.gov/publicat/adhd.cfm>.
Rafalovich, Adam. "Exploring Clinical Uncertainty in the Diagnosis and Treatment of Attention Deficit Hyperactivity Disorder." Sociology of Health and Illness 27.3 (2005): 305-323.
Rubinstein, Joshua S., David E. Meyer, and Jeffrey E. Evans. "Executive Control of Cognitive Processes in Task Switching." Journal of Experimental Psychology: Human Perception and Performance 27.4 (August 2001): 763-797. <http://www.apa.org/journals/releases/xhp274763.pdf>.
Rudeda, M. R., M. K. Rothbart, L. Saccamanno, and Michael I. Posner. "Training, Maturation, and Genetic Influences on the Development of Executive Attention." Proceedings of the National Academy of Sciences 102 (2005): 14931-14936.
Ryan, Richard, C. Rigby, and Andrew Przybylski. "The Motivational Pull of Video Games: A Self-Determination Theory Approach." Motivation and Emotion 30.4 (December 2006): 344-360.
Shaffer, David W., Kurt R. Squire, Richard Halverson, and James P. Gee. "Video Games and the Future of Learning." <http://tabuladigita.com/files/Theory_2004_12_GamesFuturelrng.pdf>.
Sterling, Bruce. Distraction. New York: Gollancz, 2000.
Swanson, J. M., P. Flodman, J. Kenney, et al. "Dopamine Genes and ADHD." Neuroscience and Biobehavioral Reviews 24.1 (January 2000): 21-25. <http://www.ncbi.nlm.nih.gov/entrez/query.fcgi>.
Turkeltaub, Peter E., D. Lynn Flowers, Alysea Verbalis, Martha Miranda, Lynn Gareau, and Guinevere F. Eden. "The Neural Basis of Hyperlexic Reading: An fMRI Case Study." Neuron 41 (January 8, 2004): 11-25.
Turkeltaub, Peter E., Lynn Gareau, D. Lynn Flowers, Thomas A. Zeffiro, and Guinevere F. Eden. "Development of Neural Mechanisms for Reading." Nature Neuroscience 6 (July 2003): 767-773. <http://www.nature.com/neuro/journal/v6/n7/nn1065.html>.
Vitiello, B. "Stimulant Treatment for Children: A Community Perspective: Commentary." Journal of the American Academy of Child and Adolescent Psychiatry 39 (2000): 992-994.

My article on hyper and deep attention

Filed under: General — nkhayles @ 12:30 am

Apologies for this long post: since there have been a couple of posts about synaptogenesis, here is my article that appeared in Profession 2007. I will put the citations in a separate post. All comments are welcome! Kate Hayles

Hyper and Deep Attention:  The Generational Divide in Cognitive Modes

Networked and programmable media are part of a rapidly developing mediascape transforming how citizens of developed countries do business, conduct their social lives, communicate with each other, and, perhaps most significantly, how they think. This essay explores the hypothesis that we are in the midst of a generational shift in cognitive styles that poses significant challenges to education at all levels, including colleges and universities. The shift is more pronounced the younger the age group; already apparent in present-day college students, its full effects are likely to be realized only when youngsters who are now twelve years old reach our institutions of higher education. To prepare, we need to become aware of the shift, understand its causes, and think creatively and innovatively about new educational strategies appropriate to the coming changes.

            The shift in cognitive styles can be seen in the contrast between deep attention and hyper attention.  Deep attention, the cognitive style traditionally associated with the humanities, is characterized by concentrating on a single object for long periods (say, a novel by Dickens), ignoring outside stimuli while so engaged, preferring a single information stream, and having a high tolerance for long focus times.  Hyper attention, by contrast, is characterized by switching focus rapidly between different tasks, preferring multiple information streams, seeking a high level of stimulation, and having a low tolerance for boredom.  The contrast in the two cognitive modes may be captured in an image:  picture a college sophomore, deep in Pride and Prejudice with her legs draped over an easy chair, oblivious to her ten-year-old brother sitting in front of a console, jamming on a joystick while he plays Grand Theft Auto.  Each cognitive mode has advantages and limitations.  Deep attention is superb for solving complex problems represented in a single medium, but it comes at the price of environmental alertness and flexibility of response.  Hyper attention excels at negotiating rapidly changing environments in which multiple foci compete for attention; its disadvantage is impatience with focusing for long periods on a non-interactive object such as a Victorian novel or complicated math problem.

In an evolutionary context, hyper attention no doubt developed first; deep attention is a relative luxury requiring group cooperation to create a secure environment in which one does not have constantly to be alert to impending dangers.  Developed societies, of course, have long been able to create the kind of environments conducive to deep attention.  Educational institutions have specialized in them, combining such resources as quiet with assigned tasks that demand deep attention to complete successfully.  So standard has deep attention become in educational settings that it is the de facto norm, with hyper attention regarded as defective behavior that scarcely qualifies as a cognitive mode at all.  This situation would not necessarily be a problem, were it not for the possibility that a generational shift from deep to hyper attention is taking place.  In this case, serious incompatibilities arise between the expectations of educators, trained in deep attention and saturated with assumptions about its inherent superiority, and the preferred cognitive mode of young people who squirm in the procrustean beds outfitted for them by their elders. We would then expect a looming crisis that would necessitate a re-evaluation of the relative merits of hyper versus deep attention, serious reflection about how a constructive synthesis between deep and hyper attention might be achieved, and a thorough-going revision of educational methods.  But I am getting ahead of my story.  First let us look at the evidence that a generational shift from deep to hyper attention is in progress. 

 The Shift to Hyper Attention:  Generation M

Anecdotal evidence from educators with whom I have spoken at institutions across the country confirms that students are tending toward hyper attention.  During 2005-2006 I had the privilege of serving as a Phi Beta Kappa Visiting Scholar, making three-day visits during which I gave lectures, conferred with faculty, and talked with students.  I repeatedly heard comments from faculty to the effect that “I can’t get my students to read whole novels anymore, so I have taken to assigning short stories.”  When I queried students, there was a more or less even split between those who identified with deep attention and others who preferred hyper attention, but they unanimously agreed that their younger siblings were completely into hyper attention. 

Of course, one would not want to rely solely on such general impressions, so after my year was completed, I researched the topic. An obvious explanation for the shift, suggested among others by Steven Johnson, is the increasing role of media in the everyday environments of young people. The most authoritative study to date of the media habits of young people was commissioned by the Kaiser Family Foundation and reported in Generation M: Media in the Lives of 8-18 Year Olds. The survey focused on a statistically representative sampling of 2,032 young people, with 694 of those selected for more detailed study through seven-day media diaries they were asked to keep. The results indicate that the average time young people spend with media per day is a whopping 6.5 hours—every day of the week, including school days. Because some of this time is spent consuming more than one form of media at once, the average time with media in general (adding together the various media sources) rises to 8.5 hours. Of this, TV and DVD movies account for 3.51 hours; MP3s, music CDs, and radio 1.44 hours; interactive media such as web surfing 1.02 hours; and video games .49 hours, with reading bringing up the rear at a mere .43 hours. The activity those of us in literary studies may take as normative—reading print books—is the least-used media form among our young people in their leisure time.

The report also asked about the context in which young people did their homework. Thirty percent reported that they did homework while attending to other media such as IM, TV, and music "most of the time," and another thirty-one percent reported they did so "some of the time." Some or most of the time young people are doing the tasks assigned by educators, then, they are multitasking: alternating doing homework with listening to music (33%), using computers (33%), reading (28%), and watching TV (24%). "Alternating," I say, because psychological studies indicate that what we call multitasking is actually rapid alternation between different tasks (Rubinstein et al.). These studies also indicate that efficiency declines so significantly with multitasking that it is more time-efficient to do several tasks sequentially than to attempt to do them simultaneously. One is tempted to conclude that the strong preference young people show for multitasking must have another explanation than the presumptive one that it saves time; one possibility is a preference for high levels of stimulation.

Seeking stimulation is also associated with attention deficit disorder (ADD) and attention deficit hyperactivity disorder (ADHD). Many people do not realize that Ritalin, the drug frequently prescribed for children with ADD and ADHD, is actually a cortical stimulant; when tranquilizers were prescribed in the early days of testing ADD and ADHD children, their symptoms became worse. This seemingly counter-intuitive result is explained by Dr. Les Linet, a child psychiatrist at Beth Israel Medical Center in New York City specializing in ADD and ADHD, who suggests that a young person with AD/HD acts as if his nervous system has somehow acquired a "shield," so that normal stimulation is felt as boredom and relatively high levels of stimulation are necessary for the child to feel engaged and interested. AD/HD, Dr. Linet writes, might more appropriately be named the "search for stimulation" disorder (2). The behaviors listed in the DSM-IV as symptoms of AD/HD, such as failure to pay close attention to details, trouble keeping attention focused during play or tasks, and avoiding tasks that require a high amount of mental effort and organization such as school projects, should be understood, Linet argues, not as misbehavior but as the search for more stimulation than the assigned task yields, for example by looking out the window, fidgeting, or breaking the rules by talking with other children.

AD/HD first appeared in the third edition of the DSM (DSM-III) in 1980. It is important to understand that while a percentage range is typically assigned to the number of young people with AD/HD—usually given as 3%-5% (National Institute of Mental Health)—these numbers are based on the statistical determination that at least six of the fourteen behaviors listed for "inattentive" AD/HD and/or six of the eleven behaviors listed for "hyperactive-impulsive" AD/HD in the DSM cause significant impairment. Inevitably these judgments contain subjective elements. A child might have four or five of the behaviors and not be classified as AD/HD, although clearly he has tendencies in that direction. AD/HD should be understood, then, as a category occurring at the end of a spectrum that stretches from "normal." Moreover, studies indicate that some children diagnosed as having AD/HD have been misdiagnosed and should not be included in this category (LeFever et al.; Angold et al.; Marshall). Add to this controversies over whether AD/HD should be considered a mental disorder at all, and the picture of a definitive category with clear-cut boundaries grows fuzzy indeed (Rafalovich; Hallahan).

My hypothesis can now be stated in terms that link it with AD/HD. The generational shift toward hyper attention can be understood as a shift in the mean toward the AD/HD end of the spectrum. It is often claimed that the percentage of the population with "official" AD/HD is constant over time; depending on the shape of the curve, that claim is not necessarily incompatible with a shift in the mean. We know, however, that the number of people diagnosed with AD/HD is rising in most industrialized countries. While this may be a function of increasing awareness, there is enough controversy over the accuracy of prevailing statistics to make the claim for a constant percentage debatable, to say the least. There is evidence that AD/HD has genetic causes related to dopamine transporters and perhaps to the brain's inability to produce dopamine (Swanson et al.). Nevertheless, genetic predispositions often express themselves with varying degrees of intensity depending on their interaction with environmental factors, so the role played by increased environmental stimulation remains unclear. Whatever the case with AD/HD, there is little doubt that hyper attention is on the rise and that it correlates with an increasing exposure to, and desire for, stimulation in general and stimulation by media in particular.

As the Generation M report observes, rising media consumption should be understood not so much as an absolute increase in the time spent with a given medium—youngsters were spending about as much time with media five years earlier, in 1999—as an increase in the variety and kinds of media, as well as in the movement of media into kids' bedrooms, where they consume it largely without parental participation or supervision. As Steven Johnson has convincingly argued, media content has also changed, manifesting an increased tempo of visual stimuli and an increased complexity of interwoven plots (61-106). A related point (which Johnson does not mention) is the decrease in the time required for an audience to respond to an image. In the 1960s it was common wisdom in the movie industry that an audience needed something like 20 seconds to recognize an image; today that figure is more like 2 or 3 seconds. Films such as Memento, Mulholland Drive, and Time Code suggest that it is not only young people who have an increased appetite for high levels of visual stimulation. Although the tendency has been most thoroughly documented with the "Generation M" age group, the adult population is also affected, if to a lesser degree. Moreover, children younger than eight years old, the cut-off for the Generation M study, are no doubt influenced even more deeply than their older compatriots.

Not without reason, then, have we been called the AD/HD generation. Rumors abound that college and high school students take Ritalin, Dexedrine, and equivalent drugs to prepare for important examinations such as the SAT and GRE, finding that cortical stimulants help them concentrate. Surveys of medications taken in North Carolina and Virginia public schools by two different research groups find that Ritalin is being prescribed for children who do not fit the criteria for AD/HD, with 5-7% misdiagnosed (Angold et al.; LeFever et al.); B. Vitiello speculates that the overuse of Ritalin may occur because parents press for it, finding that it helps their children do better in school. These results suggest that as the mean moves toward hyper attention rather than deep attention, compensatory tactics are employed to retain the benefits of deep attention through the artificial means of chemical intervention in cortical functioning.

How does media stimulation affect brain functioning?  It is well known that the brain’s plasticity is an inherent biological trait; humans are born with their nervous systems ready to be reconfigured in response to their environments.  While the number of neurons in the brain remains more or less constant throughout a lifetime, the number of synapses—the connections that neurons form to communicate with other neurons—is greatest at birth.  Through a process known as synaptogenesis, a new-born infant undergoes a pruning process whereby the neural connections in the brain that are used strengthen and grow, while those that are not decay and disappear (Bear et al.). The evolutionary advantage of this pruning process is clear, for it bestows remarkable flexibility, giving humans the power to adapt to widely differing environments.  Although synaptogenesis is greatest in infancy, plasticity continues throughout childhood and adolescence, with some degree continuing even into adulthood.  In contemporary developed societies, this plasticity implies that the brain’s synaptic connections are co-evolving with environments in which media consumption is a dominant factor.   Children growing up in media-rich environments literally have brains wired differently than humans who did not come to maturity in such conditions. 

Evaluating precisely how these changes should affect pedagogy requires careful analysis and attention to the ways in which different disciplines carry out their research. John Bruer, president of the James S. McDonnell Foundation, which funds cognitive neuroscientific research, has cautioned educators to distinguish between behavioral and cognitive research by psychologists on the one hand, and brain research in neuroscience on the other. While behavioral studies focus on observable actions, neuroscience is concerned with neural structures and processes in the brain. Bruer argued that while it is possible to bridge the gap between neuroscience and cognitive science, and also between cognitive science and education, trying to infer educational strategies from basic brain research is "a bridge too far," for it would require establishing correlations between, say, microscopic neural patterns and macroscopic behavior such as a student fidgeting in his seat (1997). As Bruer admits in his later writings, however, brain imaging studies are changing that situation because they allow correlations between observable actions—what the subjects are doing at the time the image is taken—and metabolic processes in the brain (1999).

To my knowledge, there have been few imaging studies of the brain processes involved in video games and other interactive pursuits. Among these are studies by Michael Posner and colleagues at Cornell University's Weill Medical College. The researchers measured the effect of video games on what psychologists call "executive attention," the ability to tune out distractions and pay attention only to relevant information, or in the terms used here, the ability to develop deep attention. The researchers adapted computer exercises used to train monkeys for space travel, modifying them into games for four- and six-year-olds (Rudeda et al.). For five days, the children progressed from a game involving moving a cat in and out of grass to more complicated tasks, including one that asked them to select the largest number while they were simultaneously given distracting and extraneous information. The children's brain activity was measured using electroencephalographs, as well as tests for attention and intelligence; some children underwent genetic testing as well. The researchers discovered that the brains of the six-year-olds showed significant changes after the children played the computer games, compared with a control group that simply watched videos. (The four-year-olds, by contrast, showed little change, perhaps because the ability to handle multiple information streams typically develops between four and six years of age.) The results suggest that brain structure does change as a result of playing computer games at appropriate ages, and also that media stimulation, if structured appropriately, may actually contribute to a synergistic combination of hyper and deep attention, a finding with suggestive implications for pedagogy.

In addition, there is an extensive body of research that throws indirect light on the subject.  By far the most research on media consumption and brain imaging patterns has been done on reading.  The research unequivocally shows distinctively different patterns in beginning, intermediate, and adult readers.  In an fMRI study at the Georgetown University Medical Center (Turkeltaub et al., 2003, 2004) designed to understand better the disorder called hyperlexia (in which someone focuses obsessively on letter forms while not necessarily comprehending content), it was found that in beginning readers, the most activity occurs in the superior temporal cortex, the area of the brain associated with connecting sounds to letters.  In experienced readers, by contrast, the most active area was the frontal left brain, associated with the accumulated knowledge of spelling.  For our purposes, the details of these patterns are less important than their overall import:  reading is a powerful technology for reconfiguring activity patterns in the brain.  When reading is introduced at an early age, as it customarily is in developed societies, it is likely that the process of learning to read—progressing from a beginning to an experienced reader—contributes significantly to the ways in which synaptogenesis proceeds.  In media-rich environments in which reading is a minor activity compared to other forms of media consumption, one would expect that the processes of synaptogenesis would differ significantly from media-constrained environments in which reading is the primary activity. 

Whether the synaptic reconfigurations associated with hyper attention are better or worse than those associated with deep attention cannot be answered in the abstract. The riposte is obvious: better for what? A case can be made that hyper attention is more adaptive than deep attention for many situations in contemporary developed societies. Think, for example, of the skills required of an air traffic controller who is watching many screens at once and must be able to change tasks flexibly and very quickly without losing track of any of them. Surely in this situation hyper attention would be an asset. One can argue that these kinds of situations are increasing more rapidly than those that call for deep attention, from the harassed cashier at McDonald's to currency traders in the elite world of international finance. The speculation that hyper attention is increasingly adaptive in contemporary society is highlighted in Bruce Sterling's novel Distraction, in which the problematic next step in human evolution is envisioned as a chemically-induced transformation of the brain that allows the two hemispheres to operate independently of one another, turning the brain into a massively parallel organ capable of true multi-tasking. While such ideas remain in the realm of science fiction, it is not far-fetched to imagine that the trend toward hyper attention represents the brain's cultural co-evolution in coordination with high-speed, information-intensive, and rapidly changing environments that make flexible alternation of tasks, quick processing of multiple information streams, and low thresholds for boredom more adaptive than a preference for concentrating on a single object to the exclusion of external stimuli.

What about the apparently paradoxical situation of the young person totally into hyper attention who nevertheless spends long periods playing a video game, intent on mastering all of its complexities until he reaches the highest level of proficiency? The key to the apparent paradox lies in the game's interactivity, specifically its ability to offer rewards while maintaining high levels of stimulation. As Steven Johnson convincingly argues, video games are structured to engage the player in competing for an escalating series of rewards (176-178), thus activating the same dopamine (pleasure-giving) cycle in the brain responsible for other addictive pursuits such as gambling. But the dopamine cycle is not the whole story. A study conducted by Richard Ryan and colleagues at the University of Rochester, in collaboration with Immersyve, Inc., asked 1,000 gamers what motivates them to continue playing (Ryan et al.). The results indicate that they found the opportunities the games offered for achievement, freedom, and, in some instances, connection to other players even more satisfying than the fun of playing. Stimulation works best, in other words, when it is associated with feelings of autonomy, competence, and relatedness, a conclusion with significant implications for pedagogy. Moreover, James Paul Gee convincingly argues that video games encourage active critical learning and indeed are structured so that the player is required to learn in order to progress to the next level. The lesson has not been lost on the Federation of American Scientists, which commissioned a task force on educational games that concluded video games teach skills critical to productive employment in an information-rich society. In a similar vein, there is growing interest in "serious games" (Shaffer et al.), in which the reward structure can be harnessed for the study of the sciences, social sciences, and, as the next section argues, the humanities as well.

            The trend toward hyper attention will almost certainly accelerate as the years pass and the age demographic begins to encompass more “Generation M” young people.  As students move deeper into the mode of hyper attention, educators face a choice: change the students to fit the educational environment, or change the environment to fit the students.   At the extreme end of the spectrum represented by AD/HD, it may be appropriate to change the young people, but surely the environment needs to change as well.   What strategies might be useful in meeting this challenge?  How can the considerable benefits of deep attention be cultivated in a generation of students who prefer high levels of stimulation and have low thresholds for boredom?  How should the physical layout of educational environments be re-thought?  With the trend toward hyper attention already evident in colleges and universities, these issues are becoming urgent concerns.  Digital media offer important resources in facing these challenges, both in the ways they allow classroom space to be reconfigured and the opportunities they offer for building bridges between deep and hyper attention.  Let us turn now to consider the possibilities. 

 Hyper Attention and the Challenge to Higher Education

            An interactive classroom at the University of Southern California, under the direction of Scott Fisher, functions as a laboratory to explore new pedagogical models that provide greater stimulation than the typical classroom, including more possibilities for interactions among participants.  Fourteen large screens span the walls, providing display space for input controlled by wireless laptop computers scattered around a large conference table.  One mode of interaction is “Google jockeying”; while a speaker is making a presentation, participants search the Web for appropriate content to display on the screens, for example web sites with examples, definitions, images, or opposing views.  Another mode of interaction is “backchanneling,” in which participants type in comments as the speaker talks, providing running commentary on the material being presented (Hall and Fisher).

The laboratory’s archives, chronicled at Fisher’s website, provide a record of the various experiments (Fisher); they show the participants struggling to find appropriate configurations that will enhance rather than undermine the educational mission. One participant comments that in backchanneling, “The speaker function becomes more about seeding ideas and opening up discussion,” indicating that in such an environment, lecturing is less about a one-way transmission of information and more about providing a framework to which everyone contributes.  Other comments suggest that the participants share responsibility for the insightfulness of the comments they post.  As one participant comments, the interactive environment “challenges the audience to pay attention; it challenges the speaker to hold attention; perhaps it pushes everyone to . . . interact towards a shared goal.”  While the archives give the sense that the perfect configuration has yet to emerge, they convey a lively sense of experimentation and a willingness to re-conceive the educational mission so that everyone, teachers and students, bears equal responsibility for its success. 

            Other experiments might try enhancing the capacity for deep attention by starting with hyper attention and moving toward more traditional objects of study.  One of the difficult and complex texts I like to teach, for example, is The Education of Henry Adams.  Suffused with dry wit and stuffed with historical details, this text is an object, if ever there was one, that demands deep attention.   Imagine a course that begins by studying strategies of self-presentation at the wildly popular Facebook.com, including naiveté, deception, ironic juxtaposition, competition, cooperation, betrayal, and compelling narrative.   This provides a rich context in which the sly and subversive self-presentations in The Education of Henry Adams can be analyzed, including an assignment that asks students to compose Facebook entries for the book’s ironic persona. 

A similar experiment might be tried with the popular computer game Riven and William Faulkner's formidably complex novel Absalom, Absalom! Like the novel, Riven unfolds through geographically marked territory, the five islands on which brothers compete for dominance. Whereas in Riven access to the narrative can only be gained by solving the game's myriad puzzles, in Absalom, Absalom! the narrative is accessible through the trivial device of turning pages. Nevertheless, understanding Faulkner's narrative requires solving multiple puzzles of identity, motivation, and desire. The juxtaposition invites comparison with the hyper attentive mode of interactive game play, where the emphasis falls on exploring and remembering crucial clues embedded in a reward structure keyed to gaining access to the next level of play. With Faulkner's novel, the deep attentive mode of rhetorical complexity, temporal discontinuities, and diverse focalizations is coupled with the subtle cognitive reward of constructing large-scale patterns into which these can fit.

A somewhat different configuration emerges from juxtaposing Emily Short's interactive fiction Galatea with Richard Powers's novel Galatea 2.2. Both works feature a gendered artificial intelligence with which the player's character (in the interactive fiction) and the protagonist (in the print novel), respectively, interact. Whereas the challenge in Short's Galatea is to engage the artificial intelligence in realistic conversation in order to understand her backstory, motivations, and psychology, the challenge in Powers's fiction is to use the interactions of the protagonist, named Rick Powers, with the artificial intelligence Helen to understand his backstory, motivations, and psychology.

In Short’s interactive fiction, Galatea is visualized as an animated statue with whom the player’s character can interact by conversing with her.  If transitions in the conversation are too abrupt or unrelated to previous comments, the statue turns her back to the player’s character and refuses to engage in further intercourse.  Access to Galatea depends, then, on creating realistic ways to advance the conversation without alienating her.  In Powers’s novel, the climax turns on the protagonist giving Helen information that alienates her from the world into which she, as an entity with a profoundly different embodiment than humans, has been dropped halfway.  Whereas the interest in Galatea lies in discovering the complexity of Galatea’s responses, which typically vary with each game play and spring from the sophisticated coding of the game engine algorithm, in Galatea 2.2 the words remain the same but their meaning varies depending on the ways in which the characters’ actions are interpreted.  These differences notwithstanding, the challenge implicit in both works is for the reader/player to understand the personae through narration, a perspective that brings into view common ground between hyper and deep attention. 

            As these examples show, critical interpretation is not above or outside the generational shift of cognitive modes but necessarily located within it, increasingly drawn into the matrix by engaging with works that instantiate the cognitive shift within their aesthetic strategies.  Whether inclined toward deep or hyper attention, one side or another of the generational divide separating print from digital culture, we cannot afford to ignore the frustrating, zesty, and intriguing ways in which the two cognitive modes interact with one another.  Our responsibilities as educators, not to mention our position as practitioners of the literary a

Further citations from Jim Andrews

Filed under: General — nkhayles @ 12:24 am
The first publication I can remember–certainly the first one I am aware of–about code poetry was an issue of Galaxia, a publication from Brazil, in 2001. The section on code poetry consisted of posts from the Webartery email list from 2000, a list I started in 1999. It has gone by the wayside, but there was much of interest at Webartery back then. The Galaxia article ended with a relatively long essay by me, posted to Webartery as my take on 'code poetry'. The Galaxia selection was put together by Jorge Luis Antonio. There's a PDF of it at http://revcom2.portcom.intercom.org.br/index.php/galaxia/article/viewFile/1252/1023, I see, having just googled 'galaxia webartery'.
But somehow my work fell out of the ‘code poetry’ discussion. Not sure what happened there. In any case, am still pursuing my folly.
