What started as a short response to Jacob’s short series of posts on Wikipedia has somehow spun off in its own unique direction, so please bear with me.
Larry Sanger, mentioned in Jacob’s “Wiki part 1” post, has successfully launched his own post-Wikipedia project, Citizendium. Ever heard of it? It distinguishes itself as an authoritative resource accepting and vetting articles submitted by users with “real names” and areas of expertise. It boasts over 5,500 articles, but my guess is that it doesn’t have nearly the distribution or recognition that Wikipedia (warts and all) has.
Which leads to a more basic question: in a distributed informational system such as the Internet and its offspring, what qualifies as expertise? Consider the pre-Internet model, the codex library. If we define expertise as deep familiarity and extended engagement with information in books that must exist in physical locations, I can’t help but consider how resource distribution plays a factor. People with sufficient time and money for the storage and classification of book media – who can procure resources consistently, who can travel to libraries with collections of rare books, who have the resources to write about, publish, and present their findings – offer a model of expertise that at once promises to preserve important technical and cultural data, but may also play into the hands of those who control the resources, and their biases as to what constitutes “important technical and cultural data.” This can easily produce a circular, inbred homoculture that stands little chance against a compelling external threat. (Insert favorite apocalyptic fantasy here.)
The apparent counter-current to this practice has been endowed with a multitude of pejoratives: anti-elitism. Anti-intellectualism. Old wives’ tales. Superstition. Philistinism. Dan Brown.
It calls to mind similar debates in the field of science studies over what separates “science” from “non-science,” with fresh debate arising from an emergent discipline (for lack of a better term) known as “indigenous knowledges,” which includes fields such as archaeoastronomy and ethnoastronomy. Researchers here have discovered that surprisingly sophisticated cultural and technical information can be stored and transmitted in localized groups through oral transmission, apprenticeship, folk objects, rituals, and other traditions – data that until recently was neglected, perhaps because of an intellectualist bias against media that do not privilege alphabetic literacy.
Think through this a bit, and it doesn’t take long to recognize the enormous gender, race, ethnic and economic implications drawn into the boundary – one element of which I anticipate Zach will address in his presentation on Queer Technologies today.
Whether we approach it as potential end-users, programmers, analysts, editors, or other content creators, when we in New Media explore what constitutes acceptable “knowledge” in our field, just as in the sciences, we must take care not to let the medium get in the way of the message, however sloppily rendered it may seem. Just as resistors are essential to maintaining proper levels of current in a circuit, resistance in the form of indigenous / subaltern / outlier / queer knowledges may be just what keeps our informational schema from frying out. Here, “function” is in the eye (biological or otherwise) of the beholder.