Archive for the ‘Anke Finger’ Category

As mentioned in an earlier blog, I attended my first THATcamp at Brown University last weekend. It was productive, relaxed, entirely ego- and hierarchy-free (as far as I could tell), and a welcome new experience in how to exchange information. I also second Roger’s call for open access, as that was among the discussions pursued with a number of colleagues. Having served as the co-editor of an open access journal (Flusser Studies) since 2005, I find it refreshing to see how much support open access publishing receives now. The session on open access journals, in particular, pointed to a number of models, mostly produced on WordPress these days (including DH Quarterly; JHNA; or the entire directory), and The Public Knowledge Project. So it is time to upgrade, update, integrate, and think about how interactive an online journal can and should be. After all, social media may facilitate scholarly dialogue right on the site. If you need collaborators, look to DH Commons. If you seek a multimedia platform, check out SCALAR (for those in media and interarts studies, this one is fascinating). And check out this neat project on visualizing jazz networks. For more information, go to the THATcamp summary. For next year’s THATcamp New England, join us on the UConn campus.
Everyone knows these days it’s tough to keep up with all the technical innovation – there’s always a new code, a new program, a new cloud. As someone attending her first THATcamp (yup, newbie), I am humbled and fascinated by all these possibilities (the kid-in-a-candy-store simile comes to mind). I do try to keep up with digital arts, though (media aesthetics again, Roger), and the Playground Digital Arts festival always presents some intriguing work. Do check out older works, too, such as probe, a piece that challenges your notion of cinematic experiences, as Boris Debackere explains:
“probe is a metaphor of cinema: cinema as a space shuttle, or a probe you enter and you are completely separated from the external world. Suddenly, the huge screen you have in front of you disappears and becomes a sort of window to travel in time and space. And you sit in this vehicle, but the whole trip takes place in your brain: you concentrate with your mind with the screen that becomes invisible and everything that is projected on the screen puts your mind and imagination into action. The setting of the film is in your brain, not on the screen: it is part of the same dynamics of cinema, which stimulate your brain, the mechanism of your perceptive abilities.”
Are we soon going to perform as our own projection screens? Or have we been there all along?
After several conversations this week on the human-computer interface that is gaining greater currency as Digital (fill-in-field-as-necessary), Wired Mag’s musings feel… slightly 90s. The end of theory. Really? It makes me recall that cartoon we so revered as grad students, with the mice and the cereal box, when postmodernism was all the rage.
Breakfast as text! Foucault flakes. But I am digressing. And maybe 2012 isn’t 2008, as declaring the end of theory, accompanied by the end of methodology, whether scientific or otherwise, represents rather circular reasoning. For now, I, at least, am still grappling with context and variation, not models or theory or methods. Big data to detect correlations and define patterns, perhaps. But what kind of data? Let’s take this excerpt from the article:
“Google’s founding philosophy is that we don’t know why this page is better than that one: If the statistics of incoming links say it is, that’s good enough. No semantic or causal analysis is required. That’s why Google can translate languages without actually “knowing” them (given equal corpus data, Google can translate Klingon into Farsi as easily as it can translate French into German). And why it can match ads to content without any knowledge or assumptions about the ads or the content.”
One may feel flummoxed by the announcement that “no semantic or causal analysis is required.” I pursued the fun factor, however, and stuck some Nietzsche into Google Translate. For some semantic and syntactic clarity (yes, it does exist) I chose his “Why I am so clever” from Ecce Homo (just a bit of postmodern irony), translated the German into English and then copy-and-pasted an English translation to be re-translated into German. The results were hilarious – truly, even Nietzsche would have been amused. Especially when “pangs of conscience” were re-translated as certain anatomical elements that point to the posterior.
Granted, this is a cheap shot – and it may not have much to do with the science mentioned in the article. Patterns and correlations do not fill in for context and variation, though. Dare I mention the “human element”? If “science can advance even without coherent models, unified theories, or really any mechanistic explanation at all,” as claimed in the article, why do it? Or am I over-analyzing here…
The suggested correspondence between magicians and ideology is intriguing – and makes me cringe. Who doesn’t remember those moments between blissful childhood and cynical adolescence when all of a sudden you realized that the magical performance in front of you, complete with choices that come with the magician’s attention, had turned into a farce and that, all along, you had been led by the nose? And when the broad smiles (feeling special, good choices!) changed to a sneer? Maybe I’m all alone with this; but how can one be oblivious to the manipulations of the game, especially when the initial fun-factor has lost its charm? Channeling Borges here, somewhat…
Maybe I just read Thomas Mann’s Mario and the Magician (1929) one time too many. Maybe I read the Hunger Games one time too many. Maybe I equate games with just too many rules one has to follow and thus wants to break. What would make games really magical is if they could undermine their own construction and gameness, as in, games get to be changed at the core. Haymitch Abernathy’s discovering the force field and using it as a bouncing device comes to mind. What makes a gamer not feel like a trapped mouse, the eager subject of the game makers’ disciplining layers of tests and competitions? When s/he can outwit the game maker – or is that just part of everyone’s ludic impulse?
Play is fun, games are fun, magic is fun. Especially if the game itself is part of the ludic enterprise – that’s magic. For those interested in ideological game making, how about Europa Universalis III…
One of the fun things about writing a blog is that you get to look at Roger’s (and many others’) blog entries and just follow your proverbial nose. You click on one link, then on another, and then go back or keep going and just lose yourself in the maze or network or meanderings of the blogosphere. Happily, searchingly, intrigued and on occasion enchanted or provoked. And you realize how much stuff there is that makes your brain stay awake way beyond its bed time. And you keep going…
“the networks [réseaux] are many and interact, without any one of them being able to surpass the rest; this text is a galaxy of signifiers, not a structure of signifieds; it has no beginning; it is reversible; we gain access to it by several entrances, none of which can be authoritatively declared to be the main one; the codes it mobilizes extend as far as the eye can reach, they are indeterminable . . . ; the systems of meaning can take over this absolutely plural text, but their number is never closed, based as it is on the infinity of language” (emphasis in original; 5-6 [English translation]; 11-12 [French]).
But then, how far back do we want to take that connection between text and network? Which text ISN’T a network? Florian Cramer, for instance, begins his blog entry with Borges and ends with creativity. Then again, Cramer’s work becomes much more interesting if we rephrase the question: how deep do we want that connection to go? You may find one answer in his intriguing Words Made Flesh – not a blog entry. And he has yet entirely different ideas about Wikipedia.
The discussion on the “new aesthetic”, as it is referred to in the blogs cited by Roger below, is biting off a tad much: ontology, remediation, aesthetics (as in Kant on beauty), post-humanism, phenomenology, and the Dasein of pixels – to list a few of the terms thrown about. Why not add some Husserlian Lebenswelt and a Nietzschean “transvaluation of all values” to address the reversal of perspectives from human to post-human, between things and things? If so, I want to just close up my airbook and go to sleep. What’s the point?
“Bogost also claims that the New Aesthetic is about the ‘relationship between humans and computers’ and he argues that instead it should be concerned with ontology, in this case the object-oriented relationships between lots of different kinds of objects. For now we will put aside the slippage between ‘computers’ and what are clearly representations for, or of, the ‘digital’ (see Berry 2012a, 2012b) and the fact that many of these New Aesthetic objects may have been created as artworks without the mediation of digital technology at all.”
Thank you. Let’s not put it aside. If mimesis is representation (and not limited to imitation), and what is represented is the perceived world, then we should be talking about aisthesis within the context of media aesthetics – a well-established field (Zettl 2013, Munster 2011, a series at Continuum) on creation and perception in the (digital) arts. Not sure how all of the above fits into that – and whether post-human and material studies (lots of things…) necessarily apply. Do judge for yourselves: check out the upcoming ISEA conference on MACHINE WILDERNESS or, if Albuquerque is not exactly around the corner, the current exhibit at the New Britain Museum of American Art, Pixelated: The Art of Digital Illustration. And if you are yearning for some post-human engagement, Vilém Flusser’s Vampyroteuthis Infernalis just appeared in English translation – pixelate that.
It’s been a while, I admit it. It’s been a fun summer, a very sunny summer, too sunny for some. We took a break, sorted some things out, including how to move on with the blog. And here we are again. Away from the sun, at the computer, into our projects. Welcome to a new semester! And welcome back readers.
I’d have a lot to say about Roger’s last blog, on the new aesthetic, but given that this is the digital era his last comment lies a steep number of light years away. So I’ll let it go. We can revisit this again later, Roger… And throw myself head first into the many buzzes on campus and the various worlds in which I participate, including the ongoing hoopla concerning all things digital humanities. Over the summer Steven Park’s web site has morphed into the go-to place for dighum info, Tim Hunter’s Digital Media Center is now linked to the new Dept. of Digital Media & Design, and over my son’s potluck picnic in Middle School I hear about THE cloud to be in at UConn – the virtual PC. Free software – who can resist? Leave it to Cathy Davidson to start your semester off with some serious thoughts on how to rally the digital media masses to finalize your syllabus. And if you’ve done all that and your classes are off to an excellent start, digital or analog, check out your chances to become editor-at-large at Digital Humanities Now so that you really do stay informed! What’s not to love about staying out of the sun?
More on the art of distraction and attention spans from Hanif Kureishi in a recent NYTimes opinion piece:
“It is said that distractions are too easy to come by now that most writers use computers, though it’s just as convenient to flee through the mind’s window into fantasy. In the end, a person requires a method. He must be able to distinguish between creative and destructive distractions by the sort of taste they leave, whether they feel depleting or fulfilling. And this can work only if he is, as much as possible, in good communication with himself — if he is, as it were, on his own side, caring for himself imaginatively, an artist of his own life.”
Read the whole thing, plus comments. How much are we stuck in constantly juggling “biological determinism” with the “poetic human”? How does one determine “good communication” with oneself when many are faced with a loose mosaic of their various mediated selves?
Kureishi leans towards disobedience:
“We might need to be irresponsible. But to follow a distraction requires independence and disobedience; there will be anxiety in not completing something, in looking away, or in not looking where others prefer you to.”
Comments one reader:
“These flights do … lead to nothing. It is difficult to reach even the smallest achievement when your thoughts are an ever-changing channel and you have no control of the remote.”
Back to technogenesis (and never mind the metaphors): how many pilots does it take…
So if I want to get into games – let’s just assume that the idea of the stuffy academic whose work is wedded to an analog universe is over – what games would I get into? And I don’t mean Tetris or Pac-man… Where to start? How to get fired up? How to enter that other universe your students know inside and out? And – preferably – how to avoid buying a whole lot of equipment your students have accumulated over years (thanks to generous parents and all)? Perhaps it’s not an issue of computer illiteracy that many academics (in the humanities?) are not the ludic bunch. Perhaps it’s generational (some of those I ask DO play Tetris), perhaps it’s lack of time (what? MORE time on the computer???), perhaps it’s sheer ignorance of the cornucopia of games out there that are actually FUN! And relaxing! And intellectually challenging! And FUN! So, suggestions anyone? What do you play when you KNOW you have more IMPORTANT things to do?
Tom Wolfe, “What if he’s right?”, 1965:
“McLuhan has developed a theory that goes like this: The new technologies of the electronic age, notably television, radio, the telephone, and computers, make up a new environment. A new environment; they are not merely added to some basic human environment. The idea that these things, TV and the rest, are just tools that men can use for better or worse depending on their talents and moral strength – that idea is idiotic to McLuhan. The new technologies, such as television, have become a new environment. They radically alter the entire way people use their five senses, the way they react to things, and therefore, their entire lives and the entire society. It doesn’t matter what the content of a medium like TV is. It doesn’t matter if the networks show twenty hours a day of sadistic cowboys caving in people’s teeth or twenty hours of Pablo Casals droning away on his cello in a pure-culture white Spanish drawing room. It doesn’t matter about the content. The most profound effect of television – its real ‘message,’ in McLuhan’s terms – is the way it alters men’s sensory patterns. The medium is the message – that is the best-known McLuhanism. Television steps up the auditory sense and the sense of touch and depresses the visual sense. That seems like a paradox, but McLuhan is full of paradoxes. A whole generation in America has grown up in the TV environment, and already these millions of people, twenty-five and under, have the same kind of sensory reactions as African tribesmen. The same thing is happening all over the world. The world is growing into a huge tribe, a . . . global village, in a seamless web of electronics.”
Check it out: Brain Pickings