I am still behind on blogging, but here are some nuggets from Technosociety, a fresh blog about the relationship between technology and society by Zeynep Tufekci. Lots of deep thinking here. Here’s her take on the Nobel Peace Prize announcement and Twitter:
The Internet is not a game-changer in the sense of a cat-and-mouse game because, yes, it empowers both the cat and the mouse. It is a game-changer because we are not cats or mice but people, and people care deeply about what other people think of them and how they conduct their lives. In the 21st century, it is not sustainable for a governing elite to be both repressive at home and welcome in the world — and most elites deeply desire the latter as much as they may still cling to the former. One can point to apparent exceptions like the Burmese Junta but I think the formulation holds for most.
People almost never win over a repressive regime because they are better organized or better equipped or better able to get things done. All the examples one can point to are going to be rare remaining weak states or regimes from the past: the modern repressive state is just too powerful. People win over repressive regimes because the elites in those regimes are also people, and as people, they also crave the sense of belonging and legitimacy that people everywhere, even the most powerful, crave. People can win not because they can beat back force with force (they rarely can), but because they can withdraw their consent and undermine the standing the rulers have to repress, as the repressive apparatus itself consists of people. I think the collapse of the Soviet blocs is an amazing example of how the seemingly strongest state can wither away after being hollowed out through lack of legitimacy. (Pick your example: Apartheid South Africa, East Timor, etc.)
My unease with the portrayal of this story isn’t just about the fact that this mine, like many others around the world, had an appalling safety record and the miners would have been out by themselves in 48 hours had the mine owners installed the safety ladders in the ventilation shafts as they were, by law, required to and warned about. And I do realize, as others have noted, that we copiously consume many mined metals and minerals as if they do not come at a huge human and environmental cost – costs which have been increasing as the easy pickings have long been picked over and we are now drilling deeper and deeper to extort the earth to give up these precious elements. Miners around the world are treated as discarded lives (thousands died in China last year alone) and suffer from a wide range of debilitating diseases.
The problem is that this rescue was a spectacle of technological confidence. The message was that with enough money, determination, technological savvy, gadgets, NASA, experts, smarts, we can solve these problems which are of our making. Yes, we can, for small problems (and we should) like one collapsed mine. However, our bigger problems, climate change, resource depletion can’t be solved by just-the-right-amount-of-tech-wizardry. There are hard choices and inevitable compromises ahead and we should get ready for a (global) discussion on how to finally start ameliorating the unavoidable upheaval that is headed our way.
There is one part of the message, however, I believe is crucial. The spirit of solidarity and camaraderie that the miners held onto under very difficult circumstances will be key. Paradoxically, such a spirit is often easier to hold on to under grave conditions that threaten survival and require sacrifice from all. Metaphorically speaking, we as humanity are already trapped in a mine, a big blue and pretty one but still one that confines us and is under threat, and our technology is not going to provide us with a magical phoenix capsule that will solve everything without much sacrifice or pain and there will be no extraterrestrials who send us glucose drinks and video-feeds through ingenious tubes.
So the multiple pronouncements by many pundits that there is nothing shocking here actually expose the heart of the issue: the jaded insider game of hypocrisy and cynicism involves much of the established media and punditry. Most diplomats and journalists already know, for example, that the U.S. has been spying on United Nations officials––this had actually been exposed but received fairly little media attention, certainly less than Dancing with the Stars. Most “insiders” know how the game is played. That nations go to war for interests and resources. That, around the world, the U.S. is not seen as a pure purveyor of democracy. That lobbyists dominate policy-making. That there is a scientific consensus around global warming and that almost all the dissent is financed through the oil-coal interests. That big nations often try to use multinational institutions to advance their own narrow agendas under the cloak of high ideals. That many politicians are corrupt, ignorant and self-interested. But most of this is rarely discussed in an open and serious manner.
There are many other issues that Wikileaks raises (such as the consequences of the corporatization of our social commons, which I had previously written about) and the relationship between the relatively open and distributed nature of Internet’s infrastructure and its ability to support a dissident public sphere (DDoS attacks cut both ways and I believe that they are counterproductive as they derail the conversation away from the real topic, transparency and accountability of the modern state, into trivial questions). However, I believe that Wikileaks also points to the way forward for civic journalism to survive as a relevant force — by first becoming an outsider to power. Without major newspapers’ role in acting as active intermediaries in focusing public attention to the revelations in these cables, these would likely get lost in a sea of confusion and clutter.
(She has written several pieces about Wikileaks, dictatorships, Jaron Lanier and other things, which I will get around to later.)
One of the more intriguing posts was her pre-iPad piece about how consumption devices like the iPad may end up devaluing writing, or at least the ability of readers to respond:
That is a consumptive mindset and before the Internet, that was the most easily available way of spending free time. Over the years, as the Internet has taken over my world, I have done less and less of that and I suspect that is true for many people. I find it hard to just consume without being able to say something. I want a keyboard nearby. A large part of what made the Internet such a breakthrough for the public sphere that was previously dominated by one-way conversation from the powerful, the rich and the slickly-produced is the fact that interactivity through writing was built into the device that we used to access it. Computers came with keyboards, not touchscreens and not speech recognition. So, the issue is not just that typing on a screen is clunky. And I suspect soon these devices will come with speech and natural language recognition. That’s fine if you want to tell your mobile device to call home or pull up recipes for oatmeal-raisin cookies. Not so fine if you want to comment on a blog post, fire off a tirade, or write a lengthy email. Very few people can dictate as if they were writing — and often those rare types are professional writers.
Writing, especially writing at length is a different modality of thought than talking and it also allows a different kind of exchange and discourse. (I refer specially to the scholarship of Neil Postman and Walter Ong.) As Postman argues, writing and the spread of the printed word through literacy and the printing press created a culture in which it is possible to debate ideas at length and produce analytic thought which can be produced, advanced, discussed, refuted, rejected, improved and otherwise churned through the public sphere. As Postman writes in Amusing Ourselves to Death: “almost all of the characteristics we associate with the mature discourse were amplified by typography, which has the strongest possible bias toward exposition: a sophisticated ability to think conceptually, deductively, and sequentially; a high valuation of reason and order; and abhorrence of contradiction; a large capacity for detachment and objectivity; and a tolerance for delayed response.” (p.63).
Although my pasting of the relevant passage preserved the original hyperlinks, I have not checked them or gone to the trouble of determining whether they are relevant or interesting. Is a blogger obligated to include these hyperlinks as is? Sometimes I disable them; sometimes (like now) I leave them alone. In this case, I trust that the blog author links to things which are intrinsically interesting; on the other hand, I have noticed that academic discourse often uses footnotes and references to other scholarship to buttress (and possibly to distract from) an analysis. Perhaps it is important to allude to Postman here; then again, can the thought be adequately conveyed without making this allusion? For academics it is important to convey a thought’s provenance, but for plain old writer types this is less important (and possibly distracting). Blogger types are constantly referring to things and updating threads, so it is hardly surprising that a blog would throw in links everywhere (especially when citing economic or scientific analyses).
On the other hand, it can be cumbersome for the reader to travel through passages detailing such provenance. These links may be useful for confirmation, but on a first reading they are not terribly helpful. I have also noticed that many current-events bloggers throw in far too many links (which mostly offer confirming or redundant reports without adding new information). As odd as it sounds, when I blog or write something for the web, I tend to keep hyperlinks to a minimum (especially when I am trying to say something). There is a practical reason for this: ten years from now, 90% of hyperlinks will be useless, and all that will remain is your description of what was there. I remain skeptical that archiving technologies like the Wayback Machine will continue to be as helpful as web applications become more sophisticated and proprietary.
Back to Ms. Tufekci’s piece about the iPad. There is something to her argument, but can’t people have two devices for two different modalities of communication? I have a desktop, a laptop and an iPad. I bring the iPad almost everywhere (mainly for reading articles/ebooks/RSS feeds, jotting short notes, bookmarking and catching up on email). I do my most productive work on my desktop, with my laptop as a backup.
My concern is different: I publish online fiction, and a laptop or desktop is not useful for reading in bed, in a restaurant or on mass transit. A keyboard might be nice to have when someone is reading my stories, but I’d much rather a potential reader be able to read comfortably in diverse contexts.
In that respect, the iPad functions for me very much as a storage device for media and information which I want to process later in a more relaxed mode. Yes, we are limited by the device’s form factor (and I would expect that to change over the next decade), but I wouldn’t be surprised if I am still using two devices for different modes of thought at that point.
In the ebook world we are debating the uses and necessity of keeping the concept of pagination; frankly, the experience of reading in a web browser (even if we overlook the ads) is unpleasant and not conducive to deep understanding. At the moment I have 20 browser tabs open, and my screen is significantly wider than it is tall (which results in awful line lengths). Also, each paragraph of text seems peppered with all kinds of hyperlinks of varying usefulness. (Have you noticed that the NYT includes lots of links to NYT "topics" or "categories" of no intrinsic usefulness or relevance to the article itself?)
By contrast, the PDFs and epubs I read on my iPad allow me to read about a subject in depth without distraction. They offer a way to read that I have deeply missed since the advent of the Internet.
Ms. Tufekci wrote this piece before the iPad actually came out, so she didn’t have the benefit of seeing the device in action or the apps which would later be released. But I can report that after 8 months of ownership: 1) the amount of writing I do on my laptop/desktop is no less than it was before; 2) I have in fact been able to edit an entire book on my iPad; 3) I have been able to read electronic documents on my iPad in ways I was never able to do comfortably on my laptop or desktop; and 4) there are numerous contexts where I have both the iPad and laptop/desktop open simultaneously.
I am not altogether satisfied with the notetaking ability of the iPad; by that, I mean the ability to sync my clippings, notes and bookmarks with other computers where they can be easily accessed. Often I will find something on the iPad and then access the URL simultaneously in a PC web browser. Clumsy, but adequate. For me, the hardest adjustment to the iPad was having to deal with one app at a time and keep switching back and forth. At least with a Windows PC, it is easy to switch between a variety of tasks listed on the taskbar. To summarize:
The iPad is a great single-tasking storage device.
Update: I notice that Tufekci gave a talk a few months later (presumably after she had time to look at the device). Here’s the provocative summary:
Apple’s much-discussed iPad fits more within the logic of consumer society rather than the participatory Web 2.0 with its focus on active participation, diminishing corporate control, and a trend towards free products and services (what has come to be known as “prosumer society”). In contrast to Web 2.0, where users as “prosumers” actively participate in the production of that which they consume and often create systems from the bottom-up, the iPad channels passive consumption, corporate control via “closed” systems and a renewed focus on traditional, top-down, pay-per-view media. Indeed, the iPad is engineered to enforce this passivity, for example through lack of a tactile keyboard. The iPad is indicative of Apple’s Disneyfied approach in the way it produces spectacle to enchant or “wow” individuals into passive consumption, attempts to exercise control by creating a “walled garden,” and seeks to monetize more and more of the interactions within the system.
(That would have been a great lecture to attend!) Let me say that there is merit to most (if not all) of this criticism. But not if you regard the iPad (and its clones) as primarily storage devices.