Monday, July 28, 2008

Diversity 2.0

I'm giving a talk this Friday to the annual Vocation of a Lutheran College conference in Decorah, Iowa. The title of the talk is Diversity 2.0. The talk will explore the changing nature of diversity in an increasingly "wired" society. I'll post the presentation slides in the next day or two.

The talk will look at diversity and how it relates to Aristotle's three forms of knowledge. The crux of the talk is that we're moving from a primary rationale for diversity based on episteme (epistemological knowledge) or techne (technical knowledge) to one based on phronesis, or wisdom, for lack of a better term. This is so because as the network society evolves, epistemological and technical knowledge can be acquired on-line, but wisdom still requires face-to-face interactions with diverse others. More soon :-)

Wednesday, July 23, 2008

Diversity and Information Overload

One of the more interesting aspects of Carr's Atlantic article and the responses on edge.org and britannica.com is the effect this rewiring has on inter-group and inter-cultural relations. This is Carr's main point:

What the Net may be doing, I argue, is rewiring the neural circuitry of our brains in a way that diminishes our capacity for concentration, reflection, and contemplation.


Carr is suggesting this is happening mechanistically, as if the irresistible draw of the web leaves us no choice in the matter. There are global driving forces which make us want to be insatiable Netizens. As we proceed through what Manuel Castells calls a "network society," we fear being excluded from its nodes. In a response to Carr, W. Daniel Hillis attributes our desire for connectedness to globalization:

Fast communication, powerful media and superficial skimming are all creations of our insatiable demand for information. We don't just want more, we need more. While we complain about the overload, we sign up for faster internet service, in-pocket email, unlimited talk-time and premium cable. In the midst of the flood, we are turning on all the taps.

We are now trying to comprehend the global village with minds that were designed to handle a patch of savanna and a close circle of friends. Our problem is not so much that we are stupider, but rather that the world is demanding that we become smarter.


I think this is a better way of thinking about our relationship to information. Our desire to know the world around us is being outstripped by the increasing ease with which we can know it. The response to this is not an inability to reflect, but a desire to respond in real time to a rapidly evolving network of places, events and relationships.

This need to be "in the network" leads us towards what Douglas Rushkoff in his edge.org entry calls "thin-slicing" information. I admit to being a thin slicer, scanning headlines and RSS feeds to pull out nuggets of wisdom that I believe make me not only smarter, but a better global citizen. But does knowing superficially about what's going on in Rangoon, Geneva and Buenos Aires make me a better person? Am I really engaging with these "others" in a meaningful way? Larry Sanger says no:

To be limited to Twitter-sized discourse ultimately means that we will never really understand each other, because all of our minds are complex and in that way “cathedral-like.” It is extremely difficult to understand other people, unless you take a long time to study what they say. If we do not understand each other in our full and deep individual complexity, we will be invisible to each other, and ultimately incapable of real human society.


Carr suggests that Google's business model is dependent upon my believing that a "thin slicing" approach to the web is leaving me better off.

The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.


My great concern is that this is how we begin to view diversity: as a collection of disconnected experiences that define our consumer selves. In other places, I've called this "menagerie diversity," a diversity built upon an "appreciation of the other" rather than upon actual engagement and collaborative work with the other. The great irony is that, as Hillis points out, we are closer to each other than ever before, yet at the same time we've never been further apart.

Monday, July 21, 2008

Kitty Immigration

Thanks to The Sanctuary, a great new blog I found on immigration issues, for this link to The Pinky Show's oddly mesmerizing take on the immigration issue. Pinky is a kitten with a mission to speak truth to power! Stick it to the Lou Dobbs-man, Pinky!

Sunday, July 13, 2008

No Cost Castrations, Cybertizzies, and Now this!

Obama is certainly experiencing the crucible of presidential electoral politics. On one hand, the venerated Jesse Jackson wants to perform a no-cost castration. On the other, the netroots are in a cyber-tizzy over Obama's signing of the FISA bill. Now he has to shake the mainstream media's gleeful exploitation of the "Muslim/Black-radical meme." This New Yorker cover from Ben Smith's Politico blog highlights how the MSM can use the flimsy justification that the public's belief in "the Muslim thing" is an interesting cultural phenomenon and thus worthy of treatment.



Of course, if you're going to talk about it, you need a controversial cover because, well, you have to sell magazines. It's a sleazy turn in the coverage of presidential politics. The New Yorker has decided to racialize the Obamas because a small sliver of the U.S. population thinks he's a Muslim. They've given the darker forces of our culture a new laptop screen background.

Thursday, July 10, 2008

The End of Theory II

Edge.org has a wonderful symposium of reactions to Chris Anderson's Wired article on The End of Theory. What strikes me from reading the symposium is the lack of regard for inductive methodologies as "science." The presumption is that what Richard Fenno called "soaking and poking" is something new in the world of science. Traditionally in my discipline, it has always been thought of as a prelude to the real work of hypothesis testing.

What strikes me as fascinating is the ability of "computing in the cloud" to hyper-soak and poke. Kevin Kelly uses some interesting examples from Google about this potential.
It may turn out that tremendously large volumes of data are sufficient to skip the theory part in order to make a predicted observation. Google was one of the first to notice this. For instance, take Google's spell checker. When you misspell a word when googling, Google suggests the proper spelling. How does it know this? How does it predict the correctly spelled word? It is not because it has a theory of good spelling, or has mastered spelling rules. In fact Google knows nothing about spelling rules at all.

Instead Google operates a very large dataset of observations which show that for any given spelling of a word, x number of people say "yes" when asked if they meant to spell word "y." Google's spelling engine consists entirely of these datapoints, rather than any notion of what correct English spelling is. That is why the same system can correct spelling in any language.

In fact, Google uses the same philosophy of learning via massive data for their translation programs. They can translate from English to French, or German to Chinese by matching up huge datasets of humanly translated material. For instance, Google trained their French/English translation engine by feeding it Canadian documents which are often released in both English and French versions. The Googlers have no theory of language, especially of French, no AI translator. Instead they have zillions of datapoints which in aggregate link "this to that" from one language to another.

Once you have such a translation system tweaked, it can translate from any language to another. And the translation is pretty good. Not expert level, but enough to give you the gist. You can take a Chinese web page and at least get a sense of what it means in English. Yet, as Peter Norvig, head of research at Google, once boasted to me, "Not one person who worked on the Chinese translator spoke Chinese." There was no theory of Chinese, no understanding. Just data. (If anyone ever wanted a disproof of Searle's riddle of the Chinese Room, here it is.)
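Kelly's description of a statistics-only spell checker can be made concrete. The sketch below is hypothetical and follows Peter Norvig's well-known toy corrector rather than Google's actual system, but it works in the same spirit: it knows no spelling rules at all, only a table of observed word frequencies, and suggests the most frequently observed word within one edit of the typo.

```python
from collections import Counter

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word, freq):
    """Suggest the most frequently observed word near the input.
    No theory of spelling anywhere -- just counts of what people wrote."""
    if word in freq:
        return word
    candidates = [w for w in edits1(word) if w in freq]
    return max(candidates, key=freq.__getitem__) if candidates else word

# A toy "corpus of observations" standing in for Google's datapoints.
freq = Counter("the quick brown fox jumps over the lazy dog the end".split())
print(correct("teh", freq))  # suggests "the" purely from frequency
```

Swap in a corpus in any language and the same code corrects that language, which is exactly Kelly's point about theory-free systems.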
This is no doubt true when it comes to social science, where we are notoriously dreadful at prediction. It is not so true for explanation, science's other core purpose. Here's Bruce Sterling's amusing rejoinder to Kelly:
Surely there are other low-hanging fruit that petabytes could fruitfully harvest before aspiring to the remote, frail, towering limbs of science. (Another metaphor—I'm rolling here.)

For instance: political ideology. Everyone knows that ideology is closely akin to advertising. So why don't we have zillionics establish our political beliefs, based on some large-scale, statistically verifiable associations with other phenomena, like, say, our skin color or the place of our birth?

The practice of law. Why argue cases logically, attempting to determine the facts, guilt or innocence? Just drop the entire legal load of all known casework into the petabyte hopper, and let algorithms sift out the results of the trial. Then we can "hang all the lawyers," as Shakespeare said. (Not a metaphor.)

Love and marriage. I can't understand why people still insist on marrying childhood playmates when a swift petabyte search of billions of potential mates worldwide is demonstrably cheaper and more effective.

Investment. Quanting the stock market has got to be job one for petabyte tech. No human being knows how the market moves—it's all "triple witching hour," it's mere, low, dirty superstition. Yet surely petabyte owners can mechanically out-guess the (only apparent) chaos of the markets, becoming ultra-super-moguls. Then they simply buy all of science and do whatever they like with it. The skeptics won't be laughing then.

Wednesday, July 9, 2008

The End of Theory

Chris Anderson has an interesting, if strange, article in WIRED where he makes the claim that we are arriving at the "end of theory." He makes the case that massive amounts of data (what he calls the Petabyte era) make the scientific method obsolete. The large-scale data collection and analysis that the lightning-fast processing speed and massive storage capacity of modern computing allow makes pattern matching a much more viable approach to knowledge creation than hypothesis testing.

There is now a better way. Petabytes allow us to say: "Correlation is enough." We can stop looking for models. We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.
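Anderson's "correlation is enough" claim can be illustrated with a toy sketch (the variable names and data below are invented): hand an algorithm a pile of variables with no hypothesis at all, have it flag every strongly correlated pair, and the one real association baked into the synthetic data surfaces on its own.

```python
import random
from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)  # deterministic toy data
data = {f"v{i}": [random.gauss(0, 1) for _ in range(500)] for i in range(5)}
# Bake one real association into otherwise unrelated noise columns.
data["v4"] = [x + random.gauss(0, 0.3) for x in data["v0"]]

# Hypothesis-free pattern hunt: flag every strongly correlated pair.
strong = [(a, b) for a, b in combinations(data, 2)
          if abs(pearson(data[a], data[b])) > 0.8]
print(strong)  # the planted v0/v4 association is found, nothing else
```

The sketch also hints at the standard objection to Anderson: with enough variables, some spurious pairs will eventually clear any threshold, which is why the commenters are shellacking him.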


While the poor guy is getting shellacked on the comment boards, he's on to something. He probably overstates his case for the natural sciences, but his argument is more telling for the social sciences. If theory, even universal theory, about human behavior is time bound and context dependent, and society is innovating and changing at an exponentially rapid pace, then what good is universal theory?

Bent Flyvbjerg's wonderful book Making Social Science Matter makes a related but different argument about the shortcomings of applying scientific principles to social science. He argues for an emphasis in social science on phronesis, or knowledge of the "art of living," rather than episteme, or knowledge for its own sake. Here's a telling passage from an essay derived in part from his book.

Regardless of how much we let mathematical and statistical modeling dominate the social sciences, they are unlikely to become scientific in the natural sciences sense. This is so because the phenomena modelled are social, and thus "answer back" in ways natural phenomena do not.


This is the guiding principle behind my own thinking about race scholarship. It is much more instructive for us to guide our scholarship towards knowledge that enhances the art of living in a multicultural democracy than to pursue the quixotic search for some universal law of race relations.

Wednesday, July 2, 2008

Social Desirability Bias and Juicy Campus

In preparation for my Race and Politics course this fall semester, I've brushed up on the latest work out there on social desirability bias. The general idea is that we harbor implicitly biased views about other groups that we do not express openly lest we run afoul of social norms.

The web can provide a safe space for unleashing these implicit biases. One such place where college students can vent their implicit biases is Juicy Campus. A piece in the latest issue of Radar features the controversy over the site's content. The founder of the site seemed to have innocuous intentions:

"We thought people might talk about what happened at some fraternity party last weekend, or to rank sororities. That sort of thing," he insists. "And if you look, you'll definitely find those fun stories. And then there's a bunch more stuff that we didn't realize people would use the site for."


But the site has turned into a dustbin of offensive, unsubstantiated accusations and slurs:

promiscuity, drug abuse, plastic surgery, homosexuality, rape, and eating disorders, along with enough racist, anti-Semitic, and misogynistic invective to make David Duke blanch—that seems to generate the majority of the page views.


I first heard of this site from a student in my Community Development class last semester. What struck me (perhaps it shouldn't have) is how graphic the comments on the site were. I can remember hearing some pretty graphic stuff in my own college days, but I couldn't imagine the desire to make such comments public. I suppose that is the point: social networking sites make the private immediately public. Devices like cell phones with SMS technologies and sites like Twitter allow you to post your impulses. I wonder how many of the posts on Juicy Campus are infused with alcohol or other drugs. What social networking and participatory culture allow us to do is to be on-line in the moment. But to me the unanswered question is whether this simply captures a moment of unvarnished racism or sexism, or whether it encourages the creation of routines that support further exposition of offensive views.

Tuesday, July 1, 2008

Folksonomy as a Political Methodology in the Study of Race

There has been some good recent scholarship (here and here) in political science challenging the use of the hypothetico-deductive model to explain how race impacts the political process. Traditionally, political scientists have taken race or ethnic identification to logically precede group-based interest-formation and mobilization.

The reality of race and ethnicity is that they are multifaceted, inter-sectional and contextual constructs that cannot be captured through survey research that asks respondents to check a box next to the ethnicity with which they identify.
"Attempts by statistical researchers to 'control for third variables'... ignore the ontological embeddedness or locatedness of entities within actual situational contexts." (Emirbayer 1997, 289)
This is true, but then the question remains: how do you validly and reliably study identity in the political process? One interesting approach might be to apply folksonomies to race questions in political science. Rather than asking people to classify themselves according to the controlled vocabulary of the survey researcher, a folksonomy would allow respondents to use as many self-identifiers as they want to describe themselves. You could then use social network analysis to group respondents into clusters based on the similarity of their self-tagging structures, and test whether cluster membership is related to a political outcome of interest.
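As a rough sketch of that clustering step: the respondent names and tags below are entirely hypothetical, and a simple Jaccard-similarity threshold stands in for a full social network analysis, but it shows how shared self-identifiers can group respondents without any researcher-imposed categories.

```python
def jaccard(a, b):
    """Overlap between two tag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(tag_sets, threshold=0.5):
    """Greedy single-pass clustering: join a respondent to the first
    cluster whose seed member's tags are similar enough, else start
    a new cluster."""
    clusters = []
    for name, tags in tag_sets.items():
        for c in clusters:
            if jaccard(tags, tag_sets[c[0]]) >= threshold:
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical respondents and their free-form self-identifiers.
respondents = {
    "r1": {"latina", "catholic", "texan"},
    "r2": {"latina", "catholic", "mother"},
    "r3": {"black", "southern", "veteran"},
    "r4": {"black", "southern", "baptist"},
}
print(cluster(respondents))  # two clusters emerge from shared self-tags
```

With clusters in hand, the final step would be an ordinary statistical test of whether cluster membership predicts the political outcome of interest.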