Category: Editorial

A Problematic Statement


Photo Credit: International Herald Tribune

RECENTLY, a post at “abu muqawama,” the blog of the liberal-friendly think tank the Center for a New American Security, dredged up a part of the “W. Bush” war doctrine — though not the one we normally associate with him, the preëmptive strike based on a dramatically low-set bar known as the one percent doctrine — a part that seems to have slipped the minds of many. In the run-up to Afghanistan, Bush No. 43 made the statement:

We will make no distinction between the terrorists who committed these acts and those who harbor them.

The statement, at the time, seemed to be what was needed in our nation’s moment of vulnerability and what, frankly, many had probably hoped to hear. The posturing was par for the course for an administration on a war footing, one known to indulge in a kind of cowboy-like imagery in the midst of crisis, which played very well in Peoria, as they say. But, in retrospect, did the statement ultimately create an ever-expanding problem for our government in the prosecution of the “Global War on Terror”? If we were willing to go anywhere and bring America to the terrorists, then wouldn’t this be a forever enterprise; a “forever war”? (To borrow a phrasing from the title of a Joe Haldeman book and Dexter Filkins’s personal tome on Iraq.) There isn’t actually anything wrong with the statement in my mind, as any nation knowingly and willfully sheltering terrorists should be considered a national security threat. Except, there is a problematic exception: Pakistan.

As abu muqawama points out:

And whatever you think of the former president, not distinguishing between transnational terror groups and the individuals, groups and states that sponsor them makes a high degree of sense. What to do, then, about a country that, on the one hand, supplies much of the intelligence that allows the United States and its allies to target al-Qaeda but, on the other hand, most certainly also sponsors transnational terror groups to promote its own foreign policy? That’s our Pakistan problem in a nutshell, and it shouldn’t surprise anyone that U.S. policy toward Pakistan is schizophrenic, with us alternating between sticks and carrots, creating a dynamic that, from the Pakistani perspective, must make little sense and certainly fails to establish a coherent and enduring structure of incentives for collaboration.

Pakistan specialists talk of Pakistan’s strategic triangle and the way it relies on the possession of nuclear weapons, a robust conventional army, and state-sponsored terror groups to advance Pakistani interests. I can understand how a smart old Pakistan hand like Ryan Crocker could then argue we should support Pakistan anyway, but at some point, support for the Pakistanis is just going to cease making sense to Americans and their representatives in the Congress. Americans will begin to wonder how we got from the president’s words on 11 September to this. And it might not take another terror attack, emanating from Pakistani soil, to change the relationship.

Cornel West on ‘Credibility vs Mass Appeal’

Stephanson: Was there ever actually a mass black audience for bebop?

West: Yes, Parker’s was the sort of black music people danced to in the 1940s. Miles’s ‘cool’ stage was also big in the 1950s with albums like ‘Kind of Blue,’ though it went hand in hand with the popularity of Nat King Cole and Dinah Washington.

Stephanson: What happened to this avant-garde black music when Motown and Aretha Franklin came along?

West: It was made a fetish for the educated middle class, principally, but not solely, the white middle class. In absolute terms, its domain actually expanded because the black audience of middle class origin also expanded. But the great dilemma of black musicians who try to preserve a tradition from mainstream domestication and dilution is in fact that they lose contact with the black masses. In this case, there was eventually a move toward ‘fusion,’ jazz artists attempting to produce objects intended for broader black-and-white consumption.


“Act Won,” The Roots

SPIKE LEE’S 1990 film Mo’ Better Blues features a particularly poignant scene with regard to the Cornel West-Anders Stephanson discussion excerpted above. (The scene is also referenced in The Roots song “Act Won,” from 1999’s Things Fall Apart, embedded above: Wesley Snipes’s character, “Shadow,” vehemently rejects the insistence of Denzel Washington’s character, “Bleek,” that his brand of jazz had lost black support, saying to his fellow musician: “That’s bullshit, that’s bullshit; if you played the shit that they like, then the people would come.”) And to anyone familiar with the rise, acceptance and, ultimately, the backlash suffered by many of the most revered performers of black music, it should strike a familiar chord: the ever-present game of musical politics; the delicate balance and omnipresent tension between wide acceptance and critical credibility.

Anecdotally, to fight the backlash that comes with broad acceptance and the taint it could place upon their credibility as artists, John Coltrane and Miles Davis would sometimes play with their backs to the crowd, perhaps to disavow the audience and imply that its opinion didn’t matter to them. Likewise, the persistent Lauryn Hill rumor that trails her to this day — even now, in her strange absence from regular performing and her reported troubles — is reflective of this tension as well, especially in its racial dynamic. The rumor varies, but it centers on a supposed statement in which she is claimed to have said that she would rather have her kids starve than have white people buy her album. There is absolutely no evidence that Hill said this back in 1998 or 1999, after her solo album was released, and no video or audio of the utterance exists to this day; yet it is generally given credence by the implication that she made the statement to M.T.V., and by the power of this tension (and its sometimes racial underpinnings for many black artists).

The rumor, regardless of its origin and veracity, may have been a reaction to her robust album sales (8 million copies) to a primarily white audience, and an acknowledgment of the potential backlash those sales could invite, after she earned glowing reviews for her album, The Miseducation of Lauryn Hill, from the mainstream press. By implying that Hill had no particular desire to hold or entertain a white audience, the unidentified person or persons who started the rumor might have presupposed that Hill would be saved from a backlash and from the loss of her “credibility” among her black audience. (Acknowledging that this is an absolute shot in the dark as to pinpointing any number of possible motivations.) However the rumor started, like the classic childhood game “telephone,” the story has become jumbled by constant retelling and reinterpretation, and whatever the actual truth is has been lost in what the story is today. Hill has made several attempts at personally repudiating the rumor, to no avail, as it became clear that it was generally accepted as verifiable fact — much like the Tommy Hilfiger urban legend, in which the fashion mogul supposedly said on Oprah, of all places, that he didn’t particularly support the idea of black people buying his clothes. (The Hilfiger line rose to prominence primarily because of its black support among urban youth in the mid-1990s. Streetwear circa 2002-2005, anyone?)

As seen with almost any critically praised artist in any art form, wide acceptance is the kiss of death in the eyes of an artist’s earliest and most hardened, vocal devotees, and so many avant-garde artists buck against the larger commercial ethic in society, aside from their personal objections to such commercial appeal. With black artists this is even more thorny, as garnering a new audience could lead to the label of “sellout,” both in the racial and identity-politics sense and in the commercial-market form of the more typical denigration. This tension of “critical v. popular” is even somewhat visible within the narrowcast cable television shows that are cultishly followed: when a larger and more diverse viewership — in age, economics, and so on — begins to take shape for a show, a backlash of some kind will generally follow among the most fanatical elements of its base, those who participate in its outside milieu on Web message boards and the like. (Even if small.) Similarly, in many niche music circles outside of black culture, this tension is palpable.

And while I think that this phenomenon is in some ways a different animal than the gravitational force of internal racial politics and the art politics of wide audience appeal, it has a parallel in the discussion, because it in large part has to do with “authenticity” and concepts of “cool” by way of who spotted it first. Crossing over from the avant-garde black realm to the critical realm of avant-garde white audiences can ultimately lead to a black-fan migration or a lessened interest within the black community, as Cornel West mentions, because the novelty and “ownership” felt by the black audience is somewhat damaged. (Perhaps this is because of the balancing act that leads such black performers to compromise even a little bit for their white audience, or because the luster of black ownership chips off and comes to the fore after an artist is embraced by a white audience of any kind.) I believe that there are also other elements at play in these discussions, elements which are difficult to identify and which could even track to social class. (Say, if a band had an affinity for working-class content at its inception and moved into less politically linked topics, a backlash would also ensue and the “sellout” charge would almost certainly be levied.)

This balance of critical acceptance — generally a result of experimentation, obscurity and novelty, with some nod to the basic appeal of the content — and market success is hard to maintain precisely because it is ever-conscious of the tension and of the desire to keep a particular audience. In essence, it caters to a specific interest and is a compromise all its own. In the example of the narrowcast cable television shows, maybe it is also the fact that those who supported the television show (or, in the extended dialogue, the artist or even a genre) believe that they are more dedicated than someone who just “joined the bandwagon,” as it were. While it is important to note that this is not a rule, it is a very recognizable pattern, perhaps fueled on a mass level, subconsciously, by the contention that massively appealing products will generally be qualitatively diminished compared with those that are not, as a result of their appealing broadly and the concessions which must be made to do so. It’s as though the model of mass production in manufacturing, credited with producing a lower-caliber product, has been migrated from industrial economics to the template of music and art — here, “mass production” aligning itself with the growth of an audience.

This political game is particularly endemic to the subcultural phenomenon that ascribes the undefinable trait of “credibility” to subculturally oriented musical genres in which the music is viewed as the vanguard of a musical movement of some kind — like most traditionally black music — and which also operate within a tacitly agreed rubric of an “acceptable expansion” of the artist’s fan base, one that ultimately keeps the artist from ever reaching grand-scale popularity. It is an interesting phenomenon to dissect, if you are an individual who has felt the misplaced, unjustified betrayal of seeing your favorite musical performer gain a wide appeal after discovering them prior to generalized acceptance.

But the great dilemma of black musicians who try to preserve a tradition from mainstream domestication and dilution is in fact that they lose contact with the black masses.

But as the Cornel West statement above makes plain, there is nonetheless a great dilemma in this attempt by black artists to retain and nourish both an uncompromised artistry and their black support. Ultimately, these artists lose their black followings. Now, perhaps in the scenario laid out — the rise of Motown and Aretha Franklin versus the era of bebop — there was the element of a comparatively greater accessibility for black audiences, but why is it that non-mainstream black artists tend to lose their audience? It seems that such artists begin with the support of their cultural environments (e.g. black folk), and then cross into some purgatory of wider critical praise and acceptance, with an increasingly niche cross-cultural following that is touted as both devout and “vanguard.” So in spite of the earlier example of Lauryn Hill and the possible motivations behind the rumor foisted upon her — which, as I surmised earlier, was maybe started to keep her black audience — had an artist like her remained a critical darling, as she did with her first and, so far, only full-fledged solo album, it is highly likely that her audience would have become increasingly white, despite any of her efforts to avoid it.

Gaining a wide audience in almost any forward-looking movement is tagged with the notion of “selling out,” since wide appeal tends to lead to a fatter pocketbook and is usually the result of compromising some ideals, at some level, rightly or wrongly. And so the vanguard black artists in particular are left in the lurch, a philosophical ghetto of their own making. Attempting to avoid being labeled a “sellout” for profit — profit which necessitates cross-cultural appeal — only leads them to make narrower music that is heavy on artistry but low on popular appeal to the black audience, which in turn leads to more critical praise, if their methods are executed well, inching them further down a path of limited accessibility. It’s a Catch-22.

And, ironically, their avoidance of wide appeal, and the avenues they take to achieve it, will lead them to lose their coveted black audience — the very support they look to hold while being cross-culturally beloved. How, then, do black artists keep both their credibility and their core black audience? Is it even possible? I assume that even artists, black or not, who shun the idea of producing work that gains popular acceptance — who even regard purposefully crafted popular fare as an ultimately diminished product, an unchallenging, unambitious one, because if something is well liked, then it must have softened its edges and played to a myriad of sensibilities, thus losing some of its daring — still hold the desire to be universally “liked.” They would just like to be liked on their own terms, as their art is a judgment of them.

The philosophical questions of art-versus-profit and a chicken-versus-the-egg syndrome (what exactly came first: the backlash or the casual fan base?) also become apparent in these sorts of discussions, because of the inherently flawed nature of the music business selling art for profit, which never completely removes the desire for wide appeal, since both the labels and the artists must make money — though it is also a fundamentally human desire to be liked by everyone. The chicken-versus-the-egg question here often comes in the form of the backlash that is experienced if one chooses to make largely accessible music. Did the artist actually “sell out” their artistic principles for financial gain? Or did they just gain a greater fan base organically, resulting in more sales, with a backlash then emanating from their original core in response? It’s often hard to track, and it could also be that an artist’s or band’s influences, inspirations and motivations are changing or have changed. (Or any of an infinite number of other reasons.)

In the Cornel West interview above, from the French magazine Flash Art in 1987 and reprinted in West’s anthology of work, The Cornel West Reader, West brings up another point that is not often talked of in the discussion of “selling out.” The charge is often heard levied in the confines of hip-hop — but also around any narrowly followed musical and cultural product — against black artists who once were broke because of a smaller audience and then became profitable later in their career arc, as in the case of The Roots, because of word-of-mouth, longevity, perhaps better promotion, and a well-seasoned content that is, as a byproduct of refinement, more accessible; it just seems to be a particular domain of misunderstanding. Now, to be honest, the “selling out” charge never truly befell The Roots en masse. They did become more commercially viable over time, but not any less daring, I’d say. Still, I am positive someone believes that they have sold out, particularly now that they back a late-night program (Late Night with Jimmy Fallon) as their nine-to-five. While many can describe or point to events in the output of an artist and a believed change in their “sensibilities,” who is to say, really, if an artist “sold out,” or if they just happened to become more accepted for other reasons? Or, even more applicable, who’s to say that their newer musical output didn’t fortuitously track to a more widely accepted signature and style? Or that they didn’t simply get better over time, but less daring? For the ambiguity of “selling out” also seems to encompass risks taken or not taken by an artist, genre or cultural product.

What is interesting and less ambiguous, however, is what West points to in his discussion of bebop with Stephanson: the rise of Motown, he says, helped map the genre demographically to whites and the black middle class, a demographic within the black community which had expanded in that time. Among all of these questions about “selling out” and its supposed culprits’ motivations, and whether it is based in reality or just perception: can it also be possible that those of a particular economic stratum need to listen to a particular brand of music that they believe to be reflective of them? And so an audience, its relative socioeconomic class and its artists are entwined, but not necessarily wholly linked? How come some — or many — black artists can produce superbly narrow music and lose their core audience while gaining or reinforcing another one, as with bebop tracking to whites and the black middle class when Motown began to rise? And why is it that a particular brand of “conscious hip-hop” is more supported in the white community?

As people gain more markers of elite status in a society, doesn’t their fashion change? Doesn’t their taste in movies change? (Maybe not, if this transition happens within the same generation, but it almost certainly happens across generations.) Don’t people always look to art and products that reflect them and speak to them? And if so, are artists “selling out” if they become the very reflection of the demographics that begin to adopt them, and of the more recent world they live in, regardless of their artistic or biographical origins? Isn’t it hard to talk about struggle politics while living in a mansion? And if, say, an artist makes a very specialized music that requires a particular knowledge to understand (like hyper-sociological raps), wouldn’t that artist’s fan base become just as specialized as the message? Wouldn’t the fan have to be as erudite in the form of communication, as fluent as the artist, to truly enjoy it? And if so, does blame befall the artist for losing their audience, if the work goes over the audience’s heads, just as Shadow implies Bleek’s jazz had done? My question, right now and for some time, is: “How can we even tell what ‘selling out’ is anymore?”

His Secondary Peak

Photo Credit: SLAM, February 2002

HONESTLY, even though I am a fan, I am surprised that he is here now: on top, winning the way he is. The rocky mid-career controversies that he was saddled with by vocal fans and the media, and those he created for himself in the court of public opinion, have disappeared into the dark of night. Or they have been eradicated by the sunshine of his winning; and winning big. It’s just so “counter-factual” that he got here, back to the very top, and through that “problem” that I can’t bear to speak of, because of its ugly implications and the tragedy already visited on both parties; and revisited, if the allegation were ever found to be true. “Colorado, summer 2003,” just seems so far from now. So removed. That string of words reads like a release date for some big movie event, except that it marks a blemish on the memory and dulls his luster, and if you are of a cynical mind or of the “Kobe Haters” phalanx that has cleaved basketball fans down the middle, perhaps it represents a time when a man who you believe is guilty of an unspeakable act was left unscathed by the law. (But even I don’t believe that a measurable number of the people who despise him believe this.)

And regardless of your feelings toward him, good, bad or indifferent, considering all that washes over the Kobe Bryant legacy, marketing reputation and name, are you not also surprised by his rehabilitated stature now? Because there were times in the middle of his career when that hate was palpable: people were wishing for him to fail. People were wishing it for no clear, justified reason, other than what they deemed to be his “arrogance,” far before Colorado even happened and stirred the pot, and more so after. It was so thick in the air that it has taken time to even slightly subside; to the point that arguments continue to abound about where he stands in the current league, in the past decade, alongside the current generation of under-30 players and in the history of the game. It’s a special form of diminution: saying that, no matter what, he will not be given his due. He just can’t be at the very top, uncontested. Despite his longevity and in spite of all his winning.

Still he finds a way to slog through it, and he continues to perform at an extraordinary level, knowing that “history will judge him kindly,” to borrow a philosophical line from the last Bush presidency. There are still caveats placed upon his accolades and his portfolio of accomplishments: “he had Shaq,” “he still isn’t shooting 50 percent from the field,” and so on. Writers dissect everything from his coach’s demeanor toward him in a crucial play to how his teammates react to his displays of emotion. “That’s not how I remember Number 23 in red doing it” is actually the yardstick for him now. Yet people fail to see that the comparison, and that shadow cast over him, actually means that he’s right there with “Him,” the other guy. They are comparing “like” things, because they want to divine a difference. And the differences are wide, supposedly. But are they, really? Has anyone else on the perimeter remained as consistently good as he has for this long, other than “23,” in recent memory? Kobe only gets better as a basketball player, even with the long minutes he’s logged. (Almost more than Michael Jordan had at 40 years old.)



The first crop of generational talents that he was placed in the context of and competed against — his cohort of Vince Carter, Tracy McGrady and Allen Iverson, the contemporaries to whom he was originally compared — have all fallen. At one point, all of those players were in the discussion of “being better than Kobe Bryant.” But their impact has been limited in the last four years, and his game still rises. Bryant just improves, outside of the statistical: his I.Q. goes up over those same years, his post game becomes an absolute terror (though he doesn’t exploit it enough), he inexplicably becomes even more clutch and he manages games far better than before. He is the standard-bearer now for the young guys whom people swiftly look to use to bury him, prematurely: “Durant is coming,” “LeBron is better,” “Wade is better.” But they’ve been saying for years that his demise is impending. In the spring, when he was hobbled by a knee that required an operation and a now-permanently busted index finger on his right shooting hand, and he was struggling to play to his level in the midst of another deep playoff run, critics were saying that his decline had finally come. And with all of that, he still wins it all? He found a new way to shoot, mid-season, with that torn-up finger, and his percentages ticked back up; he found a way to get lift from a knee that wasn’t providing any flexion. He just used his guile to stay the course.

Bryant is still more feared than any of those other contemporary guys he is measured against. This comes a decade after a visible contingent dedicated itself to his discounting, and five years after they’d begun to say, “He’s no LeBron James,” or “Nash was clearly the M.V.P. that season”; the year Kobe Bryant put up historic numbers for the entire 2005-2006 campaign, the award was still given to Nash. It’s a bit funny to me now that his résumé and longevity are beginning to create a special kind of begrudging respect. I don’t know of another player of his caliber in the history of basketball who has had to earn so much from so many, for things that are perceived about him, and to have to do so with such a high level of performance. Bryant was never mediocre in the game; he has performed at the level of a top-three player in the league for the past ten years, but his doubters’ opinions were stronger than his accomplishments.

His beefing, years ago, with a universally loved center in Los Angeles, and with a coach who had to earn the respect and trust of that center, is partly to blame, due to that coach’s method of publicly enlisting the media (unwittingly) to meet his ends with frequent excoriations of Bryant’s play, even though it was individually spectacular. Shaquille O’Neal wasn’t exactly the best teammate, it turns out now, and the many problems he has had co-existing with other superstar guards, or just with the franchises themselves, clear Bryant’s name a bit; but back then it all fell on Bryant’s doorstep. Then there is the criticism he took for not playing team ball, which is ludicrous when placed under the microscope. At which point was he guilty of this? Was it some time during the Lakers’ run of numerous playoff and N.B.A. Finals appearances and the three straight championships that he failed to play “team ball”? It seems to me a non sequitur, when you compare his team’s winning percentages through those years with that criticism of him. He literally had the ball in his hands the majority of the time in the early 2000s, as the orchestrator of the offense and the go-to guy for three Lakers championship rosters. So it’s safe to assume that if he wasn’t playing team ball, those teams wouldn’t have won so much. Unless those teams won in spite of Kobe, which I find hard to believe.

Look at that picture, far above, with Bryant lording over those three trophies. It was taken in 2002, with Kobe heading into the 2002-2003 season, looking to help the Lakers win a fourth straight championship. He failed, they failed, and that terrible summer would become the confirmation for the “haters” when the events in Colorado happened. When the news broke, I was beyond shocked, but I suspended judgment. At my university, the politics of it even played out in a sociology classroom, with the female professor making frequent mentions of it and filtering the news story through a kind of Jim Rome-listener lens, only a bit more erudite. (She was an admitted Jim Rome listener.) And for all of her lefty, crunchy, granola leanings, I could hear her bias against him. That professor was and is a person I respect, and so I felt that if she could automatically feel a certain way about him — placing the years of her study of male-female dynamics onto the Kobe Template — then the young man would be damned to eternity, whether he was guilty or not. “That’s just how the mind works,” I thought. It will develop a narrative or adopt one, and from that point it takes a conscious effort to remove the narrative or rewrite it.

And so forgive me if I’m the only one who is surprised by his steady comeback, and that he has five championships now and is soon going for a sixth, especially when it seemed that he was going to burn out as the sole star on a team that couldn’t even make the playoffs, and when all that he lost in marketability during those tumultuous years has been seized again. I was aware of how very good he was during those hyper-scrutinized years, so very good, but I also knew that in the media landscape it is perception that becomes reality, not truth, and certainly not rational analysis. He has revised his story better than anyone I know. Under great fire he somehow became the epitome of what you want in a player in the media’s eye: hard-working, dedicated to his craft, with an indomitable will.

He shot a putrid percentage in Game 7 of the 2010 N.B.A. Finals and his team still won, because in a tough, deciding game where almost everyone shot poorly, he knew that extra possessions would be key. And so Kobe Bryant would run down from his position on the perimeter and fight in the paint for rebounds: 15 of them, almost a team’s worth. But that was and is always Kobe Bryant, an inveterate believer in himself and his ability, with a desire to meet his destiny to be “great.” Looking at those rocky years he endured, when he was vilified, it shines through his story that he is a testament to the kind of traits you’d want in anyone in any role: survivability, intestinal fortitude, tenacity, perseverance and a single-minded, tunnel-vision focus on one’s goals. Here’s to the power of Bryant’s unyielding spirit amid tremendous adversity, and congratulations on his continuing fight to win. Championship number six may just be on the way, providing a new bullet point for a résumé that will, ironically, resoundingly vindicate an already magnificent career.

‘Sports as a Distraction in a Democracy’

If the Cameron government is bad news for those seeking radical change, the World Cup is even worse. It reminds us of what is still likely to hold back such change long after the coalition is dead. If every rightwing thinktank came up with a scheme to distract the populace from political injustice and compensate them for lives of hard labour, the solution in each case would be the same: football. No finer way of resolving the problems of capitalism has been dreamed up, bar socialism. And in the tussle between them, football is several light years ahead.

Modern societies deny men and women the experience of solidarity, which football provides to the point of collective delirium. Most car mechanics and shop assistants feel shut out by high culture; but once a week they bear witness to displays of sublime artistry by men for whom the word genius is sometimes no mere hype. Like a jazz band or drama company, football blends dazzling individual talent with selfless teamwork, thus solving a problem over which sociologists have long agonised. Co-operation and competition are cunningly balanced. Blind loyalty and internecine rivalry gratify some of our most powerful evolutionary instincts.

“Football a Dear Friend to Capitalism,” The Guardian

I HAVE an undying love for sports and sports culture: one look at this blog’s many entries on the subject (or even the entry just below on Bo Jackson) is positive proof of this. But as the spectacle of the World Cup is upon us and much of the world is focusing its mind on the global event — while almost every region’s market and government is in some level of nearing-epic economic disarray, or slogging through a perpetually moribund state since the global financial crisis hit in 2008, and most often both — I’d like to shift some of my focus and attention to a recent Guardian blog piece by the famed British literary critic Terry Eagleton, which posits a theory similar to one I first heard Noam Chomsky espouse in the documentary companion to his Manufacturing Consent.

Chomsky, one of the most influential and dissenting public intellectuals of our time, boldly outlined a theory that sport is a national obsession in many nations precisely because it creates solidarity and commonality among people (and nationalism) while also providing a tremendous distraction — read: cover — from the things that truly matter, e.g. the daily minutiae of governing, a government’s misdeeds and so on, and it therefore assists the “powers that be” in proceeding with their dominance and exploitation of the locked-out many. The sheer amount of coverage given to sports and its almost universal following, and the very way in which it constantly creeps into the culture and our daily lives, versus the dying newspaper culture and un-biased* and “unopinionated” news outlets in general (online, over the airwaves and in print), does not exactly refute this idea of genuine civic engagement being trounced by the spectacle of sport and the media’s undue importance placed upon it.

Men and women whose jobs make no intellectual demands can display astonishing erudition when recalling the game’s history or dissecting individual skills. Learned disputes worthy of the ancient Greek forum fill the stands and pubs. Like Bertolt Brecht‘s theatre, the game turns ordinary people into experts.

This vivid sense of tradition contrasts with the historical amnesia of postmodern culture, for which everything that happened up to 10 minutes ago is to be junked as antique. There is even a judicious spot of gender-bending, as players combine the power of a wrestler with the grace of a ballet dancer. Football offers its followers beauty, drama, conflict, liturgy, carnival and the odd spot of tragedy, not to mention a chance to travel to Africa and back while permanently legless. Like some austere religious faith, the game determines what you wear, whom you associate with, what anthems you sing and what shrine of transcendent truth you worship at. Along with television, it is the supreme solution to that age-old dilemma of our political masters: what should we do with them when they’re not working?

While I have no reason to believe that there is some nefarious movement in the darkened sectors of society or the world to, say, use football and other sports in America, or soccer throughout the world, as a detour from politics, in order to fill attention spans and distract society from the fact that people have no jobs or are saddled with dysfunctional political systems; one should ask: if we could just make our election days as important as the Super Bowl, with wall-to-wall commercials encouraging people to vote and an eventful feel, if not an actual national holiday (to relieve the many people who are working of navigating the logistics and scheduling issues of voting), would we be a bit better off politically, and as a result economically? Would there be a better-informed populace if more attention were paid by everyone, and if voting and participation became as ubiquitous during an election cycle as that seen in sports culture? [...] Would we be a better watchdog against government malfeasance?

Were we all too distracted from the discourse of politics, and its intertwining with capitalism, to see that the top one and two percent were robbing the till everywhere over a number of decades, as we focused a bit too much on whatever sporting season it was, our favorite teams and clubs, and a generally disposable popular culture, not holding our politicians accountable and not forcing them to truly look at economic policy outside of the cursory, soundbite way? What if we all spent just a fraction of the money we have spent or will spend on fan gear and tickets on campaign donations? Or on donations to non-profits, or to public broadcasting, in order to journalistically cover our governments better? Here’s what Chomsky had to say on the matter of sport and democracy, following a similar path of logic to Terry Eagleton’s sports-and-capitalism thesis:

I understand that in many ways this is a false dilemma, since regardless of every nation’s sporting-culture obsessions, from the N.F.L. on Sundays, baseball and the Red Sox vs. Yankees rivalry, and LeBron and Kobe in America; to Manchester United v. Chelsea and soccer’s megawatt stars (Rooney, Beckham, Ronaldo and Henry) in Europe; to whatever is on the sports radar in any neck of the woods in the global village; there are still many more choices for distraction than just sport.

However, sports is so overwhelming, so abundant, so passionately invested in and covered, just so utterly ubiquitous, that its closest analogue is truly politics and, again, therefore economics: a place where almost all nations are currently failing, because the masses are so pacified and satiated by their entertainment of choice that their desire for more from their governments is often blunted, if not made nonexistent. With entertainment such as sports filling so much of their lives and cognitive investment, they needn’t truly think, become informed or regularly question the legislative actions that determine their fate; and then politics becomes only a place for the manipulating powerful, the intellectuals, the pseudo-intellectuals, the extremists and the blowhards.


Read Eagleton’s “Football a Dear Friend to Capitalism” at The Guardian [Here]

*“Un-biased” news is a general term for outlets which differ from the sideshow of “opinion-news,” since I believe that news cannot truly be “un-biased,” nor should it be.


Many ‘Climate Change’ Lobbyists Are Evil

Photo Credit: Retrofuturs

NOW don’t be alarmed by the lede. I am no skeptic of the great warming that has created more and more storms of increased intensity over the last couple of decades, most likely producing the watershed moment of Katrina. And I do not believe that it is sensible to conduct any more research — as some very thick-headed politicians who are beholden to business and are espousers of anti-rationalist thought would have you believe — since there is nothing else that will augment an already Titanic-size body of evidence: the ice caps are melting, sea levels are rising and there are more warm days than ever; all of which has occurred since industrialization.

[Supplement: In fact, according to this graph of historical temperature and precipitation trends since 1950, provided by Harris-Mann Climatology, the hottest years on record have all been recorded in my cohort’s — Generation Y’s — lifetime. Looking at the red line, indicating Earth temperatures, one sees a frighteningly sharp incline toward “Much Above Normal” at least seven times, counting the peaks, since 1980. It does, admittedly, sink to just below “Above Normal” six times in that same span. But this is still disconcerting, isn’t it, considering that the temperature hasn’t been below “Above Normal” in that period, and is now resting at “Much Above Normal”?]

This “warming trend,” as fringe skeptics still characterize it — implying that the Earth has naturally gone through these cycles, which would account for the current evidence — has hurt crops and is doing strange things to the ecosystem of the ocean. Furthermore, I am not, in the post’s title, actually speaking about the multitude of folks running around with a much-needed sense of urgency, looking to effect change in the glacial movement on climate change policy.

I am only talking about the wolf in sheep’s clothing that was my one serious point of contention with the Obama run to the White House, as, prepare for an understatement, the incestuous business-political machine necessitated it for the then-future No. 44. That is the power of the “clean coal” movement and its lobbyists, who not surprisingly managed to get the closest thing to a mainstream, “truth-telling candidate” to make a concession and support a mentally blunted, foolishly technologically hopeful solution to our problem of producing environmentally safe energy. And they now use his pro-“clean coal” stump speech in their ads:

“Clean coal” is a marketing fabrication produced by the various industries who stand to profit by continuing the nation’s dependence on a “dirty-burning” energy resource. Recently, the power and influence of the clean coal “movement” — which relies on as-yet-undeveloped technology and a mixed bag of hypotheticals, such as effective smokestack scrubbers that have yet to be found, among other “ifs” — was pointed to in a Mother Jones article on K Street’s booming sector of climate change lobbyists. My problem is not exactly with the lobbying, however, since it is now the reality that money has become equated with free speech and is therefore “protected” by the Constitution, regardless of whether it comes as donations from an individual (quite sensible) or from a corporation (completely asinine); my problem is with the type of lobbying, and the causes being advocated.

Photo Credit: Mother Jones

If K Street were flooded with lobbyists of a different stripe on the matter of climate change — lobbyists who actually care for the environment and are very, very honest in their intent to thwart climate change — then my qualms would be assuaged. I know that lobbying, even with its tremendous downsides, is a part of our democratic process, and with a consistent money flow, tenacity and a constituency, it can work wonders (see: senior citizens and Medicaid and Social Security, as prime examples). However, the most powerful lobby now on the issue of climate change is the American Coalition for Clean Coal Electricity. (The link is to their official site; here is a more honest review of them via the Center for Media and Democracy’s SourceWatch wiki.) “ACCCE” is a conglomeration of companies from multiple industries, including rail, mining and manufacturing. *So, you know, the most “trustworthy” folks with regard to the environment, with little to gain from lobbying for climate change policies to be unambitious and rife with notions of clean-burning fossil fuels. (*Sarcasm is all over that last sentence.)

According to Mother Jones’s “Agents of Climate Change”:

For a long time, the climate change debate in Washington took place along predictable lines: industry on one side, environmentalists on the other. But now, with the prospect of actual legislation passing Congress, and the attendant opportunities for political and financial gain, the competition has erupted into a giant free-for-all. Since 2003, the climate lobby has grown by more than 400 percent, to a total of 2,810 lobbyists — 5 to every lawmaker.

The largest players are still formidable: The American Coalition for Clean Coal Electricity, a collection of power companies and mining, rail, and manufacturing interests, spent $9.95 million lobbying Congress and the White House last year, more than any other group devoted solely to climate change. But there are now also 138 lobbyists representing alternative energy technologies. Environmental and health lobbyists numbered fewer than 50 six years ago; there are now 176. (Still, the alternative energy and environmental lobbyists put together are outnumbered more than 7-to-1 by those for major industries.)

Read Mother Jones‘s “Agents of Climate Change” [Here]

The First Public Imbroglio of Allen Iverson

Photo Credit: Daily Press

AT least half of the legacy of the (listed) 6’0”, 165-pound shooting guard Allen Iverson, now prematurely retired for reasons of pride, is wrapped in an everlasting, bifurcating air of controversy. It is to the point that saying the formerly great gunner’s name, in a sports culture so toxic that even a sanguine personality like Magic Johnson’s would’ve found it hard to survive all the slings and arrows from the fans and media, will generally conjure either ill will or a childlike fascination with his on-court ability and his lasting cultural impact of cornrows, shooting sleeves, a go-to crossover, [at least] three pairs of classic Reeboks, and an air of unabiding strong-mindedness. Any middle ground between ill will toward him and “fascination” with his game exists merely on the margins.

How this came to be is partly on Allen Iverson and, as mentioned, partly on the culture that has spawned around sports since the 1990s: its every minute of play examined, its every rumor explored by dissecting radio stations, filled with primarily white patronages that provide a seemingly petulant, one-sided voice to fans, and an environment further charged by its sometimes venomous talking heads. (There is in some sense a parallel in this environment to right-wing radio, with figures who are purposefully spiteful by trade.) It is a place where the feeling among them is: “We can go hard at them (pro athletes), because they’re all arrogant, undeserving millionaires.” Iverson is a victim of this, in a small way. But the role Allen Iverson has played in his sometimes unflattering perception, feeding into the nasty elements of the sports-environment megalith, should not be underplayed, as it partly stems all the way back to his roots and an incident he experienced as a kid in Hampton, Virginia, where he learned early on that he had to be as hard as the times he lived in, and to always appear bulletproof.

I have been making a point to watch E.S.P.N.’s outstanding documentary series, 30 for 30, which commemorates the cable sports news network’s three decades of existence. Nearing the eve of this year’s N.B.A. playoffs, E.S.P.N. aired its latest installment, “No Crossover: The Trial of Allen Iverson.” The film is by Steve James of Hoop Dreams, a native of Hampton, Virginia — the very town Iverson grew up in — who attended the very same high school years earlier, and who is the son of a Bethel High fanatic that rooted passionately for Iverson’s football and basketball teams. It was enlightening for the Iverson sociopsychological sketch and the biographical history it chronicled alone, not to mention a hearty examination of Iverson’s unwitting effect on race relations in Hampton. But that is just one facet of the now-cemented Iverson story of being polarizing in the public’s mind. And Allen Iverson, “the standout athlete,” it turns out — as the documentary reveals, for those who do not remember what the first national headline-grabbing Iverson case was about — had pretty much always been publicly ensnared in heated debates that split people down the lines of culture and race in some way or another, since the time he was one of the nation’s top prep athletes.

Those who remember Allen Iverson as a rookie N.B.A. phenom in the fall of 1996, following his more or less lily-white, cross-cultural appeal and respite from negative perceptions at the elite university Georgetown, where he performed legendarily well for John Thompson, can surely recall his obstacles to acceptance and praise among the establishment of pro basketball. “A.I.” immediately drew fire from all angles in his first few games, for being anything from too “brash” or “ignorant” of the game’s history — I still have no idea if that means he was not deferential enough, or if he genuinely was unaware of whom to dole out heaps of respect to according to the established league pecking order — to being the physical manifestation of the beginnings of David Stern’s death rattle as the commissioner of a less-threatening league than the coke-blown, “too black for T.V.,” thugged-out one that he inherited in the 1980s.

But years before donning the Philadelphia 76ers’ red and blue, as a Virginia prep-level legend in football and basketball, the young man known as “Bubba Chuck” to the denizens of Hampton found himself and some friends notoriously caught on videotape in his crucial senior year of high school, when they unfortunately participated in a Valentine’s Day bowling alley brawl. The fight pitted white residents of the community against black residents over the innocence of Allen Iverson, a person some within the Southern, coastal community had already begun to believe was given too much. The video that was dug up for James’s doc clearly shows Iverson, but it is unclear from the video that he did what he was accused of: hitting another person with a chair; a young, white girl, at that. (The racial overtones were already overpowering then, as they would be now, even before the gender of Iverson’s alleged victim was brought to public record, which only inflamed the situation.)

Photo Credit: E.S.P.N. 30 for 30

What was set in motion following the brawl, which began, according to some, with the slinging of a racial epithet by whites who had confronted Iverson and his friends, could have derailed the promising career of the 17-year-old in 1993. That year Iverson was a consensus top basketball recruit in the nation and a Pied Piper for the depressed Hampton community, particularly for its black residents, many mired in a restless situation of poverty. According to Steve James, Iverson was already on the level of a Muhammad Ali figure for Hampton by the time he found himself in quite a bit of trouble, especially for a Southern town, even if it was the more progressive 1990s. In the words of James: “The Allen Iverson case in Hampton was O.J. before O.J.” The town was going to attach years of prejudice and tension to the trial, from both sides of its divide, and marry them to a newer one: that of “the over-privileged athlete.”

The documentary poignantly outlines the dramatic racial divisions among the factionalized communities of Hampton and how the event played into the various racial and local politics of the town. There is even an implication of a high school rivalry affecting the judicial decisions in the case. Steve James goes on to mention the clear delineations between where he was raised — a more middle to upper-middle-class section of the town, though his father, integral to the story and its underlying sympathy for Iverson-the-athlete, owned a tile company for several decades in the “black” area of Hampton — and the city’s more “redneck” areas, in his words, which the town seems to be defined by, though he was less familiar with them.

While the granular, ground-level truth of the case is never revealed, Iverson and his friends did end up doing time for the charges leveled against them, and Iverson was harshly sentenced to 15 years, which some believed was so he could be “made an example of,” for his involvement in a bowling alley matter that produced minor injuries, despite near-overwhelming support for him from Hampton’s community activists and black leaders of the time, and despite extremely shoddy evidence. It turned out to be only four months of served time for Iverson, though, after he was granted clemency by Virginia governor Doug Wilder and the Virginia Court of Appeals overturned the entire case for insufficient evidence.

Whether what Iverson and his friends experienced because of that night, all of them inevitably spending time in some kind of correctional facility, was justice is left out in the ether by James. What is known is that the prejudices (on both sides) and the narrative that surrounded the “Iverson case,” which even drew the attention of N.B.C.’s Tom Brokaw, left the Hampton community ravaged by racial tension that is still unresolved 17 years after the fact. And ironically, in a racially charged brawl, it was four young black men who were draconianly sentenced, due to an obscure Virginia law — known as “maiming by mob” — which had been placed on the books a century earlier, after the Civil War, in order to protect people like them from lynching. To his credit, Iverson has said something akin to, “Whatever I went through, I had to go through at that time,” either cryptically commenting on the incident that nearly left him to the clichéd fate of so many athletes who never make it, trapped in a ghetto, or on the need for him to endure in order to become the me-against-everybody-ever-in-my-way person he was, and was so loved for being. And it is perhaps even why the young man has been given so many chances: because everyone knows the Iverson internal creed that sprouts from all of his adversity: “Make it in spite of all of this.”

On Soderbergh and ‘The Informant!’

I FINALLY saw one of 2009’s sleepers, The Informant!, and I realized once again how much I love a Steven Soderbergh flick. Why? Because Soderbergh seems never to be too divorced from complexity: the complexity of modern life, the institutions, the structures and their strictures, (sometimes) the social psychology involved, and the personal. Traffic was the first Soderbergh work I really remember, not actually recalling any of Sex, Lies and Videotape years after I watched it on the Sundance Channel. (Other than James Spader probably being sort of creepy, again, per usual.) The Informant!, based on the actual case of high-level corporate star-cum-whistleblower Mark Whitacre of the agribusiness power Archer Daniels Midland (A.D.M.), was able to traverse a very dry, serious subject — market manipulation, collusion and price-fixing in the corn-based additives trade — and make it funny. (Though the jocularity comes partly at the cost of Whitacre’s mental illness, but also from just how mundane and stereotypically “exurban-Midwestern” the characters and the setting are in the story.)

In a man who was, by most common standards, brilliant and accomplished compared with the general population — holding advanced degrees in a tough subject, serving as a vice president of the company, and working as a biochemist with a complex understanding of an area most people turned off just after organic chemistry, if they even took it — Soderbergh was also able to show the absolute naiveté of a person who just wanted to pay a good deed forward: a good deed he experienced in his own life that changed his entire course, in fact, and informed his decision to help the F.B.I. (The paucity of details is obviously meant for those who’ve yet to see the film and are unfamiliar with the Mark Whitacre story.) This “good deed” is nearly the cornerstone of his justification for deciding to coöperate with the F.B.I. — to their cynical surprise — and volunteer his and his company’s involvement in a vast price-fixing scam to a federal agent he comes into contact with on an entirely different matter, risking the grand fruits of his labor and what he has further gained through his participation in corporate malfeasance.

But within The Informant!, as with other Soderbergh films, there is a great moral question, as the government’s bumbling star witness and case-builder, Whitacre, is revealed to be less than clean himself, to the point of his credibility being not only shot but completely obliterated, both by the institutional practices of A.D.M.’s top-shelf players, which he participated in, and by his own mental scaffolding crumbling. Throughout the film you are left to wonder how one man can be so bright but also so dim about actions and their consequences, and about the fact that good deeds are very rarely rewarded. The Mark Whitacre character says at least once in the film that he thought he was heroic because of his exposing of wrongdoing, while conveniently forgetting his own transgressions, for which he pays dearly — ironically suffering a harsher fate than the other A.D.M. execs prosecuted, or than if he had, honestly, just kept his mouth zipped about the entire operation.

In the end, Whitacre’s “helping” actually costs him several years of his once-fruitful, enjoyable, monied life, due to the stress of his case-building for the government against some very well-connected, political folk — stress which added to the character’s paranoia in the film, heightened his generally humorous incompetence and made one ask, “Was it really worth it, for him to tell the truth?” For those who originally watched the global lysine price-fixing scandal unfold in real time over the early to mid-’90s, or read the similarly titled 2000 book by Kurt Eichenwald, The Informant, Soderbergh’s adaptation, The Informant! (with an exclamation mark), seems to do what the auteur has done so well over his body of work: that is, take a semi-political, semi-journalistic and erudite examination of a particularly unexplored and faceted world or subject, and interject some cutting commentary.

In his film, Soderbergh is able to mine the funny from the absolutely boring, drab environment and the unassuming ephemera of the business life it is set in, and do so particularly well for a large-scale corporate crime, which normally would be played up in its weightiness. The sole difference between this film and his past ones, however, is the ability to make what could’ve been an even more serious, mordant exegesis of a real-life case (somewhat) in the mold of Michael Mann’s The Insider, light and enjoyable, while still keeping it extraordinarily rich in informational detail about the actual event.

The Recent Past

MY aunt passed just recently, after a ridiculous set of circumstances that had beset her last five years of existence. At my old, now moribund, LiveJournal — in 2005-2006 — during my nascent “emo-blogging” and near-quarterlife crisis (inner dialogue: “surprise, it’s here!”, though unrelated to this event), I wrote about how she had come to find out about her brain tumor following an accident in which she mysteriously drove her car into a tree in the middle of the day; sober. (Or sober from external impairments, at least. Because the brain tumor had made her decision-making and faculties anything but lucid.) And how either before then or just after then, she had fallen or fainted — the exact details slip my mind — in the shower for no reason whatsoever, again, during her recovery from that accident. The cause of these multiple events, at the time, had yet to be found.

It turned up in the wash that those incidents were the result of a brain tumor and the vertigo it was producing. Later she would get into more car accidents, the result of passing out, or of taking the wrong medication amid a conglomeration of medicine that would have made shareholders in Pfizer, Merck et al. proud. Every time, it was kind of comical, but then so tragic. “She got into another accident?” We’d laugh at her story, as she embarrassedly tried to provide an explanation. Sometimes there was a squirrel in the road, or she’d just say “Oh, I don’t know,” in the way Margaret Cho’s voice would sound when she’d mock her mother.

In and around the time she developed that brain tumor — which she would recover from — she had also become a diabetic; the very serious kind of diabetic, the kind that demands a hyper-vigilant state. But it was further exacerbated by my doting uncle, her husband, who used the insulin injections he wielded as a kind of crash-regulation method, instead of the stasis-producing one it was intended to be for her. She wouldn’t recover from the diabetes. (Concurrently, my uncle’s health — he was her primary caretaker — had taken a turn: the result of cancer and other assorted problems, the handiwork of his smoking cancer sticks, or tobacco through a pipe, from the time he was 12 years old. He waxed poetic to me about it once, his eyes clearly filled with fascination and joy. He was in his 60s.) As the result of that time, and my uncle’s influence that kept her merely hanging by the thread of her insulin, her body never found a way to settle. It would cost her soon.

Moreover, between them, it was always up and down; up and down, all the time. Their relationship was clearly filled with nothing but love, but it was a-roller-fucking-coaster. And it was as if the entirety of their relationship was suffusing throughout the microcosm of her health at the end. They were definitely in the throes of love, but co-dependent: cussing each other out one moment, fighting over Vicodin pills, then asking, the very next, for the kind of vital help one needs at the end of life’s positive evolutionary stages. They needed each other to live. It was “A Strange Arrangement,” in the words of the Mayer Hawthorne song blasting from my iTunes at this moment.

Because of the multitude of things that she didn’t independently take care of in her own body — relying on everybody but herself to understand the complexities of her disease: the diet regime and the tell-tale signs that only she could feel — she only upped the ante for her demise; upped the Grim Reaper’s roulette numbers towards his merciless favor, as he was, it seemed, determinedly pushing to take her, especially after her husband passed just a year prior and she might’ve given up due to the loss.

Might’ve given up… There is no answer to that “X-factor.” Did she give up? Was there no more will? No more fire inside, the kind that lights the most vital of us, turning the lived life into a conflagration we refuse to let die so easily? Because while her diabetic condition was serious, it was far more manageable than the situation she lived under: almost always weak, almost always with a blood sugar count either too high or too low, and mostly on the verge of passing out.

I wasn’t particularly close to her, though, due to geography. I feel closer to her kids, my cousins, but I knew her well. She was always kind to me, as was her stubborn, working-class Irish husband — my recently passed uncle — with whom I shared more conversations, and whom I felt I knew a bit better than my bloodline auntie. But there were plenty of warm moments between us. I went to the Philippines with her, her husband, her son, my mom and my other aunt (whom I am close to and grew up with, spending summers in San Diego at her beauty shop and swimming in her pool).

This particular aunt and her husband met in the Philippines, where my uncle was stationed during the Vietnam era as a mechanic on jet fighters. (F-4s and F-105s, I think. If anyone cares.) It was a job he carried out until the early 1990s, working on F-16s. The gig shaped their lives; as with most career military families, it was the foremost fingerprint upon their existence: she went to Egypt to visit him, drove for hours to see him at bases that were far but reachable by car. They lived in California, the Philippines and Florida over their time together.

They had a child before the two children, my two cousins, they raised. The baby was a stillbirth, buried in the base cemetery in the Philippines — the same base I grew up on, years after they had left. In late 2007 I went with her, my uncle and their son to that base, to again see the place where I partially grew up, and to ride along the flight line that held my imagination captive for so long, dreaming of jets, and my then-believed future in them. We also visited their lost child: I held my aunt’s arm and, with the assistance of my uncle, walked her through the overgrowth of the since-abandoned base, and the grass that had overrun the cemetery, to the young girl’s grave; suspended in time as a baby.

My aunt wept, my uncle consoled, and I continued to hold her hand and forearm, buttressing her weight, as it drizzled warmly the way it does in the Philippines in the summer monsoon months. And I now realize that maybe we weren’t too close, but she cared for me deeply. She shared one of the most painful moments of her life with me twice. The second time was when I went to her husband’s funeral just last year, and I thought she couldn’t endure the pain she showed throughout the memorial service. And so, for her recent passing, and the last five years, and all of the losses and hurt — physical and emotional — I just wanted to say: I hope there’s no more pain, auntie.

Are We Re-Living the ‘Lost Decade’?

Photo Credit: NY Times

JAPAN is often painted in Western popular media as merely a hyper-technological, hyper-efficient, futuristic land, far in the distance of human and logistical development. And the nation’s “cool” has undeniably grown out of the days of its Godzilla film imports, Kurosawa movies, anime and manga, robotic toys, video game systems and its oddly organized, relatively benign but sadistic game shows; to the point of the country even becoming a starring character in what may have been the best Western coming-of-age tale of the last 20 years, if not longer: Lost in Translation.

But what is not as sexy to talk about and dissect as its lasting cultural footprint — even as we seem to have echoed many of the foundations of Nippon’s downturn, a downturn that unfolded during that very period of pop-cultural ascent — is the depressing economic state the nation found itself in during the 1990s. It was a downturn that exposed its society to a discontent and distrust of government, and laid bare social ills it had once glossed over in the early parts of its postwar boom. And it is an experience that we are possibly retracing, now and in the coming years, as a National Public Radio segment on the subject obliquely mentioned in 2008:

In some ways, the similarities are striking. Both housing bubbles involved reckless lending and high-flying real estate — commercial in Japan, residential in the U.S. As in the U.S., Japan was flooded with cheap and easy credit, thanks to newfangled financial products such as derivatives. Real estate prices soared. For a while, the tiny spit of land surrounding the Imperial Palace in central Tokyo was worth more than the entire state of California, and a $1,000 bill (if such a thing existed) would not pay for the surface area it would cover in the city’s Ginza district.

Japanese investors believed they had broken free of the usual boom-and-bust cycles. Everyone assumed that prices would continue to climb indefinitely. “Even if some local property markets tanked, (Japanese investors) figured, a nationwide bust was almost unthinkable. They were very wrong,” scolds The Economist magazine.

Sound familiar? Such exuberance was also common in the U.S. just a few years ago, when home-buyers, real estate agents and speculators breathlessly opined that “real estate prices never go down.”

-“What the U.S. Can Learn From Japan’s ‘Lost Decade’,” NPR

Roughly 40 years after the end of the war that de-fanged imperial Japan and forced it to become “Economic Power Japan,” a speculative financial-market free-fall and commercial real-estate implosion rippled throughout the nation, and even affected the Japan-dependent economies of Asia. What began in 1990 had only recently begun to recede into a somewhat forgotten past, but — thanks to many mentions in American media while we bullet-trained towards the Great Recession — it has been conjured back up for exploration. (Since December 2007, the American G.D.P. began to fall off a cliff, hitting the down slope particularly hard just prior to the November 2008 presidential election, in a similar manner and in similar markets as seen in Japan nearly 20 years earlier.)

While sluggishly recovering from an event that left it socially and culturally changed, Japan found that the event had a profound, lasting and then-unmeasured effect: the nation was not only faced with an economic downturn, but, it was revealed, an entire generation of its young adults had experienced unprecedented levels of depression and suicide, a result of Japan’s extreme valuation of corporate work — work that was especially hard to obtain fresh out of university during that time — which left many of the unemployed young adults feeling like failures.

As we’ve hit this similar rough patch in the American economic story, are our youth — who have been forced to lower their expectations in a job market that seems perpetually on the fritz — now going to respond in kind? Since they are the most educated of all generations prior, wouldn’t this new economic reality be the hardest blow for them to take? What if these doldrums last a decade, as many economists project? Will those essentially locked out of the climb up the socioeconomic ladder feel forever cursed, in a way that would lead to skyrocketing suicide?

While I haven’t thoroughly researched the topic for this cursory blog post, couched in the question I’ve been pondering — “Are we re-living the ‘Lost Decade’?” — I’d venture to say that the rise of Japanese suicide Web sites and “hikikomori” — a social-withdrawal syndrome that came in the wake of the Lost Decade — have some correlation, to play it safe, if not an outright causal link, as a blurb found in an online teacher’s guide on the bubble economy of 1980s Japan buttresses:

When the bubble burst, land values plummeted, the stock indices tumbled, and economic growth ground essentially to a halt. Over what has come to be called the Lost Decade, the economy was moribund as corporations refused to invest, consumers refused to spend, and all of the standard economic remedies (relaxed monetary policies and generous government spending) failed to spark a recovery.

Meanwhile, the political landscape was upset by the collapse of the Liberal Democratic Party, which splintered and lost hold of the prime ministership in 1993, for the first time since the party’s founding in 1955. *Japanese society also seemed to be in disarray, as divorce and delinquency rates spiked, suicides increased*, and a series of crises — the Aum Shinrikyō sarin gas attacks on the Tokyo subway, the Great Hanshin Earthquake that devastated Kobe — revealed the weaknesses of the Japanese social fabric.

-“The Bubble Economy and the Lost Decade,” Japan Society

I doubt that the young in America will take the hit of the “worst economy since the Great Depression” in quite the way that the young of Japan did, but only time will tell. What’s patently obvious to me is that America’s youth aren’t as culturally fatalistic, nor as dramatic or obsessed with suicide as an answer. (Tales of Kurt Cobain and Elliott Smith aside.) What seems certain and unavoidable, however, is that this generation — my generation, the young workers born in the 1980s — will have a tough row to hoe, as indicated by the two shocks already experienced in the cohort’s short working lives: recessions in 2001-2003 and again in 2007-2010.

The lax, “friend of the corporation” government that placed the nation in this predicament — and the generations of free-market capitalists, the Ivan Boeskys, Michael Milkens and “Gordon Gekko”-like, self-ascribed aspirants to the top one percent who control almost all of the national income of the people below them — surely will not pay the price if we are walking through our version of the Lost Decade.

No. It will be those with skills generally coveted in the information economy, but without work — the oldest of whom came of age in the recession-laden 2000s — who will pay the most, as younger workers with similar skills and qualifications will inevitably see the dark period eclipsed by better economic times, with ample time to re-arm themselves for the fight, as a BusinessWeek article implies happened in Japan, where companies hired those even younger than the late-twenties and early-thirties “lock-outs”:

If a rising tide lifts all boats, then why are millions of Japanese like Nehashi treading water? There’s an entire generation of people in their late 20s and early 30s who came of age during Japan’s so-called lost decade, a stretch of economic stagnation that started to ease in 2003. Through that period, with Japanese companies in retrenchment mode, young people faced what came to be known as a “hiring ice age.” Many settled for odd jobs or part-time work to make ends meet but hoped eventually to find their way into regular employment with the stars of corporate Japan. Instead, they’re being passed over in favor of new graduates—a serious problem in a country that still values lifetime employment and frowns on midcareer job-hopping.

This group is called the “lost” or “suffering” generation. Some 3.3 million Japanese aged 25 to 34 work as temps or contract employees — up from 1.5 million 10 years ago, according to the Ministry of Internal Affairs. These young people have earned various less-than-desirable classifications in hierarchy-conscious Japan. They might be keiyakushain, or contract workers, typically lower-paid than full-time staff, with fewer benefits and minimal job security. Or they’re hakenshain (people employed by temp agencies); freeters (those who flit from one menial job to the next); or, at the bottom, NEETS (an acronym coined in Britain that stands for not employed, in education, or in training). The plight of such folks was the subject of a recent TV drama called Haken no Hinkaku, or Dignity of the Agency Worker, the saga of a twentysomething temp who must put up with the snobbery of full-time colleagues despite her long list of qualifications.

-“Japan’s Lost Generation,” BusinessWeek

Activision’s ‘Dirty Level’

AS you abruptly begin the mission, disoriented by the loss of the once-black screen and the implication that you have just stepped off an elevator, you encounter a room of people milling about, and if you are like me there is a bit of shock and puzzlement. Then, as quickly as you entered the frame, an onslaught begins, as your group commences the most random, graphic carnage you’ve probably ever witnessed in a video game: to the point that there are blood-laden escalators with civilian bodies atop their metal steps, struck down by you or your fictive, computer-operated team. There are also scenes of the already-shot, crawling bystanders being helped by others, and then: they are shot, once again, by either you or your “team.”

If you choose to enter this mission and pull the trigger — you’re prompted by a dialog box asking if you’d like to participate in what might be considered “objectionable content” — you literally mow down droves of unarmed innocents in what is revealed to be an airport, like some nasty, alternate recollection of the reports concerning Lieutenant William Calley and his stressed, mourning and retribution-seeking unit at My Lai, or the ethnic cleansing reportedly committed by Slobodan Milosevic’s charges in Kosovo, a decade ago. Or, perhaps more apropos of recent world events, the coordinated terror plot in Mumbai. And as a player, being a party to this, you cringe inside, while you are forcibly, ploddingly sauntered through the entire scene in the first person, as a bass-heavy, cinematically ominous score bolsters the feeling inside of you that this is all “very bad.”

You already know of this level if you’ve read about or heard about the controversy, or played the “No Russian” mission in Activision’s Call of Duty: Modern Warfare 2. I’ve been playing the game since about Christmas Day — a month after its release — as I was one of the holdouts in one of the biggest video game releases ever. (Update: As of January 18, 2010, sales for the game have topped $1 billion.) Thirty-plus days after the hullabaloo concerning the game and its content, the nightly appearance of Modern Warfare 2 as a Twitter trending topic, and a Tumblr friend of mine posting “boys are weird” in response to the aforementioned pixellated horror she witnessed in her friend’s coveted and highly anticipated video game, I finally purchased and began to play the second installment of the Modern Warfare story.

The game has been lauded by multiple outlets for its cinematic gameplay and (somewhat) realistic — if that can be said about a video game where you can be shot multiple times and not actually die — exploration of the real-world scenarios soldiers, and especially that rarefied bunch of war-fighters known as “special operators,” can find themselves in, strategically. (From “clear and hold” missions and “building breaches” that find soldiers in close-quarters combat, to “hunt and kill” operations and “target painting” for a light-armored support attack, it’s all represented.)

“No Russian” is part of a larger story arc in the Modern Warfare saga, in which, through two iterations so far, you are enlisted to play as characters all over the globe, in various roles within the U.S. Armed Forces and as part of a dragnet within an intelligence-agency special task force — dubbed “Task Force 141” — directly ordered to hunt down a notorious sociopath and leader within the command structure of the ultra-nationalists who have assumed control of Russia; a man who is the key to an ongoing war between Russia and the United States, and responsible for an emergent terrorist campaign he has sparked in Europe.



The mission is not as utterly, ethically void as it sounds in my description, and in the news media vivisection. It is, however, very dark and eerie. This is because of the verisimilitude involved in what is affectionately abbreviated online as “MW2,” and its immersion factor. From the imagery, to the weaponry, to the aircraft, to the tactics — there are even close-air-support strikes conducted by U.A.V.s and AC-130 gunships — to some of the situations and the soldiers’ equipment inventory, the game’s physics and plot, there is an extraordinary “life in odd parallel” aspect to Modern Warfare 2, especially in the “campaign mode” I am specifically talking about, where there is an all-encompassing story and environment.

And it is perhaps why “No Russian” is so troubling: because the idea that “this is a game” is at times suspended for the player, making the participation in what can only be described as a massacre — with little forewarning or explanation of the overall objective — jarring. The seemingly perfunctory wall of absurdity that accompanies other games, such as the Grand Theft Auto series, which is noted for its ludicrous violence and has historically been taken to task for its Warner Brothers’-cartoon-like gratuitousness, is nowhere to be found in Modern Warfare 2.

I believe, though, that despite the controversy, “No Russian” does two things very well, and it contextualizes a distinction in the field of combat that is often treated as dubious but which, in my opinion, many times isn’t: that of producing and operating from a moral high ground. In any war, both sides commit deeds that do not pass the more sober judgment of those off the battlefield — people who examine amid the cold light of day — hence the term “the fog of war”; but there is almost always one side whose moral imperative is less relativist than the other’s. What “No Russian” does, either by design or by the unintended circumstance of Activision opting to place an extreme “shock-value” moment into the game, is show the player that the enemy they are facing is a much colder bunch: a special evil that they must help defeat. And, perhaps, for the more analytical, the level also seems to say why combat exists.

While pacifism is an important cosmological and political approach that should be held by as many as possible, “No Russian,” whose story’s nuts and bolts seem to be “ripped from the headlines” — as a Law & Order promo would phrase it — somewhat makes the realpolitik case for military action, in which diplomatic means and pacifism are important ideals and goals, with the understanding that many scenarios or “actors” within a dispute do not subscribe to similarly genteel philosophical foundations, and therefore military action of some kind must be taken. (Is there a diplomatic way to handle a theoretical sociopath who is willing to massacre innocents? Or a group of sociopaths who are part of a terrorist sect, say, as in Mumbai? Or a lone wolf, as with the Fort Hood shooter? And what if those sociopaths are not just a handful, but part of a large state — or non-state — organizational structure with global influence? Must you take military action? It’s a difficult question to answer.)

As to the specifics: “No Russian” places the player in the shoes of an undercover C.I.A. operative (and Army Ranger) whose orders are to gain the trust of a key Russian ultra-nationalist by the name of Makarov, and he must do so by going on this objectionable mission. While the game has drawn great ire — especially in Russia, where the P.C. version was banned for its depictions and for its overall lack of explanation of the mission’s relevance and absolute necessity to the game’s objectives — one can be almost certain that such instances and scenarios, where an operative must make doubtful moral judgments for an overarching and believed greater good, exist… Maybe not with the gross subduing of one’s own values displayed in the game, but they do exist. And to this point: human intelligence and clandestine operations, like the one implied in the game’s level, are extraordinarily rough trades, where morally objectionable duties are (most likely) routinely conducted in the name of this vital level of intelligence gathering and organizational infiltration.

Further, the atrocity committed in the mission honestly echoes a mish-mash of recent events in post-Cold War, Russian-state and Russian-satellite history. In late 1999, going into the millennium, the Russian army reportedly massacred upwards of 40 Chechens in Grozny; and in 2004, Chechen militants in North Ossetia took 1,100 people hostage at a school — 777 of them children — demanding an end to the Second Chechen War. In response, Russian army officials made the decision to raid the school to free the hostages, resulting in a firefight that ultimately left a total of 334 hostages slain, 186 of them children. There are many news pages and many Web sites filled with such stomach-turning tales from within the borders of Russia, primarily because of the nation’s hobbling economic system, corrupt political nature, ethnic tension, outgunned and out-manned law enforcement, and a heavy-handed approach to its satellites’ dissent. From organized crime syndicates to anti-Muslim killings, all have been going on, palpably, for the last 15 years or more — since just after the fall of the Berlin Wall two decades ago, when the former communist state began to make its ongoing, corruption-filled transition to free-market capitalism and stable democracy.

The question, however, is whether there was a need to place this level in the game — namely, a game aimed at still-impressionable young adults. The Entertainment Software Rating Board rates Modern Warfare 2 (M) for “Mature,” which means it can only be sold to those at least 17 years old. And that is where it seems to hit the roadblocks it has recently encountered from the more fair-minded non-gaming set. Obviously, a video game will find its way into the hands of a child at some point, and placing such a display of wanton slaughter in one comes off as more than a bit troubling, to say the least. But Modern Warfare as a series, in name alone, seems to promise a particular brand of realism. The locations of the game’s battlefields are some of the world’s most visible geopolitical and military hot spots: there are missions set in Afghanistan and Pakistan, two fronts we are either in, or have been covertly operating in, for the last two decades. The game bends the movie genre of counterfactual history towards the gaming world. Its goal seems to be to merge (somewhat) potential scenarios with actual war-fighting technology and situations.

Why, then, should it not make mention of the realities of the world? Was the mission necessary? Perhaps it wasn’t, but then the entire enterprise of Modern Warfare 2 is unnecessary, as is any game. And mind you, this is a game in which there is also a nuclear attack on the United States, so it hasn’t confined the carnage to the opposing side. Where the controversy lies is in parents’ involvement in their children’s entertainment habits, and in their understanding of the market for video games, which is no longer just those 9 years old or younger; they must be cognizant of that fact, and adjudicate accordingly. Gaming is changing, and it is changing us and our culture. Those who are unaware need only look to the movie studios, which have been churning out movies based on video games for the last 10 years, and to the U.S. Army’s own use of video games for recruiting, as proof. And the latter presents a more obvious danger than the moral one posed by Activision’s controversial mission: the misuse of the power of the video game for somewhat nefarious purposes. In real life there are no “cheat codes” or extra lives; soldiers just die.

Reflecting on the ‘Gig Economy’

No one I know has a job anymore. They’ve got Gigs.

Gigs: a bunch of free-floating projects, consultancies, and part-time bits and pieces they try and stitch together to make what they refer to wryly as “the Nut”— the sum that allows them to hang on to the apartment, the health-care policy, the baby sitter, and the school fees.

Gigs: They’re all that’s standing between them and…what? The outer-outer boroughs? Eating what’s left of the 401(k)? Moving to Alaska? Out-and-out destitution?

To people I know in the bottom income brackets, living paycheck to paycheck, the Gig Economy has been old news for years. What’s new is the way it’s hit the demographic that used to assume that a college degree from an elite school was the passport to job security.

-Tina Brown, “The Gig Economy,” The Daily Beast

IT happened all of a sudden; this new way, this happy-to-have-work, consistent under-employment: all of these non-technical degree holders from “great schools,” jettisoned from out of capitalism’s mouth like some alcohol-stenched vomit from the very depths of her failed body politic, and now just dribbling down her chin, headed towards oblivion. Tina Brown’s words on “The Gig Economy” were written at the beginning of this year at her then-newly-minted media, culture and news site, The Daily Beast. The piece came at the end of the beginning of the financial meltdown that has left families and institutions strained and national unemployment figures hovering around 10 percent — and greater in some cities — leaving those of us who are the least economically aware pacing the same dark corners for answers, alongside the economists from Carnegie Mellon or the London School of Economics.

American Dream? How about “American Disaster Movie” come to life? (2012?) It was just a year ago that multiple interdependent markets failed. And the fact of the matter is, no one really understands why we’re here now. Sure, there are very good explanations. There was the wanton deregulation of the markets in the ’90s, which removed the checks and encumbrances on speculative financial institutions, and the fact that there was too much housing built (in the early aughts) and too many loans given to unworthy borrowers and buyers, because of supply and demand; and greed. (The lifeblood of capitalism is greed.) This was seen in the functionally and ethically bankrupt banking companies that opted to bundle risky assets — e.g., borrowers with a high likelihood of default — to make them an appealing commodity for the financial firms that traded these dice-rolls. But still, no one knows exactly how all the levers that needed to be pulled and gears that needed to be turned for this to happen actually managed to be pulled or turned, and without much warning. I hate clichés, but it was, as they say, “the perfect storm.”

So we are all here now, in the breach between past certainty and a prolonged uncertainty, where corporations provide cold comfort by cutting hours, freezing hiring, keeping the cheapest employees and slashing benefits, and then saying: “Be happy that we didn’t eliminate your position altogether.” And so, as a degree holder — and particularly a new one — you’re now fortunate to even experience this sad state of life at a corporation, because any decently paying job is better than the bread line, or being talked down to by a 19-year-old manager at The Gap. (A future that could be at hand, especially for the very, very young degree holders, unless they can justify an American studies or world arts and culture major to a prospective employer.) And so this means, to avoid such a fate, that many of these folks are doing multiple jobs, sometimes badly: designing, blogging, PR, all at once, for a pittance.

A college degree from a “great school” used to mean something. It used to mean that you were worth a little bit in the capitalist system: that you had already paid some amount of dues the moment you left the halls of your school, and that you were trainable and had leadership abilities. But the world and the job market really don’t require the global view of knowledge that elite universities sell, nor as much in-depth, high-level critical thinking — or, at least, these are not at as much of a premium as a trade specialization. In this new way that we are in, the bare-bones, multidisciplinary, know-a-little-of-everything, prestigious degree from an upper-middle-class school (or even an upper-crust school) is only good enough to qualify its holder for pretentious conversations that vacillate between Chaucer, Zola and the New Deal. And if they can find a job doing that full-time with benefits, then great. But it’s most likely that they won’t. So what does that leave for them? Fact-checking and copy-editing for peanuts? Where at, exactly? That plan works, perhaps, in Chicago or in New York, where publishing is a major industry, but it more and more appears that the “Gig Economy” is the result of a bubble not unlike the one seen in housing. If everyone attends a prestigious school — just as everyone looked to own a home — what is the prestigious degree worth?

When everyone began to attend college as part of the American Dream; when many more attended the ever-growing list of prestigious universities, thanks to a dubious national ranking scheme by one publication (U.S. News and World Report) that made them and their achievement-oriented parents chase the acceptance letter to one of these “Tier-1s” harder than ever before; when there is an entire industry dedicated to testing and prepping, and when the skills these colleges teach are hard to qualitatively imply and quantitatively measure on an application — it becomes a shell game constructed much like the one in housing loans, for the time being, especially when there are plenty of such college-educated workers available. The question is, “Is this permanent?” It probably isn’t, is the most sufficient and honest answer, just because social class tends to replicate, as seen all throughout history, and the downward trajectory of young (and even some old and experienced) prestigious-university degree holders will straighten up and fly right as soon as the economy heats up. But for now, as Tina Brown wrote practically a year ago:

As noted above, the folks at the bottom of the greasy pole have been living with the anxieties, uncertainties, and indignities of Gigwork (it used to be called piecework) for a long time. Now that people nearer the top are learning firsthand about the wonders of “individual initiative” and “self-reliance,” a little more sympathy — maybe even solidarity — with those the meritocracy dismissed as losers may be in order. Maybe having to trade that first-class cabin for a smaller one without a porthole will alert some of the erstwhile winners to the fact that everyone’s in the same boat.

Read Tina Brown’s “The Gig Economy” [Here]

Chicago’s ‘Operation Ceasefire,’ A Bellwether?

Photo Credit: SpY

THE Economist’s “The World in 2009” — a collection of articles and thought pieces projecting the ideas and events to watch for in 2009 — ran an article titled “Crime, Interrupted,” profiling an innovative policing method that looks to curb gun violence on the mean streets of America’s cities. The article’s sub-heading, beneath the lede, read: “Treating Violent Crime as a Disease.” And like another practice of medicine — triage E.R. care in a desperate-for-peace central city or on a battlefield; sometimes these are the same — this new method looks to focus on the most severe cases first, while letting less serious matters hit the back-burner. (This makes tremendous sense in an economically strapped environment where local cities are sometimes operating on less than a shoestring budget.) But this approach runs counter to the established policing practice that has dominated law enforcement philosophy for nearly 20 years.

The more recent orthodox doctrine of law enforcement argues that petty crimes in high-crime areas, left unchecked, make for an environment conducive to crimes of increasing magnitude. (Read: violent crime/gun crime.) It is a “Broken Windows” model; a theory positing that a broken window left unfixed in a neighborhood is a sign of blight, and sends a message that “crime is okay here.” Figuratively speaking, small crimes are “broken windows.” And so the traditional model deals with all crime, regularly enforcing even the most minor offenses and dealing with low-level offenders — like squeegee men — and it places many more officers on the streets, heightening their visibility and their rate of enforcement and deterrence. Under this method, known as “zero-tolerance,” there are also local commanders designated to such high-crime areas, who are then held accountable for the crimes committed in their area, and responsible for tracking the crime rate in their respective zone via computerized statistics. (CompStat is one such system, used by the New York Police Department.)

This multi-pronged approach — creating a highly visible presence; regular, routine enforcement of law violations of varying severity; creating tremendous order by focusing on petty crime; consistent tracking of a specific area’s crime rate, coupled with accountability — was developed under then-Mayor Rudy Giuliani, during his ’90s clean-up and re-branding effort in New York, and was implemented and executed alongside his then-Police Commissioner William Bratton, the recently retired L.A.P.D. police chief. Giuliani’s and Bratton’s zero-tolerance method worked well. It turned New York into one of the safest cities in America. And because the strategy was so effective — to the point of reducing murders in N.Y.C. from 2,200 in 1990 to less than 500 in 2007 — it became the template on which many cities based their policing programs.

That method, however, was devised under the circumstances of the New York of 15 years ago, and it is a single-minded approach tailored to that city; adapting it into a one-size-fits-all program for all metropolitan areas lacks nuance and understanding. It presumes that, all things being equal — or in this case smaller (or much smaller) than the Big Apple — such zero-tolerance policing would work almost anywhere. And all things were not equal, obviously: other cities haven’t found the zero-tolerance policy nearly as effective as it was in New York, because very few cities are as densely populated or as centralized.

Most cities are much more spread out, so many more police officers on the streets cracking down on petty crimes is harder to notice — key to the philosophy of zero-tolerance is the impression that no crime is tolerated — and thus less effective, as well as harder to commit to logistically. And those who looked to repeat the zero-tolerance model in their own cities missed an important variable in the N.Y.C. formula: the knowledgeable leadership embodied in Giuliani and Bratton. Unlike leaders in other cities, both this mayor and this police commissioner understood police culture and knew precisely how to motivate their officers using praise and fear. As a result, any city absent conditions similar to those of New York in the early ’90s — which is most — is saddled with an ineffective policing strategy, almost two decades in the making. What is needed, then, is a new approach.

The Economist projected that in 2009, a community-assisted philosophy of policing that is the diametric opposite of zero-tolerance would rise to prominence. The brainchild of epidemiologist Gary Slutkin, this new method looks to curb violence directly, homing in on those most likely to kill or be killed. (The article specifically mentions, as candidates for such focus, the recently released from prison and the associates of persons recently wounded by gunfire.) The strategy is a far cry from zero-tolerance, because it does not worry about or waste as many resources on trivial law enforcement matters; more importantly — this is key — it places an onus on the communities hardest-hit with bloated violent-crime rates to do some of their own policing, and to change their neighborhoods’ cultures from within.

In this method, communities become involved in their own struggle for safe streets through their local leaders, specifically clergy, in tandem with outreach workers who mobilize the community to directly oppose violence. At night there is also the use of “violence interrupters,” who look to find emerging trouble and stop it in its tracks. These “violence interrupters” know the lay of the land and the nature of those streets themselves — many of them were gangbangers and former prison inmates — and they present a rougher-hewn approach to violent crime than, say, what is expected of local law enforcement. “Violence interrupters” may attempt to convince rival drug cartels that a street war is bad business because it is a magnet for cops, or persuade a man who feels he was wronged or disrespected in some way that, by the code of the streets, requires death, to just beat a man as opposed to killing him. Obviously, this tack leaves latitude for the disaster law enforcement is trying to avoid: murder. But the tack is either going to work or not, and if it fails, the result is no different from what it was going to be anyway, a violent crime. (The main hope is to remove the gun from the equation.)

This method has been basic standard operating procedure in some areas of Chicago for the last 10 years, as part of Operation Ceasefire, and despite that city’s continuing disheartening murder statistics, especially among youth, it seems to work in the areas where it is implemented. In 2008, Chicago’s Operation Ceasefire was audited by the Justice Department. The study found that in five of seven communities where Operation Ceasefire’s methods were used, shootings had decreased precipitously, and in four of the tracked areas the decline was far greater than in comparable locations where the Operation Ceasefire practices were not in effect.

However, Chicago is a unique situation, maybe as unique as New York’s was in the ’90s, since the city’s longstanding tradition of the sort of community organizing and mobilizing needed for this program is rarely seen nationwide. Nonetheless, Chicago is an incubator for a project that has begun to spread. Cities such as Newark, Kansas City and Baltimore, and even New York with its trial program — along with 10 more, according to The Economist’s Joel Budd — are adopting methods inspired by Operation Ceasefire or implementing its exact techniques. And we will not know for some time if they are working broadly.

Ceasefire Chicago [Here]

The Economist‘s “Crime, Interrupted” [Here]

On ‘In Between Days’

IN all the “model-minority” code speak that seems to denigrate all “others” while simultaneously backhandedly complimenting Asians — the kind that goes on in the less nuanced parts of North American society — one thing often overlooked by those quick to make a monolith out of a diversity of cultures and experience is that life as an Asian immigrant is generally very rough, just as it is for any recently immigrated population.

While it is presumed — based on recent history — that many first-generation Asian immigrant youth (and even higher percentages of second- and third-generation Asian immigrant kids), with enough hard work and a bit of luck, end up attending the U.C. Berkeleys, Stanfords or U.C.L.A.s, if not the Ivies, and then summarily transition to relative security in middle-class careers and a perceived “well-integrated” station in Western society — because that is a part of their parents’ understanding of “making it” — for most first-generation immigrants, merely adjusting to Western ways is an uphill climb.

The 2006 film In Between Days anecdotally examines such a world through the eyes of a Korean girl who is new to her North American environs: estranged from her father, with a strained relationship with her mother, and with only one friend — a male — as her escape and sole compass in an entirely different world from the one she is accustomed to navigating. And that is where the problems just begin, as she falls in love with her friend and vies for his attention against “Westernized” Asian girls, toward whom he shows more interest. The result is a look at the personal Asian immigrant struggle in full interplay with the normal challenges of teen life, school and home.

In Between Days was a Sundance selection and prize winner, and it drew much-deserved, glowing reviews from critics and viewers. The film works in a deliberately artistic way that conveys the awkwardness not only of the main character’s life and personal situation, but of the absolute foreignness of everything to her and to many making similar transitions to a wholly different society. It makes every moment appear to be, if not just some small struggle, then part of a larger, never-ending one: from the expectations of her mother, to the expectations of her friend that she adapt to new ways and teen rebellion.

And the film succeeds in telling the story of Aimie — the main character, a South Korean immigrant — without saying much in its dialogue. In much of the exposition there is but a paragraph of actual verbal interaction dispersed over the span of multiple scenes, as the visuals, and the situations viewers find Aimie in, say much more: from awkward interactions with Westernized Asian girls who display their personal freedom demonstrably at a party, and their comfort in such expressions, to the My So-Called Life, “Angela-and-Jordan Catalano”-type angst that she feels toward her object of affection, Tran.

In this way, In Between Days universalizes the immigrant experience, and it helps one both relate to and care for the character of Aimie, who is given a very rough road to trudge, as the memorable scenes of her walking alone in the snow symbolically convey. In Between Days is the first feature film directed by So Yong Kim, and it is a masterful debut; the film strikes on two fronts, being both incredibly honest and very well-acted. And though it is about life as a South Korean, teenage, female immigrant, the film is wholly relatable, and especially skilled at re-creating the sense of the liminality of youth and the feelings of isolation, even when with someone, or in a crowd.

‘The Book’ on Afghan Conflict

WITH President Obama in the greatest of pickles, politically, concerning the war in Afghanistan — to the point that the core cogs of the administration are regularly engaged in three-hour-long strategy and information sessions that intend to clear up the debacle’s fog machine and help develop a decisive strategy and endgame; and with that war’s own General Stanley McChrystal’s confidential report to the president leaking to the Washington Post, and him then upping the ante by publicly recommending more drains on treasure and more soldiers, to implement what he believes to be a winning, nation-building counter-insurgent strategy, far before any decision has been rendered by the president — President Obama is now at the crucial crossroads of his still-nascent presidency.

Add to all of this the news that Hamid Karzai, the current Afghan president, who is accused of stealing August’s pivotal election, has agreed to a runoff after some cajoling by high-level envoys — including Senator John Kerry — and there are just too many moving parts for the president, who is viewed by some as dragging his feet on the matter, to make a decision right now. But Karzai’s agreeing to a runoff makes some things easier. For one, any decision by the president on the war will need a legitimate partner in Afghanistan, and August’s disputed election would not allow for it; the runoff relieves some of the questions inside Afghanistan (and internationally) about a potentially U.S.-backed illegitimate regime. And two, the runoff buys Obama more time, time which he needs. The Afghanistan decision may determine the bulk of his remaining years in office and what he can accomplish, especially if the war appears more and more like the failing enterprise with no means of escape that many want to paint it to be, and it drains his political capital. (Especially with Republicans shooting at a larger target.)

All this necessary hand-wringing does not even mention the still very hot domestic issues: the debate concerning a potential government-run health care system, and an economy that might as well be in a “false recovery.” (Especially if you pose the question to Joe and Jane Q. Public: “Is the recession over?”) There is also the other problem of his own left-wing constituency’s expectations; looking for him to create what they believe to be real “change” — a somewhat personally arbitrary, qualitative assessment — which in their mind was to happen from day one, minute one, all based on the hopes that they had pinned to the marketing and imagery of a man who may or may not be the picture of progressive idealism. (War, to many of these folks — no matter what kind of war, and however different the objectives — is not change. Despite Obama basically saying throughout the campaign season that “Afghanistan is the good war,” and despite most Americans having shown, until recently, that they agreed, in electing him.)

But unintended changes have occurred. Within Obama’s own party and the larger left wing, support for the war is eroding. August brought the bloodiest month in the Afghanistan War’s history, and the conflict hit the eight-year mark post-9/11. A palpable fatigue has begun to set in. Now, no one envies the decisions Obama faces. No one. Even for a great student of history, the situation lined up before him has to be among the worst sets of conditions a newly inaugurated president has ever faced. The history nut in him has to think that much of this “President thing” is now a fool’s errand at best. Any information President Obama can leverage to navigate through the labyrinthine dilemmas he faces, he now most likely exploits.

Therefore I expect that William Maley’s The Afghanistan Wars has been part of the Obama team’s internal dialogue in the strategy sessions, and in the president’s own agile mind. While intelligence of the here and now matters, and emissaries and commanders on the ground play a vital role in painting the picture, there is the needed element of history. (As Obama pointed out yesterday in a White House lawn speech honoring Vietnam veterans, making a comment about lessons from “that day in the jungle.” Though the media-favored Vietnam-Afghanistan parallel is a bit off, aside from the asymmetry of the conflicts and the drain both wars became.) Fighting a war where you know very little of the complicated nature of the people, their history and their general patterns of behavior is equivalent to going into a boxing ring knowing nothing about your opponent other than his physical characteristics. It’s plain dumb and will — if the opponent is aptly skilled — lead to quick work on his part. Seeing as the last administration took its attention off the golden egg once it was found — that being a swift victory in the nation, toppling the Taliban in days — the lessons of Afghanistan’s past wars now become crucial, as the pattern of long-drawn insurgent conflict so particular to the nation reshuffled the deck, and made for a new game in 2003.

The Afghanistan Wars is one of a cornucopia of critical texts concerning Afghanistan, and for those interested in foreign policy — considering our last 25-plus years in the region and the more recent bloody eight — it should be a very good primer. I, myself, am just now beginning to read it. What’s important to note in all of this debate, and the problem for the Obama administration, is the notion that “Afghanistan is a backwards, doomed-to-hell country that has always been at war.” That’s not really true. In fact, during King Mohammed Zahir Shah’s reign from 1933-1973, Afghanistan was a fairly moderate nation: it had a parliament and free elections, was a member of the League of Nations — the forerunner to the United Nations — allowed women to vote, and was actually quite governable. While the nation was never on the level of the developed world by Western measures, its more moderate, Taliban-less time is a far cry from the Afghanistan we now know and the draconian state of “Talib” rule. And while that may or may not have bearing on the present, it should be noted. As a recent discussion in the New York Times has also mentioned [this is unsourced for now, until the article can be searched for]: if Afghanistan is not able to be governed, then what exactly has the Taliban been trying to do? They certainly believe that it can be governed, and they would like to install Sharia throughout the land.

I happen to think that “no more war” is good policy almost all of the time, but Afghanistan sits in a very delicate region, in a very delicate time, with heightening global tensions and an inequality that breeds exploitable discontent among its young, hungry and poor; and I don’t know what we should do. What is the right answer? Do we double down at the poker table and lose more kids to recreate King Mohammed Zahir Shah’s Afghanistan? Or do we just leave now? It is argued by many that our doing nothing there to support a moderate political climate is what has led us here, making the country a safe haven and fertile soil for jihadist recruitment and training. Therefore, so goes the argument for fighting: if we can educate the population — especially the girls and women, so that they can be personal firewalls against extremism within their family units — provide opportunities besides narco-trafficking, and save the locals from the nasty rule of the Taliban while rooting most of them out, we’d be on our way to success.

Except all that sounds arduous and costly, especially when factoring in that this has been an eight-year struggle. As many note, this is going to have to be Obama’s war, good or bad, when many of this operation’s problems lie with Bush No. 43 and how he didn’t finish the fight: not providing enough troops to hold the nation’s security against the Taliban’s eventual re-grouping and recruitment of soldiers, and for the locals themselves, who should be free from fear of the Taliban. The already-won-but-now-have-to-win-again state of this war is just another indictment of the last administration. It truly appears that the entire cabinet had no understanding of the globe and history. They were a hammer when a “War on Terror” — a tactic, mind you, not an enemy — needed a surgical scalpel, and perhaps the information in The Afghanistan Wars, along with some critical thinking and dot-connecting. For President Obama, however, hoping to fix this mess and restore Afghanistan to the moderate nation it once was, the road to hell may be paved with good intentions.

Limited preview of William Maley’s The Afghanistan Wars at Google Books [Here]

B-More’s Growing Art Identity

I was back East a couple of weeks ago, and I overheard a conversation about Baltimore by some run-of-the-mill white-flight types, and invariably, once I heard “Charm City” come up, I knew what was to follow, based on my own admitted assumptions: a kind of city-oriented ethnophaulism about B-More’s blighted homes and denizens of a stripe unlike the conversation holders’ own — the subtextual ciphers of the dialogue being replications of the stereotyping of unnamed “chocolate cities,” and the black underclass elements that translate to Avon Barksdale and the stories in David Simon’s work, such as The Wire. And that is actually what happened. Sadly, that is Baltimore’s legacy to most who know little of the D.C., Maryland, Virginia area known as the “DMV.” (An area I admit I am only somewhat familiar with, myself, having friends who hail from the region, and my father who used to deliver mail on its mean streets.)

But Baltimore is so much more than Simon’s depressing but brilliantly drawn city, and the portrayal of an institutional framework of failing schools, hard-boiled childhoods, dubious political machinery and struggling informational outlets. And because just a few could see the city’s diamond-in-the-rough qualities and promise, it has recently begun to see the edges of a rebirth — perhaps not in spite of, but in fact because of, the Simon-sketched image in the popular mind, since artists tend not to be afraid of the waywardly defined lives and “close to the bone” existence in towns such as the one portrayed in the city’s signature show. It’s also important to note that this ruggedly urban idea of Baltimore existed far before The Wire, seeing its foundation in Simon’s critically acclaimed ’90s crime drama, Homicide: Life on the Street (which was based on his book Homicide: A Year on the Killing Streets), and another early Simon work of the same period: The Corner.

But even more so than the entwining of Homicide, The Corner and The Wire, there may be another, more traditional artistic element involved in Baltimore’s growing embrace among some sectors: John Waters and his films. It was Waters who first told of Baltimore’s strangeness and especially odd circumstances. He did so in Pink Flamingos, Hairspray, Cry-Baby and Cecil B. Demented, making off-beat and camp a part of Baltimore’s softer, quirky side, which rarely gets mass-media run.

But now it seems the cultural cognoscenti are coming back to Baltimore for affordable spaces to make their art, or write, or design. So much so that a recent New York Times travel article on spending a weekend in Baltimore cements the city’s growing artsy identity in the mind. As of last week, the Times’s “36 Hours in Baltimore” was the most e-mailed article in the travel section. So Baltimore is, in fact, to use a term that has morphed into a derisive one, being “gentrified.”

And while I generally have a cynicism towards the social phenomenon of white upwardly mobiles, from upper-middle-class strata or soon to be, coming into the “hood” to slum it up and bolster their bohemian image, or to prove something to themselves about their authenticity, social understanding and privilege, this is the glowing upside of gentrification: cities are given a new lease on life. Their downtowns are made vibrant again, over time, and while some, sometimes many, lower-income folk who called these hard-luck areas home for years or decades are displaced by rising rents, as realtors hype the new, aesthetically friendlier clientele, the city itself attracts more opportunities and business because of its newfound magnetism. According to the Times:

Once rough neighborhoods like Hampden and Highlandtown have been taken over in recent years by studios, galleries and performance spaces. Crab joints and sports bars now share the cobblestone streets with fancy cafes and tapas restaurants. But against this backdrop, there are still the beehive hairdos and wacky museums that give so-called Charm City its nickname.

Read the N.Y. Times’s “36 Hours in Baltimore” [Here]

Visit the article’s media supplement [Here]

A Look at Bugatti’s ‘Two-Mil Thrill’

By now, the Veyron’s stats are legendary: 1,001 horsepower from a mid-mounted, 8.0-liter, 16-cylinder engine that gets air stuffed down its ravenous gullet by four massive turbochargers. All-wheel drive. A seven-speed, dual-clutch transmission that switches gears faster than a state staffer ducking questions about the Appalachian Trail. Depending on how you define “production car,” it is the fastest in the world. In the quickest Lamborghini ever produced, the Murcielago LP640, you can hit 60 mph in 3.2 seconds. In the Grand Sport it takes a hair under 2.5. How does it feel to command that pace? Godlike.

“Bugatti Veyron 16.4 Grand Sport,” Wired

I happen to believe that cars are the ultimate status symbol, aside from the less accessible private jet, or, I guess, the private submarine. (Come on, a sub? That’s status! There’s not even a real pronounced market for them.) For one, cars are sexy and imply freedom; they’re also a prop for a person’s persona. And who doesn’t remember any one of their list of particularly intriguing characters with which they identify, and the automobile they owned? Here’s just a short run-down: Batman? The Batmobile. Inspector Gadget? The Gadgetmobile. Speed Racer? The Mach 5. Agent 007? Any number of finely-tuned Euro whips. (And for young girls:) Barbie? That hideously pink Corvette. The list can go on. And for us all, cars are like relationships: they have to be maintained, memories are tied to them in ways other inanimate objects really aren’t, and they, like relationships, cost us money.

Which is why cars, though they have no true value other than getting one from point A to point B, can still fetch robust sums, despite bloated “luxury” pricing, from suckers or those with money to burn, when it needn’t be that serious. Because the truth is, like any major consumer good, we like what cars say about us and how they make us feel, and they are actually part and parcel of the marketing world’s “new and improved” sales pitch. It was the car, after all, that began to be updated yearly for no reason whatsoever, and that truly made consumers feel inadequate; it made them feel a need for something they already owned, and wrapped a sense of self-worth into something that shouldn’t bear on one’s value.

Thus, I’m ambivalent about the hyper-engineered $2,000,000 Bugatti Veyron Grand Sport. While I love cars, especially those of the high-performance variety, like any male brainwashed into thinking manhood had some oblique link to the performance of a machine, this Bugatti Veyron is the essence of overkill. We are still knee-deep in a global economic recession, “The Great Recession,” and a world having a hard time reconciling how much should be given to those with so little to thwart the darker side of capitalism (creating unquestionable winners and absolute, destitute losers, all pulling from the same pie); and here is a piece of machinery that drives the point home, quite literally. It is, however, beautiful, stunningly sleek, a joy to look at, and as sexy as Rachel Bilson, without going overboard aggressive, like Megan Fox. Despite its price tag, the Grand Sport is somewhat subtle in its design, not looking like a $2,000,000 ride but more, from the side, like its affordable cousin, the Audi “TT” roadster, and it sports no ostentatious badging or design elements that say more than they need to.

The Veyron Grand Sport is, as claimed in Wired, “the greatest gasoline-powered vehicle that has ever been, or will ever be, built. Seriously.” And that may be so. It is also, as the same article points out, a hefty sum of money to pay for a car, regardless of its “get laid” magnitude; and this car’s measure of “get laid” factor, an index scientifically formulated on my own, I might add, has to be 1,000 x infinity, to the tenth power, cubed. Still, Wired makes the point that in this continuing down-turned economy, the $2.1 million, to be exact, that a billionaire would dump on such a superfluous purchase could be used for good, or, in the case of many billionaires, evil: by giving that same amount of money to the political machine and becoming a “one-man special interest group.”

The speed performance specs on the Veyron are equivalent to those of a leading American military attack helicopter, the Hughes AH-64 Apache, though minus the forward-looking infrared, the laser target painting, the strategically accurate G.P.S. and the real-time computer link to American forces’ databases, it really isn’t as interesting. Its top speed matches up, however, clocking in at 253 miles per hour. Though in convertible mode its top speed is only 217 miles per hour, so, you know, it’s slow. (Best to leave the top on, I guess, since you never know when you may have to go up against Racer X on the 101.)

The first Veyron is an engineering marvel. That’s the one with the massively reinforced roof that helped keep the rest of the body from deforming into an amoebic tangle of graphite composite and exotic metal under the joint stresses of lateral acceleration, horsepower and wind. It stands as one of the greatest achievements of the petroleum age. It required the intellectual might of one of the largest and arguably smartest car companies in the world to birth a car that was not only faster than anything on the road, but easy enough to pilot that anyone could drive it. (“It killed my husband” is not the kind of country-club buzz that sells cars.) To make the Grand Sport, Bugatti’s engineers had to do the same thing, only with a giant hole in the middle. It was like designing a picture frame to break rocks.

And this is the second Veyron. The first was also a tour de force, but it didn’t have the much-desired convertible feature that the Grand Sport provides. (It turns out, when you pay that much money for something that fast, you’d actually like to feel the wind in your hair and the sun beating on your head.) By reinforcing its doors, B-pillars (the point where the back edges of the windows rest) and floors with copious carbon fiber, and by turning the frontside air scoops into structural supports for potential rollovers, Bugatti is said to have made the most structurally rigid convertible in the world, which is good to know, since the car is going to be driven fast.

Read more about the Veyron Grand Sport at Wired [Here]

About That Man: Jack

JACK was more “gully” than you know. He had to be (prepare for my ridiculous and arbitrary figure and baseless guess): 65% top-shelf bright (by the human distribution), 30% back-alley brawler, with the remaining 5% left to heartfelt diplomacy. Don’t let the picturesque, New England-prep, “Camelot” imagery fool you.

The guy had a brass pair: through war, campaign trails, sociopolitical and personal turmoil, a failed invasion, extra-marital affairs, a nuclear standoff and a “Vietnam problem.” There wasn’t a whole lot of yellow in him. Say what you will about his decisions and his trending towards elitism, technocracy and credentialism, but never say he lacked courage, confidence, or the ability to function in adversity. He was also always in some kind of physical pain. Always.

How Uncle Sam Targets Kids

In the past few years, the military has mounted a virtual invasion into the lives of young Americans. Using data mining, stealth websites, career tests, and sophisticated marketing software, the Pentagon is harvesting and analyzing information on everything from high school students’ GPAs and SAT scores to which video games they play. Before an Army recruiter even picks up the phone to call a prospect like Travers, the soldier may know more about the kid’s habits than do his own parents.

[...]

The military has long struggled to find more effective ways to reach potential enlistees; for every new GI it signed up last year, the Army spent $24,500 on recruitment. (In contrast, four-year colleges spend an average of $2,000 per incoming student.) Recruiters hit pay dirt in 2002, when then-Rep. (now Sen.) David Vitter (R-La.) slipped a provision into the No Child Left Behind Act that requires high schools to give recruiters the names and contact details of all juniors and seniors. Schools that fail to comply risk losing their NCLB funding. This little-known regulation effectively transformed President George W. Bush’s signature education bill into the most aggressive military recruitment tool since the draft. Students may sign an opt-out form — but not all school districts let them know about it.

Yet NCLB is just the tip of the data iceberg. In 2005, privacy advocates discovered that the Pentagon had spent the past two years quietly amassing records from Selective Service, state DMVs, and data brokers to create a database of tens of millions of young adults and teens, some as young as 15. The massive data-mining project is overseen by the Joint Advertising Market Research & Studies program, whose website has described the database, which now holds 34 million names, as “arguably the largest repository of 16-25-year-old youth data in the country.” The JAMRS database is in turn run by Equifax, the credit reporting giant.

“A Few Good Kids?,” Mother Jones

THE Armed Services, and the U.S. Army in particular, have a problem: they need bodies for the two wars and the coming ones, at a time when the benefits of such duty seem to be dwindling against the costs. Nearly 10 years of bad news from the nebulous cloud of uncertainty once called the “Global War on Terror,” now known by the more sterile “Overseas Contingency Operation,” have given them this problem of lacking fodder, along with the fact that the U.S. military is an all-volunteer force. And in wartime, if the fight loses its “good war” image, the military tends to see decreases in its available pool. To remedy this, the U.S. Armed Services have taken to culling personal and demographic intelligence on kids all over the nation via a rider on the Bush education bill, No Child Left Behind (NCLB), that requires high schools across the land to provide the names and contact details of all their juniors and seniors. (And the only way to avoid such information dispensing is through an obscure opt-out provision.)

And of course the demographic collection does not stop there; it gets a bit nefarious. The information is not just sent directly to recruiters; it is also mobilized into youth-oriented and parent-oriented marketing, with help from private-sector giants: the marketing firm Nielsen Claritas does the analysis, the credit company Equifax performs the data-mining, and Mullen Advertising pitches to the parents, the would-be roadblocks to recruitment goals. Also a factor are the schools themselves, many of which are negligent in understanding that they can, and should, opt out of sharing kids’ vital information. The Los Angeles and Washington, D.C., school districts have looked to thwart such mining by providing this information only by request.

There is also another veritable recruiting wolf in sheep’s clothing here: the use of testing companies like Kaplan and Princeton Review through a $1.2 million Pentagon-funded Web site, March2success.com, which provides standardized test-taking tips but also sends its data to recruiters, if the student fails to opt out, and boasts 17,000 new users each month. Moreover, the Armed Services have the Armed Services Vocational Aptitude Battery (A.S.V.A.B.), which was once used expressly for testing the potential strengths of a prospective soldier and has now been re-branded as a “career exploration test,” logging youngsters’ career aspirations and demographic information into the Joint Advertising Market Research and Studies (J.A.M.R.S.) database.

How the Army and the other branches use this information is important, since it gives recruiters a pronounced ability to sell the branch of service to a kid: from knowing their shopping patterns, their frequently visited Web sites and their abilities in the classroom, to their ethnicity, socioeconomic situation and so forth. There is even a program that helps recruiters pitch during cold calls by pulling up information about potential recruits in the surrounding area, down to their recreational activities.

For a 17- or 18-year-old, this could be the kiss of death, since it becomes easier for impressionable minds and limited experience to be sold on “adventure” and “challenge” amid the hyper-relevant, personal eliciting methods of an older, distinguished soldier armed with information and experience in selling military duty. The child, essentially, no matter how smart, is markedly disadvantaged going one-on-one against a sea of companies and interests all embodied in the recruiter in front of them; and since youngsters taking a call or meeting with a recruiter are most likely already highly inquisitive about the services, they are now less likely than ever before to weigh the negatives of service. That is, unless their parents can provide a strong, informed alternate voice that removes the recruiter gloss, and some of them have.

Read “A Few Good Kids?” at Mother Jones [Here]

The Outlook for Current’s Journos?

LAURA LING and her camerawoman, EUNA LEE, while reporting on the phenomenon of North Korean refugees fleeing to China, were apprehended by North Korean soldiers and are now being held in a Pyongyang “guest house,” where they are set to face trial with the possibility of a sentence of 10 years in a “workers’ camp” for “hostile acts,” and (my guess) possibly another charge of “espionage.”

Lee and Ling and a crew of two others (who somehow managed to escape detention) were reporting from the border area between China and North Korea, and it is unclear if they were actually in North Korean territory. Officials have been reluctant to talk of the matter in the media (a matter also overshadowed in the news by North Korea’s recent missile launches), hoping that the less attention North Korea can extract from the issue, the better the chances of a back-door deal for the journalists’ release. And considering Iran’s recent handling of journalist Roxana Saberi, this tack seemed all the more hopeful. This “hope,” however, is tempered by recent news from the Wall Street Journal:

Under international criminal law, defendants have the right to access diplomatic officers of their own state. But American journalists Euna Lee and Laura Ling, detained for nearly two months, haven’t been allowed contact with Western officials since March 30. A South Korean man known only by his surname, Yu, also has been kept from any contact with officials from his country, according to the South’s Unification Ministry.

The North said on April 24 that it would put the two women on trial for “hostile acts,” in what would be its first trial of Americans, but it didn’t say when. It has given no details to the U.S. or to Sweden, which has diplomatic relations with North Korea and provides services to U.S. citizens in the country.

Mats Foyer, the Swedish ambassador in Pyongyang, met with Ms. Lee and Ms. Ling separately on March 30. He declined to comment on the situation late last week, and referred questions to the State Department. An official there said Mr. Foyer has “repeatedly requested additional visits,” but none have been allowed.

U.S. officials have said less about Ms. Lee and Ms. Ling than they have about an American reporter, Roxana Saberi, who was recently convicted of espionage in Iran. The strategy is partly a gamble that not provoking the North Koreans may lead to a speedy resolution, analysts say, but it’s also a sign of the increased uncertainty in dealing with Pyongyang.

U.S. officials have said little about the journalists’ situation, but have indicated they aren’t making progress with Pyongyang. A person not in government who is familiar with the situation said that North Korea isn’t talking to the U.S. at all.

Read W.S.J.‘s “North Korea Blocks U.S. on Journalists” [Here]

Kristof on Rethinking the Politics of Sweatshops

Kristof Politics Blog Image

THERE is this idea in the academy known as “cultural relativism.” It is sometimes invoked against the ethnocentrism that happens when one nation or culture’s standards are applied to another, producing flawed judgments of “right” and “wrong.” The world doesn’t work so neatly. There are not usually easy answers, especially when dealing with quality of life in countries so destitute by every economic metric that a popular form of employment is scavenging. This provides an instance where principles can be too high-minded, operating above the brass tacks of an issue; such ideals inevitably hold a level of disconnection and apply a rubric befitting another setting. The Western world’s hullabaloo over sweatshops and their politics, in the late ’90s going into the millennium, is an example.

Sweatshop labor in Asia and other regions of the world was a lightning rod for companies like Nike and Wal-Mart, with its Kathie Lee Gifford line, as recently as 10 years ago. Both companies and their headline endorsers morphed before the public eye into symbols of multinational corporations, with their “inhumane standards” and productivity requirements that fostered a culture of deputy managers running factories like slave drivers. College-age kids looking for a cause often campaigned against these companies, boycotted their products and, in the case of Nike, even went so far as to demand that their schools’ athletic departments forgo its products on moral grounds.

But the societal outrage, while grounded in honest ethics, and the disgust, while commendable, were ever so slightly misplaced; they failed to see the entire picture of life in the exquisitely impoverished regions of the world, places where life on less than $2 a day is a normative reality, and where, in a “scavenger economy,” toiling in the most disease-incubating, sweltering space one can imagine is the best many could hope for. New York Times columnist Nicholas Kristof, a man who has made a reputation shining a light on Third World problems, begs for a new understanding:

The miasma of toxic stink leaves you gasping, breezes batter you with filth, and even the rats look forlorn. Then the smoke parts and you come across a child ambling barefoot, searching for old plastic cups that recyclers will buy for five cents a pound. Many families actually live in shacks on this smoking garbage.

Mr. Obama and the Democrats who favor labor standards in trade agreements mean well, for they intend to fight back at oppressive sweatshops abroad. But while it shocks Americans to hear it, the central challenge in the poorest countries is not that sweatshops exploit too many people, but that they don’t exploit enough.

Talk to these families in the dump, and a job in a sweatshop is a cherished dream, an escalator out of poverty, the kind of gauzy if probably unrealistic ambition that parents everywhere often have for their children.

“I’d love to get a job in a factory,” said Pim Srey Rath, a 19-year-old woman scavenging for plastic. “At least that work is in the shade. Here is where it’s hot.”

Another woman, Vath Sam Oeun, hopes her 10-year-old boy, scavenging beside her, grows up to get a factory job, partly because she has seen other children run over by garbage trucks. Her boy has never been to a doctor or a dentist, and last bathed when he was 2, so a sweatshop job by comparison would be far more pleasant and less dangerous.

Kristof is describing an area of Phnom Penh, but many other nations have urban centers where entire “trash cities” have similarly been constructed upon the refuse of landfills. In these shantytowns, citizens hoping to eke out a living work longer hours, in more putrid conditions than one can imagine, than they would probably see in sweatshops. Knife fights and murders over bottles and other more valuable goods occur there, to the point that local governments, who find it fruitless to outlaw the scavenging practice, have to organize the scavenger parties into shifts to provide a level of reasonable fairness that staves off violence and combats the gangs who monopolize the areas’ “good dumping periods,” when more profitable scavenging is likely. And the children work there too, with no age restrictions, suffering innumerable maladies from working the dumps and from their parents’ inhabiting of the landfill cities in which they make their livelihood. These are places where sweatshops are a step up the economic ladder; a ray of hope in nations where prostitution and more illegal means become accepted.

I know this world, not personally, but I know its smell. And I know the heat and the grime and the rising smoky mounds: all from passing trash cities on the highway, before Manila’s local government somehow hid them from the plain sight of the more economically developed areas. Most importantly, I’ve heard since I was a child about the people in trash cities like the one Kristof mentions. Whether in Phnom Penh or Manila, or almost any other densely populated Global South city, it always sounds far worse than a sweatshop to me. As a result, the kind of ground-truth-disconnected politics I hear from American politicians and young activists is a waste. It is not that there shouldn’t be an uproar; there should be. But their problem should be with poverty itself, the kind of poverty that makes sweatshops possible and appealing to the world’s workers and Third World economies. Developing nations agree with American firms to produce in such factories because it’s profitable for both sides. As Kristof points out:

I’m glad that many Americans are repulsed by the idea of importing products made by barely paid, barely legal workers in dangerous factories. Yet sweatshops are only a symptom of poverty, not a cause, and banning them closes off one route out of poverty. At a time of tremendous economic distress and protectionist pressures, there’s a special danger that tighter labor standards will be used as an excuse to curb trade.

When I defend sweatshops, people always ask me: But would you want to work in a sweatshop? No, of course not. But I would want even less to pull a rickshaw. In the hierarchy of jobs in poor countries, sweltering at a sewing machine isn’t the bottom.

My views on sweatshops are shaped by years living in East Asia, watching as living standards soared — including those in my wife’s ancestral village in southern China — because of sweatshop jobs.

Is it necessarily ideal, and up to “our standards,” to be working in a cramped space with little to no break, for long hours, in oppressive heat? No. But for just a moment, consider the alternative. (Say, “scavenger employment,” where death is more likely than in a sweatshop and disease is a near-certainty.) Companies who employ sweatshop labor provide a moderately safe environment, comparatively. Most people in developing nations are not given the many roads of opportunity experienced in the developed West; their economies haven’t the strength that ours has, and so they live at a survival level in the world’s toughest, most dangerous ghettos. “Sweatshops” are a far cry from what those on the lowest rung of developing nations know as “work.” There’s even, sadly, a dignity in sweatshops, by comparison.

We should demand more of our companies who employ overseas for more profit; after all, they did ship an American job out to help their bottom line, and that shouldn’t come at even more cost to someone else’s health. But in all honesty, this is the best these people have for now, until we can lift their entire nations from the constant brink of economic, organizational and political collapse. (That is, after all, what being a Third World nation means.) There is a moral calling for those of us with privilege to ask for a more considerate path from the corporations that operate beyond our shores, but we mustn’t lose sight of the entire picture: global poverty is the contributing factor in sweatshop labor.

Read “Where Sweatshops Are a Dream” [Here]