Edgewood’s Secrets


WHAT HAPPENED AT EDGEWOOD ARSENAL, a United States Army munitions research facility in Edgewood, Maryland — made somewhat notorious in the midst of the Cold War — is properly viewed as a travesty: Testing chemical weapons on American soldiers, and then classifying that research to keep the world and America’s enemies from knowing the details, is anathema to science. It happened to serve the interests of security, while perhaps shielding the government from questions about ethical violations.

However, this commonly accepted view of “moral hazard” is an effect of the present, of adjustments made to the standards of today’s moral compass. Within the context of the Red Scare and the hysteria over the growing “Russian Threat,” Edgewood’s post-World War II commitment to research — to produce new chemical weapons before the “Russkies” did — was in service of a greater good. Or so the argument of the time goes, and that is the enduring interpretation of Colonel James S. Ketchum, then the government’s expert in chemical and biological warfare and, for a time, Edgewood’s principal figure.

Ketchum has been Edgewood’s lone defender against its aggrieved test subjects, who have been haunted by their time at the base as part of the national Medical Research Volunteer Program, which gathered soldiers from all over the nation to serve, in effect, as guinea pigs at the beginnings of our now defunct national chemical warfare program.

To establish the program — after “psychochemical weapons” were added to Edgewood’s research purview in the mid-1950s, following a focus mostly on standard munitions and simple chemical weapons — the Army assured Congress that the chemicals set to be used in Edgewood’s trials were “perfectly safe.” The Army went as far as arguing that such chemicals could open “a new vista of controlling people without any deaths.” The program found congressional support amid fears of a more developed Russian chemical weapons program, and even the possibility that such weapons could be used against our diplomats.

Colonel Ketchum and the complicated circumstances of Edgewood are profiled in The New Yorker this month in “Operation Delirium,” where Ketchum speaks boldly of the collective mindset behind the American chemical weapons program and the research conducted there, supported by his belief that chemical weapons are a more efficient and, oddly, more humane means of conducting warfare than tanks, bombers, strategic nuclear arms and the like. (As a sober assessor of Ketchum, I could actually see that argument as a sound possibility, before we knew what we now know. The reality, though, is that chemical warfare is conflict at its most crude and heinous; it causes significantly more suffering and is an indiscriminate application of military arms. It is simply torture.)

Ketchum, a psychiatrist, worked at Edgewood for nearly a decade, subjecting lower-ranking soldiers, often unsuspecting, to the arcane instruments of this military research. Almost 5,000 military personnel were exposed to VX gas — a nerve agent developed at Edgewood, which became infamous in Saddam Hussein’s 1988 Halabja chemical attack on the Kurds — tear gas and even the drug LSD.

As time passes, the history of Edgewood loses definition, blurring into a collective faintness as its surviving subjects pass on, and we forget its unfortunate misuse of science in the name of protecting the world from communism. From The New Yorker:

Within the Army, and in the world of medical research, the secret clinical trials are a faint memory. But for some of the surviving test subjects, and for the doctors who tested them, what happened at Edgewood remains deeply unresolved. Were the human experiments there a Dachau-like horror, or were they sound and necessary science? As veterans of the tests have come forward, their unanswered questions have slowly gathered into a kind of historical undertow, and Ketchum, more than anyone else, has been caught in its pull.

A class-action suit concerning the Edgewood experiments, filed by former test subjects, will go to trial in 2013. Colonel Ketchum, as he has in the past, is expected to be key to the government’s defense. It will not be uncharted territory for Ketchum, who wrote a book in 2006 titled Chemical Warfare: Secrets Almost Forgotten — what may have been an attempt to quell his guilt by reiterating his belief in the testing, against today’s growing anxiety, if not outright opposition.

Since Edgewood’s closing, Ketchum has retained many of the base’s records that were once intended for disposal. The records are filled with names, doses, scientific tables, graphs and data of all sorts. It is, in fact, so large and deep an archive that it could become a problem for Ketchum or, more likely, the government. Somewhat unbelievably, as a matter of personal quirk, he has maintained those files against pressure from the C.I.A. and lawyers. He even provided some of these ample digital records to The New Yorker and the writer who profiled him, Raffi Khatchadourian, who wrote:

Before I left, Ketchum promised to send me a full digital copy of his archive. A week or so later, a binder arrived at my office, decorated with a photograph of the two of us, which Judy had taken. Inside, Ketchum had constructed a meticulous index to the papers, and for months afterward the raw material came in waves. There were technical reports and scientific tables, lists of soldier volunteers and their test data. There were memos and letters. There were personal items, too: golf scorecards, family photographs, college essays, data on the sale of a house. “I made a list of all the jobs I had in my green notebook, which is the kind of thing I carried around,” Ketchum had told me. “I also made a list of all the drugs I’ve taken.” Tens of thousands of pages of scanned material began to fill up my hard drive. “This is me,” he seemed to be saying. “This is what I did. You be the judge.”

The profile paints Colonel Ketchum today as a bit too trusting, or naïve, about what is actually under way concerning his former life at Edgewood and the world he lives in now. He readily admits to giving away large sums of money to people for various projects, or simply out of charity and the goodness of his heart. It appears he is prone to getting swindled as well, but who is to say? Once, one of his donations amounted to $20,000, given to a fellow scientist who was never heard from again.

Ketchum’s life has, for the most part, been wrapped up in the justifications for Edgewood and his early belief that chemical warfare was a “higher form” of war. This has produced a particularly odd lens for interpreting his post-Army existence, as he now lives at odds with the values he presumably held while researching cutting-edge weapons.

Today, Ketchum and his wife live in Santa Rosa, California, ensconced in a semi-bohemian existence. It is a new reality hard to square with his Republican leanings and his former work — work antithetical to the humane ethics of science — and his continued defense of it. The whole thing is complicated in the way life is complicated, and not easily definable. The same paradox rings true of Edgewood as a base. Its complexity is laid bare in its introduction of drugs like PCP and in its methods of human experimentation, but Edgewood was also key to the production of Kevlar, and its early trials with mustard gas helped produce many early cancer chemotherapies.

In the sober light of day it is a bit astounding, but also fitting, that the American chemical weapons program was run by a cast of eccentric characters. Other doctors at Edgewood, even in that perilous time of the Cold War, did understand the longer view and the ethical questions involved in conducting scientific research at the base. This is evident in the strange but entertaining profiles of the men who participated in the development of chemical weapons as part of a loosely organized corps of Army scientists, some of whom engaged in the experimental drug-taking themselves to understand what they were subjecting other men to. And still, plenty were sane and circumspect.

Once, a scientist on his way to New York was asked by another scientist whether he wanted to bring along a vial of VX to perform a short demonstration — a request that dumbfounded the New York-bound scientist, since not the slightest thought had been given to the fact that an accident with a transported vial of VX could kill thousands. A paragraph in The New Yorker perfectly encapsulates the Edgewood realities:

The differences between Lindsey and Sim reflected deeper tensions that the Cold War imposed upon the doctors at Edgewood: men who sought to remain ethical as they advanced the frontier of military research. Sim appeared to believe that personally sampling every chemical agent made him free to circumvent conventional standards; “I have to live with myself,” he once said. Lindsey had an officer’s protectiveness for the enlisted men. Many of the Army doctors—draftees, like the volunteers—who worked under both men strove to reconcile their military obligations with their medical commitments. “As doctors, we are used to treating people who are sick, not making them sick,” one told me. “I did not like the idea of what I was doing with individual human beings. But I understood what I was doing in the context of the defense of this country.”

Read “Operation Delirium” at The New Yorker [Here]

The Anti-Poaching Drone Mission


Photo Credit: Canadian Broadcasting Corporation

DRONES have become the tremendously controversial “flying robot killing machines” of our extraordinary times, lampooned in Saturday Night Live animations and introduced into young boys’ (and girls’) consciousness through first-person shooter games. There seems to be no turning back from the technology: drones are now a part of our pop culture, a result of their integral role in military Intelligence, Surveillance and Reconnaissance (I.S.R.) missions, in clandestine counter-terror operations and in a vision of a dystopian future, in which they are expected to be a pervasive part of the law enforcement apparatus and our daily lives.

Two years ago, it was reported that Google had purchased drones — reportedly for personal use, though they may have contributed tertiary image-map data; the purchase has never been explained, which has led to significant public speculation. Now Google has awarded $5 million through “Google Giving” to help the World Wildlife Fund purchase drones and software to track and surveil poachers and monitor endangered wildlife populations.

It is an ideal civil use of drone technology, given drones’ ability to loiter above warrens and provide eyes. It is also an unforeseen and unintended upside of the C.I.A. and U.S. Air Force experiment of ten years ago to deploy and arm drones at levels previously unseen, which has allowed drones to become widely accepted as a potential tool, particularly after wartime improvements in the technology and its practices. As Google’s Impact Awards page notes:


World Wildlife Fund will use its $5 million Impact Award grant to adapt and implement the use of specialized sensors and wildlife tagging technology, coupled with cost-effective ranger patrolling guided by analytical software, to increase the detection and deterrence of poaching in sites in Asia and Africa.


The illegal wildlife trade, estimated to be worth $7-10 billion annually, is emptying our forests, landscapes and oceans. This criminal industry devastates endangered species, damages ecosystems, and threatens local livelihoods and regional security. This grant will enable World Wildlife Fund to test advanced but readily-replicable technologies in four key African and Asian landscapes. Together with local and global partners, World Wildlife Fund will help nature’s frontline protectors get out ahead of poachers by utilizing innovative technologies.

The mission of the Google and World Wildlife Fund project is to harness not only drones, as noted above, but also a system that integrates Radio-Frequency Identification (R.F.I.D.) tags and an analytical intelligence system to attack the problem of the black-market wildlife trade.

The Canadian Broadcasting Corporation [graphic furthest above] explains the potential intelligence-gathering and interdiction system. In theory: 1.) a command center launches airborne assets (the U.A.V.s, or “drones”) to monitor R.F.I.D.-tagged endangered species, determines the drones’ flight paths and forwards this data to mobile law enforcement units; 2.) the drones then seek and track animals and their poachers and relay their intelligence to the command center, which determines the best interdiction routes or whether to continue surveillance; 3.) if interdiction is decided upon, mobile law enforcement units deploy with that intelligence — satellite coordinates, suspects’ images and locations — with the intention to intercept.
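The three-step loop can be sketched in code. This is purely my own illustration of the decision logic — the class, function names, coordinates and the 5-km threshold are invented assumptions, not details of the actual W.W.F. system:

```python
from dataclasses import dataclass
from math import hypot
from typing import Optional, Tuple

@dataclass
class Sighting:
    """What a drone reports back to the command center."""
    animal_pos: Tuple[float, float]             # (x, y) in km, from the R.F.I.D. tag
    poacher_pos: Optional[Tuple[float, float]]  # (x, y) in km if poachers spotted, else None

def plan_response(sighting: Sighting, intercept_radius_km: float = 5.0) -> str:
    """Command-center logic for steps 2 and 3: keep surveilling, or dispatch rangers."""
    if sighting.poacher_pos is None:
        return "continue surveillance"
    dx = sighting.poacher_pos[0] - sighting.animal_pos[0]
    dy = sighting.poacher_pos[1] - sighting.animal_pos[1]
    # Dispatch only when poachers are close enough to the tagged animal to threaten it.
    if hypot(dx, dy) <= intercept_radius_km:
        return "dispatch mobile unit"
    return "continue surveillance"
```

The real system would, of course, weigh far more than one distance — terrain, unit positions, suspect counts — but the shape of the loop is the same: sense, assess at the center, then act.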

The process will be facilitated by the Spatial Monitoring and Reporting Tool (S.M.A.R.T.), and the drones will be tablet-controlled, according to Slate. Google’s financial assistance is timely, since attacks against rhinos have risen significantly, from 13 in 2007 to 588 last year. From World Wildlife Fund:

The grant enables W.W.F. to test advanced but easily-replicable technologies and create an overarching system to curb poaching — an important complement to the work W.W.F., partners and governments are already undertaking.

Remote aerial survey systems, wildlife tagging technology and ranger patrolling guided by analytical software like the Spatial Monitoring and Reporting Tool (S.M.A.R.T.) will be integrated to increase the detection and deterrence of poaching in vulnerable sites in Asia and Africa. Our goal is to create an efficient, effective network that can be adopted globally.

The Tear Line


THE WORLD is complicated, and understanding just how complex it is requires a constant news-media intake focused on geopolitics, global crises, defense and security. Beyond those being obvious interests of mine, scanning the landscape of available intelligence on anything in the security, defense and geopolitical sphere makes for a better — but still murky — understanding of state-to-state relationships, regional tensions and so on.

At some point in the long development of the art and science of intelligence, open-source methods (abbreviated in the American Intelligence Community as “OSINT”) came to be seen more and more as an efficient method of collection, since the Internet created innumerable outlets — from academia to news agencies to social media — constantly publishing information to sift through. The resource has become so daunting that, in conjunction with intelligence services’ own sources, the glut of information has come to be called “chatter,” and finding what is pertinent in that tidal wave of noise is a growing challenge. Robert Baer, a former Central Intelligence Agency case officer, even spoke of his former employer’s shift to more open methods in Gentlemen’s Quarterly in 2010 — though not in any kind of glowing way — in an article about how the agency lost its way on the ground:
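To make the sifting problem concrete, here is a toy sketch — my own construction, not any agency’s actual method — of treating “chatter” triage as a ranking problem: score each open-source item by how many watchlist terms it mentions, then surface only the top hits:

```python
from typing import List, Set

def score_item(text: str, watch_terms: Set[str]) -> int:
    """Count how many watchlist terms appear in one item of open-source text."""
    words = {w.strip(".,;:!?\u201c\u201d\"'").lower() for w in text.split()}
    return len(words & watch_terms)

def triage(items: List[str], watch_terms: Set[str], top_n: int = 3) -> List[str]:
    """Rank items by relevance score and keep only the top few for an analyst."""
    return sorted(items, key=lambda t: score_item(t, watch_terms), reverse=True)[:top_n]
```

Real OSINT pipelines use far richer models than keyword counts, but the underlying logic is the same: the volume is unreadable, so ranking, not reading, is the first step.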

Analysts were convinced that most good information was right out in the open. All you needed was a good brain to make sense of it. And what you didn’t know from open sources, you could learn from intercepts and satellites.

Recorded Future’s Analysis Intelligence, in its “About” stub, writes:

With an estimated 90% of required intelligence available in open source, it is imperative that intelligence analysts become adept at mining open sources.

For those reasons — my personal interests, an emergent field, availability and a desire to better understand the world — I am aggregating news reports on matters of security, diplomacy, crises and intelligence through open means, mostly in truncated form, merely getting to the thrust and essentially employing the reporting hourglass. (I will also at times produce short-form summaries called “communiqués.”) And because Tumblr is my favored medium for blogging — it is easy, and it allows for secondary and tertiary sources — I am doing this there. The Tumblr is called The Tear Line, a sobriquet that comes from the way intelligence reports are divided for the various security clearance levels of many different “consumers,” sometimes literally partitioned by “tear lines” that can be ripped off. The site will attempt to maintain a breadth of academic, regional, trade and news reports from many sources.

Liberal Quants Rising


WITH THE RE-ELECTION of President Obama, and the soul-searching among Republican operatives and pundits that followed, something became noticeably codified in the perceptions of politicos during the 2012 campaign: for whatever reason, Republican analysts simply weren’t getting the best data picture.

For months, many G.O.P. strategists said not only that an Obama victory was unlikely, but that the race was neither as close nor as favorable to him as it appeared. Within the party they questioned polling data — a practice that became known as “poll trutherism” — listened to and touted only favorable polls, such as Rasmussen and UnskewedPolls.com, and made projections that were too speculative. Some right-wing pundits even believed President Obama would lose by double digits.

Even in the glow of election night, amid the tally, Karl Rove was working his own numbers at the Fox News election desk on live T.V., debating Fox’s numbers team on-air about the Ohio results after the network called the state for Obama, giving him the presidency. How and why were the Republican numbers so off? The answer may have to do with shortcomings in G.O.P. data-mining.

Throughout the election year, political news programs had been talking about the data-mining revolution profiled in Sasha Issenberg’s articles and his book Victory Lab, which unpacks “micro-targeting” — originally an instrument of marketers, who used consumer data to perfect their pitch, now employed in politics. It can identify individual voters down to the coffee they drink and the personal and familial associations most likely to get them to vote.

In Victory Lab, Issenberg covers not just the new methodology of political statistics — which holds cognitive associations and parallels to baseball’s Moneyball movement and “sabermetrics” (the term for deep baseball analytics, whose analogues are called “advanced statistics” in other pro sports) — but also the history of political scientists’ attempts to influence voters and produce predictive models.

In 2007 and 2008, the Obama campaign found success in micro-targeting, identifying soon-to-be 18-year-olds ahead of Iowa’s caucus — to the point that the campaign could mark their individual bus routes — and using the data to attract them and swell the voter rolls in its favor.

Unlike Vance Packard’s 1957 oeuvre The Hidden Persuaders, which traverses similar ground, Victory Lab doesn’t portray the new data-minded pols as lowly underling analysts or to-be-feared marketing wizards using mystical math powers for ill, but as honest believers working against an old-guard system of mailings and flesh-pressing (still important, just done better with this data), who have faith that quantitative analyses can be leveraged to win in new ways and help campaigns be more precise, efficient and effective.

Victory Lab and the rise of wonky stat heads like the New York Times’s Nate Silver and the Washington Post‘s Ezra Klein have introduced us to a generation of number-crunching Millennials who are bringing a fresh approach to politics and to our understanding of it as an extension of a network of data points that determines behavior — what is known in military intelligence circles as a “pattern of life.” (Though not as ominous as its usage in military circles, the idea is the same.)

The notion is that by considering a multiplicity of knowns — where one lives, one’s age, family, (online and offline) social networking associations, buying patterns, you name it — one can somewhat predict behavior, and therefore affect outcomes, by pinning down assessments to specific patterns.
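The mechanics of that prediction can be sketched minimally: combine known traits into a single probability, here for voter turnout. The traits and weights below are invented for illustration — real campaign models are fit to historical voter-file data, not hand-written:

```python
from math import exp

# Hypothetical learned weights for a few traits (positive means more likely to vote).
WEIGHTS = {"age_over_40": 0.8, "voted_last_cycle": 1.5, "recent_mover": -0.6}
BIAS = -0.5

def turnout_probability(traits: dict) -> float:
    """Logistic model: squash a weighted sum of known traits into a 0-1 probability."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in traits.items())
    return 1.0 / (1.0 + exp(-z))
```

A campaign would score every voter on its file this way, then spend contact dollars on those whose probability sits in the persuadable middle rather than on sure things or lost causes.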

In 2010 there was another book on data-miners and quantitative analysts, one that exposed the world of Wall Street machinations behind the derivatives market. It wasn’t at all positive, obviously, since derivatives, short-selling and no one minding the store at the government level wrecked the global economy; but it highlighted the rise of a kind of numbers wizard we are seeing throughout the world — a role once described by “technocrats,” now expanded to include behavioral and social scientists, intelligence professionals of all kinds, and financial services sector folks.

They are informally known as “quants” in the financial sector and, more recently, in politics — also the title of Scott Patterson’s book (The Quants), which covers financial-services data-miners. Though “quants” is a bit imprecise, as it is not just quantitative analysis being applied to political campaigns but also behavioral psychology, it gives us a quick-and-dirty identifier for a specific kind of individual, or group of individuals, and their skill sets.

What is interesting in all of this, more so than identifying the rise of a particularly skilled class of pols, are the findings Sasha Issenberg, the author of Victory Lab, published after the election (though he may have known them prior).

Issenberg contends that the G.O.P. will have a narrow pool of this cadre of individuals for a number of sociological and continuity reasons, and that this is why Republicans had such a difficult time getting the data picture in this year’s election. In “A Vast Left-Wing Competency,” Issenberg writes about Republican takes on the 2012 loss and the failures that come from an inability simply to get the data aspect right:

For Bush, this proved a unique opportunity to synthesize information from consumer-data warehouses with voter registration records and apply some of the same statistical modeling techniques that companies used to segment customers so that they could market to them individually. In Obama’s case, the continuity provided by a re-election campaign encouraged a far broader set of research priorities, perhaps most important the adoption of randomized-control experiments, used in the social sciences to address elusive questions about voter behavior.

Following their 2004 loss, Democrats found it relatively easy to catch up with Republicans in the analysis of individual consumer data for voter targeting. By 2006, Democrats were at least at parity when it came to statistical modeling techniques, and they were exploring ways to integrate them with other modes of political data analysis. Already the public-opinion firms of the left saw themselves as research hubs in a way that their peers on the right didn’t, a disparity that stretched back a generation. When polling emerged in the early 1980s as a new (and lucrative) specialty within the consulting world, the people who flowed into it on the Republican side tended to be party operatives; former political and field directors who had been consumers of polls quickly realized that it was a better business to be producers of them.

Those who went into the polling business on the left were political consultants, too, but many of them also possessed serious scholarly credentials and had derailed promising academic careers to go into politics. Now that generation — Stan Greenberg, Celinda Lake, Mark Mellman, Diane Feldman, among others — preside over firms that see themselves not only as vendors of a stable set of campaign services but patrons of methodological innovation. When microtargeting tools made it possible to analyze the electorate as a collection of individuals rather than merely demographic and geographic subgroups, many of the most established Democratic pollsters in Washington invested in developing expertise in this new approach. Their Republican rivals, by contrast, tended to see the new tools as a threat to their business model.

The Republican Party will not find itself in a favorable position in the arena of political analysis any time soon, because of the factors Issenberg lays bare. One: Republicans haven’t valued the ability to slice, dice, collapse and collide data the way those on the left have. As he says, the Republican polling consultants of the 1980s and 1990s who encountered this movement viewed it as a threat to their industry.

Second, prominent G.O.P. consultants are neither as accomplished nor as rigorously trained academically in the social sciences. Further — [a conjecture of my own, that I'd only entertain in a blog post] — while young, enterprising liberals were attracted to the social sciences in college and became political operatives as they grew older and earned advanced degrees, there is no natural affinity drawing Republicans, outside of economics, to the relevant fields of study. The post-election articles praising the factors that produced the Obama campaign’s “Dream Team” of social scientists and behavioral-science gurus, which helped lay the groundwork for his victory, echo this.

Probably most critical to a generational swing to the left by data nerds, though: Issenberg points to an infrastructure funded by none other than the likes of George Soros — who draws Koch Brothers- and Sheldon Adelson-like ire from hardcore Republicans — dedicated to research and to producing left-aligned social scientists looking to contribute.

And within the academy, once-rival social scientists on the left, motivated to collaborate after the successful Karl Rove-engineered victories of 2000 and 2004, have begun working together. They have found a home in this new infrastructure, which has cultivated a research culture aimed not just at winning an election cycle but at understanding what motivates voters. This produces a sustainable framework, concerned not with immediate victories but with long-term gains. As Issenberg writes:

Concern that the technical supremacy of Rove and his crew would ensure the Democrats’ future as a minority party drove consultants who usually competed with one another to collaborate on previously unimaginable research projects. Major donors like George Soros decided not to focus their funding on campaigns to win single elections, as they had in the hopes of beating Bush in 2004, but instead to seed institutions committed to learning how to run better campaigns. Liberals, generally in awe of the success that Republicans had during the 1980s and 1990s in building a think-tank and media infrastructure to disseminate conservative ideas, responded by building a vast left-wing campaign research culture through groups like the Analyst Institute (devoted to scientific experimentation), Catalist (a common voter-data resource), and the New Organizing Institute (improved field tactics).

With an eager pool of academic collaborators in political science, behavioral psychology, and economics linking up with curious political operatives and hacks, the left has birthed an unexpected subculture. It now contains a full-fledged electioneering intelligentsia, focused on integrating large-scale survey research with randomized experimental methods to isolate particular populations that can be moved by political contact.

“There is not much of a commitment to that type of research on the right,” says Daron Shaw, a University of Texas at Austin political scientist who worked on both of George W. Bush’s presidential campaigns. “There is no real understanding of the experimental stuff.”

If Republicans brought consumer data into politics during Bush’s re-election, Democrats are mastering the techniques that give campaigns the ability to understand what actually moves voters. As a result, Democrats are beginning to engage a wider set of questions about what exactly a campaign is capable of accomplishing in an election year: not just how to modify nonvoters’ behavior to get them to the polls, but what exactly can change someone’s mind outside of the artificial confines of a focus group.

Mr. McGovern


Photo Credit: The Washington Post

I TWEETED about George McGovern’s death, saying — in what now feels like a short, impersonal and cold medium for such a thoughtful guy — that the former congressman and senator from South Dakota was my idealized self: a B-24 bomber pilot who joined the Army Air Corps out of a desire to serve his nation in World War II, he was also a historian with a doctorate from Northwestern, an official flag-bearer for liberal ideals for nearly a quarter century, and then a progressive advocate for another thirty years after that.

His story reminds me of what I should strive to be more like, and of what I want for this nation and the temperament I desire more of in us. I don’t think it is dramatic — no matter where one stands on the ideological divide — to say we lost a great one on October 22nd. He was an honorable man, able to keep the faith and the courage of his convictions in a rough trade, something all thoughtful citizens should understand is difficult. Or he did so in as much as anyone can be honorable amid the complexity we inhabit and the contradictions baked into the system of modern American politics.

He will always be remembered nationally for his 1972 drubbing by Richard Nixon in the presidential election, a race he lost partly because of his honest manner and his inability to play cards closer to the bottom of the deck. He held steadfast to his beliefs, though — beliefs that were unpopular and well ahead of their time, or well-meaning but somewhat impractical.

McGovern came to the Democratic nomination from long odds, with a message of populism and social justice, a promise of a significant reduction in defense spending, amnesty for those who dodged the draft, and a withdrawal from Southeast Asia — bringing in tow a grass-roots coalition of students, anti-war activists and new liberals. From what I can tell, that coalition was (coincidentally) a small-bore template for 2007 and then-Senator Barack Obama.

McGovern was a liberal’s liberal in a center-right nation, which made him seem radical to establishment types, and he was never afraid to say what he was. He didn’t do Clintonian “triangulation” or uncharacteristically play to the middle, and it appears — since his political career was before my time — that he thought the better of us, or of the America of his day.

He held, in his optimistic idealism, that world peace is a real possibility, along with a government that straight-up grinded and hustled for the interests of the many and the little against the moneyed, big and powerful. Unlike many liberals of today — among whom I proudly count myself, though not in this respect — McGovern had true faith in the American people to be upstanding, smart, kind and free of today’s poisoned, negative discourse. He was a liberal without condescension. In many ways, I see in his death at 90 an allegory of his life.

He grew up poor in the Dust Bowl, flew a bomber officially called the “Liberator,” learned to fly at a training base named Liberal Army Airfield, got his education through the G.I. Bill and became an educated servant of the public interest. His story was a steep climb, perfectly connected to the symbolism of his biography, and as American as anything — a life often positively influenced by the government, and him paying that forward.

And in that, all the world seems entirely possible; but then his political career was ended by the machine, by Nixon, by a nation slow to react to the ravages of long wars and unreceptive to his brand of politics. It is a story that parallels our national and governmental history, with our noble goal to be the most inclusive and generous in opportunities to all citizens: constantly attempting to perfect, often overwhelmed by the foul play of powerful moneyed interests and intractable military conflicts, with progress at home often rolled back or halted by the machinery of an establishment with opposing interests. But he, like our nation, fought on.

Joey Bada$$, ‘Waves’


“Waves,” Joey Bada$$

LIGHT anything (beer, jazz, versions of any kind of junk-food snack) tends to be terrible. Hip-hop is equally plagued with this somewhat unnoticed truth, but Joey Bada$$'s "Waves" has consumed my head space for the better part of the year now, and is... actually... "light." Joey is a 17-year-old kid from Brooklyn who has made sizable noise with multiple forms of hip-hop, from laid-back to downright raucous, along with some already interesting visuals, such as the Vashtie Kola treatment for "Waves."

"Waves" is a piano-laced track that feels a little like Mos Def and D.J. Honda circa 1999 (think: "Travelin' Man"), and with a mixtape titled "1999," Joey taps an era he was only a tyke for. But in the age of the 'Net and the ability to download entire artists' catalogs in minutes, he has an obvious affinity with late-20th-century hip-hop. What happens on the "1999" mixtape is more than an inspiration-tapping kid looking for a muse in a time before him; he seems truly of its spirit. And since Mos Def is a Brooklynite himself, in Joey Bada$$ there is some kind of continuation of the indie Brooklyn rap lineage.

The Forgotten North Korean Propaganda of Americans


Photo Credit: Vice

IN NETFLIX'S streaming roster there's a 2006 documentary called Crossing the Line, which explores the lives of American deserters who joined North Korea's regime during the peak of the Cold War. The four defectors were American soldiers who infamously braved minefields, opposing troops and their own United States Army to make their way to the communist side. All of the young men had troubled pasts, some confusion in their lives and a marginal position in the American social hierarchy.

Upon their acceptance by North Korea, however, the Americans were lionized by Kim Jong Il's government, which proceeded to use the soldiers, once guards on the joint American and South Korean side of the D.M.Z., as pawns in a political and international propaganda game, casting them as "White Devils" of various types (generals, intelligence officers and so on) in North Korean state-film epics that touted the perseverance, resistance, solidarity and greatness of North Korea in the face of Western and American imperial aggression.

For decades these men lived in North Korea as strangers in a strange land. As we come to find out, they were kept in line by being bullied and intimidated by one of their own, James Joseph Dresnok, who'd been given special privileges, celebrity and a new stature within the North Korean military establishment. The four defectors were but a small part of a vast North Korean program to use foreign nationals for various purposes, people often taken from their home countries, or from inside South Korea and Japan, by North Korean agents.

It was as odd and fascinating a documentary as you could imagine, watching these men who committed treason on a whim (essentially, because they were dissatisfied with their lives) in grainy clips and preserved on the covers of state magazines as celebrities in North Korea's Hollywood. (Whatever "North Korea's Hollywood" actually consists of.) It was even more odd when that life is contrasted against the lives they lead as old men in North Korea, married to reluctant, abducted foreign spouses: their children being developed into spies, taught English, solidly middle class and simply given privileges not many North Koreans have. It is the Twilight Zone of geopolitics, knowing the modern reality of the nation.

Watching the denials of one man, Joseph Dresnok, the strong-arm, as he speaks of the regime he carries water for is just as eerie, because of the denial he profits from, compared with the other surviving member, and his lack of understanding of how he was taken advantage of. Unlike the remaining member of his group of defectors (two of whom had passed, one of whom Dresnok might've killed), he is a vestige of a time long ago when it could, with some skepticism, be believed that North Korea was taking care of North Koreans. But watching him interviewed in the mid-2000s about his times and the government's influence on its people reminds me of another North Korea documentary I had watched, National Geographic's Inside North Korea, helmed by the journalist Lisa Ling.

In it, Lisa Ling, who entered North Korea undercover as part of a medical relief organization dedicated to curing visual impairments, speaks to an elderly woman in the midst of thanking a picture of the supreme leader Kim Jong Il (this was before his passing) for the curing of a visual affliction, not realizing that her condition was the direct effect of a terrible economic model, poor governance and the sanctions placed on the nation for defying nuclear nonproliferation treaties. It's a kind of Stockholm Syndrome on steroids, and a striking portrait of the depths of sociopsychological manipulation individuals and a society can absorb.

Read a review of Crossing the Line at the New York Times [Here]

On ‘Savages’ and ‘End of Watch’


Photo Credit: The New York Times

THE NARCO-WARS of today have begun to see the big screen. Oliver Stone's recent release Savages sets itself inside the designer dope game, where special strains of West Coast Buddha produced by a biochemist and his war-vet buddy provide Cali's connoisseurs with highs that elicit particular moods and experiences, so good that they threaten the dominance of a powerful Mexican cartel. David Ayer's End of Watch has two Los Angeles street cops exposing how gangs, their street politics, a narco cartel and a human-trafficking chain are interrelated, and places them in a predicament. End of Watch also makes a brief mention of the tensions between the immediacy of local police concerns and the longer-term goals of federal-level law enforcement operations. (E.g., patiently moving to dismantle an entire network versus busting up the low-hanging fruit involved in lower-level illegal activity.)

Savages and End of Watch are something of a rarity among the movies of now, since most recent drug-crime flicks are about the past, like Blow and American Gangster, history-based retrospectives and period pieces, though not in the Merchant Ivory way. Savages is a somewhat fantastical, soap-opera look at the drug operations of West Coast weed distributors and their barbarism (hence "savages"), with both sides, one Californian, the other Mexican, performing superlative acts of violence. It's narrated by a young Orange County girl, the Shakespearean "Ophelia," the third node in an open love triangle among her, a Cal Berkeley world-saver and biochemist grower, and a former soldier in the War on Terror. They all just so happen to accept and love each other and are legitimately friends who run a multi-million-dollar bud business, but they become embroiled in a literal hostile takeover that leaves the bodies of those crucial to a rival Mexican cartel in their wake.

End of Watch, the better of the two films (because there is none of Oliver Stone's funny but slightly ridiculous sense of humor, or an unfocused plot), is half cinematic experience and half first-person, cinéma vérité ride, thanks to a plot featuring a young war-vet beat cop who films his outings for a graduate-level course. End of Watch is equally gruesome in its portrayal of the life out there affected by cross-border cartels, especially the viciousness of their tactics and their reach into local communities via gangs deployed to exact revenge, which falls upon the two officers. What makes End of Watch is its portrayal of the partners, who are made heroes for numerous acts of valor and are great friends, committed to each other in the way cops are, sharing their lives while trapped in an L.A.P.D. black-and-white. The film follows them on routine community-relations duties, saving babies from burning homes and whatnot, as well as responding to backup calls and getting involved in crazy urban shootouts.

End of Watch owes its existence and success to David Ayer, a filmmaker who's produced several works on Los Angeles's gang and street life, most notably as the writer of Training Day. Ayer, who partially grew up on Los Angeles's mean streets, effectively captures the banter and camaraderie between two cops in the midst of an ongoing battle for civility in the city. It's a camaraderie not unlike that of soldiers on the battlefields of now, especially when speaking of a frequently engaged L.A.P.D. division like Rampart, where Ayer's two cops (one white, a graduate student and single; the other Latino, responsible and married; both highly skilled at their jobs) work in what is a battle space among gangs, drug dealers, hustlers and the police. It's also the place where they kick up plenty of dust and rile hornets' nests for fun.

Both Savages and End of Watch are rides submerged in violence and testosterone, strong portrayals with different aims. End of Watch's goal appears to be a glowing, apolitical paean to service and to the partnership and brotherhood of cops within the cold worlds they inhabit, amid the toughest assignment and division of the L.A.P.D. Savages is an epic discussion of the mind state of those in the cannabis game and of cartel bosses' lives in the upper reaches, one that is only halfway to great. But neither disappoints in providing a fairly true portrayal of a world which happens right beneath our noses, outside our doors, or at the production end of the "broccoli" our nation, and particularly the West Coast, consumes to such a degree that, unless nationwide legalization occurs, we'll keep seeing its violence and ugly influence.

Read a review of Savages [Here]

Read an interview with End of Watch‘s David Ayer at the New York Times [Here]

Read a review of End of Watch at the New York Times [Here]

11 Years in Our Ideals Conflicted Post-9/11


I REALIZED that I had never reflected on the consequences of that moment, eleven years ago now, for the lives of Americans. And I believe that the perspectives of those who came of age in the midst of that time, those early post-9/11 years that have defined us, are less commonly heard than the rightfully valued commentaries of analysts, security figures and politicos, who often had personal distance from the moment, the decisions and the future, and who wouldn't likely see all the changes that would affect a lifetime, from war to politics and policy. 9/11 has defined American foreign policy and security concerns in the intervening years, and will probably do so for coming decades, maybe the century.

Its most obvious consequence is that we're more likely to intervene in areas we never would have before to thwart a threat assessed as existential. The Americans most affected by 9/11 policy are the '80s and early-'90s kids, raised on M.T.V., Nintendo systems and PlayStation, who spent their childhoods in an unprecedented time of peace in the denouement of the Cold War. They have fought the 9/11 wars, endured much of the resurgent stereotyping that came in response, and lived as students under the specter of anti-American terror abroad and the anti-Americanism that arose in response to what some in the world saw as heavy-handed policies and a stubbornly solitary American proclivity to "go it alone," embodied by two Bush administrations of flinty-faced hawks.

Getting past the idea of how different it is in this prolonged state of emergency, in the literal sense (in regard to the National Emergencies Act) and the figurative sense, is no easy task. Some changes are everywhere but small and imperceptible. In my personal sphere, the changing reality crept in. I listen to something as prosaic as Biggie Smalls's "Juicy" and his "Now I'm in the limelight, 'cause I rhyme tight; time to get paid, blow up like the World Trade" so differently now, taken aback when I hear the end of that bar. The moment adjusted my senses to what was, then, a seven-year-old song. I always followed global news, but I followed it with an academic's focus after 9/11.

On the landscape of cultural media, the rise of spy shows, from Alias and Homeland to 24, and Spooks and Strike Back in Britain, to paranoid serial dramas like Lost (which tap into the terror of shadowy foes), and police procedurals that weave jihadist terror into story lines, indicates how much of our collective consciousness has been ceded to the anti-terror struggle. I think about the security of the globe more now, as we all do, fully connected by the Internet, markets and mass travel, and worry about bio-terror, the security of food sources, energy wars, cyber-security, Web-based terror propaganda, immigration and refugee issues, along with the oppressed, jobless and easily manipulated in desperately poor countries.

I can see the potential storms that may brew and the governmental and social maladies that contribute to them, raising the chances for a symbiosis of terrorism and failed states. There's no doubt that we are now in a crisis that every generation after 9/11 will never not know. "Things done changed," to return to Biggie. The 2001 World Trade Center attack that effectively turned airliners into guided missiles, eight years after a somewhat unsuccessful (according to al-Qaeda's goals) 1993 bombing, is a circle of ravens loitering in the heights above us, leaving the populace ever-fearful of the next strike from the franchise of jihadist terror.

That difference, I can feel it, smell it, taste it. A part of the American flavor is different now. We limp because of the tarnish from our wars, and because of hurt that is economic, strategic, political and spiritual. Freedom, our freedom, is different even. We're still "free," but not in the ways of before; it's a different definition. And certainly we are not "free" in the way Lenny Bruce or George Carlin would probably argue. Particularly if you're of particular ethnicities, or appear to be, and have been mapped by stereotype to the specific kind of terror we focus on.

In speech, particular kinds of jokes just don't fly over the phone, at the airport, in e-mails or in I.M. chat now. The fear that you could be accused or ensnared in counter-terror policing is fairly high, and there's reporting of policies which border on entrapment (or are). We seem to intuitively know of a formal and informal architecture meant to monitor seemingly disparate, random, potential threats. At the ground level, the suspicion of particular names is obvious, along with the misidentification of those in specific ethnic garb, which sweeps up anyone from a Sikh to a Buddhist to a Hindu along with the intended targets of suspicion, Muslims. Temples, mosques and houses of worship are not completely safe havens from the ugly in the world, as much as we'd like them to be, as we saw in the recent shooting at a Sikh temple in Wisconsin.

The more savvy of us, when traveling, have learned to check the State Department's lists of Islamist terror groups operating in our destinations, from Asia and Southeast Asia to the nations of North Africa. All of this is part of an incredibly undefined war with numerous fronts, both physical and psychological, that has changed us and our lives. And as far as individual freedoms go, I had barely lived before the attack, just out of my teens, and then mostly on military installations. So my perspective probably doesn't wholly grasp the sense of fuller freedom and security that existed within our borders and is limited now.

There are misconceptions about that Tuesday in 2001 and why our country was attacked, which speak to our lack of understanding of the multi-pronged motivations and nature of Islamist terror. The most destructive was a glib analysis, born of the Bush administration's platitudes and simple worldview, claiming the attack was motivated by a hatred of our "freedom," and then imploring us to continue to shop, as it was thought then that we would potentially suffer heavy economic losses from the terror's chill on consumer demand. And there were even horrible misconceptions about who attacked us, even within the Bush administration. (A Harris poll in 2005, four years after the attack, had 41 percent of Americans polled believing Saddam Hussein was directly involved. Reports soon after the attack implied that the administration was looking presumptively for al-Qaeda links to Iraq.)

What I believe is the most obvious sociocultural impression left in the short, medium and ongoing term of our post-9/11 era is that our collective wound is so deep that we can't muster honest discussions about it and how we should've responded, leaving us signing off on whatever a security state and our of-the-moment political winds will. We've become disconnected from the world as a result, and have traded much of our liberty for ineffective precautions while engaged in conflicts all over the globe. (This after the world embraced us with compassion, until a misguided adventure in Iraq.) Eleven years after, we are at a point where the rawness of 9/11 overrides any honest conversation about the world influenced by its long tail.

Our response has roiled parts of the world where fundamentalist Islam has a strong grip, which was inevitable. But some things were avoidable, from accidental burnings of Qur'ans by military personnel, to targeted killings by drone strikes that have produced the loss of innocents, to the Abu Ghraib situation. All of which will recruit Islamist fighters for the foreseeable future. After 9/11, many who were rightfully angry scoffed at the idea that we should seek to understand why the attack happened. "Wrong is wrong," they said, but even wrong has reasons, and perhaps knowing those reasons could help. Our defensive posture was a symptom of a hurt that hasn't subsided.

But the refusal to accept the need to understand the philosophy that produced this is counterproductive, because inquiry is a temperament demanded when fighting extremism. It's also ideally American to ask more of ourselves than we'd ask of others, especially in the application of military force. The pain of 9/11 has reached the point where some use it to justify attacks on the most American of virtues in a knee-jerk, intolerant manner. Our official line is that America is not at war with Islam. President Bush took pains to stress this, and President Obama, once he took the oath, did so as well, especially in his "New Beginning" speech in Cairo, where he outlined Islam's contributions to the world. During his inauguration, President Obama further offered, while drawing distinctions between political Islam and its faith tradition:

To those leaders around the globe who seek to sow conflict, or blame their society’s ills on the West — know that your people will judge you on what you can build, not what you destroy. To those who cling to power through corruption and deceit and the silencing of dissent, know that you are on the wrong side of history; but that we will extend a hand if you are willing to unclench your fist.

But the national face does not necessarily hold consistent with the dialogue, practices and policies, such as a highly dubious F.B.I. training program in which a consultant suggested that counter-terror policy should target Islam. Another example is the politicization by some of what is derisively called the "Ground Zero Mosque." Officially known as Park51, it is a planned Islamic cultural center open to the public, designed to be an education center and religious site, and it holds as much right to be in the same relative area as Ground Zero as anything else. Certainly there were Muslims who died in those towers. And in principle, it's what America's forefathers and the framers of our Constitution fought to have: religious freedom and religious tolerance. The idea that it is somehow unseemly to have an institution meant for interfaith dialogue in the vicinity of "where the towers fell" is simple-minded at best and just plumb idiotic in truth, but symptomatic of 9/11's lasting sociopsychological pain.

What happened on September 11, 2001 wrought plenty of misconceptions concerning Islam, which provided a visa for those with misguided views. And for some, the event tarred the religion, judging it by its extremist elements in a way that other faiths have been given a pass on. So as much as there should be a long-lasting monument honoring the nearly 3,000 lives lost in the towers, a legitimate argument can be made that a gesture toward healing the fissures of religious intolerance that have sprouted would not be improper, in respect to the attack on the ideals of religious diversity and tolerance that was an unintended consequence of that day.

There was no true road map for the post-9/11 years, which produced unforeseen moral quandaries, and so since September 11th there have been revelations and soul-searching. It was a difficult path. America set out to prosecute a military conflict against a criminal and terrorist organization, a far cry from the 50-year template of an armageddon between a democratic alliance of N.A.T.O. countries and communist Warsaw Pact states. The models of how the British and the I.R.A. resolved their issues and, more crucially, of how Israel continues to handle a constant struggle were our sole frames of reference. Understanding how to apply military and judicial power against terror networks presented a blind curve around which we've had to forge a unique path, and the judicial procedures and protections provided (and not provided) to suspected terrorists and "enemy combatants" produced an understanding of how to engage in this conflict with sometimes poor results, morally and logically.

In the prosecution of war, the privatization of military functions by aggressive private security firms (such as the formerly named "Blackwater," now "Academi," which was embroiled in controversies over excessive force and worked hand in hand with the government under looser legal constraints and culpability); the methods by which high-level figures of terror networks were nabbed, known as "extraordinary rendition"; and, sadly, the sometime companion of those extraction methods used to gain intelligence, torture, had both political sides asking tough questions about how to handle terror consistent with principles we'd expect a comparable nation to follow.

More recently, the rise of drone warfare has become problematic: the secrecy involved in drones' use in covert operations known as "targeted killing," and the decision-making process determining which terrorists are deemed important enough for weapons so low-risk militarily and politically, weapons with a high potential for civilian casualties that became a political lightning rod in the countries they operate over. In May, the New York Times ran the elucidating piece "Secret 'Kill List' Proves a Test of Obama's Principles and Will," which detailed at length a "nominations" process for terrorists that ends at the president's desk.

The longest-lasting, most tangible and salient legacy of 9/11, beyond a terrible human toll on all sides of the conflict and wars in multiple nations, resides in a raft of policies on the national security front, along with the laws and the complex built around them in support, which amount to another layer of the Intelligence Community (e.g., the Department of Homeland Security and the Director of National Intelligence), with the intention of sharing information among multiple agencies and law enforcement. Particularly as the wars of 9/11 resolve in some way and recede from our everyday lives, and combat troops return home for good, only to face their overwhelmed Department of Veterans Affairs, this new bureaucratic element will forever be a reminder.

While in the abstract the injured and defensive collective American psychology will someday be a shadow of what it once was, right now it remains. It is for now a politically measurable element, a mnemonic pain producing strong reactions, just as somewhat still exists after Pearl Harbor. It will pale, however, in comparison to how much the structure of our government changed, the lessons we learned in our response and the sensitivity we strive for in the messages our policies send. Yet parts of this shift and reorganization in government, and the policy legacies such as the Patriot Act and the Intelligence Reform and Terrorism Prevention Act, have been blackened by a widened, sometimes indiscriminate surveillance state and a sprawling counter-terror bureaucracy, well detailed by The Washington Post's "Top Secret America" series, that is armed with powers unlikely to be curtailed. Revelations of private citizens sometimes being caught up over the years in Foreign Intelligence Surveillance Act (F.I.S.A.) wiretaps and e-mail collection, in the electronic monitoring of communications of suspected terrorists, have been startling for many who paid attention to civil liberties and the diminution of individual privacy after 9/11.

On the positive side, 9/11's bequest is the refocused effort toward our institutional understanding of the cultures and political environments that foster terror. It is also in the human intelligence, language skills and nuts-and-bolts of counter-terror intelligence: being on the ground in dangerous places has been pushed to the fore, after a retreat to technological intelligence the decade prior. Further, outside the redundant nature of this new bureaucracy, the positives are seen in how government decided to reorganize after seeing the failures in the intelligence loop that led to 9/11, and then developed systems of analysis to meet the demands of how the world works now, not those of a communist foe from the 1950s and 1960s.

The responses of our last decade-plus since 9/11 were the latest expression of an ongoing historic debate over balancing individual freedoms with the government's mission to provide security. For now, the debate has its scales tipped toward security at the cost of some freedom. We have never squared the political theater and security theater which continue, either; all of us sleep better because of the sense of comfort provided by the idea that something is being done, a notion that precautions are being taken, whether it be a laundry list of airport and air-travel measures or the near-permanent national no-fly list and the growing governmental powers deployed in a complex world against a nebulous enemy, powers that are slightly unnerving when examined.

Grimes, ‘Oblivion’

Grimes Blog Image

“Oblivion,” Grimes

I HAVE NO IDEA what genre Grimes falls into, though "post-Internet" has been bandied about by Pitchfork. And particularly as the avant-garde music of 2005 and beyond took an entirely new path, relying more on pastiche and electronic influences than the eras before it, there are no longer any static, rigidly confinable genres. What I do know is that she is able to take the fun and scary futuristic elements of a Kubrick film and Japanese anime and meld them with what sounds like the windswept, atmospheric pop songs of a carefree Swiss Alps teen. She is a Björk-like android, to be sophomoric. Her album Visions, released in February 2012 and from which the track "Oblivion," above, comes, is a subdued vocal effort that makes the larger case that artists with legitimate vocal abilities can produce, with electronics, artistic expressions encompassing an entire universe.

Debating Dream

Dream Team Blog Image

Editor’s Note:

This post was updated on November 27 to amend a mistake that stated it was solely U.S.A. Basketball's (U.S.A.B.) decision to bar American basketball professionals from international play. It was not U.S.A.B.'s decision at all, but that of its governing body, the Fédération Internationale de Basketball (F.I.B.A.), which did not allow American National Basketball Association players to compete.

A WHILE BACK I wrote a magazine piece about the original Dream Team and U.S.A. Basketball's (U.S.A.B.) shift to professional players, in which I talked about how a series of devastating losses shocked U.S.A.B., and how F.I.B.A. secretary general Boris Stankovic, who had long believed it was unfair that the rules did not permit the United States to field pros against other countries' very best professionals, drove a sea change for American international hoops. The former arrangement, which attempted to level the balance of power, was increasingly a losing proposition: if other nations' professionals were merely competitive with America's college kids, it meant the world had practically caught up.

It was a near crisis for American basketball, and its best minds, then, which produced the outsized nuclear-level response in the form of the 1992 Dream Team, a collection of the greatest basketball talent of the time, mostly in their primes. As a result, American professionals have competed in every Olympic and international basketball tournament since 1992. But the teams of the early and mid-2000s brought some international backlash to the policy.

Where the 1992 team was always met with open arms, smiles, autograph-hounding and star-worshipping fanfare by its opposition, international teams ten years later felt no sense of novelty or idolization of America's boys, and saw them as true competitors to eat off of, like a pack of wolves. The change of heart regarding American pros was likely attributable to a shift in the perception of the brand of basketball played during those years, which engendered ill feelings toward the American hoops program as a byproduct of the amplified brashness and increasingly braggadocio-laden style of the newer generation of players, coupled with their surprisingly disappointing results.

The 2000 U.S. Olympic team (the one for which this Vince Carter dunk will forever be remembered) struggled but managed to snag a gold medal in a tight contest; the 2002 World Basketball Championship squad was viewed by many as a historic flop, finishing in sixth place; the 2004 Olympic and 2006 World Basketball Championship teams were the final straws and the game-changers, both earning only the bronze. For a game invented by Americans, it was appearing to belong more to the world, ironically as a result of a not-so-gentle push toward globalization by the 1992 team. (Once the world played against Michael Jordan, Magic Johnson and Larry Bird, it wanted more. It didn't want to just compete; it wanted to roll them hard, no longer satisfied with simply being on the court with legends like them.)

There was a feeling that the U.S. program had become entitled, especially in the way it assumed it could simply recruit stars and present them as a unit, with little forethought about cohesion or respect for the long development cycles of the international basketball community. There was also a notion that the professional players had played less like good-natured, honorable ambassadors of the game and more like the "ugly American" stereotype, one reinforced by a new sense of the nation in light of its post-9/11 political life. The ego, the strutting and the streets that the N.B.A. marketing department had sold locally weren't appropriate for the nation's team. Five years ago, the newly appointed U.S.A. Basketball president, Jerry Colangelo, made a bold move to address those concerns.

He decided to design a full program that addressed the shortsightedness of the earlier teams of the 2000s and infused a lost ethic of service to the nation over personal glory. Beginning with the hire of Duke’s Mike Krzyzewski (“Coach K”) — a West Point man — the idea was to build a long-term U.S.A. Basketball developmental program that identified young professionals and asked of them multi-year commitments, filled earlier in their summers than before, in order to produce teams with chemistry and pronounced cultures. Coach K took this a bit further by having the teams meet with soldiers, infusing the sense of pride in service that is apparent in the military. With the nation at war, it was necessary for the players to understand the commitment they were making by wearing a uniform with “USA” emblazoned on it. The goal was a self-perpetuating national system that could sustain a long-term dynasty for United States hoops.

The 2008 Olympic “Redeem Team” was the new program’s true start, though its fundamentals were set at the 2007 F.I.B.A. Americas Championship. The Beijing team was filled with the N.B.A.’s newest generation, all nearing their primes. I thought then that in four years it could be better than the collection of talent that made up the 1992 Dream Team, if only because of the advancing skill level of today’s players and how far the international competition has risen. In 2008, U.S.A. Basketball was led off and on by the quintet of Kobe Bryant, Chauncey Billups, Chris Paul, Dwyane Wade and LeBron James. Its de facto leader, though, was Bryant, then the world’s best player, a reigning M.V.P. and the nearly undisputed best player of the decade.

So when Kobe Bryant said in early interviews for this year’s iteration of the U.S.A. Basketball team, bound for London 2012, that his current group — filled with wing players and guards but few legitimate big men, most of whom were injured — could beat the 1992 Dream Team, which was stocked with big men, comparable athletes and a fair number of shooters, I was still on his side, because I understood the design, purpose and spirit of the new U.S.A. Basketball program. It, unlike any other U.S.A.B. team, was years in the making.

Even if most ridiculed Bryant’s statement as absurd (and on paper, given the 1992 team’s advantages I’ve laid out, the math doesn’t add up), what I saw in the opinion, later seconded by Bryant’s colleague and the best player of the current generation, LeBron James, was an accounting for new athletic advantages, for innovative schemes that could neutralize the Dream Team’s size, and for the skill development of the present day. While the 1992 team had great skill players for its time, the vastly evolved ball-handling, the versatility of offensive options and zone defenses, and the multi-pronged offensive stars with long-range and mid-range strengths of today could make this team better than its counterparts of 20 years ago. And again, given the culture of the new U.S.A.B., I honestly could see where Bryant was coming from.

The problem for me, as more or less an empiricist, is that there is no true data available on whether this belief of mine is even remotely realistic. The 1992 team was simply a well-rounded machine, beating its opponents by an average of 43.8 points a game, a stat frequently broached in this debate. (The 2012 team beat its opponents by an average margin of 32.0 points.) This is a problematic stat, however, since it is based on the international competition of the time. Whereas the N.B.A. of 1992 put on the floor 23 players of international origin from 23 different countries, the N.B.A. of today [2012-13] will put on the floor 82 players from 37 different countries. International players have risen immensely, make up much more of the N.B.A. population, and carry greater basketball experience as well: they tend to have participated in more international tournaments and Olympic play than their American counterparts.

Another point in the 1992 team’s favor was its impeccable timing, which became its signature fortune: in 1992, nearly all of the team’s members from the 1984 draft class, considered the best draft class ever, along with the strong players from subsequent drafts, were fully in their primes. (Only one of the 1992 players could be called wholly past his prime: Larry Bird.) The Dream Team further fielded far more players with Hall of Fame reputations, making this proximity to “prime” even more formidable; it could well be the trump card.

But this year’s team is much closer to its collective prime than that 1992 team, I’d argue, even though it is not half as deep or half as accomplished. (Only Kobe Bryant and Dwyane Wade are multiple champions, as opposed to Larry Bird, Magic Johnson, Michael Jordan and Scottie Pippen.) And because its top and mid-level talent is far better than any competition the Dream Team faced, I am still willing to believe that Bryant and James were honestly not simply looking to stir the pot. The 2012 U.S.A. Basketball team could beat the 1992 Dream Team; the question for me is whether it would be an aberration. And all of this still hasn’t absorbed the game’s advances outside of skill: the training methods and the moves, which have also grown significantly. Some things now are unguardable, and certain things of yesteryear, such as Michael Jordan’s unstoppable fadeaway, which relied on his athleticism, would be minimized by the longer, more athletic players of today, who would match up with the 1992 team quite well.

Kobe and LeBron’s statements were fodder for the sports bloviators and the talking-heads complex, which was almost entirely dismissive of them. In part, this is because the 1992 team remains a nostalgic beacon, girded by its stories and its spectacle. In any other sport, if you said that the players of today were generally better, most would agree. But because the heyday of the N.B.A. was the 1980s, and because the league is perceived to have been much more skilled and competitive from the 1980s to the early 1990s than it is today, many still cling to the idea that today’s best are just not as good as yesteryear’s.

Two articles from university professors take on this debate from different perspectives. First, Wired’s “Kobe is Hoop Dreaming When He Says His Team Could Beat Jordan’s” takes a (somewhat flawed) statistical look at the teams, using players’ Wins Produced figures from their respective N.B.A. teams in 1991-92, adjusted per position — 1990-91 is rightly substituted for Magic Johnson, since he did not play in 1991-92, but this skews the baseline, because Johnson was better that year than he was after sitting out — and compares them to the same indices for the players of today. Comparing the wins produced per 48 minutes — another flaw, since Olympic games are 40 minutes — of each member and then averaging them for each team leaves the 1992 Dream Team with the higher average, 0.247 to 0.193 for the 2012 squad. The article concludes that while the game would be much closer than many assume, the 1992 team should be favored. But what was more interesting was a takedown of this analysis by a commenter by the name of “John Melon”:

This is such a fundamentally flawed statistical analysis that it is an embarrassment to you and your editor. Comparing the winningness of players in the year prior to their Olympic appearance really has very little bearing in the overall analysis of the matchup. First of all, the overall talent quality of the N.B.A. in 1992 was much lower and because their was less development in high school and college and reduced science to facilitate the performance of athletes. Secondly, the dispersion of talent on different teams is much different now than it was in 1992 and thus the ability of a player to win in the modern era has many more factors than just his personal abilities. Finally, basketball in the NBA is fundamentally about match ups. This is why their will be amazing upsets in the playoffs because distinct match ups give certain teams the ability to beat other teams that are on the whole a more winning team. In conclusion, please try to do more than summarize some mediocre state school professors abstract statistical analysis of NBA greatness before posting in the future.

And in the interest of balance, another comment in support of the methodology by “Jason Turbow”:

Wins Produced is based simply on box score statistics. Players who facilitate scoring are rewarded as much as those who actually score, while those who display inefficiency (via things like low shooting percentage or high turnover rates) are penalized. It comes closer than any other statistic to explaining team wins from any era since the league began compiling full box scores after the A.B.A. merger.

It’s fabulous that you brought up Dennis Rodman, who according to this stat is one of the most dominant players in N.B.A. history. He (and to a slightly lesser degree, Bill Laimbeer) were the bedrock of Detroit’s championship teams. (Note that the moment they either got traded or got old, that team fell apart.) Rodman was an essential component for the three Chicago championship teams for which he played. And while there’s no statistical explanation for his refusal to put on his shoes at key moments in San Antonio, he was equally important on the court for the Spurs.

It’s probably true that while a team full of Dennis Rodmans would dominate any WP comparison, it would probably not fare equivalently well on the court, simply as a matter of imbalance. (That’d be an interesting question for Berri.) But that’s not how the Olympic teams are constructed; the U.S. has quality players at every position, who do (or don’t do) what they’re supposed to — again, at their position — to help the team win. The comparison between ’92 and ’12 is exceedingly valid.
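Setting aside which commenter has the better of the argument, the Wired methodology itself is simple to state: average each roster’s Wins Produced per 48 minutes (WP48) and compare. Here is a minimal sketch of that arithmetic — the per-player numbers are made-up placeholders for illustration, not the article’s actual figures; only the reported team averages of roughly 0.247 and 0.193 come from the article. It also shows why the 40-versus-48-minute objection can’t change the ordering:

```python
# Sketch of the Wired comparison: average each roster's WP48.
# Player values below are hypothetical placeholders, chosen only so the
# team averages land near the article's reported 0.247 and 0.193.
def team_average(wp48_values):
    """Mean Wins Produced per 48 minutes across a roster."""
    return sum(wp48_values) / len(wp48_values)

dream_team_1992 = [0.35, 0.31, 0.30, 0.28, 0.25, 0.24,
                   0.22, 0.20, 0.18, 0.16, 0.15, 0.32]
team_2012 = [0.30, 0.26, 0.24, 0.22, 0.20, 0.18,
             0.16, 0.14, 0.13, 0.12, 0.11, 0.25]

avg_92 = team_average(dream_team_1992)
avg_12 = team_average(team_2012)

# Rescaling WP48 to a 40-minute Olympic game multiplies every value by
# the same constant (40/48), so it can never flip which average is higher.
assert (avg_92 * 40 / 48 > avg_12 * 40 / 48) == (avg_92 > avg_12)
```

In other words, the 40-minute-game complaint affects both teams identically; the real methodological questions are the ones the commenters raise, about era-dependent talent pools and matchups.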

The second article on this question, C.N.N.’s “Kobe’s Right: the Dream Team Would Lose,” argues that in every sport and competitive environment athletes improve over the decades, as elite competition produces consistently better results, and that the 2012 team should therefore be better than its predecessor. As argued:

There is a general principle within elite performance systems, including everything from Scripps National Spelling Bee Championships to world-class modern dance companies, scientific research communities and professional sports. In competitive systems that offer participants great incentives, peak levels of performance progressively elevate.

Transport any N.B.A. legend 20 years into the future and he would have to compete against a new breed of athlete. Which means we can presume that:

Walt “Clyde” Frazier, Willis Reed and Phil Jackson, players on the 1973 Knicks championship team, would be too slow in their positions to help any N.B.A. team win a championship in 1993.

John Havlicek would not steal the ball in Game 7 of the 1985 Eastern Conference Championship like he did in 1965 because he is comfortably seated on the bench.

Oscar Robertson in the 1960s is an indomitable force; the same Oscar Robertson in the 1980s would be a serviceable journeyman.

Magic Johnson in his prime would be too slow to play point guard in today’s locomotive landscape, just as Larry Bird would be too slow to guard any of today’s elite small forwards.

Can you imagine Magic chasing after Team U.S.A.’s speedy guard Russell Westbrook, or Bird trying to contain Carmelo Anthony? It wouldn’t be pretty.

Which of these two articles is right, I don’t know. Both the empirical data from “Kobe is Hoop Dreaming When He Says His Team Could Beat Jordan’s” and the 1992 team’s numbers against the international competition of its time say that the Dream Team is better, but those numbers and assessments are sealed off from the context of today. Players are so much more athletic, and often longer and taller than their average counterparts of 20 years ago, that any argument that their skill and knowledge fall short of the players of yesteryear is unreasonable. The game is more instinctual now, and it could be that this instinctual play, combined with the otherworldly athletes of today at almost every position, would be an encroaching tsunami for the best players of 1992. (It seems.)

Read “Kobe is Hoop Dreaming When He Says His Team Could Beat Jordan’s” at Wired [Here]

Read “Kobe’s Right: the Dream Team Would Lose” at C.N.N. [Here]

On Frank Ocean and ‘Channel Orange’

Beyoncé’s message to Frank Ocean.

I WROTE HERE briefly about Frank Ocean’s improbable rise from obscurity to quick stardom on the strength of his debut alternative R&B mixtape Nostalgia Ultra, and about the odd circumstance of his being a singer in Odd Future, essentially a shock-hip-hop collective — though that term generally implies gimmickry, and Odd Future is not truly about that.

I argued then that Frank Ocean’s ability to draw in fans beyond Odd Future’s mainline of hardcore hip-hop listeners — while also gaining name recognition through his somewhat token, high-profile membership in an “of the moment” rap crew — was a potentially career-making boon to his personal profile, maybe even helping him land his guest spot on Jay-Z and Kanye West’s “No Church in the Wild.”

All of that analysis may have been right, but when Frank Ocean came out of the closet in a very personal letter — presented on his Tumblr as a screenshot of his actual statement from his Mac notes — my entire calculus of Frank Ocean’s profile and his ability to draw in casual listeners was turned on its ear, and made for an even stranger professional setup than the one I initially described.

Odd Future — Frank Ocean’s collective — has often been embroiled in heated discussions of its inflammatory content over the last couple of years, generating numerous blog posts and think pieces about the patently offensive homophobia in its work. It especially inspired op-eds on the use of the term “rape” in the group’s lyrics, specifically by Tyler the Creator, the group’s most visible media face.

As I took it, Odd Future’s insensitive use of such terminology was a post-P.C., post-hyper-tolerance stance by Tyler the Creator, a young black kid who was the most complicit in these lightning-rod transgressions, though other members have been guilty as well. Tyler even went as far as emphasizing the hard-“r” white-supremacist form “nigger” in his songs in an apparently ironic tone, as opposed to, so the argument goes, the form co-opted by youth culture and within the African-American community, “nigga.” He does this with a brash irony, not easy for many to understand, in whatever “ironic” means nowadays: the pose, I’d assume, of being so detached and submerged in “cool” that one is beyond the sacred cows of the tired, old politics of identity and oppression, particularly as this relates to the highly multicultural and more progressively tolerant world of Millennials.

To them, worrying about terms that have historically been used to harm disempowered minorities belongs to another struggle and another stretch of time; their circles have moved far past that era, even if the outside world hasn’t reached that level of social progress. (There is plenty of data suggesting that progress.) Psychologically, it’s also interesting to note where such a disposition would come from. These terms have existed in their binary forms of right and wrong, but also as cultural signifiers of insider group dynamics in hip-hop, for three decades now; so the last way to push the edge of acceptability — if it is an artist’s inclination — is to use them in a way that acknowledges their capacity to be ugly and oppressive, especially when the user belongs to the traditionally oppressed group.

As a hip-hop group, Odd Future’s music is forward-pushing while remaining a throwback, combining the large-group formula of the past, a bygone emphasis on displays of skill, and (at times) the party atmosphere of old-school hip-hop. And behind the scenes, in its formulation, it borders on the completely unheard of: most of Odd Future’s tracks, and some of Frank Ocean’s work, are produced by a young woman who happens to be a lesbian, by the name of Syd the Kid; something beyond rare in hip-hop. Syd also performs under the Odd Future umbrella as one half of the act The Internet (the other member being her Odd Future production partner Matt Martians), as well as onstage with Odd Future as the group’s D.J.

All of this background makes for an inscrutably complicated reality for Frank Ocean and Odd Future, as they straddle opposing identities: in one, they are progressives of hip-hop inclusion; in the other, they sadly play into the dark elements of hip-hop’s misogyny and homophobia, a symbol of what Odd Future’s most literal-minded critics saw as a signpost of the death of civility and tolerance in a popular art and media space that had lately grown more tolerant. Which of the two is true, Odd Future as a progressive hip-hop force or Odd Future as a sign of anti-L.G.B.T. and misogynistic discourse bubbling up in popularly accepted youth media, I don’t know. But in holding out-of-the-closet homosexual and bisexual members within its critical creative structure, including one who was never in the closet (Syd the Kid), Odd Future presents an air of being legitimately progressive.

While this is changing, hip-hop has never been tolerant of perceived outsider identities, homosexuality or bisexuality, just as its largest artists’ core audience, young African-American male society, hasn’t been, for any number of possible reasons. This being a blog piece, I do not want to delve deeply into the academic research and theories, but, for example, the breakdown of the traditional nuclear family in the black community, a result of the high incarceration rate of black males, has created fears of an erosion of traditional family “values,” which could produce a hyper-overcompensation in traditional presentations of male identity.

Further, there is the socially conservative streak in African-American society, which originates in its Southern roots and a historically devout religious participation, as well as a more general homophobia whose origins may lie in emasculation, small and large, on the economic front (i.e., inequalities in wealth and discrimination in hiring, wages and promotions), all of it forced to square with the traditional breadwinner role of males and producing, because of that, more insecurity about traditional identities.

But Frank Ocean may prove to be a figure who “moves the needle” in this difficult arena, toward something of greater acceptance in hip-hop. The reactions to Ocean’s announcement seemed muted and largely laudatory, which was surprising, particularly given the kind of content Ocean handles: ostensibly songs about women, largely concerned with heteronormative courtship. I can’t imagine, though, that he would have gained the listeners he has if his songs hadn’t been perceived, at the outset, to be about women.

Nonetheless, the greater question now is whether his announcement gives Ocean room to speak even more honestly in his work than he has before. This seems the truer test of whether he is actually accepted, or whether his acceptance rests on his work fitting the constructs of heterosexual relationships. If his music about love and loss, already honest and introspective, were to come sometimes or mostly in a context expressly about his own experiences, and not what is seen as most relatable, that would be the even more courageous stance. It remains to be seen.

Those who want to play the cynic may ask why Ocean’s announcement coincided with the release of Channel Orange, his first full album. But a critic’s letter — basically an editorial message — from the New York Times writer who profiled Frank Ocean, following him around Los Angeles for more than a week, retrospectively shed light on why the announcement may have come to the fore. The profiler, Jon Caramanica, mentions that in interview sessions, when he found himself avoiding the pronoun “she” in his follow-up questions, he instinctively felt that Ocean might not be fully transparent, and that Ocean gave the vibe that Caramanica’s assumptions about his relationships weren’t necessarily true.

Writing the piece, I made a point of avoiding referring to the sex of the people Mr. Ocean had been discussing. It was an instinctual choice. Something about the quiet way Mr. Ocean spoke about love was in fact loud enough to stick with me a couple of days later, as I reviewed the transcript of the conversation and began to write.

When I spoke with him again on the phone, an hour or so before finishing the article Tuesday evening, the issue of whether he’d been avoiding full transparency was still a cloud in my mind, not fully formed. I didn’t press the point.

“A Musician and a Critic Make Some Comments About Love,” N.Y. Times

Attempting to get into another’s mind is always fraught, whether as a writer or within basic interpersonal dynamics, but Ocean may have felt that with the onslaught of interviews coming around his album, the management of truths and lies would become increasingly difficult. I’m more willing to buy that than to feed the idea that a promotional stunt, combined with his own actual truth, is to blame. That loathed term of mine, “buzz,” was already in play for Frank Ocean anyhow, as anticipation for the critically praised Channel Orange was strong.

Channel Orange is a sterling debut, with amazing slow-burn songs (“Pink Matter”) that build moods beautifully, dance-oriented tracks (“Pyramids”) and mid-tempo head-bobbers (“Forrest Gump,” “Sweet Life”) that bring unmitigated joy when listened to with the dials cranked all the way to the right. If one never considered Nostalgia Ultra, Channel Orange would be nothing but a good omen; in the context of a second release, a second exposure to Frank Ocean’s virtuosity as a solo creative storyteller who brings the freshest of perspectives to urban music, it strongly implies a long and promising future filled with honesty, something lacking in the genre.

At first listen, Channel Orange seems, oddly, less edgy than Nostalgia Ultra, since tracks like “Novacane,” where many first heard of Ocean, spoke of pronounced altered states and a particular archetype of fast-living California girls who submerge the pain of boys and young men in ways both physical and pharmaceutical. But crossing the finish line at 17 tracks deep, with expert pacing in between through the use of short interludes, it is still an elegantly crafted narrative of a young man’s journey. The only thing mostly missing is his recent revelation. (Substitute a pronoun here or there, though, and it truly makes no difference.) Smartly, there is little Odd Future in this release, with only another frequently written-about compatriot, Earl Sweatshirt, making an appearance, on “Super Rich Kids.” The remaining guest spots are left to John Mayer, who performs an entire guitar interlude, and Andre 3000, on “Pink Matter.”

What is achieved in both Channel Orange and Frank Ocean’s announcement is a revelation, taking R&B and hip-hop forward in small relative increments (while acknowledging the still very thorny issue of his own group’s use of charged terms). I don’t believe that even five years ago an out-of-the-closet male singer in urban music would have been accepted in any truly dignified way, nor do I think that Odd Future as a collective would have allowed it. But the wheels of progress grind forward here, in both Frank Ocean and the future of hip-hop and urban culture, on both the artistic and the social front. Next stop, a little bit closer to enlightenment.

A Problem in Covering the Secret War

Training to hunt

Photo Credit Defense News

I HAVE tangentially spoken about President Obama’s secret drone strikes in Afghanistan-Pakistan (Af-Pak), which have been only marginally reported in the traditional media even as the strikes have ramped up over the last two years, along with Joint Special Operations Command (J.S.O.C.) night raids on suspected terrorists in and around Waziristan, as the administration has looked to end al-Qaeda in Afghanistan completely. This is probably even more the case since the death of Bin Laden and the “treasure trove” of intelligence retrieved from the raid on his squalid, in-plain-sight compound in Abbottabad. (What effect this potential dismantling of the network in Afghanistan has on al-Qaeda in the Arabian Peninsula, and on the larger network, is a whole other question. There have been drone strikes outside the Af-Pak area as well, particularly in Yemen.)

While this war is not like our last big secret conflict, in Laos and Cambodia from the mid-1960s on — a similar type of asymmetric battle — since the ground component seems more limited this time, the aerial part of it, the drone program, is still a fairly huge undertaking, mostly run out of the C.I.A. and operating in conjunction with the raids by J.S.O.C. under a man considered a special-operations genius, Admiral William McRaven. It is a new arrangement that portends a level of civilian-military, C.I.A.-J.S.O.C. cooperation on these critical matters unlike anything before, after years of tension and distrust between the military’s special operators and the intelligence people in Washington, and it appears more efficient at meeting its goals.

So much so that it will probably outlast our current conflicts, as we will almost certainly maintain a presence in these theaters after conventional force deployments are drawn down in 2014. The new collaboration also helps explain the fairly recent Obama administration shake-up that moved the longtime Washington insider Leon Panetta — formerly head of the C.I.A. and the Office of Management and Budget — into the role of Secretary of Defense, following the extremely effective Robert Gates. It was a natural choice: we are now in an extra-clandestine phase of war-fighting, in which both the military’s J.S.O.C. and the C.I.A. are more prominently involved and share the burden of fighting terror. (And these joint operations may persist not just after the 2014 drawdown in limited form, but as a primary strategy for the foreseeable future, even after the end of this phase of the Global War on Terror.)

A nearly all-covert war was actually foreshadowed a decade ago, when the Twin Towers fell, but somehow the country found itself in conventional military misadventures and nation-building projects instead. There are, however, significant downsides to this arrangement, and to these lighter-footprint covert actions, with regard to government openness. Currently we know only a shadow of the effectiveness of this secret war, particularly its large-scale aspect, the drone missions. And this could become even more of an issue as the defense sector and the intelligence community work ever more closely together. In 2010, the great national security writer at The Washington Post, David Ignatius, did give some enlightening insight into the frequency of drone operations and how they have changed.

All told, according to U.S. officials, since the beginning of 2009, the drone attacks have killed “several hundred” named militants from al-Qaeda and its allies, more than in all previous years combined. The drones have also shattered the leadership of the Pakistani Taliban, which has been waging a terror campaign across that country.

On a typical day, there are roughly a half-dozen Predators in the air over the tribal areas of western Pakistan, looking for targets, sources say. This intensive coverage is possible because the Obama C.I.A. requested more resources for the drone attacks last March, during the initial review of Afghanistan-Pakistan policy. By the end of this year, the number of drones available will have increased by about 40 percent since early 2009.

“What the Partisan Squabbles Miss in Obama’s Terror Response,” David Ignatius, The Washington Post

In his first two years in office, President Obama nearly doubled the use of drones compared with former President George W. Bush, as reported by Ignatius. The figure is much higher now, as operations increased dramatically from 2009 to 2012. The New America Foundation’s “The Year of the Drone” estimated that in 2010 the low figure for “militants killed” in drone strikes was 581 and the high was 939. Comparatively, for 2004-2009 the low estimate is 481 and the high 770. 2011 brought an estimated 336-535 “militants killed” and an 89-92% militant death rate. For 2012 [updated Nov 14], the estimated “militants killed” is 209-328, with an astounding 99% rate. These are extremely wide variances between “high” and “low” estimates, and they are too imprecise.
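Just how loose those ranges are can be made concrete with a few lines of arithmetic; the low/high figures below are the ones quoted above, and the only computation is the fraction by which each high estimate exceeds its low:

```python
# Low/high "militants killed" estimates quoted above
# (New America Foundation, "The Year of the Drone").
estimates = {
    "2004-2009": (481, 770),
    "2010": (581, 939),
    "2011": (336, 535),
    "2012": (209, 328),
}

for period, (low, high) in estimates.items():
    # Fraction by which the high estimate exceeds the low one.
    spread = (high - low) / low
    print(f"{period}: high estimate exceeds low by {spread:.0%}")
```

For every period, the high estimate runs more than 50 percent above the low one, which is the quantitative heart of the complaint: no serious assessment of the program’s efficacy can be built on figures that uncertain.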

Furthermore, on the scarcely covered ground side, reports coming out of the Bin Laden operation said that J.S.O.C.’s on-the-ground raids had also increased precipitously from where they once were, to many raids a night. It seems that S.E.A.L.s, Delta Force and other special operations units hadn’t been practicing their close-quarters battle skills just for the elimination of Bin Laden; the operation was mostly part of a new routine of sending in tier-one operators to handle formerly basic functions. So, in essence, apart from the high value of the target that was Osama, his location deep within a Pakistani garrison town, and the possible political ramifications, the Bin Laden operation is in some ways a new status quo. Though the stealthy, modded-out MH-60s and some of the other toys are probably not consistently employed. (I have no idea.)

If there is anything we learned from the opaque fog machine of details in the days following the Bin Laden mission, it is that hard, verifiable fact is going to be much more difficult for the press to come by in the new joint “covert wars” setup. That is especially true of the drone program. Not only are we unable to fully know how the program works (e.g., who is deemed worthy of priority targeting?), but we do not know how effective the strikes are, as they are routinely criticized inside Pakistan for civilian casualties and for the potential blowback they produce.

As this part of the war is designed to operate literally and figuratively in the dark, journalists are left at the whim of official administration reports on what is going on. And while that is a somewhat understandable expectation, meant to maintain what is known as “operational security,” it still abrades the interests of transparency, “truth,” and the obvious role of the press in a democracy as a form of oversight.

Yet rather than demand more consistent transparency from officials or undertake investigations that delve into the program, journalists often have simply relied on what U.S. and Pakistani officials have told them. When reporters depend too heavily on government sources to report on a war, they end up following the narrative that White House officials have created, and in this way provide a one-sided view that obscures reality. The aerial strikes in Pakistan have been underway for nearly a decade, and yet many questions surrounding their use remain unasked and unanswered.

“Covering Obama’s Secret War,” Columbia Journalism Review

Concerning this vacuum, what we are left with, as dedicated, engaged consumers and producers of information about our government, is just dribs and drabs, pieces of the truth, on efficacy: what the successes have been and what the failures have been. Failures that, as we are all well aware, often result in civilian deaths and become counterproductive to our overall goals. And if the number of civilian casualties is as high as many argue, then the secret war itself tends to be Sisyphean.

The Columbia Journalism Review‘s (C.J.R.) “Covering Obama’s Secret War” reports on the difficulty of finding truth in a secret war. Since getting full-picture information about it from the government is near impossible, unless officials would like to use it for positive political gain, C.J.R. explains that Western journalists are forced to rely mostly on a network of Pakistani journalists, completing much of the work in this area by rewording those journalists’ published accounts.

They, the Pakistani journalists, are essentially the only press on the ground in the region, and the only people who have covered the drone mission’s effects first-hand, in perhaps the most dangerous place on earth to report from: the tribal areas between Afghanistan and Pakistan, known as Waziristan. It is a place that is unruly and barely hospitable even to these journalists, many of whom grew up there. And their reporting has landed them in sticky situations resulting in kidnapping or death, the former usually preceding the latter.

What all of this produces is a fragmented look at the nation’s longest war, fought in our name: both the drone war and the total War on Terror. There is simply no way to bring the secret drone war into a total assessment of our situation in the Afghanistan-Pakistan theater, particularly when official reports tend to come only when there is notable success.

How are we to know if we are winning, or on target for a successful withdrawal by 2014, without this data point? And what are withdrawal’s objectives, other than “Afghanistan security forces being able to stand on their own”? Will it come when there is just a smattering of militants in the area, or is that not part of the official draw-down? And since drones seem to be the new primary strike weapon, shouldn’t we at least be able to know whether they work as claimed?

“Drones are here to stay,” explained the New Yorker’s Mayer. “So being for or against their use isn’t really where the interesting controversy is at this point. The argument is over who is a legitimate target, how that is decided, what legal framework covers this sort of warfare, and how many innocent lives can be justified as so-called ‘collateral damage’ in a drone strike—morally, legally, and politically.”

Some of the most resourceful reporters in the news business have pushed hard for more access to information about this remote-controlled battle and a few have made some progress. But too often, journalists have settled for only meager morsels to fashion their stories. A more whole-hearted pushback is in order, with top newsrooms banding together, backed by their legal departments, to try to force a more substantive and open public policy debate on whom and how the U.S. decides to kill with the push of a button.

“Covering Obama’s Secret War,” Columbia Journalism Review

Read “Covering Obama’s Secret War” at Columbia Journalism Review [Here]

Kalashnikov Everything; Everywhere

Gold AK-47 Blog Image

IN its first anniversary issue, GOOD, a Los Angeles-based media group focused on the animating passions of the socially conscious, made an odd editorial decision: it opted to run a cover story different from the granola-style, domestic-design-oriented subjects it normally ran. This time, setting an AK-47 against a luminescent pumpkin background, it asked, “Is there ever design this good that doesn’t kill people?” The cover touched on something I have thought about for the longest time: the unfortunate legacy and trace of the AK-47 and its progeny, from the mom-and-pop, garage-made bastards of regional gun markets to state-produced modifications of the classic killing tool, in what is the firearms’ equivalent of the machete.

The AK found its ignoble genesis in the innovative design skill of Mikhail Kalashnikov, who has often implied regret about the arm’s ultimate success. Initially a Russian tank sergeant, Kalashnikov began designing small arms in 1942 from a hospital bed, after being wounded at the Battle of Bryansk. He took a job in the Red Army’s firearms design wing following his recovery, and his rifles rose to eventual prominence through open government design contests and several iterations, aimed at filling the need for a hardy assault rifle after decimating battles against Nazi troops and the Sturmgewehr 44 in the Second World War.

By 1947, Kalashnikov’s journey to design the infantry rifle the Soviet Union had hoped for, meeting its needs for power and combat durability, found a fit in the model “47.” On the heels of a process that produced the AK-1 and AK-2, the AK-47, or Automatic Kalashnikov, 1947 (Avtomat Kalashnikova, 1947), combined multiple existing technologies of the time, including those of the Sturmgewehr 44, the American Remington Model 8, the M1 Carbine and the M1 Garand, the latter two standard-issue weapons of the United States infantry.

Through adoption by Russian forces and allies like China, and a robust international market, the Kalashnikov series improbably became the most popular rifle in the world and a totem of underground cultural discourse, most often emanating from the lines of hip-hop artists who reference it, but also frequently appearing in mainstream movies. It was most memorably a refrain in Ice Cube’s “It Was a Good Day,” which borrowed, even if unrealized, from the dangerous allure carried by its users: the so-called revolutionaries of the Third World and the separatists of Russian client states; well-documented terror organizations like Afghanistan’s Taliban, Palestine’s Hamas and Lebanon’s Hezbollah; and now even the many unaffiliated who simply employ it as a prime destabilizer of societies, from criminal trade organizations to modern-day nautical pirates.

Ice Cube’s, and more recently Meek Mill’s, Freddie Gibbs’s and even M.I.A.’s [on “20 Dollar”] lyrical proclamations of possessing the globally favored arm support an obvious artistic affinity for shock and for creating pangs of fear, while reflecting some well-known realities. Not to mention that such actions and motivations produce an unalloyed terror in conservative, establishment, Law and Order America, whom these artists purposely attempt to rile by getting them to quietly notice: “A menacing, (supposedly) revolting minority with a Commie gun.” And miraculously this is achieved simply by invoking this weapon of the disestablishment and of militant folk heroes. The fashionably edgy hype surrounding AK-47s is so souped-up now that images of it float virally around the Web, where it can be seen bespoke in Gucci prints and painted diamonds, used as a medium for and the subject of sculptures, and even plastered in funny hyper-paint color schemes by artists like Damien Hirst.

Despite its cultural presence, it is, for all intents and purposes, now an outlaw’s choice in the real-world calculation: the very carbine hell-staff which ominously appears in the hands of Osama bin Laden, America’s erstwhile Boogeyman of 13-plus years, as he’s seen dropping shells and dispensing slugs toward an off-screen target in completely played-out file footage, kneeling in a robe, posing like a drugstore army-man toy. Among “capos,” the heads of narco-trafficking cartels, and their minions of hired muscle in Central and South America, Kalashnikovs are a preferred tool for inducing mayhem, often purchased through straw-man sales on the American side of the border, where they are sometimes modified before being smuggled south and back into their hands. (It should also be noted that American military weapons are used by cartels as well, as the American-assisted governments’ armies and paramilitary anti-narcotics teams are infiltrated by the criminals, becoming a revolving door between the two worlds in which the lines between army and criminal blur.)

In those AK-soaked regions, where the drug trade has become so ubiquitous it is almost a wallpaper over life there, Kalashnikovs have even earned a nickname: cuerno de chivo, or “goat’s horn,” for the mold of its signature banana clip. Gold AKs are usually collected by capos as trophies, both a spoil and a talisman of sorts for the ballers and high rollers, indicating placement within a cartel’s upper-level operations. The 2003 invasion of Iraq left American forces astounded by the multiples of Saddam’s 24-karat gold-plated Kalashnikovs, as did those which came out of Gaddafi’s personal caches in Libya in 2011, retrieved from his multitudinous quarters and leading to pics of Libyan rebels posing with said rifles, looking like images lifted from Nintendo 64’s GoldenEye.

In C.J. Chivers’s history of the AK-47, The Gun, the war reporter writes that initial U.S. Army field studies judged the AK not to be a threat to Western forces, while also miscategorizing it as a “sub-machine gun.” In the typical shortsighted fashion of early evaluators of new things, weapons analysts ridiculed it for its lack of potency, perhaps because it was not seen as an assault rifle and was judged on different criteria; nor did they understand the nature of the coming guerrilla wars and their swarm tactics. In those early AK-47 years of the late 1950s, it was viewed as beneath the quality of weaponry assigned to American infantrymen. (But certainly seen as fine for rag-tag armies of a socialistic stripe.)

But the dogma on the AK was summarily disputed and overturned by those in the field. It happened first in the September 1956 issue of Guns, when regarded weapons journalist William Edwards claimed to have fired it, initially labeling it a “PPK-54” and an “Avtomat-54.” He mainly observed that it was easier to operate than the N.A.T.O. options. Positive in-the-field assessments of the Kalashnikov further congealed when Dutch forces recovered AKs from Indonesian paratroopers in Western New Guinea and praised them. But it wasn’t until American soldiers, frustrated by their newly distributed M-16s and their jams in firefights, began to peel Kalashnikovs off dead Viet Cong that the tide of the AK’s reputation began to change. In his global introduction to the AK-47 in Guns, Edwards made another observation concerning what separated its design from the service rifles of the time: the AK used an intermediate-sized cartridge (the bullet, or more precisely, the bullet’s delivery system, not as powerful as that of other battlefield rifles but more powerful than a pistol’s), which he called “a bold step towards a uniform ordnance supply.” That less powerful cartridge also produced better control, because it reduced recoil.

The proliferation of Kalashnikovs on battlefields, and later in the greater criminal culture, can be linked to some key features. First, as intended, it is reliable and highly durable because of its chromed insides and simple design, which allow it to be abused and wholly neglected but still remain operable. It also handles environmental extremes quite well; there are stories of AKs being buried beneath the sand for long periods and still firing upon retrieval. It is also uncomplicated and affordable, which has helped make it abundant, frequently reproduced and duplicated with mere scrap metal recovered from battlefields and a skilled craftsman’s touch. There is further value in its versatility, in its presentation and options, since it is a fairly short arm.

Its AKS version, originally designed for Russian paratroopers, comes ready with a collapsible metal stock, making it a terror to spot and its trafficking hard to stave off; and all AKs are able to switch from semi-automatic to automatic fire. This gives skilled and unskilled operators alike the option to be precise and conservative, or to dump rounds, depending on their needs. Most importantly, the AK is easy to use and maintain, because of its intuitive assembly and its big, limited set of parts (of which only eight of nine are moving), making it easier to clean than other weapons. It also happens to have a nearly non-existent learning curve, which has facilitated its placement in the hands of child soldiers in most of the insurgent conflicts of the 20th century. In nations like Afghanistan, where the rifle is deeply woven into a war-immersed history, it is said that its basics can be taught, even to kids, in under a full hour.

Though Kalashnikovs the world over aren’t known for William Tell-ish accuracy, because of their poor aiming system (and the unskilled who’ve tended to use them), the AK does provide a high rate of fire and is powerful, armed with a round heavier than that of its Western counterpart, the M-16. Its shorter barrel and slightly off-center charging assembly are also to blame for inaccuracy; both produce upward recoil, causing errant fire. Still, it incites incredible fear, owing to the reputations of those who’ve used it, its known power and the scads of personal stories of encounters with the fearsome weapon. Furthermore, because it is so highly identifiable, with its protruding serpentine banana clip, its fear factor only augments with time, always signifying some level of doom to come, no matter the locale.

The “Kalash,” as it is known in Russia, and its ravages are well known to those who pay attention, for any number of reasons, from human rights groups and law enforcement to aid workers and military personnel. The AK-47, its improved AKM series and the now-prevalent Chinese-produced Type 56 knockoffs far exceeded early American assessments. It is now the gun du jour of the modern age, sold for $180 to $400 in gun markets in Darra Adam Khel, Pakistan, or Bakaara, Somalia. It is the weaponized face of revolutionaries (to the point that it is even honored on Mozambique’s flag), armed resistance movements and criminals. It has sadly become the major vector of the pandemic of small-arms proliferation, with an estimated 75 to 100 million produced in total, hitting all sectors. It is, ironically, a socially irresponsible capitalist’s gem, when you think about it, created by a communist.

A worldwide fascination with outlaw culture, the AK’s availability and a user-friendly design have fostered its rise against all similar arms, with an inertia that began with newsreel footage of Vietnam beamed into homes and continued through video from Third World hot zones, a growing gang-culture problem, narco-traffickers and, concurrently, jihadists’ YouTube clips posted on message boards and blogs. All of it supports a sense of the rifle as the presumptive choice. In short time the AK became the primary rifle of the Soviet Union, her satellites and the Warsaw Pact nations, of South American drug cartels, of terrorists of all stripes and of the highly recognizable, crack-infused gangland battles of the 1980s, spreading its sales as liberally as vodka. It is now an example of a design whose success has been truly and ultimately unfortunate.

The Roots, ‘Do You Want More?!!!??!’

Do You Want More Blog Image

THE ROOTS’ DO YOU WANT MORE?!!!??! was an epiphany in the consciousness of hip-hop: a quiet-at-first sophomore slingshot (as opposed to a slump), operating as “something other than.” What “something other than” means is a myriad of things about hip-hop in general, but also about what was going on in the scene then. It was the pill that altered hip-hop’s consciousness, reality, landscape and boundaries, in many ways.

(And not just for the banal debate over “conscious raps” versus “reality raps,” of which The Roots somehow became informal flash-points and unintentional flag-bearers, after being saddled with that “conscious” tag and cited as the model for a “white-friendly” approach, specifically for their popularity with the college set.)

The album carved a new space for emcees and bands to work together, and to do so well, and it said in no uncertain terms that live instrumentation did not necessarily mean “art-house,” in the way that “art-house” becomes an albatross around the neck of hip-hop, a musical form (now) forever focused on a level of undefined, ground-level “gangsterism.” (Or at least the image, history and threats of it.)

Do You Want More?!!!??! was every bit as strong and portentous an agent of shift as Nas’s Illmatic, I believe, that holy work of urban culture’s premier (no pun intended) cipher-slaying gods, released around the same time, which morphed a rap capital city out of New York again. Though The Roots crew’s album’s import was more subtle: it was globally influencing, and not just geographically, but in its tapping of the many perspectives within hip-hop that were unrealized at the time.

While Illmatic was a strafe of listeners’ ears from a style clip first manufactured and delivered by Rakim, raining lethal projectiles from a dream team of industry giants who bandwagoned to help a prodigy in Nas, with the intention of targeting those who bemoaned New York’s deflated presence, Do You Want More?!!!??! was a subdued masterpiece which augmented Illmatic‘s pledge to resurrect the East, from a once-laughed-at Philadelphia.

The album was the first culturally important melding of raps with live instrumentation, and extension of jazz, on the largest critical and commercial scale that wasn’t a contrived product looking to pay homage or educate, a la Guru’s Jazzmatazz set. What made it work was that it was so seamless: tailored to a fast-thinking but slower-vibing cadre of aficionados, it blended young streetwise kids’ and corner hustlers’ verbalizations with the silently cool aesthetic of jazz, into a sculpture of the smoothest contours.

Do You Want More?!!!??! was less in-your-face than Illmatic, while still plenty credible in the community, something quite hard to balance: the aggression needed for core-audience credibility, asserted alongside the sophistication of the work. Do You Want More?!!!??! was less thuggish, nowhere near the autobiographical book on being that those works from young hungry wolves were, but not any less threatening. It was a meander through the city, free of drama but not disconnected, unaware or Pollyannaish.

And it was the precursor, and the period ending the sentence, that transformed a lukewarm caravan of hip-hop experimenters into a real thing: the rap band. Just a year and a half or so prior, we knew it could work by watching L.L. Cool J expectorate lyrics while backed by a live band for M.T.V.’s Unplugged series, and through Digable Planets’ very, very successful albums. But The Roots were not Ladies Love Cool James, nor fronted by a luxurious, beautiful coffee-house street poet named “Ladybug,” with skills comparably better than most guys in rap. Nope, The Roots were a knotty-dread and a core of toughs from Philly.

And their vibe was gritty and intellectual, but not in a condescending or ostentatious way. One got the feeling that they might even have been kids from nearby UPenn, freelancing without a care to “make it.” They were of a similar ilk as Digable Planets, and from relatively the same region, but not seen as much of a “rap band,” and with a perspective that seemed less suggestive of crossover appeal than “Digable,” perhaps because of their look and its semiotics, or the fact that there wasn’t a woman to soften their edges; their lyrical content was also much more bravado-laden. And what they became, as a result of masterful efforts, was the lasting symbol of a notable subset and movement of a genre.

Smart Dust


Photo Credit: List 25

ABOUT a decade ago, there was a palpable and growing excitement about the potential of populations of teamed micro-sensors, in the form of dust clouds, being dispersed all over the globe to meet all kinds of needs. It was a revolutionary, sci-fi-inspired hypothesis, which made us think of a N.A.S.A.-inspired future. The Economist in 2002, 2003, 2004 and 2010 talked of a panoply of possibilities, spurred on by the thoughts of Kristofer J. Pister, a computer science professor at the University of California, Berkeley, who fleshed out a futurist’s notion of not just hard, stable, in-house computer networks processing and powering information transfers, but billions and trillions of wireless, dust-sized microcomputers collecting, transferring and transmitting data.

This is particularly interesting now because, unlike other forward-looking pieces by The Economist, like the one on a coming 3D-printing takeover (which came to fruition much more quickly than I had expected), nearly ten years after the initial “smart-dust revolution” claims, and about 15 years after the devices were first tinkered with and researched at Cal-Berkeley, the technology still has not taken shape. “Smart dust” came out of the Defense Advanced Research Projects Agency (D.A.R.P.A.) in the late 1990s and studies by the University of California, the University of Michigan and the University of California at Los Angeles. It is all made possible by what are known as microelectromechanical systems (MEMS): miniature machines, some microscopic, built the same way integrated circuits are. As explained in “Dust That May Have Eyes and Ears,” a 1999 article from the New York Times:

Dr. Pister’s smart dust, and dozens of other tiny machines, are made possible by a technology known as MEMS, short for microelectromechanical systems. MEMS are miniature machines, some smaller than a human red blood cell, that are built in the same way as integrated circuits. Materials are deposited in a three-dimensional stack on a silicon base and whittled and shaped using photolithography, in which ultraviolet light is used for etching. An acid bath at the end washes away unwanted pieces, leaving tiny hinges, rotors or other mechanical elements of the minute silicon machines.

As futuristic as mite-size machines like smart dust may sound, several commercial applications of MEMS have already made the leap from the laboratory to the prototype stage and into production. Many more are expected in the next decade.

The Economist claimed in 2002, in its “Desirable Dust” piece, that according to a projection by Intechno Consulting, the “smart dust” industry would reach $50 billion by 2008. (Not so much, yet.) “Smart dust’s” potential military application in what is known as Intelligence, Surveillance and Reconnaissance (I.S.R.), such as the always necessary real-time data about weather, (terrorists’) and troops’ locations, assets, movements and so on, is obvious. That’s no surprise: imagining a dust cloud rolling over a battlespace that isn’t a dust cloud, in a near-ultimate form of camouflage, relaying crucial information to a battlefield general, is game-changing. Further, using such systems as early-warning sensors off coasts, as deep-penetrating signals-intelligence platforms behind enemy lines, as undetectable entities in sovereign nations, or as added security on the perimeters of clandestine bases would be ideal.

What’s more promising, though, are the potential benefits of “smart dust” to human society. Private companies are intrigued by its potential to track consumers’ buying patterns or to facilitate quicker checkout: imagine going to your supermarket and a Skynet, Terminator-ish cloud-computing architecture applying your coupons and tabulating your bill in real time, then charging your PayPal account or check card and e-mailing you the receipt. But it’s even more interesting how “smart dust” could be used for environmental or crisis monitoring, in support of governmental agencies like the Environmental Protection Agency, the National Oceanic and Atmospheric Administration and the U.S. Geological Survey, or the United States Agency for International Development in disaster and war-torn areas.

In C.N.N.’s ” ‘Smart Dust’ Aims to Monitor Everything,” there was discussion of the many potential systems that could be created in what might more precisely be labeled “wireless-sensor networks” or (smart) “meshes,” as C.N.N. implies some scientists are arguing for a smaller-bore descriptive moniker, since the technology doesn’t deliver Kristofer J. Pister’s theorized dust clouds and tends to diverge from the mental images it once conjured:

The sheer number of sensors in the network is what truly makes a smart dust project different from other efforts to record data about the world, said Deborah Estrin, a professor of computer science at the University of California, Los Angeles, who works in the field.

Smart dust researchers tend to talk in the millions, billions and trillions.

Some say reality has diverged so far from the smart dust concept that it’s time to dump that term in favor of something less sexy. “Wireless sensor networks” or “meshes” are terms finding greater acceptance with some researchers.

Estrin said it’s important to ditch the idea that smart dust sensors would be disposable.

Sensors have to be designed for specific purposes and spread out on the land intentionally — not scattered in the wind, as smart dust was initially pitched, she said.

Though “wireless-sensor networks” doesn’t sound as sexy, and in fact comes off hum-drum and straight boring to the ear, it’s a fair request. And while we haven’t seen the revolution of “smart dust,” at least not in the way of something out of a Philip K. Dick novel, the field has been making strides, with an ambitious project by Hewlett-Packard. As C.N.N. points out:

The latest news comes from the computer and printing company Hewlett-Packard, which recently announced it’s working on a project it calls the “Central Nervous System for the Earth.” In coming years, the company plans to deploy a trillion sensors all over the planet.

The wireless devices would check to see if ecosystems are healthy, detect earthquakes more rapidly, predict traffic patterns and monitor energy use. The idea is that accidents could be prevented and energy could be saved if people knew more about the world in real time, instead of when workers check on these issues only occasionally.

The New York Times also ran articles on “smart dust,” and in 2010’s “Smart Dust? Not Quite, but We’re Getting There,” it provided some reasons why the revolution hasn’t taken shape as presaged. The main obstacle to ubiquitous wireless-sensor networks is power. While “smart dust” sounds amazing, these tiny instruments would actually have to carry batteries, and so, for now, they would have to be much bigger than flecks of micro-particles. (Maybe more like grapefruits or, if progress is good and fast enough, grapes. At least to start.) The bottleneck is not so much the microprocessors, as one might presume; almost any article on “smart dust” speaks, without fail, of Intel co-founder Gordon E. Moore’s famous observation, known as “Moore’s Law,” which recognized the trend of computing’s processing power doubling every two years. As The New York Times implies, this power obstacle is changing:

Power consumption has long been the Achilles’ heel of sensor-based computing. Smart dust, observed Joshua Smith, a principal engineer at Intel Labs in Seattle, proved impossible because the clever sensors needed batteries. Instead of dust, he said, the sensor nodules would be the size of grapefruits.

But the power barrier, Mr. Smith says, is rapidly eroding. Advances in sensor chips are delivering predictable, rapid progress in the amount of data processing that can be done per unit of energy. That, he said, expands the potential data workloads that sensors can handle and the distance over which they can communicate — without batteries.

At Intel, Mr. Smith is doing sensor research that builds on commercial RFID technology (for remote identification) and adds an accelerometer and a programmable chip — in a package measured in millimeters. Its power, he explains, can come from either a radio-frequency reader, as in RFID, or the ambient radio power from television, FM radio and WiFi networks. (For the latter, Intel is developing “power-harvesting circuits,” he adds.)

“The ability to eliminate batteries for these sensors brings the vision of smart dust closer to reality,” Mr. Smith says.

In this model of computing, the sensors are servants. They exist to generate data. And the more sensors there are, the better the data quality should be. When mined and analyzed, better data should in turn help people make smarter decisions about things as diverse as energy policy and product marketing.
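The two-year doubling cadence that “Moore’s Law” describes, invoked in nearly every smart-dust article, can be sketched as a toy projection. The function name, the starting figure and the units here are illustrative assumptions, not data from any of the cited pieces:

```python
def project_capability(baseline, years, doubling_period=2.0):
    """Project growth under a Moore's-Law-style doubling trend.

    baseline: capability at year 0 (arbitrary units)
    years: how far ahead to project
    doubling_period: years per doubling (2.0 is the classic figure)
    """
    return baseline * 2 ** (years / doubling_period)


# Illustrative only: a sensor processing 1 unit of data per joule today,
# projected a decade out under two-year doubling, handles 32 units.
print(project_capability(1.0, 10))  # 32.0
```

The same arithmetic is what makes the battery problem a moving target: even if the sensors themselves shrink slowly, the work they can do per joule compounds exponentially.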

In our idealized world, I think we can all agree that such an invention, especially when put to work on the fronts of environment, infrastructure and humanitarian crises, could be a giant step forward. “Smart dust” would help us understand things so deeply that there could arise an Internet of the physical environment, as theorized by many scientists, in which we would be immersed in searchable, cataloged, real-time data on everything from our bridges and buildings to the local food supply. In the New York Times’s “Dust That May Have Eyes and Ears,” there is also the idea of “smart dust” teams working together: one team of sensors handling one duty, such as being sprinkled around the perimeter of a military base and relaying vehicle movements on a battlefield, and another team being deployed to ask what the first team may have seen.

From an intelligence-gathering perspective, while this may add unnecessary nodes to what is known as the “intelligence loop,” it makes sense that this could or would be the case in a real-world application, as it reduces the number of personnel needed to analyze all of that data, which would otherwise create information overload, and pares down what is known as “chatter” (or unnecessary information). This is also a great model for conducting searches for missing children, or manhunts, where “smart dust” networks could be peppered across multiple areas of interest and probability, providing even more eyes and ears. Regardless of the lack of full development and use, and maybe even the costs, “smart dust,” or whatever name it may go by in the near future, presents so many intriguing possibilities that it cannot be ignored. At some point, this will be the next explored frontier of our Information Age.
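The two-team idea, one set of sensors recording locally and another querying what the first has seen, can be sketched as a toy mesh in a few lines. Everything here (the `SensorNode` class, `record`, `query`) is a hypothetical illustration of the concept, not any real sensor platform’s API:

```python
class SensorNode:
    """A toy mesh node: it records observations and answers peer queries."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.log = []  # observations stored locally on the node

    def record(self, event):
        self.log.append(event)

    def query(self, predicate):
        # A second "team" of nodes asks this node what it has seen,
        # pulling only matching events instead of streaming everything.
        return [e for e in self.log if predicate(e)]


# Perimeter nodes record passing vehicles; a querying pass asks only
# for trucks, paring down the "chatter" before it reaches an analyst.
perimeter = [SensorNode(i) for i in range(3)]
perimeter[0].record("car")
perimeter[1].record("truck")
perimeter[2].record("truck")

sightings = [e for node in perimeter
             for e in node.query(lambda e: e == "truck")]
print(sightings)  # ['truck', 'truck']
```

The design choice mirrors the intelligence-loop point above: filtering at the node keeps the data volume (and the personnel needed to sift it) proportional to what matters, not to everything the mesh observed.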

On ‘The Wire,’ Season 1


DOING THIS six years late, particularly after these recent remarks*, reminds me (somewhat) of how, as a condition of certain behavioral treatments and recovery from addiction, people attempt to rectify the things they said and did when they weren’t on the straight and narrow with a mere apology, sometimes adding full acts of contrition, only to find those harmed or offended holding misgivings about full-hearted forgiveness. While it’s not that dramatic, considering my political and cultural interests and erstwhile sociopolitical musings, not having uttered a word-processed sentence on The Wire is an unfathomable sin of omission.

Reviewing David Simon and Ed Burns’s drug war opera seems imperative to me, because it covers so many of our contemporary sociological issues. As a series, it is so sweeping in its legal, institutional, criminal, surveillance and law enforcement scope that it has no equal, though similarly formatted “wide-angle” procedurals in Britain (Traffik) and Canada (Intelligence) provided similar investment-payoff ratios and edification on social matters. It might even lay claim to that unquantifiable, endlessly volleyed title of “best television show ever.” It may not be the most exciting show, and it certainly was not the most watched, at first — its reverence largely cordoned off by gender, education level and access to premium cable, along with the “media intelligentsia” and “intellectual snob” varieties we snicker at — but it is probably the best combination of an ambitious, entertaining, dramatic and accessible work on one of the greatest public policy debates of these times.

The series opens with Jimmy McNulty, an on-the-outs, swashbuckling Baltimore homicide detective, sauntering into a city courthouse to observe the verdict in the murder trial of D’Angelo Barksdale, a nephew of the not-yet-known West Baltimore drug kingpin Avon Barksdale. D’Angelo is found not guilty — skating on a murder rap under odd circumstances — when the state’s key witness amends prior statements while on the stand, leaving him to slide off into the sun. (And back to slinging again as a street-level manager in one of Baltimore’s deadliest housing projects.)

McNulty, in turn, being a crusading Don Quixote who strives for absolute right in his professional world, is frustrated by the outcome. And beyond questions of justice, McNulty’s personal life is a just-stepped-on pressure plate attached to an I.E.D. of his own making: his wife has divorced him and hectors him, he drinks his nights away, and due to his long hours he barely sees his sons. He’s basically looking and feeling like a fuck-up. In Baltimore’s world of overworked murder case-crackers, however, McNulty is “good poh-leece” — a phrase uttered in the series with a colloquial cadence, a complimentary title ascribed to the get-it-done-right cops.

McNulty, on a hunch and with the bad taste of the D’Angelo Barksdale ruling still in his mouth, decides to enlist an old friend: a judge who, to his own political detriment, agrees to green-light a sprawling investigation, culling a rag-tag bunch from across the divisions of the department to look into D’Angelo’s uncle, Avon Barksdale, and his associates — all on the possibility that a state’s witness in a murder trial was intimidated on the orders of a man who oversees a narcotics ring in a housing project, and whose enterprise carries several murders to its name.

From there, the onion layers peel by the episode. Over thirteen chapters, the Barksdale organization’s criminal and economic influence — totaling in the double-digit millions in Baltimore’s dope trade and washed through multiple fronts — sketches a complex, multigenerational drug operation, as viewers witness its keen abilities to elude law enforcement, sell in public, mount counter-surveillance and prosecute Baltimore’s street code.

Told through season one’s arc are the “inside baseball” games of Baltimore’s police: they work within a counterproductive, results-based tracking of cases that focuses simply on the rate of “clearances” (closed cases), which creates internal and citywide political pressures that thwart honest police work, as lazy and case-trading officers worry about their own clearance-rate numbers and careers. It is as if the constant standardized testing in schools, which takes away from lasting teaching**, had been transplanted onto law enforcement.

Viewers of The Wire are also privy to a less flattering depiction of the everyday work life of detectives who disdain truly grueling investigative work and avoid prima facie unsolvable cases — not-so-affectionately referred to as “whodunits” and frowned upon by division heads, since those cases consume inordinate amounts of time and are no stopgap against the cascade of murders in a murder-rich environment. The Wire also demonstrates how interwoven personal ambitions can hurt what small progress and peace can be achieved on the streets through policing, as much as incompetence and a constellation of long-set-in institutional dysfunctions do.

Among the many strengths of its inaugural season is a feat that its forerunner among H.B.O.’s acclaimed flagships of original programming, The Sopranos, had achieved: humanizing the supposedly utterly despicable high-level criminal. Just as we saw the smarts and skills necessary to juggle family dynamics and crime with Tony Soprano, the audience of The Wire similarly bears witness to D’Angelo Barksdale and the operation’s kingpin, Avon Barksdale, along with the sharply cerebral lieutenant, Stringer Bell.

Watching The Wire is a Rorschach test of personal perspective. For the 17-year-olds who watch Cocaine Cowboys with romantic awe and aspirationally bounce to Rick Ross and The Clipse, it is another engrossing, Grand Theft Auto-ish way to escape the mundane aspects of life. For politicos and consumers of ideas and information, it is a confirmation of data parroted over the last two decades about just how much is wrong with drug enforcement policy. For lay viewers, less plugged into the realities of an American criminal underclass — both at the popular culture level and as the targets of actual policy — this show could simply be a Bible on the matter.

And for those at the intersections who hold a reformist’s bent toward what is quickly becoming the nation’s least talked about, most important domestic issue — aside from security, though the two are entwined — the first season of The Wire is a realization of just how much must happen, just how many pieces of a puzzle must fall into place, in order to achieve significant results against the tide of illicit drug markets. But, real shit: no matter who watches The Wire, they will know that the War on Drugs has failed, and they will know it intuitively, without a political harangue being bellowed between the lines of its characters.

According to a Pew Research Center study conducted eleven years ago, 74% of Americans agreed with the statement “We are losing the drug war,” and 74% also agreed with the statement “Demand is so high we will never stop use.” In 2012, it’s hard to imagine people would agree any less. The incarcerations of this war have amounted to nothing but a war on minorities, and 2011 data from the Bureau of Justice Statistics at the Department of Justice*** supports this. In December 2011, blacks incarcerated on drug sentences at the state level were an estimated 21.1% of the total state prison population, and an estimated 19.5% of those incarcerated on drug offenses in state prisons were Latino.

Prisoners under federal jurisdiction who were sentenced on drug offenses amounted to 96,735 in 2009 and 94,472 in 2010. Those federal drug incarcerations dwarf the next-highest category for those same years, public order offenses — a category comprising immigration crimes, weapons charges and a miscellaneous “other” — which amounted to 63,714 in 2009 and 65,873 in 2010. A New York Times op-ed from July, “Numbers Tell Failure of Drug War,” further hammers the unnecessarily high incarceration rate home:

And the domestic costs are enormous, too. Almost one in five inmates in state prisons and half of those in federal prisons are serving time for drug offenses. In 2010, 1.64 million people were arrested for drug violations. Four out of five arrests were for possession. Nearly half were for possession of often-tiny amounts of marijuana.

Harry Levine, a sociologist at Queens College of the City University of New York, told me that processing each of the roughly 85,000 arrests for drug misdemeanors in New York City last year cost the city $1,500 to $2,000. And that is just the cost to the budget. Hundreds of thousands of Americans, mostly black and poor, are unable to get a job, a credit card or even an apartment to rent because of the lasting stigma of a criminal record for carrying an ounce of marijuana.

There’s just no way around it: in analyzing that data, or in watching The Wire, one cannot help but see that this war is a futile endeavor. The Wire‘s inaugural season ends, after a painstaking, 13-hour (in viewer’s time) dragnet, with a kingpin and his third in command in custody. Yet little changes on the street: another generation of youth who grew up as junior associates in Barksdale’s company happily move up the organizational ladder as reward for their willingness to commit rather gruesome acts — all of them perfectly trained in what is not much different from an internship, only the lessons are in the drug economy. The young bucks who take the hard lessons from their imprisoned boss, and the new leadership, keep the wheels of narcotics sales moving just like a corporation post-crisis, holding true to their foot-soldier status and newly appointed mid-level manager positions in a literally cut-throat industry.

In its present day of 2002 — a decade ago now — The Wire‘s first season served as a journalistically mirrored artist’s depiction. But not much has changed in the very world it attempted to accurately portray with passion and a journalist’s scope. It is a meditation on many of the Drug War’s fronts still suspended in time, preserved in amber: a fossil of urban decay and the permanency of the institutional failures of then, which are so wholly indistinguishable from the now. It might as well be a time capsule from last week in many cities.

The neighborhoods in our poorest communities are stocked with kids from families who’ve done nothing but sell and hustle drugs in some way — as D’Angelo Barksdale points out in one scene, laying out the matrix of his notorious, multigenerational family business — all of them with no real hope of the legitimate path, mostly due to a generation of draconian incarceration policy, financially abandoned communities and failed educational systems. And so they are left to embrace the only opportunity they believe they have to make it in the world. For them, the drug game is the company in their company town.

The takeaway from season one isn’t a shining coat of arms for cops, Baltimore or the American central city. It’s an indictment of the War on Drugs, its once-good intentions become that platitude of a “path to hell.” From the many failures of policing, to the unfortunate, disconnected and deeply segregated communities that sprang up in housing projects — a lesson in what not to do in law enforcement and urban planning — season one of The Wire is an examination of what can be done to effect small change, while never truly dismantling illegal drug markets.

And to be frank, it is hard to grasp what the objective of the War on Drugs was other than to curtail abuse and sale, since outright eradication seems ridiculously unrealistic. You soon realize, in watching The Wire, that what the War on Drugs is now — an unending loop of incarcerations and escalating violence — can’t remotely be the objective.

The War’s blowback counterproductively provides life support to all kinds of ills, from global terrorism (whose networks use the sale of drugs to fund themselves) to grotesquely violent narco-terror. It is also the lifeblood of all sorts of gangs and of the interrelated nodes of criminality in between: from the elastic murder figures of some cities to, partially, why there are so many guns on the streets. One of the most common ways to make money on the streets is drug sales, and the trade’s most important personal defense tool — against the risk posed by other drug pushers, and by addicts hunting an easy fix — is the handgun.

Further, the war is exactly why there’s so much money involved and why the stakes are so high. Because — to bring in the fundamentals of supply-and-demand economics — the drugs are not so hard to secure and their demand isn’t remotely in danger of drought, it is safe to presume that the inherent risks involved are what make for the cartoonish amounts of muscle and firepower. It’s the cost of conducting business, and the stakes rise with every successively higher level of product and dealing that dealers commit to. These paragraphs from “Numbers Tell Failure of Drug War” are precise:

Yet the presidential elections on both sides of the border offer a unique opportunity to re-examine the central flaws of the two countries’ strategy against illegal narcotics. Its threadbare victories — a drug seizure here, a captured kingpin there — pale against its cost in blood and treasure. And its collateral damage, measured in terms of social harm, has become too intense to ignore.

Most important, conceived to eradicate the illegal drug market, the war on drugs cannot be won. Once they understand this, the Mexican and American governments may consider refocusing their strategies to take aim at what really matters: the health and security of their citizens, communities and nations.

Prices match supply with demand. If the supply of an illicit drug were to fall, say because the Drug Enforcement Administration stopped it from reaching the nation’s shores, we should expect its price to go up.

That is not what happened with cocaine. Despite billions spent on measures from spraying coca fields high in the Andes to jailing local dealers in Miami or Washington, a gram of cocaine cost about 16 percent less last year than it did in 2001. The drop is similar for heroin and methamphetamine. The only drug that has not experienced a significant fall in price is marijuana.

And it’s not as if we’ve lost our taste for the stuff, either. About 40 percent of high school seniors admit to having taken some illegal drug in the last year — up from 30 percent two decades ago, according to the Monitoring the Future survey, financed by the National Institute on Drug Abuse.

The use of hard drugs, meanwhile, has remained roughly stable over the last two decades, rising by a few percentage points in the 1990s and declining by a few percentage points over the last decade, with consumption patterns moving from one drug to another according to fashion and ease of purchase.

A pithy observation to glean from The Wire‘s opening act is the cunning and intelligence needed to run a high-level drug ring: the logistics, the precautions against an in-place legal framework with robust investigative methods and resources, the unpredictable and wildly inconsistent human element that comes with one of the highest-stakes businesses around. It seems that if all things were equal, and if participants had been steered toward another path in a different environment, many of these fictional characters would have succeeded somewhere else in life, on much more linear paths. And it’s easy to assume the same of those involved in the actual drug trade, outside the capsule of an Ed Burns and David Simon teleplay, since The Wire isn’t some fantastical display of police cunning pitted against truly daft criminals. It is a portrayal of humble triumph, achieved by many smart police and well-honed investigative methods over institutional dysfunction and a local drug franchise weighed down by human variables. And even then, that operation wasn’t dismantled.

It’s obvious that economically downtrodden communities with nothing to lose tend to have more visible drug abuse — in the form of open sale — and are more actively policed. But what’s especially awful is that the victims of the War on Drugs are victims of the system already, before they even take up the trade, so the war is in effect a double whammy. These are the working class and poor, suffering from generational divestment in the inner cities, outsourcing, the loss of jobs to increasing computerized efficiency, widening education gaps, a digital divide and the outright death of American manufacturing — all ripping a hole in the lower end of the American economy.

That inequality only creates incentive to use as a way to cope, or to participate in an otherwise booming, recession-proof, tax-free business with low barriers to entry — one which also happens to act as a trap. Any “War on Drugs” would generally be, in some way, a war on the poor. Sure, there’s some rhetoric in there; but the entire War on Drugs enterprise has provided only a shaky amount of justification and far too many inconsistencies. (Such as the once-active sentencing disparities — amended by President Obama through the Fair Sentencing Act — between powder cocaine and the more urban-consumed crack, which left crack offenders facing penalties equivalent to those for 100 times the carry-weight of powder cocaine.)

And much like in The Wire, in real life there is pressure on law enforcement to “put drugs on the table” — to show the fruits of drug busts garnered from stash-house raids and the like — in order to win the very obvious political game of police captains and mayors. The war has simply been far too narrow, focusing on small-time busts rather than the elevated, out-of-reach kingpins. If wars, as one of the younger detectives in the first season notes, have a beginning and an end, then this hopeless undertaking is no war at all.

Like the “War on Poverty” or the “War on Terror,” it is a twilight struggle; but unlike the one aimed at poverty, as social policy it has mostly wreaked havoc — and you could argue the same of the War on Terror. This war has not provided tangible evidence of enough actual lives saved, or of order being upheld or even established; to merit its continuance as currently constructed seems illogical.

Add to it all the not-yet-mentioned: mandatory sentencing; a lack of drug education; porous borders that facilitate trafficking; our open society, whose unintended consequences have greased the rails for smuggling operations; the shame our culture places on those affected by drugs; our aversion to focusing on prevention, along with a lack of focus on treatment; and finally the prison-industrial complex — all of which have placed the nation in this awful place. No family, community or city has been left untouched, it seems. The Wire hits on so many of these elements in just its first stanza, as it connects the dots while providing a vicarious experience, in the form of a dramatic arc, for all those informed and those with a mere passing knowledge of what has gone on in the American city and in law enforcement policy over the last three decades.

In my defense against David Simon’s protestations and “weariness” about the whole cottage industry of blogging about The Wire: as a “blogger”/writer on the topic, I was too young and in college during The Wire‘s early run. And I would further argue that blogging about The Wire without the totality of my education in sociology and political science would not have been skillfully executed. As Simon argues, the series really does not pay out in the minutiae or in end-of-season calculus, but in the sum of its parts — its Gestalt.

** Right table, “Standardized Testing : Con,” bullet points 4-7.

*** The report was edited and updated in February of 2012.

Understanding Keynes’s Expertise in These Dire Times


Photo Credit: The New Yorker

PRAISING the ideas of novelist Ayn Rand and her principles of a completely free market has become a motif of the 21st-century Tea Party and the newish political right, but it was already a growing fashion among the university-educated after the 1950s and 1960s, when the Captains of Industry began to be viewed as unquestionable gods and government influence on the economy began to be treated with suspicion. (Especially as America’s economic downturns of the 1970s began*.) That middle-to-late-20th-century fashion then became apparent in the deregulation and neoliberal policies of the century’s close.

Worldwide postwar development and the fat times seemingly erased the memory of government intervention in the economy during the Great Depression, which had slingshot into prosperity the Western democracies that committed to regulating and intervening in their economies. And the larger ideological battle of the Cold War may have further contributed to the unease, in the ’50s and ’60s, about governments influencing the economy, as market interventions became conflated with communism.

However, the excesses of the 1980s — best characterized by aspirational yuppies largely devoid of civic leanings, slick businessmen and the Savings & Loan Crisis, and perhaps by the apropos film Wall Street and its undeniably symbolic, “Greed is good”-asserting Gordon Gekko — helped erode the rosy vision of unfettered capitalism and the vaunting of the free market. The anti-corporate (and maybe post-corporate) subcultures of the 1990s and early 2000s, which looked to deconstruct the traditional corporate institution, further managed a less sterling perspective. And the rise of greenwashing and the ethos of a new age of “corporate responsibility” produced a new impression: that even within the corporation, there was some thought given to the possible negative effects of unregulated capitalism — or, at least, to its effect on corporations’ reputations.

During that time, recognition of scandals involving the environment, workers’ rights and human rights — left largely unaddressed by governments, and viewed by some as unintended consequences and by others as the actual blind spots of unchecked capitalism — began to gain currency. Which is where we are today: wide agreement that capitalism is an undeniable good, but that unchecked capitalism is less good, if not outright bad. Yet, as previously mentioned, many now tout — particularly on the right — those once-fashionable Randian ideas again, seemingly out of the blue, partly in opposition to a believed-to-be socialist president. These figures argue, while advocating austerity measures, that capitalism left nearly wholly unregulated is sacrosanct and ideal.

To many supporters of the unshackled market, there is even an assertion that, in concert with faith organizations, nonprofit organizations (N.P.O.s) and nongovernmental organizations (N.G.O.s), it can solve almost anything and everything on its own — from the ills of social inequality to the problems of the market itself — and so the sorts of regulation and intervention supported by those on the left aren’t needed. The belief among free-marketeers is that firms which practice economically unfeasible business, or even practice discrimination, cannot and will not survive the market; the eventual dissolution of firms unable to weather the market’s buffeting produces a self-perfecting system.

This belief, though, has not matched the historical record. Companies have practiced discrimination (i.e., “redlining“), acted unscrupulously and even incompetently, while still turning a profit. The notion of a self-regulating, self-perfecting market rests on basic economic principles like “the invisible hand” and “economic rationality,” which propose that measurable goods redound to society from individuals and corporations simply doing what’s best for themselves. That argument is increasingly problematic in light of the recent financial crisis. Those who pursued their own interests in a less regulated market (thanks to the repeal of measures like Glass-Steagall) — whether by taking on home loans they had no concrete means of repaying, or by doing what was economically best for their businesses, as when investment banks bundled risky loans to mitigate risk and traded them — fed a larger problem: an overleveraged American housing market and banking sector. Ultimately this aggregated personal and corporate gambling into a historic disaster.

British economist John Maynard Keynes and his ideas on government’s role in rescuing economies from collapse (known largely as “Keynesianism,” or seen abstractly in policies like quantitative easing) are the accepted macroeconomic opposition to the free-market, neoclassical and neoliberal principles of the Tea Party and the Ayn Rand proponents. The Keynesian perspective on political economy was the dominant theory for much of the 20th century. It was retreated from, off and on, over the last forty years, with the Keynesian rejection of the ’70s lasting until recently in much of the policy discourse. But the 2007–2010 financial crisis revived it, and even lay people supported its principles, as the Pew Research Center notes, especially in regard to the auto bailout. The actual stimulus is less supported: in that same study, only 39% agreed with the full recovery measures in February, while 56%, according to Pew, favored the auto-industry rescue in February of 2012. And to be an honest broker in these matters, the stimulus (officially the “Recovery Act“) is the more purely Keynesian of the two, so this isn’t a complete adoption of Keynes’s ideas by the general public.

Keynes’s general theory of government intervention centers on the responsibility to spend in crises, even when there seem to be no means to do so. This intervention comes, in theory, in the form of spending and public works projects, which Keynes argued were critical in determining the fate of severely hampered economies. Although today’s Keynes-like measures are weighed against and blunted by their unfortunate political ramifications, with little regard for their full intended economic effect, even in their dilution, three years after implementation, Keynesian policy has worked — preventing an absolutely horrible economic situation from becoming another Great Depression. Much of the current free-market critique on these matters relies on a clever bait-and-switch, arguing that Keynesian policy has failed because employment has yet to reach pre-Great Recession levels. (There is additionally the stubborn notion that President Obama said unemployment would fall to 5.4% as a result of the measures. In fact, he never said that; it was a projection, with caveats, by surrogates analyzing the measures’ potential impact.) The critique also fails to account for the magnitude of the financial crisis, for how much these measures were limited by the political environment, or for the fact that we did not all end up in the breadlines after hemorrhaging 800,000 jobs per month.

Why is Keynesian-style policy frowned upon and seen as an out-of-control big government’s infringement on “freedom”? Is it really the idea that the market is simply not to be interfered with, even when, in dire straits, the market presents negative outcomes for a nation? To me, it just can’t be that; it is improbable that anyone could hold such a cold, winner-take-all, “that’s the way the cookie crumbles” view of economic consequences. I believe the rejection of Keynesianism and the embrace of the deregulated free market come down to the latter being simple to digest and explain: a perspective which views economies as independent microclimates free of government influence, where companies simply rise and fall on their own talent. But economists on both sides of the divide believe in some form of intervention during downturns. Conservatives generally regard tax cuts as the proper response in crises, and liberals propose that government spending is crucial to animate otherwise decelerated economies.

To free-marketeers and fiscal conservatives, a “free” economy is an extension of their view of the economy at the micro level — the idea that we rise and fall as individuals within capitalism based simply on a combination of talent and effort. But such rhetoric does not account for the nuances of a person’s life that have less to do with the individual and more to do with institutions or biography; nor for the nuances of a highly interconnected global economy; nor for a national economy in which what happens in investment banking can affect something as seemingly removed as the aerospace industry — and, more linearly, in which what happens to home mortgages affects construction workers. Nor does that perspective take into account the ability of a nation in good times, especially a global leader whose currency (in America’s case) is the world’s standard, to pay down the debts it incurred during bad times.

Much of the rhetoric on this matter views the math of governance too simplistically, assuming that what is true at an extremely granular level — with allusions to the government budget being similar to our personal checkbooks and family budgets — translates directly to macro-level analysis. Government intervention in financial crises is simply necessary when global economies fall apart and are trapped in the resulting ugly cycle: businesses unwilling to spend and hire amid the uncertainty of the times, and banks failing to loan. All of this further wears on individual confidence, as people hoard money or make runs on the banks, threatening to bankrupt the entire banking sector. That situation creates stagnation, which becomes a tough economic rut — no incentive for businesses to hire or spend, a potential death of the banks, a drought in the flow of capital — with no end in sight.

But the complexity of explaining how economies work when things go bad, and how they get out of it, becomes a stumbling block for Keynesian measures. When things are difficult, the simplest understandings or arguments — no matter how wrong — tend to win the day. That is why the eleventh-hour emergency stimulus of 2009 inexplicably involved tax cuts: to quell fiscal conservatives and, in that emergency, the wrong-minded, long-term-focused deficit hawks on the right, and to make the package more palatable to those dismissive of Keynes’s economic philosophy. During the most recent crisis there were even those who publicly argued to let the auto companies fall and the banks fail — a Republican presidential candidate, in fact, had done so — as though it were simply Social Darwinism at play. But with a housing market already set ablaze by the mortgage crisis, with investment banks like Bear Stearns, which had bet on aggregations of risky home loans, falling, and with firms in tremendous, nation-threatening trouble, like American International Group, faltering, allowing the auto industry to fail would have been apocalyptic. Losing the American auto sector would have smashed whatever confidence was left in the economy and psychologically devastated the country.

But some of those auto jobs were indeed saved, and the auto industry is now on sure footing. Keynes has proven right over and over, as the bank bailouts, the stimulus and the auto bailout show. Are his propositions ideal? No. But they are generally for times of crisis. Sending messages, particularly to large banks and companies of significant national interest, that they can gamble and play loose in pursuit of profits because the government will be their safety net, is scary — particularly as it holds American citizens hostage to companies’ desires and to the honorability of banks.

Keynesian measures, though the most needed in these stunted economic times, are often tarred as "communist" and anachronistically viewed as a distinctly un-American approach, even though they were employed to great success in the New Deal. Similarly, the current stimulus, the Troubled Asset Relief Program, and the recent bank and auto bailouts were, to a degree, seen as bad politics. But those cash infusions and public works projects get people back to work and produce jobs quickly, and they deliver an economic shot in the arm whose effect is amplified (known as a "multiplier"), leading to more spending by citizens and more revenue for struggling economies in the form of paid taxes. As The New Yorker points out in "What Would John Maynard Keynes Tell Us To Do Now?":

This jibes with history. Immediately before and during the Second World War, the U.S. government borrowed unprecedented sums to finance the military buildup, and the economy finally recovered from the Great Depression. In 1937, one in seven American workers was jobless; in 1944, one in a hundred was. A wartime economy may present a special case, but a recent working paper published by the National Bureau of Economic Research looked at data going back to 1980 and found that government investments in infrastructure and civic projects had a multiplier of 1.8—pretty close to Keynes’s estimate.
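The arithmetic behind the multiplier mentioned above is a geometric series: each dollar the government spends becomes someone's income, a fraction of which (the marginal propensity to consume, or MPC) is spent again, and so on. A minimal sketch, using hypothetical numbers chosen to reproduce the 1.8 multiplier the NBER paper estimates:

```python
# Illustrative sketch of the Keynesian spending multiplier (hypothetical
# numbers, not from the article): initial spending is re-spent in rounds,
# each round shrunk by the marginal propensity to consume (MPC).

def total_spending(initial: float, mpc: float, rounds: int = 1000) -> float:
    """Sum the rounds of re-spending: initial * (1 + mpc + mpc^2 + ...)."""
    return sum(initial * mpc**k for k in range(rounds))

def multiplier(mpc: float) -> float:
    """Closed form of the same geometric series: 1 / (1 - MPC)."""
    return 1.0 / (1.0 - mpc)

# Back out the MPC implied by a 1.8 multiplier, then trace $100 of
# hypothetical public-works spending through the economy.
mpc = 1 - 1 / 1.8
print(round(multiplier(mpc), 2))                 # 1.8
print(round(total_spending(100.0, mpc), 2))      # ~180.0 of total activity
```

The point of the sketch is only that a modest re-spending fraction compounds: $100 of stimulus generates roughly $180 of economic activity when the multiplier is 1.8.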

It's ever harder to argue against Keynesianism, whose thought was central to policy makers before the retreat toward neoliberalism and this newest rise of austere deficit hawks and Tea Partiers. Keynes's success was seen across the globe in the postwar period, in the march out of the Great Depression, and in Japan's financial crisis of the 1990s, and it has occurred again in our recent trouble, saying this cautiously, as growth is steady (taking into account the stock market and consecutive months of private-sector growth) but limited, in the Great Recession.

If one believes things were wretched these last couple of years, as they were and currently are compared to previous domestic output, imagine what the business world and economic climate would have been like with even more people out of work, had nothing been done. Keynes's opponents are arguing for a mere possibility: the belief that a government that did not intervene, other than by cutting taxes, would have produced a better outcome than what actually occurred, which is that we avoided another Great Depression.

The bailouts and stimulus undoubtedly kept jobs from disappearing. The nonpartisan Bureau of Labor Statistics found in 2011 that in the nine months after the auto bailout, the industry added 45,000 jobs. After more than $80 billion in government aid, the American auto manufacturers were all found to be in the black last year, and that money was paid back. A simple historical analysis says that Keynes simply understood the economy. His philosophy, that government must intervene in times of tremendous economic woe, when the private sector no longer holds the reins of an out-of-control economic system, nor the money, nor the gumption and desire to assist it, being mostly self-interested, plays out in most of the financial crises of our times. As The New Yorker points out again, Franklin Delano Roosevelt held out against Keynesian measures for far too long, precisely because F.D.R. could not buy the complicated argument for what is also called deficit spending as a remedy for a lack of aggregate demand (meaning a slowed economy):

In Keynes’s day, many people—including politicians sympathetic to Keynes—were suspicious of the multiplier. The whole thing smacked of sophistry. Wapshott, in a long overdue and well-researched book that usefully gathers together much hitherto scattered information, recounts Keynes’s 1934 visit to the White House, where he expounded the logic of the multiplier to F.D.R. After he left, Roosevelt remarked to Frances Perkins, his Labor Secretary, “I saw your friend Keynes. He left a whole rigmarole of figures. He must be a mathematician rather than a political economist.” Despite the enormous public-works projects of the New Deal, F.D.R. didn’t formally adopt deficit spending as a policy tool. Indeed, he kept a keen eye on the red ink. In 1937, with the economy on the mend, he ordered tax hikes and spending cuts, which caused the economy to crater again. President Truman was even more suspicious of Keynesian theorizing. “Nobody can ever convince me that Government can spend a dollar that it’s not got,” he told Leon Keyserling, a Keynesian economist who chaired his Council of Economic Advisers. “I’m just a country boy.”

It turns out Keynes not only understood macroeconomics; he also had an excellent grasp, not surprisingly, of microeconomics, excelling at personal investment, according to the Atlantic in "John Maynard Keynes Was the Warren Buffett of His Day." His record is yet another buttress for the philosophies he espoused, against the constant calls for cutting and nonintervention in the midst of recession, and evidence of his innate feel for such a murky subject. With Warren Buffett standing as the great and honorable standard-bearer and outlier of American business today, a man who actually advocates Keynes-like measures, argues for economic justice, and is often seen as the kinder face of the business world, not to mention a down-home, everyday sage of economic matters, it is interesting to note that Keynes's ability to pick the winners of the market was even better than Buffett's much-touted ability. Both applied similar understandings of value in a precinct where gambling is a particular characteristic disposition. As the Atlantic points out:

Keynes famously took a dim view of how well unregulated markets allocate capital. “When the capital development of a country becomes a by-product of a casino, the job is likely to be ill-done,” declared Keynes in The General Theory of Employment, Interest, and Money. The problem is that the mania of the markets often drives prices away from their fundamental values. Keynes compared it to a newspaper beauty contest. Imagine a competition where you’re supposed to pick out the six most beautiful faces from a group of 100. But remember, this is a contest. The winner is the person who picks out the six most popular faces. It’s not a question of which faces you think are the most beautiful. It’s a question of which faces you think other people think are the most beautiful. Of course, being a savvy individual, you realize this — so you pick the faces you think other people will think other people will pick. And so on, and so on. The same applies to stocks. [...] Of course, there’s a third option: buy undervalued stocks. It’s not easy, but so-called value investing is how Warren Buffett managed to catapult himself up the ranks of the world’s richest individuals. It’s also how Keynes managed to generate such outsized returns during the turbulent period from 1924 to 1946. Which makes sense. If you think the social dynamics of markets push prices away from their fundamental values, you’d look for cases where that’s happened — and your risk of losing money is low. In other words, hold and buy stocks that the market underappreciates.
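The "and so on, and so on" in Keynes's beauty contest can be made concrete with what game theorists call level-k reasoning: a level-0 player answers naively, a level-1 player responds to a population of level-0 players, and so on. A minimal sketch (an illustration of the idea, not anything from the article), using the classic guess-two-thirds-of-the-average variant of the game:

```python
# Illustrative sketch of level-k reasoning in a Keynesian beauty contest
# (hypothetical parameters): each deeper level of "I think that you think
# that they think..." shrinks the guess by the same factor.

def level_k_guess(level: int, naive_guess: float = 50.0,
                  factor: float = 2 / 3) -> float:
    """A level-0 player guesses naively; a level-k player best-responds
    to level-(k-1) players by guessing factor * their guess."""
    guess = naive_guess
    for _ in range(level):
        guess *= factor
    return guess

for k in (0, 1, 2, 10):
    print(k, round(level_k_guess(k), 2))
# Deeper reasoning spirals the guesses toward zero: the answer tracks
# opinions about opinions, not any "fundamental" value, which is exactly
# the dynamic Keynes saw driving stock prices away from fundamentals.
```

The sketch shows why a market of such second-guessers can drift far from fundamental value, and why a value investor who ignores the contest and prices the faces themselves can profit when the drift reverses.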

Read "John Maynard Keynes Was the Warren Buffett of His Day" at The Atlantic [Here]

Read “What Would John Maynard Keynes Tell Us To Do Now?” at The New Yorker [Here]

Paradigm Shift: Bad Schools Produce Bad Communities?


Photo Credit: The Daily Record

IN 2010'S WAITING FOR "SUPERMAN," a documentary that trains an examining lens on our failing national public school system, a new sociological perspective on the influence of schools upon communities caught my attention, since I had always thought the obverse to be true: that bad communities lead to bad schools. But school reformers are beginning to believe that failing schools produce even worse communities.

Without a doubt, there is a one-to-one relationship between troubled communities and outright terrible schools. The question the documentary raises, though, is which of these two influences is the greater factor producing the symbiotic outcome we see: that a downtrodden community will, almost without fail, have a poorly performing school.

Is it that blighted communities produce woefully underperforming schools, as was long thought? Or do underperforming schools produce terrible communities? The former makes the most linear sense, and the latter is counterintuitive, since troubled communities are the feeder for bad schools. But a new theory is beginning to emerge: public school reformers are beginning to argue that underperforming schools produce terrible consequences for their communities, which then makes for a cycle of bad communities and unfavorable schools.

This makes sense even setting data aside, since many communities have what are known as "drop-out factories," schools that fail to graduate the majority of their students. Those young dropouts are left unproductive, with lessened employment prospects, stagnating as weights on their respective communities. Dropouts also invest less in the communities they live in, since they do not produce as much as graduates who are employed, and the result is a largely troubled community with a young population left with few prospects besides crime.

Draper’s Carousel


IT WAS one of the seminal, defining moments of Mad Men and the mysterious Don Draper's story: a candid look, at once, into the character's genius and his history, and into this particular Okie-in-the-big-city marketing genius's ability to tap his personal life, a house of cards built on lies, to sell emotive and imaginative affect through his accounts' products.

And in actuality, the entire series is about selling lies. Draper's identity seems a larger commentary on marketing and on how we lead our lives in small slices of similar deceptions, deploying such little pieces of truth that they become indistinguishable from fabrication, bringing our larger story to the table everywhere we go; and this scene typified it.

It was the perfect bow on season one for Don Draper or, really, Dick Whitman: a marketing man who came out of nowhere to become a creative big shot as someone else, subtextually talking about the marketing of himself; speaking of the move from the actual man he was (Whitman) to the idealized man he wanted to be (Draper) when he came off the battlefield in Korea; talking about his grand deception and the cost of it all, the lies woven with lies.