Out of the Storm News

Free markets. Real solutions.

Snus nicotine lowers risk for multiple sclerosis, may be therapeutic for other nerve disorders

June 26, 2014, 11:00 AM

New research published in Multiple Sclerosis Journal and authored by Anna Hedström of Stockholm’s Karolinska Institute of Environmental Medicine confirms that snus users have a significantly lower risk for multiple sclerosis than nonusers of tobacco. Five years ago, I discussed the researchers’ earlier findings on this subject here.

Hedström’s study is based on some 7,900 Swedes with MS and 9,400 controls. Compared with never users of tobacco, snus users had a lower risk for MS (odds ratio, OR = 0.75; 95 percent confidence interval, CI = 0.63 – 0.90). Hedström also found a dose-response effect at higher levels of snus use. For example, users with more than 10 packet-years (the number of snus doses per day multiplied by years of use) had an OR of 0.45 (CI = 0.28 – 0.68). Smokers had modestly increased risk (OR = 1.49, CI = 1.40 – 1.59), a finding similar to that reported in Hedström’s previous study.
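For readers unfamiliar with how figures like these are derived, here is a minimal sketch of computing an odds ratio and its 95 percent Wald confidence interval from a 2×2 exposure table. The counts below are hypothetical, chosen only so the ratio works out to 0.75; they are not the study’s actual data.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_controls,
                  unexposed_cases, unexposed_controls, z=1.96):
    """Odds ratio and Wald confidence interval from a 2x2 table."""
    # OR = (a * d) / (b * c) for the standard 2x2 layout
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    # Standard error of log(OR) is the root of the summed reciprocal counts
    se = math.sqrt(1 / exposed_cases + 1 / exposed_controls +
                   1 / unexposed_cases + 1 / unexposed_controls)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, (lower, upper)

# Hypothetical counts, not Hedström's table: 150 snus-using cases,
# 300 snus-using controls, 1,000 never-user cases, 1,500 never-user controls.
or_, (lo, hi) = odds_ratio_ci(150, 300, 1000, 1500)  # or_ == 0.75 exactly
```

An OR below 1.0, with a confidence interval that excludes 1.0, is what marks the exposure as associated with lower risk.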

Scientific research is methodically unveiling the benefits of nicotine and smoke-free tobacco use with respect to degenerative brain diseases. A finding that nicotine may improve performance in people with mild cognitive impairment has resulted in calls for more research on nicotine’s effect on dementia.

The impact of nicotine/tobacco use on Parkinson’s disease is well documented. An American Cancer Society study provides clear evidence that smokeless tobacco use may be protective for Parkinson’s disease (RR = 0.22, CI = 0.07 – 0.67). In fact, nicotine is being discussed as therapy for this disorder (see here, here  and here).

Alzheimer’s disease is the sixth-leading cause of death in the United States and Parkinson’s disease is the fourteenth. The role of nicotine and smoke-free tobacco in reducing risk of or treating these disorders is of significant import.

Let it grow

June 26, 2014, 8:30 AM

Last week, the Washington Post detailed a phenomenon which, to many twenty-something residents (or former residents) of the District of Columbia, is all too familiar: young people are leaving Washington proper in droves:

They were once a part of the free-spending group of young people who jolted Washington’s economy. Now older and with more financial strains, they are trying to find a new place in it.

Amid the talk of young newcomers and their fondness for social leagues and artisanal-coffee shops, another reality exists: Many are struggling to keep pace with the city’s rising cost of housing. And as new millennials move into the District, older members of that generation — loosely defined as ranging from 18 to 34 years old — are heading out.

This odd migratory pattern among younger D.C. residents is undoubtedly a problem, and one with which this author can personally identify, having moved last year from a squat, overpriced studio in Kalorama to a spacious two-bedroom in Arlington, when the combination of lower taxes and split rent became simply too attractive to turn down. For others, such as those cited in the article, similar factors are no doubt at work.

Since the article’s publication, its author, Robert Samuels, has posited a number of potential remedies for the relevant problem. For instance, he has suggested making neighborhoods more walkable, or reforming and expanding the public transit system (which, in WMATA’s case, is decades overdue). Lower crime rates also would be essential for some of the more affordable areas of the city to attract new residents, Samuels notes.

All of these are good suggestions, as far as they go, but there’s one obvious solution that is completely ignored both in the original article and its sequel.

I refer to the century-old Height of Buildings Act of 1910, which restricts buildings in the District of Columbia to the rather diminutive height of 110 feet. While this particular height probably seemed formidable in 1910, today it only recalls a hilariously anachronistic phrase from the Rodgers and Hammerstein musical Oklahoma!:

Everything’s up to date in Kansas City

They’ve gone about as far as they can go

They went and built a skyscraper seven stories high

About as high as a building oughtta grow

Here in D.C., regulators appear to believe eight stories is about as high as a building ought to grow. What seems to concern them less is how high this causes rent to grow. As Danny Vinik noted in The New Republic in May:

Zillow, a real-estate database, rates D.C. as the seventh-most expensive metro area in the country for residential renters. Office rents are third-highest in the U.S., according to commercial real estate firm Cassidy Turley. Increasing the supply of housing would bring down these rents significantly. That leaves more money in consumers’ pockets to spend on goods and services and more money with companies to boost wages or invest. That means more jobs, including many from the construction boom that would result from a relaxation of height restrictions. The increased density would lead to a larger underlying tax base and boost revenue to put toward city services. And, as Matt Yglesias has often written, allowing companies to cluster near each other has significant economic benefits.

One doesn’t have to favor transforming the much-beloved D.C. skyline into a Tokyo-esque hive to see the problem, or to see why even a modest increase in the height limit could drive down rents and allow more young people to stay in the city. Such an improvement would be at least as important as, say, updating the city’s public transit.

Of course, as in any policy fight, there are both economic interests and inherent resistance to change to be overcome. On the economic interests front, one can easily see how landlords and homeowners prefer a policy that sees their rents rise every year, irrespective of the age of the tenants who happen to be paying them, or that keeps the value of their homes increasing, irrespective of whether anyone can afford them.

As to resistance to change, one need only look to the fuss that more senior residents of the District raised over the addition of late-night bars and bike lanes to neighborhoods like Cleveland Park. Shortening their beloved skyline, even by a few stories, may be a bridge too far.

Still, D.C. has a choice: embrace the future by letting its buildings grow up along with its younger inhabitants, or simply serve as a temporary stop for young people on their way to environments with less regulatory meddling. Many residents may be all too happy to choose the second option, but they may need a reminder that economic growth also implies growth within the city. In this case, that growth may have to be vertical.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

University of California has strong opinions on peer-to-peer business

June 26, 2014, 7:50 AM

Earlier this week, the University of California’s Office of the President (UCOP) determined that, as a result of a perceived shortfall in regulation, it would not reimburse faculty members who use transportation network companies (Uber, Lyft, Sidecar, etc.) or other peer-to-peer services like Airbnb while engaged in UC-related business. The message is recreated below:

Dear Colleagues,

UCOP’s Office of General Counsel has determined that third-party lodging and transportation services, commonly referred to as peer-to-peer or sharing businesses, should not be used because of concerns that these services are not fully regulated and do not protect users to the same extent as a commercially regulated business. As the market matures and these businesses evolve, the university may reconsider whether reimbursement of travel costs provided by peer-to-peer or sharing businesses will be allowed.

Therefore, until further notice, please do not use services such as Uber, Lyft, Air B&B or any other similar business while traveling on or engaging in UC business.

What or who would drive the president of the University of California (Janet Napolitano) to steer recklessly into a collision with the regulatory decision of another duly delegated California state body that has far greater expertise? Why does Ms. Napolitano even have a position on how TNCs should be regulated?

This is an instructive example of the unexpected reach of state bodies and their ability to pick winners. The UCOP has chosen to augustly articulate a legal, but policy-deficient, rationale for its judgment about the sufficiency of the current regulatory regime.

What precisely does the UCOP mean by “not fully regulated”?

Setting aside the other peer-to-peer services: in California, TNCs are regulated by the California Public Utilities Commission (CPUC). The CPUC has promulgated preliminary regulations and insurance requirements for TNCs which, while still being refined, hold the force of law. In fact, the CPUC has already issued licenses to operate to five separate providers of TNC services. As far as the CPUC is concerned, it has enough of a handle on the situation to allow TNCs to continue to operate.

The debate over the sufficiency of TNC regulation is really about resolving hitherto latent ambiguities that incumbent industries have emphasized. No doubt, such concerns deserve serious attention from regulators and policymakers alike. But such attention does not mean that TNCs are somehow without “full regulation.”

More frustrating still is that the UCOP has determined, with limited or no subject-matter expertise, that other “commercially regulated businesses” offer consumers a greater degree of protection. Here, the UCOP is conflating different activities. TNCs are under the regulatory purview of the CPUC because they are considered charter-party carriers – a designation that also covers limousine services. They are not regulated as taxis are, by local authorities. Since TNCs have a different regulator and a different business model, comparisons between their regulatory environment and the one in which taxis exist are awkward and misleading.

The UCOP, if utterly compelled to act, should have at least done so with an eye on the angle that the California Legislature is taking. There is little doubt that, whatever the final product looks like, there will be a meaningfully cognizable difference between what is required of taxis and what is required of TNCs. The University of California, unfortunately, is doing its part to express a policy preference for disregard of other state regulators and apparently for onerous regulation. California-licensed TNCs are lawful vendors. In fashioning its policy judgment, the UCOP erred.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

The risks of allowing Internet censorship

June 25, 2014, 2:16 PM

From NCPA:

The Supreme Court of British Columbia has ordered Google to remove information from the internet, reports Zach Graves for R Street.

The Canadian case involves the sale of counterfeit products, but it is similar to a ruling by the European Court of Justice last month, which ruled that search engines could be required to remove links that infringe upon a person’s privacy rights.

The decisions are distinct, but both have a wide reach:

  • The European decision applies even to factual information in the public record.
  • Unlike the European decision, the Canadian decision applies beyond Canadian sites, requiring Google to remove information that can be accessed from anywhere in the world.

The ruling, writes Graves, leads to some important questions. What if a ruling in one country conflicts with another? He notes that China, Korea and Russia are already pursuing ways to possibly censor the Internet.

People depend on search engines for information, and allowing a country to set restrictions on the type of information available on the internet is concerning.   

Source: Zach Graves, “The Dangerous Proliferation of the ‘Right to be Forgotten’,” R Street, June 18, 2014.

Don’t take the Texas GOP’s crazy platform too seriously

June 25, 2014, 9:00 AM

In recent weeks, more than a few groups – including one on whose board I serve – have denounced various aspects of the Texas Republican Party’s platform.

Indeed, there’s a lot to criticize. In addition to its absurd call for so-called “reparative therapy” for homosexuals and bans on pornography, the platform draft includes nativist language on immigration and an attack on vaccination. There’s also some conspiracy theory garbage opposing Sharia law and the United Nations’ Agenda 21.

A few parts of the platform seem downright sloppy: one provision calls for the repeal of all laws “regarding the production, distribution or consumption of food.” I’m sympathetic to what the writers of this were probably thinking, but taken literally, the provision would make it legal to label jars of baby food as containing “carrots and peas,” even if what they really contained was fermented gerbil vomit.

It also includes some foreign policy planks that somebody must care about but that seem quite out of place for what is, after all, a state party.

For all its real flaws, however, the platform is a pretty decent summation of current streams of thought among the populist, socially conservative right. The current draft calls for the outright repeal of the Patriot Act as well as the National Defense Authorization Act provisions that allow the use of military tribunals for trying terrorists. Both provisions would probably get more votes in the Democratic caucus than the Republican one. Previous iterations of the Texas GOP platform have also called for usury laws—government price controls on interest rates—and mandatory labeling of genetically modified food.

Frankly, much of the platform’s weirdness and its strong populist flavor come from the unusual way that Texas drafts its platform. The platform comes from a drafting committee, just as most other state platforms do. But where most party platforms typically are written by insiders for media consumption, the Texas GOP platform is debated and rewritten by anybody who takes time off and pays the fee to attend the party convention. The result is that it’s a true “grass roots” platform that reflects the feelings of the party’s activists, rather than its officeholders.

People with a pet issue can usually get it in, so long as it isn’t too contentious a topic. And I know this for a fact. In 2012, the Heartland Institute’s then-Texas director, attending the convention in her own private capacity, got some language into the platform on property insurance that I helped her write.

This method of writing the platform serves to tell office-holders what their grassroots are really thinking, rather than serving as a manifesto written by those officeholders. The platform may well pull Texas office-holders in a populist, social conservative direction, but it doesn’t necessarily prove much about how they would govern.

A group of Democrats coming together and voting in the same way would likely call for vastly higher taxes; a straightforward government takeover of health care; a forced conversion to “green energy” that would wreck the economy; an end to secret ballot elections for unions throughout the country; imposition of racial quotas on private employers; government bans on “unhealthy” food; outright confiscation of guns; laws against “hate speech”; Internet censorship; denial of broadcast licenses to “unbalanced” (read: conservative) media; new restrictions on prayer in public; and taxpayer funding for partial-birth abortion. And there would probably be a long Noam Chomsky-inspired rant against corporations thrown in somewhere as well.

Only a handful of Democratic politicians currently in office publicly support any of these things, and most would probably oppose them if asked. Many Democratic voters would probably oppose them too. But all are popular with certain parts of the Democratic base, and a left-wing populist drafting process would probably produce a platform containing all of them.

My point isn’t that the Texas Republican platform is irrelevant: it does reflect the views of a certain portion of the Republican Party and may influence their views. But nobody should mistake it for a manifesto on how Republican officeholders in Texas or anywhere else plan to govern.


This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Insurance by/at the pound?

June 25, 2014, 8:00 AM

It has never been more of a dog’s world: 45 percent of American households own dogs.

With such a high rate of cohabitation, insurers have seen dog-related claims rise. According to the Centers for Disease Control and Prevention (CDC), approximately 4.5 million Americans are bitten by dogs each year. Of those, 885,000 are wounded seriously enough to require medical attention.

It is undeniable that dogs injure an enormous number of Americans. Having a dog in the home is associated with a higher likelihood of being bitten, and people with two or more dogs in their homes are five times more likely to be bitten than those without any dogs.

The Insurance Information Institute reports that one dollar of every three paid out in homeowners’ claims was a result of a dog-related claim and that the average cost paid out for such a claim was $27,862.

Whether or not the associative risks or costs of dog ownership are recognized, the popularity of dogs persists.

Insurers, as their business demands, assess the cost of the risk that dogs represent, and have compiled information related to dog-related claims. From this information, some insurers have chosen to adjust their underwriting practices to account for the risk profiles of different breeds. This activity sometimes leads to a policy showdown.

Two states, Pennsylvania and Michigan, have chosen to prevent insurers from distinguishing between the risks that different breeds represent. To forestall what proponents of such bans describe as “breed discrimination,” both states have prohibited underwriting practices that are sensitive to breed. There are two rationales behind such bans: one is emotive and the other is policy-based.

First, such bans are, to a dog-loving populace, intuitively attractive. Let’s face it: though legally property, dogs are so much more to those who love them. For those who love dogs, the term “breed discrimination” holds a great deal of rhetorical power. Semantically, “breed discrimination” sounds odious. And politically, fighting against discrimination is almost always good … right?

Well, not really. Risk classification of all types is, in a literal sense, the “quality or power of finely distinguishing” – in other words, legal discrimination. In spite of the legality and inevitability of risk classification, the issue remains sensitive.

Second, even without access to the proprietary data that insurers may have, underwriting according to breed – in the strict sense – is problematic. Though the CDC attempts to measure the health risks posed by one breed versus another, it concedes the shortcomings of such an approach. Data about attacks are self-reported and prone to inaccuracy. Further, the majority of dogs in U.S. households are not purebred. The existence of mutts and customized cross-breeds (for instance, any dog known as a “fill-in-the-blank”-doodle) complicates easy classification.

From the perspective of insurers, who are interested in pricing their products competitively so that clients posing a low risk pay a lower premium, forbidding the use of dog-breed data is problematic. In every state, insurers are required to underwrite on an “actuarially justified” basis, meaning that only legitimate cost factors may be taken into account. Dog-breed data, though imprecise, meet this threshold because breed serves as a proxy for risk factors that are statistically correlated with claims.

More specifically, it is known that male dogs bite more frequently than female dogs; that non-neutered dogs are more likely to bite than neutered dogs; and, that chained dogs are more likely to bite than unchained dogs. Further, it is known that larger dogs are capable of causing greater injury than are smaller dogs. Thus, to the extent that some breeds are more likely to possess any number of the enumerated characteristics, some insurers have come to believe that there is a meaningful correlation between that breed and heightened claim risk.
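To make concrete how such correlates could feed into an actuarially justified rate, here is a hedged sketch of multiplicative factor rating. Every number and trait name below is invented purely for illustration; real factors would come from an insurer’s proprietary loss data, not from this toy table.

```python
# Hypothetical relative-risk factors for dog-bite liability rating.
# All values are invented for illustration, not actual actuarial figures.
BASE_DOG_SURCHARGE = 40.0  # hypothetical annual dollars for any dog

RISK_FACTORS = {
    "male": 1.2,          # male dogs bite more frequently
    "not_neutered": 1.3,  # non-neutered dogs are more likely to bite
    "chained": 1.4,       # chained dogs are more likely to bite
    "large": 1.5,         # larger dogs cause greater injury per bite
}

def dog_surcharge(traits):
    """Multiply the base surcharge by the factor for each observed trait."""
    factor = 1.0
    for trait in traits:
        factor *= RISK_FACTORS.get(trait, 1.0)
    return BASE_DOG_SURCHARGE * factor

# A large, unneutered male dog under this toy table:
premium = dog_surcharge(["male", "not_neutered", "large"])  # 40 * 1.2 * 1.3 * 1.5
```

The point of the sketch is that a breed label can stand in for a bundle of such observable factors; an insurer that believes a breed is disproportionately male-skewed, large or commonly chained is, in effect, applying these factors in aggregate.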

To be sure, insurers that underwrite based on breed are not guided by a normative judgment about the breeds themselves.

Still, many insurance customers will find such reasoning unsatisfactory because, without access to the data that undergirds it, the correlation will, in their view, not rise to a sufficient level of statistical relevance. Fortunately, there is recourse available to those customers and it lies in the realm of the free market.

Some companies, like State Farm, forgo breed-sensitive underwriting in favor of adjusting premiums after a dog has demonstrated itself to be a risk. It is not inconceivable that State Farm, by accepting a certain number of losses it might otherwise have avoided through breed-sensitive underwriting, has gained a reputational advantage that could well offset those costs and increase market share and profits. State legislatures, instead of succumbing to the temptation to ban breed-sensitive underwriting, should recognize that the market has a solution to unpopular underwriting practices.

With 885,000 people a year wounded seriously enough by dogs to require medical attention, insurers would be justified in classifying dog owners of all types as a higher risk than those who do not own dogs.

If legislatures must pass laws to save dogs and their owners from the effects of breed discrimination, perhaps they should allow insurers a rating exception as they pass their laws. The exception would be a straightforward approach to assessing the risk that a dog embodies: simply weighing the dog. We know that the larger the dog, the more damage it can do. Can scales predict what breed is not allowed to?


This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Checkmate for E3’s critics

June 24, 2014, 9:00 AM

Imagine the following: A new form of entertainment enters the market, which engages its players in intense competition, sometimes to the exclusion of their social lives, and which seems to many to be a waste of their energy when they could be accomplishing more useful things.

In fact, several argue the intense competition and warlike elements of this new form of entertainment make its participants prone to violence, and urge that it be removed from polite society in favor of the older, more respectable forms of social entertainment.

I am referring, of course, to chess, a game which is regarded today as an eminently respectable pastime requiring great reservoirs of strategic skill. Chess champions are international celebrities, and often take up (and bolster) political causes, as in the case of Garry Kasparov’s advocacy for the Magnitsky Act. Yet, as io9 documents, there was a time when statements like the following were written without irony about the game:

The great interest taken in this warlike game — the importance attached to a victory — and the disgrace attending defeat, are exemplified in numerous instances handed down to us by various writers, of which the most worthy of notice are the following….

Richlet, in his Dictionary, article Echec, writes, ” It is said, that the Devil, in order to make poor Job lose his patience, had only to engage him at a game at Chess.”

Would that we could look at similar accusations against video games with similar derision, especially given that the video game industry recently held its annual event showcasing the coming year in technology and games, known commonly as E3.

Unfortunately, it seems that every time the video game industry dares to show its face in public, a hyperventilating article is never far behind. This year’s E3 was no exception, as the New York Times‘ Nick Bilton fretted about the convention:

But it is hard to argue that there isn’t some level of desensitization after a day spent at E3. At the main entrance of the Los Angeles Convention Center, where the conference was held, people lined up to play the new game Payday 2. In this game, you team up with friends to rob a bank. Killing police is a big part of succeeding.

As I watched people picking off cops and security guards with sniper rifles and handguns, news broke that a real-life shooting in Las Vegas had resulted in the death of two police officers and three civilians (including the two shooters).

I asked Almir Listo, manager of investor relations at Starbreeze Studios, which makes Payday 2, if he felt in any way uncomfortable about making a game that promotes shooting police.

“If you look hard enough, you can find an excuse for everything; I don’t think there is a correlation,” he said. “In Sweden, where I am from, you don’t see that stuff happen, and we play the same video games there.”

After the Sandy Hook shootings in Connecticut, when it became clear that Adam Lanza was a fan of first-person shooters, including the popular military game Call of Duty, President Obama said Congress should find out once and for all if there was a connection between games and gun violence.

“Congress should fund research on the effects violent video games have on young minds,” he said. “We don’t benefit from ignorance. We don’t benefit from not knowing the science.” Yet more than a year later, we don’t conclusively know if there is a link.

In the event that Payday 2 is ever played competitively at the same level that chess is (or, for that matter, that StarCraft is in South Korea), one can only hope that articles like Bilton’s will be held up to a similar level of ridicule.

Where to begin? Perhaps with the fact that Adam Lanza’s favorite game – the one he was, in fact, said to be “obsessed with” in the Sandy Hook crime report – was the thoroughly non-violent Dance Dance Revolution. Or that Bilton offers no evidence the shooters in Vegas had ever heard of video games, let alone Payday 2. Or perhaps the fact that, like so much of what President Obama suggests Congress fund, there is no need for more research on the effects of violent video games on young minds. There have been scores of such studies already, and every one that was not either funded by anti-video-game activists or so vague in its conclusions as to be meaningless shows no significant effect of video games, violent or otherwise, on young minds (or, for that matter, adult ones).

That Bilton, whose coverage of the rest of E3 was relatively balanced, should fall for the chicanery that video games can cause violence – or, as the new and far less menacing cries of alarm would have it, “decrease empathy” – is sad, but not unexpected. Video games are a convenient bogeyman for a society that labels even the Western canon with “trigger warnings” and frets about whether even a single sexist joke could somehow lead to mass acceptance of rape.

Video games are unapologetic in their gore, their violence and their transgressiveness. Despite all that, they are and remain harmless. In this era of oversensitivity, the imperviousness of video games to the mindless censoriousness of our politically correct, morally panicky current culture is a refreshing checkmate in what often looks like a losing battle for free speech.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

R Street urges state and local governments to take steps on e-cigarettes

June 24, 2014, 8:05 AM

WASHINGTON (June 24, 2014) – State and local governments should take steps to curb cigarette use and promote e-cigarettes as a real driver of tobacco harm reduction, said the R Street Institute in a paper released today.

Authored by Dr. Joel Nitzkin, R Street senior fellow and public health expert, “E-cigarette primer for state and local lawmakers” lays out the benefits associated with using e-cigarettes as a means to quitting traditional cigarettes, while promoting controls through regulation to keep all tobacco products out of the hands of minors. 

“Adding a tobacco harm reduction component to current tobacco-control programming is the only policy option likely to substantially reduce tobacco-attributable illness and death in the United States over the next 20 years,” said Nitzkin. “Sensible FDA regulation will be needed if e-cigarette makers and vendors are to present the level of risk posed by these products honestly. However, any regulation must be evidence-based, practical and reasonably streamlined in a way that will protect and advance public health.”

As draft FDA regulations begin to make their way through the process, Nitzkin outlines several steps that state and local governments can take in the meantime.

First, state and local governments should fully enforce age restrictions on the purchase of all tobacco products, and consider upping the age restriction from 18 to 21 to remove cigarettes from the high school environment. Second, to encourage users to switch, governments should heavily tax cigarettes, but only lightly tax lower-risk products. Third, governments should consider implementing non-pharmaceutical smoking cessation protocols that could prove to be more effective for long-term abstinence. Finally, governments should urge tobacco-control leaders to open dialogue with those in various tobacco-related industries who endorse e-cigarettes as the solution to curbing cigarette use and would welcome the opportunity to partner with those in the public health community in pursuit of shared public health objectives. 

Simultaneously, governments should urge the FDA to sensibly regulate e-cigarettes and other lower-risk tobacco products by prohibiting sales to minors, restricting marketing and assuring quality and consistency of manufacture. They should urge the FDA not to impose restrictions on flavoring or nicotine content that would make those products unpalatable to smokers who otherwise would switch. 

The paper can be found here:


E-cigarette primer for state and local lawmakers

June 24, 2014, 8:00 AM

Cigarettes kill an estimated 480,000 Americans each year. An estimated 46 million Americans smoke cigarettes, the most hazardous and most addictive of tobacco products. Despite our best efforts, these numbers have been consistent, year to year, for more than a decade. Switching from cigarettes to a smokeless tobacco product or an e-cigarette can reduce a smoker’s risk of potentially fatal tobacco-attributable cancer, heart and lung disease by 98 percent or better. This approach is called “tobacco harm reduction” (THR). Adding a THR component to current tobacco-control programming is the only policy option likely to substantially reduce tobacco-attributable illness and death in the United States over the next 20 years. The e-cigarette family of products offers the most promising set of harm reduction methods because of their relative safety compared to cigarettes, their efficacy in helping smokers cut down or quit and their unattractiveness to teens and other non-smokers. They also promise to be less addictive than cigarettes and easier to quit.

This primer provides evidence in favor of e-cigarettes as a THR modality and a review of the arguments against them. Many in tobacco control oppose any consideration of e-cigarettes because of their dislike of the “tobacco industry”; because they fear that THR will attract large numbers of teens to nicotine addiction; because the case in favor of e-cigarettes has not been proven to their satisfaction; and possibly because of likely harm to the major pharmaceutical firms that now support much tobacco-control research and programming. This primer closes with recommendations for actions state and local lawmakers should and should not consider with respect to THR and e-cigarettes.

The Burkean case for immigration reform

June 23, 2014, 9:00 AM

Demos blogger Matt Bruenig, in an apparently Burkean mood, writes:

The biggest factor in production is not nature, labor, or capital, but in fact accumulated technology and knowledge that comes to us as an unearned inheritance from the past. The marginal productivity of that unearned inheritance accounts for the majority of our economic output. Imagine you held everything else equal in the economy, but then ticked off electricity technology (which nobody alive has produced). By how much would the economy shrink? A ton.

I say Burkean, of course, because of Edmund Burke’s famous passage:

We are afraid to put men to live and trade each on his own private stock of reason; because we suspect that this stock in each man is small, and that he would do better to avail himself of the general bank and capital of nations, and of ages.

Bruenig correctly points out that it is the bank and capital of nations and of ages that account for nearly all of our economic activity. We owe our know-how, our “lower-level knowledge” as Amar Bhide puts it, to those who came before us. We live off of their accomplishments, and can only aspire to add something meaningful to them.

A drastically more open immigration policy makes a great deal of sense, from this perspective. Global wealth is greatly hindered by the fact that nearly all of humanity is stuck in places that do not have a lot of capital in the bank of their nations, so to speak. From a broad point of view, allowing as many people as possible to move to areas where they can participate in greater “accumulated technology and knowledge” enriches the world. From a humanitarian point of view, it most directly lifts the poorest people on Earth out of poverty, and from a selfish point, it is highly likely to enrich the average American.

The great innovators of the 19th and 20th centuries in this country were largely either immigrants or the children or grandchildren of immigrants. Who believes that America would have been better off without a Ford or a Carnegie?

Some fear that opening the door to the bank and capital of our nation will do violence to those institutions, but history gives us little reason to credit this concern. It certainly did not happen when we had drastically more open borders in the 19th century. Technology and lower-level knowledge are accumulated by the sweat of our brows, and the more people we have to get to work pushing the frontier further, the better off we will all be, to say nothing of those who will inherit our legacy.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Patents? Where we’re going, we don’t need patents

June 23, 2014, 9:00 AM

A little more than a week ago, had you walked through the lobby of Tesla Motors – not that I ever have – you would have admired a wall displaying hundreds of patents belonging to the company.

This week, however, it’s bare.

That’s because Tesla founder and serial entrepreneur Elon Musk has removed them “in the spirit of the open source movement, for the advancement of electric vehicle technology.” In a June 12 blog post, Musk declared that Tesla would no longer enforce protection on its patented technologies. Instead, the company plans to open its doors in hopes that other firms will enter, foster innovation and grow the electric vehicle market.

There are those who have scoffed at this news, believing Musk was embarking on a high-profile publicity stunt. It’s certainly true that Tesla and Musk have gotten the media’s attention. More importantly, though, the move has garnered attention for the open source movement – a movement that has been stifled by patent trolls and hungry patent attorneys.

Musk elaborates on that point, writing that patents these days serve only to “stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors.” He goes on to say that receiving a patent “really just means that you bought a lottery ticket to a lawsuit.” A lottery that exacts millions in attorney fees and court costs is no winning one, and not one I would want to play.

Indeed, as the dust settles on this news, it does appear that Tesla’s bid to promote electric vehicles is paying off. The Financial Times reports that BMW and Nissan, two of Tesla’s biggest rivals, are interested in partnering with the company to expand its network of charging stations throughout the United States. Investors seem to agree that the network effect of having more electric cars on the road, and thus more charging stations, is worth more than the monopoly rights granted by the patents: Tesla’s stock price soared to its highest point in months over the past week and a half.

Obviously, it’s going to take a lot more than one high-end car company standing up to say it’s tired of the way the patent system works, especially on the infringement front. But Tesla is hopefully paving the way, opening the door for more major players to see that open source can be a winning strategy for everyone involved. After all, has history ever proven that patents were a positive for innovation?

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Current design patent laws stifle innovation and competition, R Street study finds

June 20, 2014, 3:01 PM

WASHINGTON (June 20, 2014) – Current design patent law provides incentive for frivolous lawsuits and abuse, said the R Street Institute in a policy paper released today.

Authored by Ned Andrews, the paper, “Is interactive design becoming unpatentable?,” lays out recommendations for modernizing the design patent system to allow smaller companies to enter the technological market.

“In order to have the kind of ornamental status that could be the subject of a design patent, an object must possess either some entirely nonfunctional feature or be the result of workmanship that does not contribute in any way to its function,” wrote Andrews. “Current definitions falsely equate the aesthetic merit of functionality with that of applied ornamentation. Thus, some inventors seek design protection for aspects of an object that are, in fact, functional.”

Andrews writes that the system creates an incentive for companies to acquire patent rights to designs that are as aesthetically or conceptually simple as possible, wait for another company to develop a product that resembles the original, and then file a claim of infringement, hoping the manufacturer-defendant will agree to an early settlement.

“The parties that tend to come out on top are the biggest players – the Apples and Samsungs,” he wrote. “This interferes with smaller players’ ability to make headway on a useable portion of their own applications, because they can’t afford to risk a lawsuit from or pay the fees demanded by the trolls or big firms.”

Andrews recommends modernizing the design patent system in a variety of ways. First, impose a simple test: if the device would be less functional if the claimed aspect of the design were absent, the claim in question fails the non-functionality test. Second, courts should limit the findings of design infringement to cases in which the similar aspects of the article’s design perform an ornamental purpose, rather than a functional purpose. Third, both the U.S. Patent and Trade Office and the courts should renew their attention to the criteria of novelty and non-obviousness.

Finally, courts should make standard the practice, in “exceptional cases” of bad faith or misconduct, of awarding reasonable attorney’s fees to the prevailing party in a civil case.

The paper can be found here:


Murray Rothbark

June 20, 2014, 12:28 PM

Murray Rothbark is R Street’s distinguished visiting office dog and director of canine policy.

The case in favor of e-cigarettes for tobacco harm reduction

June 20, 2014, 10:06 AM

This paper has been accepted for publication in the International Journal of Environmental Research and Public Health.

A carefully structured Tobacco Harm Reduction (THR) initiative, with e-cigarettes as a prominent THR modality, added to current tobacco control programming, is the most feasible policy option likely to substantially reduce tobacco-attributable illness and death in the United States over the next 20 years. E-cigarettes and related vapor products are the most promising harm reduction modalities because of their acceptability to smokers.

There are about 46 million smokers in the United States, and an estimated 480,000 deaths per year attributed to cigarette smoking. These numbers have been essentially stable since 2004. Currently recommended pharmaceutical smoking-cessation protocols fail in about 90 percent of smokers who use them as directed, even under the best of study conditions, when results are measured at six to twelve months.

E-cigarettes have not been attractive to non-smoking teens or adults. Limited numbers of non-smokers have experimented with them, but hardly any have continued their use. The vast majority of e-cigarette use is by current smokers using them to cut down on or quit cigarettes. E-cigarettes, even when used in no-smoking areas, pose no discernible risk to bystanders. Finally, the addition of a THR component to current tobacco-control programming will likely reduce costs by reducing the need for counseling and drugs.

R Street commends committee for passing TRIA reform

June 20, 2014, 9:26 AM

WASHINGTON (June 20, 2014) – The R Street Institute welcomed today’s passage of H.R. 4871, the TRIA Reform Act of 2014, by the House Financial Services Committee.

The measure, sponsored by Rep. Randy Neugebauer, R-Texas, calls for a five-year extension of the federal Terrorism Risk Insurance Program, a $100 billion reinsurance backstop originally passed in the wake of the Sept. 11, 2001 terrorist attacks. However, the bill includes important taxpayer-protection provisions that gradually shrink the size of the federal program.

“Rep. Neugebauer’s bill strikes the proper balance between ensuring that sufficient capacity exists for U.S. businesses to insure against catastrophic terrorism, while also guarding against government subsidies that would unjustly enrich insurance companies and major commercial real estate developers,” R Street Senior Fellow R.J. Lehmann said.

Under terms of the TRIA Reform Act, the trigger level for conventional terrorism attacks would be raised gradually from the current $100 million to $500 million by the end of 2019. For attacks involving nuclear, chemical, biological and radiological events, all of which must be covered by law under workers’ compensation policies, the program’s current terms would remain intact.

“Reinsurance broker Guy Carpenter recently issued a report finding that multiline terrorism reinsurance capacity is about $2.5 billion per program for conventional terrorism and about $1 billion per program for coverages that include NBCR,” Lehmann said. “Given those figures, and the continuing growth of capacity thanks to the influx of alternative sources of capital, we think the adjustments called for in the House bill are perfectly reasonable.”

The industry also would be asked to increase its co-payment share of conventional terrorist attacks from the current 15 percent to 20 percent, while individual company deductibles would remain at 20 percent of prior year premiums in a particular line of business. The industry would be asked to repay taxpayers 150 percent of funds expended, up from 133 percent currently, up to a floating retention level calculated by adding the aggregate amount of individual company deductibles.
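The cost-sharing mechanics above can be sketched in a few lines. The following is an illustrative, single-insurer simplification – the statute’s actual formula involves an aggregate industry trigger, per-line deductibles and the $100 billion program cap – and the function names and the hypothetical loss figures are assumptions for illustration, not terms of the bill.

```python
def federal_payment(insured_loss, prior_year_premiums,
                    trigger=500e6, deductible_rate=0.20, copay_rate=0.20):
    """Simplified single-insurer sketch of the reformed program's
    loss-sharing for a conventional terrorism attack."""
    if insured_loss < trigger:
        return 0.0  # program not triggered; insurers bear the loss alone
    deductible = deductible_rate * prior_year_premiums  # 20% of premiums
    excess = max(insured_loss - deductible, 0.0)
    return (1.0 - copay_rate) * excess  # industry co-pays 20 percent

def taxpayer_recoupment(federal_outlay, recoupment_rate=1.50):
    """Taxpayers are repaid 150 percent of funds expended (the floating
    retention-level cap is ignored in this sketch)."""
    return recoupment_rate * federal_outlay

# Hypothetical $1 billion conventional attack hitting an insurer with
# $500 million of prior-year premiums in the affected line:
fed = federal_payment(1_000_000_000, 500_000_000)  # federal share: $720 million
owed = taxpayer_recoupment(fed)                    # repayment owed: $1.08 billion
```

A loss below the trigger ($400 million, say) yields no federal payment at all, which is the taxpayer-protection point of raising the trigger to $500 million.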

Lehmann also praised a provision calling on the non-partisan U.S. Government Accountability Office to conduct a study on the feasibility of charging companies an upfront premium for TRIP’s reinsurance coverage.

“Much like the federal Riot Reinsurance Program of the 1970s, the way forward for federal terrorism reinsurance ultimately is to charge companies an actuarially adequate premium,” Lehmann said. “We can never know how much capacity the private reinsurance sector might be willing to commit to terrorism coverage so long as the government provides it for free.”

Is interactive design becoming unpatentable?

June 20, 2014, 8:00 AM

If 20th-century design was inspired by American architect Louis Sullivan’s 1896 pronouncement that “form ever follows function,” the key realization of the 21st century thus far has been that this is merely a necessary – rather than a sufficient – condition for quality designs to flourish.

We have learned, and the market has confirmed, that an object should be designed in accordance not only with how it functions, but with how it should function. Especially in the case of interactive technology – a category that has grown to encompass just about anything – an object should function the way its user expects it to function.

As technology has become more powerful and flexible, the task of matching function and expectations has undergone a change akin to the philosopher Immanuel Kant’s metaphorical Copernican Revolution. For older generations of technology – in which scarce resources limited both what functions were available and the maximum complexity of users’ commands – the steps necessary for users to extract and refine what they could do with a device were explained in thick manuals. The prevailing strategy for more recent generations of technology has been to meet users halfway, competing to efficiently perform functions and effectively implement concepts that users have been led to expect.

Today’s designs, however, are increasingly able to cut out the middleman, more and more closely conforming to their users’ preexisting intuitions and thought processes and less and less asking users to make those thought processes conform to products’ capabilities.

In other words, the key to success in modern interactive design does not lie in “creating” the best design possible. Rather, it begins with doing the best possible job of stripping designs down to concepts and procedures with which the user is already familiar, preferably through everyday use. Where there is no alternative but to require more input from a user, his or her options are laid out in terms the user already can be expected to know. While the fusion of design and utility has not yet been perfectly realized, industry has become more fully aware of both parts of this process and continues to pursue integration in earnest.

This coevolution of design standards and procedures has clashed, and continues to clash, with the structure of U.S. patent law. The first problem is the potential uncertainty that surrounds the scope and strength of a design patent’s protections. Even in the paradigm case of a design feature that has been aesthetically improved beyond what was required to give the feature its functional attributes, there remains the potential for overly broad claims about what aspects of a design qualify under the law as “ornamental.”

Under section 284 of Title 35 of the U.S. Code, triers of fact may award “non-statutory” damages for infringement of a design patent. But these same triers of fact also may err in determining how much of an object’s value comes from the aesthetic appeal of its ornamental features and how much comes from other sources of value, whether ornamental or functional, and whether patented or unpatented.

The risk of error at each stage of the process – from the initial design patent application to the ultimate test of infringement in court – creates at least some incentive for a designer to overstate his or her case. Fortunately, these incentives are similar to the temptations to make overly broad claims about other grounds for patentability. Regardless what grounds are at issue, the remedy inevitably is better training for examiners and judges in traditional design standards and greater vigilance on their part about those standards’ application.

Ned Andrews

June 19, 2014, 4:53 PM

Ned Andrews is an assistant public defender with the Virginia Indigent Defense Commission and an associate fellow of the R Street Institute. He is a graduate of the University of Virginia School of Law, where he served on the managing board of the Journal of Law and Politics and the Virginia Journal of Law and Technology. He previously received a bachelor’s in philosophy from Yale University. Andrews was the 1994 Scripps National Spelling Bee champion and he is author of the 2011 book “A Champion’s Guide to Success in Spelling Bees: Fundamentals of Spelling Bee Competition and Preparation.”

The fight over TNCs in California

June 19, 2014, 4:39 PM

Legislation regulating so-called “transportation network companies” in California passed the state Senate Energy, Utilities and Commerce Committee by a unanimous 8-0 vote earlier this week, amid a row between the TNCs, insurance companies and traditional taxicabs.

This bill, A.B. 2293, stems from a tragic New Year’s Eve incident in San Francisco in which a six-year-old girl was struck and killed in a crosswalk by a driver who was an UberX contractor.

Under the bill, which now moves to the Senate Insurance Committee, TNCs would be required to provide primary insurance for any driver currently logged in to use their service. The measure codifies the California Public Utilities Commission’s proposed minimum of $1 million of coverage.

The measure is sponsored by Assemblywoman Susan Bonilla, D-Concord, who said she presented the bill to fill a “gap” in insurance coverage and consumer protections, create clear definitions of when commercial and personal insurance coverage is primary, and ensure all drivers are adequately covered during all periods of TNC services.

Bonilla’s legislation defines three distinct “periods” of TNC service, from when a driver turns the application on to when it is turned off.

  • Period 1: The driver turns the app on and waits for a passenger match
  • Period 2: A match is accepted, but the passenger is not yet picked up
  • Period 3: The passenger is in the vehicle
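The three periods amount to a small state machine for deciding what coverage must apply. The sketch below is purely illustrative: the names, the zero-coverage treatment of a driver with the app off, and the assumption that the full $1 million minimum spans all three periods (the insurers’ position, contested by the TNCs) are mine, not language from A.B. 2293.

```python
from enum import Enum

class Period(Enum):
    APP_OFF = 0    # app off: the driver's personal auto policy applies
    WAITING = 1    # Period 1: app on, waiting for a passenger match
    MATCHED = 2    # Period 2: match accepted, passenger not yet picked up
    CARRYING = 3   # Period 3: passenger in the vehicle

def required_tnc_coverage(period: Period) -> int:
    """Minimum primary coverage (in dollars) the TNC must provide, under
    the insurers' reading that all three periods require the $1 million
    minimum; the TNCs would carve out a lower figure for Period 1."""
    return 0 if period is Period.APP_OFF else 1_000_000
```

The entire dispute described below reduces to whether `WAITING` belongs with `MATCHED` and `CARRYING` or should carry a lower figure of its own.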

Insurance industry groups argue that the TNCs must be required to provide primary coverage for all three periods. For their part, the TNCs argue that during Period 1, a driver is not active and should not be required to carry the higher coverage standard demanded during the other two periods.

In addition to insurers, the bill also is supported by consumer attorneys, the California Airports Council and San Francisco International Airport, each of which argued for the measure on public safety grounds.

TNCs like Uber and Lyft opposed the $1 million minimum as too high and insisted the bill is anything but a compromise. Many Lyft and Uber drivers showed up to voice their opposition, most implying that A.B. 2293 would shut down the TNCs and, therefore, their livelihoods.

The other source of opposition came from taxicab associations, which decried the bill for codifying TNCs as distinct from cab companies and subject to different regulations.

Committee Chairman Alex Padilla, D-Los Angeles, expressed support for the measure overall, although he felt it should be subject to further negotiation. He said he feared requiring $1 million coverage during all periods might be too high, but felt that was a matter on which the insurance committee should rule.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Despite setback in Senate, there’s no reason to give up on patent reform

June 19, 2014, 11:23 AM

The last few weeks have brought both good and bad news to supporters of patent reform looking to reduce system abuse.

Hopes for legislative action were dashed when a major bipartisan reform bill that enjoyed the endorsement of President Barack Obama was pulled from the Senate calendar.

On the other hand, there are still opportunities for the executive branch to intervene directly, as well as for the courts, which recently have been tougher on plaintiffs pursuing claims based on suspect patents or outright frivolous theories. The moment also offers an opportunity to expand the conversation to international trade.

A patent troll generally has one of two goals: to extract a dubious royalty payment or to block market entrance by a potential competitor.

In regard to the former, the troll often attacks a small business or start-up, claiming to hold the original patent on the product or process its target is selling. The start-up, lacking the resources for a long court fight, settles out of court because it’s the better of two bad options. The cost, nonetheless, is passed on to consumers in the form of higher prices.

This practice – albeit slightly different – has spilled over into international trade.

France Brevets, Taiwan’s Industrial Technology Research Institute, Innovation Network Corporation of Japan and South Korea’s Intellectual Discovery are all examples of state-sponsored patent pools. Over time, they have accumulated thousands of patents, many for products that never made it to market. Their aim is to use weaknesses in patent law to favor companies within their own countries while taking legal action against foreign competitors.

Call this a latter-day version of protectionism: If a product from a foreign company threatens your domestic player, sue, ideally in a place where there are legal weaknesses to exploit.

That results in convoluted litigation such as ITRI’s infringement suit against South Korea’s LG Corp. (and its U.S. subsidiaries) in the U.S. District Court for the Eastern District of Texas – a preferred venue among trolls because it ranks among the highest in the United States in upholding patent claims. This is a further inducement for defendants to settle, even if they have a strong case that the suit is frivolous. And, at the end of the day, consumers pay.

With legislative patent reform dead for the time being, it seems for now the movement to curb this abuse will have to rely on the courts. This means slower movement toward general reform, as court cases often focus on one aspect of the wide range of patent law. But each new ruling in favor of defendants adds to the weight of case law and jurisprudence.

The U.S. Supreme Court dealt a setback to trolling in an early June ruling when it vacated an appeals court decision in an infringement suit brought by Biosig Instruments against Nautilus Inc., an exercise equipment maker, over heart-rate monitors. The Supreme Court said the appeals court, by disallowing only patents that were “insolubly ambiguous,” still left the door open for claims based on vague or indefinite specifications.

For trolls, who in lawsuits often try to stretch broad definitions and descriptions as far as possible, this is a setback: the Supreme Court essentially tightened the definiteness standard beyond mere ambiguity, forcing plaintiffs to be more specific about the definitions and functions of a product or device to make an infringement charge stick.

As these and other court decisions add up, the attractiveness of the United States as a venue for patent trolling suits may diminish.

At the same time, the White House can be more assertive in condemning state-sponsored patent pooling. The practice is questionable under current trade agreements, as it involves governments taking an ownership stake in commercial intellectual property. That creates a conflict of interest, as the state becomes both regulator and rights-holder in the markets over which it presides. The U.S. Trade Representative’s Office and the Department of Commerce need to be more vigilant in protesting these practices.

Legislation may be on hold for now, but that doesn’t mean pressure for patent law reform should stop. Otherwise trolls will continue to abuse weaknesses in patent law and be a drain on domestic and international economic growth.

R Street welcomes Supreme Court decision on patents

June 19, 2014, 10:05 AM

WASHINGTON (June 19, 2014) - The R Street Institute welcomed today’s unanimous decision by the Supreme Court in Alice Corp. v. CLS Bank International, which ruled that Alice Corp.’s patents on abstract ideas implemented with generic computers are invalid.

“While this decision was more narrow than some had hoped, in that it does not fundamentally overturn the notion that software is patentable, we are encouraged that the court expanded the reasoning it used in Bilski v. Kappos to rule a broader category of abstract ideas are not eligible for patent protection,” R Street Senior Fellow R.J. Lehmann said.

“The U.S. Constitution offers limited monopoly rights to inventors to provide incentive for progress in sciences and the useful arts, but too often in recent years, overly broad and inappropriately granted patents have been used as the basis of litigation that has choked the courts and stifled innovation,” Lehmann added. “This decision offers a step in the direction of reform, but comprehensive action by Congress is still needed to address these problems.”