Out of the Storm News

Free markets. Real solutions.

Cameron Smith

August 29, 2014, 11:23 AM

Cameron Smith is the principal of Smith Strategies LLC, a regular columnist for the Alabama Media Group and a senior fellow with the R Street Institute.

Prior to founding Smith Strategies, Cameron was vice president and general counsel of the Alabama Policy Institute, where he managed all policy, legal and communications operations.

Previously, Cameron held a number of posts in both the U.S. House and U.S. Senate. He was legislative counsel for Sen. Jeff Sessions, R-Ala., on the Senate Judiciary Committee. He ran the House Intellectual Property Caucus and was counsel to Rep. Tom Feeney, R-Fla. He also served as counsel to Rep. Geoff Davis, R-Ky., where his primary legislative project was the REINS Act, which would make Congress significantly more accountable for the impacts of federal regulation.

Cameron is a graduate of Washington and Lee University and the University of Alabama School of Law. He is a member of the Tennessee and Alabama bars. He resides with his wife, Justine, and their three sons in Vestavia Hills, Ala.

Email: csmith@rstreet.org

Slowing the rise of the oceans

August 29, 2014, 10:28 AM

From Al Gore to the leadership of groups like the Union of Concerned Scientists, environmentalists long have warned that global disaster is certain unless we do something about rising sea levels. The “something” that most on the left want is to remake our energy economy and increase government control over energy use in order to cut down on human emissions of greenhouse gases that cause the thermal expansion of ocean water and the melting of polar ice sheets.

A look at the facts reveals a less alarming, although still disconcerting, environmental picture. When it comes to combating and adapting to rising sea levels, many of the factors most within our control are not directly associated with the climate.

The environmentalists deserve some credit. It is beyond dispute that greenhouse gas emissions are the most important factor behind the global rise in sea levels. Releasing carbon dioxide and other greenhouse gases into the atmosphere, largely from burning fossil fuels, traps heat from the sun. Over the past two decades, global sea levels have been rising at a rate of slightly more than 0.11 inches per year.

But projections about the future extent of the trend remain too imprecise to be of practical use to policymakers. The United Nations’ Intergovernmental Panel on Climate Change predicts that the “most likely” case is that global sea levels will rise between one and four feet over the next century. A continuation of the trend of the last 20 years (roughly twice the average rate most scientists believe seas rose over the 20th century) would result in total sea-level rise near the low end of the IPCC projections.
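
In rough numbers, a straight-line continuation of that rate over a full century works out to

$0.11 \,\text{in/yr} \times 100 \,\text{yr} = 11 \,\text{in} \approx 0.92 \,\text{ft},$

just under the one-foot low end of the IPCC range.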

Although many models indicate the rate will accelerate, whether it does, and by how much, will make an enormous difference. A one-foot rise would be reasonably easy to deal with in many places, while four feet could be catastrophic. And complex climate models have a dismal record of predicting the future.

It’s also important to note that greenhouse gas emissions are not the only factor in climate change, and that climate change is not the only cause of rising sea levels. In North America, relative sea levels are changing not only because of rising waters, but also because of sinking landmass. The East Coast has been slowly sinking for thousands of years. The intersection of these two phenomena, rising seas and sinking landmass, could make sea-level rise doubly destructive in certain parts of the country.

For instance, along the Gulf Coast of Louisiana, sea-level rise appears to be happening at nearly a dozen times the global rate: roughly an inch a year. The reasons are complicated, but relate to tectonic shifts in the ocean floor. The consequences could be disastrous. Much of southern Louisiana may be inundated in the next century, and parts of Texas may not be that far behind. And, if the projections are right, controlling greenhouse gas emissions would do almost nothing to change things.

Development has made an already severe natural problem worse. A century-long project to control the Mississippi-Missouri River system and prevent flooding has reduced the amount of silt the river carries. This results in “silt starvation” that is slowly eating away at the land in the Mississippi Delta.

Also contributing to this kind of erosion have been the heavily subsidized National Flood Insurance Program and local economic incentives to build in river valleys and along the coasts. Other causes are more bizarre. The nutria or “river rat,” a South American critter that fur farmers brought to the United States in the 1940s, has no natural predators here and feasts on the plant life of coastal marshes. Along the Chesapeake Bay and in other areas, river rats eat so many plants that the land is left bare and gets washed away.

For the regions most likely to face dramatic impacts from rising sea levels in the near future, no amount of emissions control will make a major difference. In fact, for some, the only solution may be to relocate people and property away from the coast.

At a minimum, in our most densely populated hurricane-prone areas, like the New York/New Jersey and Miami metropolitan areas, large investments in “structural mitigation,” seawalls and the like, are almost certainly going to be necessary to protect lives and property. Spending several billion dollars to protect Manhattan from rising seas and hurricane-driven storm surges will almost certainly offer a very good return on investment, even if 21st-century weather patterns aren’t significantly different from those of the last century. A vigorous nutria control and eradication effort is also in order, as are local zoning standards that take potential sea-level rise into account.

In many cases, however, government would do best by simply getting out of the way. Subsidies for flood insurance, which Congress recently voted to extend, need to be eliminated, as do all other federal and state programs that provide implicit and explicit subsidies to build in low-lying areas. A comprehensive review of Army Corps of Engineers river control projects, with an eye to reducing silt starvation, is long overdue.

Climate change presents its own set of challenges on the global level, and we will need ways to respond to that, as well. Some changes to energy policy are likely justified. But the favorite policies of many environmentalists—heavy-handed regulation of carbon dioxide emissions and subsidies for trendy alternative energy sources like wind and solar power—are not effective ways to help the areas of this country most threatened by rising seas and falling coasts. Policymakers can deal with sea-level rise. But they don’t have to follow the environmental left’s playbook to do it.


Ian Adams

August 28, 2014, 3:16 PM

Ian Adams is senior fellow and California director of the R Street Institute.

Most recently, Ian was the Jesse M. Unruh Assembly Fellow with the office of state Assemblyman Curt Hagman, R-Chino Hills, while Hagman served as vice chairman of the California Assembly Insurance Committee. In this role, Ian was responsible for appraising legislative and regulatory concepts, providing vote recommendations for bills in committee and on the Assembly floor and performing a host of other public affairs duties.

Previously, while still enrolled at the University of Oregon School of Law, Ian was a legal extern with the office of state Rep. Bruce Hanna, R-Roseburg, who was then co-speaker of the Oregon House of Representatives. Ian’s prior experiences include serving as a law clerk for the Personal Insurance Federation of California and as an intern in the office of former Gov. Arnold Schwarzenegger.  He also works pro bono as registered in-house counsel with Transitional Living & Community Support.

Ian is a 2009 graduate of Seattle University, where he earned bachelor’s degrees in history and philosophy. He received his law degree from the University of Oregon in 2013.

Phone: 916.751.5269

Email: iadams@rstreet.org


In the CDC-FDA e-cigarette study, ‘probably not’ is the new ‘yes’

August 28, 2014, 1:38 PM

Assume that you conducted a survey in which you posed two multiple-choice questions:

“Do you think you will smoke a cigarette in the next year?”

“If one of your best friends were to offer you a cigarette, would you smoke it?”

Respondents could choose from these answers:

Definitely yes

Probably yes

Probably not

Definitely not

You’d add up the “definitely yes” and “probably yes” responses to tally those intending to smoke, and total the negative responses to gauge how many are unlikely to smoke.

This would be a straightforward and uncomplicated task, unless you were a CDC or FDA analyst, milking the National Youth Tobacco Survey for scary numbers.

On Aug. 25, the CDC issued its latest sky-is-falling press release, suggesting that e-cigarettes are driving teenagers to smoke. The release focused on a study coauthored by CDC and FDA researchers whose core finding was:

Among non-smoking youth who had ever used e-cigarettes, 43.9 percent said they intended to smoke conventional cigarettes within the next year, compared with 21.5 percent of those who had never used e-cigarettes.

To reach this conclusion, the CDC-FDA redefined “probably not” to mean “yes, I will.” Adolescents who answered “probably not” to either of the two questions were classified as intending to smoke.

The feds used 2013 data that is not yet public, but using the 2012 NYTS, I can show you how much the distorted definition matters.

This table shows the numbers of never and ever users of e-cigarettes classified by intention to smoke, using the CDC-FDA definition (i.e., “probably not” means “yes, I will”). The percentages in parentheses are weighted to reflect the population of the survey.

                        Never Users of E-cigarettes   Ever Users of E-cigarettes
No intention to smoke   13,312  (76%)                  70  (41%)
Intention to smoke       4,360  (24%)                  80  (59%)
All                     17,672 (100%)                 150 (100%)


Using conventional definitions, I produced the table below. Any two yes responses defined intention to smoke, any two no responses meant no intention, and mixed responses were just that: mixed. These are my results:

                        Never Users of E-cigarettes   Ever Users of E-cigarettes
No intention to smoke   17,103  (97%)                 128  (81%)
Mixed intention            422   (2%)                  13  (11%)
Intention to smoke         147   (1%)                   9   (8%)
All                     17,672 (100%)                 150 (100%)

This paints a completely different picture of the e-cigarette situation. The appearance that adolescents who have ever used an e-cigarette (even one puff) might be more likely to intend to smoke is based on the responses of just nine survey participants.
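
For concreteness, here is a minimal Python sketch of the two classification rules as described above. The response strings are illustrative stand-ins; the NYTS records answers with its own numeric codes.

    # Minimal sketch of the two classification rules described above.
    # Response strings are illustrative; the NYTS uses numeric codes.
    YES = {"definitely yes", "probably yes"}
    NO = {"definitely not", "probably not"}

    def cdc_fda_label(q1, q2):
        # CDC-FDA rule: only "definitely not" to both questions counts
        # as no intention; a "probably not" to either question counts
        # as intending to smoke.
        if q1 == "definitely not" and q2 == "definitely not":
            return "no intention"
        return "intention"

    def conventional_label(q1, q2):
        # Conventional rule: two yes answers = intention, two no
        # answers = no intention, one of each = mixed.
        if q1 in YES and q2 in YES:
            return "intention"
        if q1 in NO and q2 in NO:
            return "no intention"
        return "mixed"

    # The same respondent lands in opposite categories under the two rules:
    print(cdc_fda_label("probably not", "probably not"))       # intention
    print(conventional_label("probably not", "probably not"))  # no intention

Applied across all respondents, these two rules produce tallies like those in the two tables above.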

Carl Phillips has extensive comments at the CASAA blog here and here.

This is not the first time that a highly questionable definition has been used to fabricate a highly speculative gateway claim. I assure you that this is probably not the last bogus CDC analysis of youth e-cigarette use.

Can Prop 103 handle driverless cars?

August 27, 2014, 8:00 AM

Earlier this year, Google announced the introduction of a completely driverless car. On this blog, Eli Lehrer discussed the insurance implications of such a development. Among other things, he posited that, as drivers become less involved in the decisions made by their cars, the associated risks of operating the vehicle will go down.

It stands to reason that a reduction in the risk presented by a driver’s behavior may lead to a reduction in the amount that a driver will pay for auto insurance – provided that the current model of individual vehicle ownership and auto insurance coverage persists.

Insurance products designed to cover autonomous vehicles will likely need to parallel whatever evolutionary course of technological development autonomous vehicles take.

In the near term, the autonomous vehicles taking to the roads will likely be incremental in their approach to reducing driver involvement. For the sake of continuity alone, those vehicles will look and operate more like the Lexus SUVs driving around Mountain View today than the grinning ovoid pods touted on Google’s blog. Early adopters of autonomous vehicles will still enjoy the presence of steering wheels and pedals that allow them to maintain control over the vehicle.

In California, it is an open question whether and how early autonomous vehicle adopters will enjoy auto insurance rates that reflect the reduced risk their limited involvement will represent.

The current system for determining rates and premiums for auto insurance policies is dictated in code by 1988’s Proposition 103. California Insurance Code Section 1861.02(e) lays out with great specificity a list of rating factors that insurers are obligated to use as they develop auto insurance rates. The list of rating factors is divided between mandatory and optional factors. Today, there are three mandatory factors and 16 optional factors. Additional rating factors may be adopted via regulation by the insurance commissioner, so long as those factors have a “substantial relationship to the risk of loss.”

What is significant about Prop 103’s mandatory rating factors is that they have very little relationship to the risk of loss presented by the operator of an autonomous vehicle. Consider, in decreasing order of importance, what the three rating factors are now:

  1. The insured’s driving safety record.
  2. The number of miles he or she drives annually.
  3. The number of years of driving experience the insured has had.

As operator influence over the course and speed of a vehicle wanes, so too will the importance of an operator’s driving record and the number of years of experience they have sitting in their vehicle. Of Prop 103’s three mandatory rating factors, only the number of miles annually driven will bear directly on the risk presented by autonomous vehicle operation.

Because of Prop 103’s rigid control of rating practices, absurd scenarios involving autonomous vehicle insurance policies are not hard to imagine. For instance, an autonomous vehicle operator with a poor conventional driving history who operates her Google car very little could pay more for her insurance than another adopter with a better history who operates his autonomous vehicle a great deal. The second operator, by virtue of his far greater mileage, would actually present the greater risk, yet the old rules would make the first pay more, unnecessarily.
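
To make the inversion concrete, here is a toy rating function in Python. The weights and base rate are invented purely for illustration; Prop 103 dictates the ordering of the mandatory factors, not any particular numbers.

    # Toy rating function; weights and base rate are invented. Prop 103
    # requires the mandatory factors in decreasing order of importance:
    # driving record > annual miles > years of experience.
    def premium(base, record_surcharge, annual_miles, years_experience):
        return base * (1.0
                       + 0.60 * record_surcharge           # worst record = 1.0
                       + 0.25 * annual_miles / 15_000.0    # mileage exposure
                       + 0.15 / max(years_experience, 1))  # inexperience load

    # Operator A: poor record, but her autonomous car rarely drives.
    # Operator B: clean record, but his autonomous car drives constantly.
    print(premium(1000, 1.0, 2_000, 10))   # ~1648: A pays more...
    print(premium(1000, 0.0, 30_000, 10))  # ~1515: ...than B, despite 15x the mileage.

Because the driving-record factor must dominate, the low-mileage operator pays more even though mileage is the factor most tied to the actual risk of autonomous operation.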

To avoid absurdity, policymakers, regulators and stakeholders will have to craft a new system that can accommodate the risks presented by autonomous vehicles. And yet, while it seems inevitable that some changes will be needed, changing Prop 103 is not a straightforward task. On the one hand, a legislative fix would require a two-thirds vote of the Legislature on a measure that courts could deem to “further the purposes” of Prop 103. On the other hand, interested parties could qualify an initiative and work to convince 50.1 percent of Californians of the merits of their new system.

In either case, the sooner that all parties can agree upon a system and an approach, the better.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Delaware becomes first state to include digital accounts in estate law

August 26, 2014, 4:08 PM

Say you have an iTunes library that’s the envy of the most obsessive music collectors. Or a Facebook account with thousands of friends who obsessively share and “like” anything you post. Or a Twitter account that can drive media discourse due to its massive number of followers.

And then, you die.

What happens to these very real forms of digital and (in some cases) social capital?

Believe it or not, under the status quo, your heirs could (and probably would) be completely shut out of any inheritance of these things. In fact, given that Facebook and Twitter’s current terms of use explicitly foreclose people from logging into accounts they don’t own, any attempt to claim a dead relative’s social media account could very well lead to the destruction of that account, along with whatever was built into it.

Even worse, iTunes has no mechanism by which ownership can be transferred to an heir, which in the real world is like having one’s record collection go up in flames the instant one dies. Something clearly needs to be done to remedy these problems.

Fortunately, in at least one state, something has. This month, Delaware enacted the Fiduciary Access to Digital Assets and Digital Accounts Act, which permits people to leave instructions in their wills for social media and email accounts, blogs, iTunes and cloud storage lockers like Dropbox to be passed on to their heirs. And if Suzanne Brown Walsh of the Uniform Law Commission has her way, similar laws will be enacted in all 50 states.

On the one hand, the fact that such a law passed apparently without resistance is welcome news. On the other hand, the fact that a law like this is only now being pushed, despite the fact that iTunes and cloud computing are both years old, is a troubling sign of how slow the law is to change in an era when planned obsolescence sometimes happens in mere months, rather than years. It also is emblematic of a failure by lawmakers to view digital assets as real in the same sense as actual physical ones, despite the fact that they often mimic physical assets.

Just as iTunes is, in principle, no different from a collection of compact discs, blogs can easily be thought of as collections of correspondence and/or private diaries that, in an older era, might have been passed down in actual physical form. Emails are clearly analogous to letters. Cloud storage of the type practiced by Dropbox may as well be considered the equivalent of a safe-deposit box, and no one would contest the right of loved ones to inherit the contents of those.

While most of the digital goods under consideration look and act like physical goods of times past, another element makes it especially puzzling that the issue has taken so long to address. Unlike physical goods, which depreciate with the effects of time, it’s taken for granted that the Internet is forever. The existence of the Wayback Machine, as well as numerous other means by which archived materials can be recovered from the digital netherspace (if they even need to be recovered at all), speaks to this sense of agelessness. If a person’s perishable physical assets can be passed down, then surely goods that can last forever without aging or depreciating should be covered the same way.

Whatever the reason it has taken so long to update the law, hopefully the rest of the nation will look to Delaware as an example. And hopefully, in the future, the law can change in response to technological shifts at the speed of email, rather than lagging behind like some imitation of the Pony Express.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Justice Department confirms Lois Lerner emails still exist, proving that IRS officials are a pack of liars

August 26, 2014, 2:53 PM

Well, look at that. The Justice Department has confirmed to the conservative watchdog group Judicial Watch that the emails of several IRS officials, including those of Lois Lerner, still exist, despite claims that they’d been lost due to hard-drive crashes and destroyed backup tapes:

Judicial Watch President Tom Fitton said Justice Department lawyers informed him that the federal government keeps a back-up copy of every email and record in the event of a government-wide catastrophe.

The back-up system includes the IRS emails, too.

“So, the emails may [be] inconvenient to access, but they are not gone with the [broken] hard drive,” Judicial Watch spokeswoman Jill Farrell told the Washington Examiner.

Judicial Watch is now seeking the release of the emails, which Justice Department lawyers say would be hard to find because of the significant size of the backup system.

Yeah, Judicial Watch is probably the least of the Justice Department and IRS’s concerns. The House Oversight and Government Reform and House Ways and Means committees, both of which are investigating the IRS’ targeting of conservative groups, aren’t going to care how difficult a task it will be to recover these emails.

There’s also IRS Commissioner John Koskinen, who has been uncooperative with congressional committees and investigators. He claimed that the emails had been lost due to technical problems and that there was nothing that could be done to recover them. And then the IRS official in charge of document compliance revealed that the backup tapes on which Lerner’s emails are stored could still be around. Koskinen confirmed that fact a day or two later.

Needless to say, House committees investigating the IRS scandal could have a field day with this. One would imagine that Koskinen will, once again, be forced to go to the Hill for hearings on the issue, where he’ll face harsher questioning, perhaps even more assertions that he lied under oath during his previous visit. And the thing is, both Koskinen and the IRS deserve every bit of scrutiny they’ve received and what will undoubtedly continue to come their way.

Ten great policy panels to vote for at SXSW

August 26, 2014, 12:44 PM

I recently wrote a short post highlighting the four panels we put forward for the 2015 SXSW Interactive conference, including the one I’ll be on (along with folks from the Electronic Frontier Foundation, the Center for Democracy and Technology and Internews). Here at R Street, we think this is a great way to bring free-market ideas to one of the biggest annual gatherings of technologists, activists, and entrepreneurs.

To determine its final program, SXSW relies in part on a public voting process to sort through the thousands of submissions it gets each year. Now through Sept. 6, you can vote for all the panel ideas you like. To participate, just go to panelpicker.sxsw.com.

There are a lot of other great organizations putting together policy-focused events for next year. Here are some of the best:

  1. Putting Policymakers to Work for You
    Featuring: Grover Norquist, Americans for Tax Reform; Michael Petricone, Consumer Electronics Association; Adrian Fenty, Perkins Coie; Julie Samuels, Engine Advocacy
  2. Legal Hackers: A Global Movement to Reform the Law
    Featuring: Jameson Dempsey, Kelley Drye & Warren; Amy Wan, Patch of Land; Phil Weiss, Fridman Law Group; Dan Lear, Avvo Inc.
  3. Government Surveillance: How You Can Change It
    Featuring: Harley Geiger, CDT; Rep. Zoe Lofgren, U.S. House; Christian Dawson, Internet Infrastructure Coalition; Ben Young, Peer 1 Hosting
  4. Disruptive Innovators Under Attack
    Featuring: Gary Shapiro, CEA; Robert Scoble, Rackspace; Kevin O’Malley, TechTalk
  5. Keeping America Competitive: Tech Policy in 2015
    Featuring: Sen. Jerry Moran, U.S. Senate
  6. Public Policy and Ridesharing: Lyfting Communities
    Featuring: Rep. Blake Farenthold, U.S. House; Sara Weir, National Down Syndrome Society; Chris Massey, Lyft; J.T. Griffin, Mothers Against Drunk Driving
  7. Privacy Matters: Baking Privacy Into Your Apps
    Featuring: Gautam Hans, CDT; Jon Callas, Silent Circle; Deepti Rohatgi, Lookout
  8. Operation Choke Point and Alternative Currencies
    Featuring: Mark Calabria, Cato Institute; Perianne Boring, Chamber of Digital Commerce; Ashe Schow, Washington Examiner; Cathy Reisenwitz, Sex and the State
  9. Friend or Foe? How the Government Impacts Startups
    Featuring: Rep. Darrell Issa, U.S. House; Rep. Jared Polis, U.S. House; Virginia Lam, Aereo; Rachel Wolbers, TwinLogic Strategies
  10. How Public Policy Protects Patents and Startups
    Featuring: Rep. Peter DeFazio, U.S. House; Vishal Amin, U.S. House; Tim Sparapani, Application Developers Alliance; Kate Doersken, Ditto.com
  11. Bonus: The Changing Privacy Landscape
    Featuring: Nuala O’Connor, CDT; Lee Rainie, Pew Research Center
This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

How big government and cronyism are slowing the growth of solar in the south

August 26, 2014, 12:01 PM

An Aug. 9 article in the Los Angeles Times explored why states in the U.S. South that get plenty of sunlight — such as Virginia, South Carolina and, of course, the Sunshine State itself, Florida — are not embracing solar.

The answer the article found was not a lack of demand. In fact, there is a lot of demand for the energy source. Rather, governments in these states have made it extremely difficult for solar companies to do business, through regulations and taxes. Which industry is supporting these taxes and regulations? The utility industry, which largely generates power from conventional and dirtier fuels such as coal and natural gas, and largely enjoys a state-protected monopoly on service.

The first thing that needs to be understood is how a utility’s business model works. While some states allow nominal competition among utilities, the way it works in most of the country is that a utility goes to the state government and essentially gets a monopoly to build and sell electric services within a particular area of the state. In exchange for letting state utility regulators set utility rates, utilities are guaranteed a profit on every kilowatt hour of electricity they sell.

The very recent rise and proliferation of solar companies, new ways of financing solar installation, continued generous federal subsidies for renewables and the falling price of solar panels combine to make rooftop solar a viable option for more and more homeowners across the country. The growth of rooftop solar and increased energy efficiency are threatening the old utility business model. Not only are rooftop solar customers using less energy from the grid, but they’re able to sell excess power back to the grid at favorable rates through net metering laws, which have been in place at the federal level since 2005. These factors have traditional utilities up in arms, and they have been trying to kill new solar projects, especially in the Southeast, which has lots of sunshine. This has set up a political battle royale between heavily monopolized crony utilities and federally subsidized crony solar companies.

One of the weapons traditional utilities have used against solar companies is to threaten lawsuits against companies that offer to lease solar panels and systems to homeowners. Until this month, South Carolina was one of the states that treated companies leasing solar panels and systems as energy providers competing with the official monopolies, effectively outlawing the practice. Florida, Georgia and North Carolina still outlaw solar leasing and/or power purchase agreements (PPAs), which essentially keeps solar a privilege of the more well-to-do. In the rest of the Southeast, the regulatory status is still unclear, because most states have not explicitly legalized the practice. (Texas is the only state in the South where solar leasing/PPAs are explicitly legal.)

Some southern states also make it difficult to install solar systems by assessing taxes and fees on solar that don’t exist in other places. States in the region also make the solar installation process difficult with burdensome laws and a heavily complicated permitting process. These obstacles have combined to place the Southeast far behind regions of the country like New England, which isn’t exactly known for its sunny skies, in solar-electricity production.

What southern states need to do is simply open up a free market in energy in order to take advantage of the solar boom. State legislatures should repeal bans on solar leasing and PPAs. This will make solar affordable for middle-class homeowners instead of merely a privilege for the wealthy. States also need to repeal any and all special taxes and fees on solar panels and equipment and make solar panels and systems on residential properties property tax exempt. Across the South, lawmakers should streamline the permitting process to install solar panels and systems in ways similar to what California has recently done.

The South could and should take the lead in energy deregulation. Currently, no southern state except Texas has deregulated electricity. Not only has Texas experienced lower rates since it deregulated in 2002, but a variety of renewable options have become available to Texans, who are now able to choose their energy provider. Considering the near obsolescence of the old utility business model, other southern states should consider deregulating their electric grids as well.

The ideal free-market energy program would be to end all energy subsidies, whether for so-called “green” energy or conventional fossil fuels. But until the federal government moves forward on that front, southern states can and should open up their own energy markets to allow consumers to switch to solar energy. The South can and should be an example of promoting both free markets and a cleaner environment. With free-market energy, it can have both.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Sherlock Holmes and the case of the broken copyright law

August 25, 2014, 8:00 AM

Arthur Conan Doyle’s famous fictional detective Sherlock Holmes is fond of saying that one must never twist facts to suit theories, instead of theories to suit facts. Given this, Holmes might have had quite a few problems with the lawsuit involving the Conan Doyle Estate that was resolved this June.

The case originated with an author named Leslie Klinger, who wanted to publish his own book of Holmes stories, and involved a dispute over whether the Holmes character had, in fact, entered the public domain or whether the Conan Doyle Estate could still claim rights to the character.

As far as twisting facts to suit theories, the Conan Doyle Estate practically taught a master class in the business. The Hollywood Reporter describes their argument and the reception it received:

On appeal, the Doyle camp attempted to raise an argument that had been unsuccessful at the district court: that Sherlock Holmes is a ‘complex’ character, that his background and attributes had been created over time, and that to deny copyright on the whole Sherlock Holmes character would be tantamount to giving the famous detective ‘multiple personalities.’

On Monday, after dispensing with the Doyle estate’s other argument that the controversy wasn’t ripe enough to be adjudicated, 7th Circuit Judge Richard Posner rejected that assessment. ‘We cannot find any basis in statute or case law for extending a copyright beyond its expiration,’ writes Judge Posner. ‘When a story falls into the public domain, story elements — including characters covered by the expired copyright — become fair game for follow-on authors, as held in Silverman v. CBS Inc.’

Judge Posner explicitly cited the example of William Shakespeare, who famously adapted stories taken from other sources (“The Tempest” is Shakespeare’s only completely original work), and defended the idea that allowing multiple interpretations of stories and characters will also encourage creativity. Posner handed the decision of which interpretation of a given character is truly the best to the market. He steadfastly refused to treat the Conan Doyle Estate’s request as anything but what it truly is – an appeal for nearly perpetual copyright. Under the estate’s theory of the law, Holmes would be under copyright for no less than 135 years.

It is fortunate that Judge Posner saw through the Doyle family’s attempt to keep under proverbial lock and key a character from whose legacy they have been profiting for more than a century (while mostly allowing the character himself to languish, unused). What is less fortunate is that so many other powerful actors in American politics seem to want a similar degree of punitive power. The Walt Disney Co., for instance, lobbies frantically to keep the cartoon “Steamboat Willie” (the first cartoon featuring Mickey Mouse) out of the public domain, presumably out of fear of losing creative control of its treasured mascot.

Make no mistake, there is a role for copyright to play in creative works. However, the current regime serves neither the artists it is supposed to protect nor the entrepreneurs who live in fear of lawsuits. Rather, the system is, at best, an outdated relic of the pre-digital age. At worst, it is a system rigged solely to benefit middlemen with influential lobbyists, instead of producing an environment where the creative can be assured control of their work, but the innovative can also be assured of the freedom to build on others’ achievements.

One of the sillier loopholes to survive into the modern day in copyright law is the fact that terrestrial radio stations are not required to pay royalties to the artists whose work they play. The theory underlying this exemption originally was that terrestrial radio was a form of free advertising, which was doubtless true at a time when radio was the primary means of exposure to new music. These days, the standard serves only to allow broadcasters to profit off others’ work. What’s more, broadcasters themselves seem aware that this “free advertising” logic is silly, given that they reject it when it is used as a rationale to lower their retransmission consent fees. If radio stations can only survive by freeloading off others’ creativity, their business model needs to change.

Moreover, it’s absurd that it took Rep. George Holding’s introduction of the RESPECT Act to call attention to the fact that artists who are still alive, such as Aretha Franklin, aren’t entitled to royalties on recordings of theirs made prior to 1972. Even the staunchest opponent of copyright would probably agree that an icon like Franklin deserves to profit from all her contributions to the arts, and that simply cutting off protection at a certain year is arbitrary and silly.

So we have a system that, on the one hand, protects the obsolete business models of broadcasters at the expense of artists. What about the other side? Here, abuses abound.

To begin with, copyright damages desperately need to be reformed. Consider this alarming paragraph from a San Francisco Chronicle article in 2010:

Did you ever imagine you could be held liable for copyright infringement for storing your music collection on your hard drive, downloading photos from the Internet or forwarding news articles to your friends?

If you did not get the copyright owner’s permission for these actions, you could be violating the law. It sounds absurd, but copyright owners have the right to control reproductions of their works and claim statutory damages even when a use does not harm the market for their works.

Besides these absurd restrictions on entirely harmless activities, the actual damages awarded in copyright lawsuits can reach obscene proportions. One file sharer was ordered to pay $2 million for illegally downloading 24 songs. That’s higher than many wrongful-death awards.

The problem here is simple: on this point, copyright law hasn’t been updated since the 1960s, when infringement was usually for large-scale commercial purposes. In the same way that treating terrestrial radio as “free advertising” harkens back to a mindset from that era, treating piracy as anything but free advertising is economically and technologically backwards, at least for the music industry. According to a study from the BI Norwegian School of Management, pirates are 10 times more likely to buy music digitally. Another study, by Columbia University, found that peer-to-peer users purchase 30 percent more music than non-filesharers.

At least some actors within the entertainment industry seem to recognize this, which is why the creators of “Game of Thrones” rightly responded to news that their show is the most pirated on the Internet by declaring themselves flattered.

Who benefits from these backward rules? Only industries with a pathological aversion to change, such as the film industry, which, despite its constant alarmist rhetoric about piracy, is doing quite well, a couple of bad summers notwithstanding. That industry’s reaction to the VCR, as well as its support of the totalitarian SOPA/PIPA legislation, should have shown the lengths to which it will go in order to avoid having to adapt. But in a world where freer copyright law gives us Shakespeare, and modern copyright law gives us Transformers 2, there is little doubt whence the better art comes.

Some language in the final paragraph of this post was altered to reflect the film industry’s recent global financial performance.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Lessons from Napa’s ‘winequake’

August 25, 2014, 7:22 AM

In the early hours of Sunday morning, Northern California shook with the largest earthquake it has experienced since 1989. The South Napa Earthquake, as it is being referred to by the U.S. Geological Survey, was a magnitude 6.0 event that was felt as far as 150 miles from the epicenter. USGS estimates that within that radius “15,000 people experienced severe shaking, 106,000 people felt very strong shaking, 176,000 felt strong shaking, and 738,000 felt moderate shaking.”

Preliminary damage reports have prompted the declaration of a state of emergency by Gov. Jerry Brown. Dozens of people have been injured and electrical service has been interrupted in much of the area. In spite of the severity, given the quake’s location, much of the post-event coverage posted online has focused on piles of shattered wine bottles…

Unfortunately for those private homeowners with extensive wine collections, loss of their favorite vintage may not be covered by insurance. Damage caused by earth movement, like damage caused by flooding, is often excluded from standard homeowners coverage. Thus, in the case of the South Napa Earthquake, for those wine connoisseurs with only homeowners insurance, their wine likely will be an uncovered loss. Those more fortunate or prudent might have had their collections insured under a separate “all-peril,” itemized floater policy.

In the end, earthquake coverage is a sensible proposition for the millions of Californians who live daily with heightened earthquake risk. It is easy to forget that five of the 10 most costly earthquakes to strike the U.S. took place in California. Yet, only a fraction of Californians own an earthquake policy.

The single largest writer of earthquake coverage in California is the California Earthquake Authority (CEA). The CEA is a publicly managed, privately funded, state residual-market entity that was created after the 1994 Northridge earthquake to provide the coverage necessary to maintain the viability of the homeowners insurance market, since California mandates that companies that sell homeowners insurance also sell earthquake coverage.

According to Glenn Pomeroy, the CEA’s CEO, California’s current state of earthquake coverage is “not a rosy picture.” Whether motivated by cost consciousness or obliviousness to the risk, only 11 percent of Californians with homeowners insurance have an earthquake policy, down from a high of 30 percent two decades ago.

The CEA provides a number of policies that range broadly in price and coverage options so that Californians of all walks of life can ensure their property is covered.

On the lower end, the CEA offers a base-limits policy. This product is designed to provide basic protection against earthquake damage. The policy will pay to repair or replace a dwelling – subject to a deductible – but it excludes some items from coverage such as pools, patios, fences, driveways and detached garages. Only covered, structural damage counts toward meeting the deductible. The base-limits CEA policy pays up to $5,000 to repair or replace personal property and provides $1,500 for any additional living expenses incurred if the home is uninhabitable while being repaired.

More recently, the CEA began offering a new “Homeowners Choice” product, which offers additional selections in coverage and more immediate policy benefits. The product allows a consumer to put together a more customized policy, with separate deductibles for the dwelling and the personal property within. Significantly, this policy offers insureds a choice of a 15 percent or 10 percent deductible for each coverage. It also provides $1,500 in coverage, subject to no deductible, for emergency repairs to protect the covered property from further damage, secure the residence premises or restore habitability of the dwelling.
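
As a rough illustration of how these percentage deductibles work (the figures below are invented and do not come from CEA policy documents), the deductible is computed against the coverage limit rather than against the loss:

    # Hypothetical illustration of a percentage deductible; figures are
    # invented and do not come from CEA policy documents.
    def claim_payment(dwelling_limit, deductible_pct, covered_damage):
        # The deductible is a percentage of the dwelling limit, not of
        # the loss; payment is capped at the limit.
        deductible = dwelling_limit * deductible_pct
        return max(0.0, min(covered_damage, dwelling_limit) - deductible)

    # With a $400,000 limit and a 15 percent deductible, the first
    # $60,000 of covered structural damage falls to the homeowner.
    print(claim_payment(400_000, 0.15, 90_000))  # 30000.0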

Many CEA participating insurers offer CEA’s higher coverage limits to their policyholders. For an additional premium, up to $100,000 in personal property coverage and $25,000 for Additional Living Expenses/Loss of Use coverage is available.

Homeowners in Napa with CEA policies who are currently picking through the remains of their wine collections might take heart. Provided their loss is substantial enough, they, unlike their neighbors with no such policies, may be covered.

For other Californians, this could be a teachable moment on earthquake under-insurance, and one they can act on immediately, since the CEA does not restrict the buying or selling of its products in any way after an earthquake. For policy information, contact a participating insurer today.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Mouth cancer facts

August 22, 2014, 10:58 AM

Baseball star Curt Schilling says he has mouth cancer that was caused by chewing tobacco. His announcement has generated considerable interest in mouth cancer, its frequency and causes.

What is mouth cancer?

Mouth cancer typically appears in the lining of the mouth; it may start as an ulcer or red area that is discovered in a dental or medical exam. The phrase is often used incorrectly to include cancers of the throat.

Schilling did not disclose the location of his cancer, but he did say that he found a lump in his neck. This indicates that the tumor had spread to a lymph node, a condition that more likely suggests a tumor of the throat, rather than the mouth.

How common is mouth cancer?

It is very rare. Mouth cancer occurs with higher frequency in people who have the risk factors I describe below, but it is possible for someone with no risk factors to develop this disease. As I described previously, among 100,000 men age 40 and over, perhaps three or four with no risk factors will develop mouth/throat cancer each year; only one or two of those cases will be mouth cancer.

What causes mouth cancer?

The most common cause of mouth cancer is smoking, which can increase risk tenfold; smokers who drink alcohol have even higher odds. Alcohol abuse raises the odds about fourfold.

Another recognized risk factor is infection with human papillomavirus, a sexually transmitted disease discussed previously. HPV is considered by some experts to be a significant cause of mouth cancer, but precise estimates of risk elevation are not available.

Schilling attributes his cancer to chewing tobacco. There are numerous studies of the risks related to smokeless tobacco. The odds of developing mouth cancer if you use chewing tobacco or moist snuff are about the same as if you didn’t smoke, drink or have HPV. In other words, one or two users out of 100,000 will develop mouth cancer.

Smoking and drinking can produce a cancer anywhere in the mouth, esophagus, voice box and lungs. HPV is generally linked to cancers of the throat. In contrast, the most common location, by far, for mouth cancer in a smokeless tobacco user is at or very close to where the tobacco is placed, normally between the cheek and gum.

While rare, every case of mouth cancer is unfortunate and potentially avoidable. Have your dentist or physician perform a thorough head and neck exam every year.

Scary armored vehicles aren’t the biggest danger of police militarization

August 21, 2014, 12:30 PM

The problem of police militarization has been in the news for more than a week, as the city of Ferguson, Mo. continues to deal with the aftermath of the police shooting of 18-year-old Michael Brown.

Much of the debate and scrutiny of the police response to the Ferguson protests has focused on the Pentagon’s 1033 program. Created in 1997, the program allows state and local law enforcement agencies to stock up on excess military equipment free of charge. The equipment that has been transferred includes armored vehicles, assault rifles, aircraft and other military surplus.

Some of these transfers, like the distribution of an MRAP to the Ohio State University Police Department, truly are bizarre, but it is important to note that not all of the equipment transferred from the Department of Defense to local and state police is lethal. Items as mundane as office equipment also are transferred under the 1033 program.

The real problem of police militarization is not, or not primarily, the DOD equipment police can acquire. Many of the arguments against the 1033 program, in fact, sound rather suspiciously like arguments of gun control advocates, in that they presume restrictions on inanimate objects will cause crime – or, in this case, police brutality – to decrease.

In fact, it’s paramilitary-style policing tactics such as “stop and frisk” that really contribute to the distrust of police in minority communities. As I wrote last week in a piece at Rare, police militarization is an attitude that casts the police as at war with the people they’re supposed to serve and the communities of which they are a part.

We have seen some of this attitude on display in Ferguson. Police have arrested and threatened journalists covering the protests. In some of the most infamous pictures of the violence, police officers in paramilitary gear confront protesters and deploy snipers against them. In one instance, a police officer pointed his assault rifle at unarmed protesters and threatened to kill them. When asked for his name and badge number, the officer allegedly replied with profanities. As of this writing, that officer has been suspended, pending investigation.

Another example of the mindset can be found in a Washington Post op-ed written by a veteran Los Angeles cop with the provocative headline: “I’m a cop. If you don’t want to get hurt, don’t challenge me.” Such inflammatory rhetoric from peace officers only serves to separate the police from the people they’re supposed to serve, to make ordinary citizens afraid of the police.

Militarized policing also has led to overuse of paramilitary SWAT teams. The Cato Institute maintains an interactive map displaying the botched SWAT raids conducted over the past three decades. “Botched raids” include those in which police raided the wrong house, killed a non-violent offender or killed an innocent person.

According to Radley Balko at the Washington Post, it’s estimated there are more than 50,000 SWAT raids in the United States every year. However, only one state, Maryland, requires law enforcement to record when SWAT teams are used and for what purposes. Balko has found that in Maryland, 90 percent of SWAT raids are conducted to serve search warrants and that half of all SWAT raids are in cases where the alleged offenses were non-violent. More states should follow Maryland’s lead and require that police agencies keep records on when and why SWAT is deployed.

Many SWAT raids are to enforce “no-knock” warrants, in which a judge allows police to force their way into a residence without knocking or otherwise announcing their presence. No-knock warrants are supposed to be issued only when police believe announcing their presence would result in the destruction of contraband or put their lives in danger. They have led to tragic consequences, both for police officers and people inside the homes.

In May 2014, a SWAT team raid in Habersham County, Ga. – on what turned out to be the wrong house – left a two-year-old in a coma with a hole in his chest, caused by a flashbang grenade that landed in his crib. Incredibly, the county refuses to pay the medical bills of the child who was injured.

No-knock raids can also be deadly for the cops who execute them, as homeowners sometimes confuse them for intruders. In December 2013, Henry Magee’s home in Burleson County, Texas was subject to a no-knock raid by the county sheriff’s office. Magee mistook one of the deputies for a burglar, shooting and killing him. In February 2014, a grand jury declined to indict Magee for murder.

No-knock and SWAT raids need to be reserved for instances where an officer’s life genuinely would be endangered by serving a warrant conventionally. If a raid is botched or an innocent house is raided, there need to be consequences.

Ultimately, it doesn’t matter how police are equipped, so long as they use the proper tactics and have the proper mindset to serve the public, protect their rights and fight crime. An armed officer with just a revolver and a shotgun can be as abusive as an officer wearing the latest in paramilitary gear and armed with an assault rifle. In the end, what we need most of all is to rebuild the broken trust between the public and the police.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Letter to FCC on Comcast-Time Warner merger

August 21, 2014, 11:25 AM


Federal Communications Commission
445 12th Street SW
Washington, DC 20554
VIA ELECTRONIC COMMENT FILING SYSTEM

Re: Comcast – Time Warner Cable, MB Docket 14-57

Aug. 21, 2014

Dear Commission Members:

On behalf of the R Street Institute, a Washington-based free-market think tank with offices in Sacramento, Calif., Austin, Texas, Columbus, Ohio, and Tallahassee, Fla., I write in support of approving the proposed merger between Comcast and Time Warner Cable (TWC). Our analysis is that the merger is a natural response to changing market conditions, that it offers significant potential benefits for consumers in both the residential and business markets, and that its potential harms are either minimal or mitigated by other existing regulations or market dynamics.

The proposed $45 billion merger takes place in an environment characterized by two trends that have hit cable television providers particularly hard in recent years – a shrinking subscriber base for pay-television services and the rising cost of content acquisition.

Comcast has been losing video customers, on net, for at least five consecutive years, down nearly 10 percent from 24.8 million at year-end 2007 to 22.5 million at the end of the second quarter of 2014. TWC has also lost net video subscribers in each of the past five years, falling more than 17 percent from 13.3 million at year-end 2007 to 11.0 million at mid-year 2014.

Cable companies also have seen rapidly escalating costs to acquire content, driven in part by competition from a profusion of video on-demand services like Netflix, Amazon Prime and Hulu, of which Comcast is a part-owner. Intense negotiations for content – including a 2013 dispute between TWC and CBS – also have led to a number of service blackouts, which unquestionably harm consumers. Reflecting trends across the industry, TWC has seen its per-subscriber content costs rise 24 percent since 2010, while Comcast has seen a 20 percent jump over the past two years.

Currently, nine companies – AMC, CBS, Discovery, Disney, Fox, Scripps, Time Warner Inc., Viacom and Comcast itself – control about 90 percent of the $45 billion market for television content. While the content creation market is not itself a monopoly, growing demand has contributed to higher prices. The market for sports content – provided by the likes of Comcast’s own NBC Sports, as well as CBS Sports, Fox Sports, Time Warner’s TNT and TBS and, especially, Disney’s ESPN – has proven particularly thorny for cable companies. The trend toward “cord cutting,” in which consumers eschew any pay-television service in favor of streaming video on-demand, has raised the stakes for cable companies to retain consumers of live broadcasts, tilting leverage further toward providers of sports content.

According to SNL Kagan, fees paid by distributors to carry cable channels are expected to grow from $31.7 billion in 2013 to $40.8 billion in 2016. The market is led by ESPN, which takes in about $5.54 per month per subscriber, compared to about $1 per month per subscriber paid to broadcast network affiliates for retransmission consent, another rapidly growing cost driver. SNL Kagan projects the broadcast networks – including Comcast’s NBC and Telemundo – will pull in about $3 billion in retransmission consent fees in 2015, with the networks themselves taking roughly a $1.3 billion cut and network-owned affiliates getting the remaining $1.7 billion.

The additional negotiating power wielded by a combined Comcast-TWC could potentially serve as a check on rising content acquisition costs, both in carriage fees and retransmission consent agreements. It should be noted that the extent to which this would reverse the prevailing trend is uncertain and may depend partially on whether the combination spurs further media consolidation in response. To the extent that the combined company can negotiate across any of these markets to reduce fixed costs, it could translate into consumer benefits in the form of lower service bills.

Consumers also should benefit from operating efficiencies that reduce costs without reducing output, and from network upgrades, in particular to TWC’s relatively older and slower service. Comcast has said it expects the combination initially to yield about $400 million in capital expenditure efficiencies and to save about $1.5 billion in operating expenses within three years. The company also has announced it will accelerate TWC’s planned migration of at least 75 percent of its service footprint to all-digital service.

One under-appreciated consumer benefit of a combined Comcast-TWC is the role the larger company could play in the business services sector. While both Comcast and TWC have a modest presence in the market to provide broadband and voice service to small business, the firms are only marginal players in the market to serve large commercial enterprises. Because of the need for a large national service footprint, the business services market traditionally has been dominated by telecoms like Verizon and AT&T. A combined Comcast-TWC, with at least some footprint in all of the 50 largest markets, could for the first time become competitive, with benefits redounding to business services consumers.

Some have raised concerns that a combined company would have undue market power to discriminate in both the video and broadband markets, for instance by privileging its own content over that of competitors. Some of these concerns are relevant to the commission’s own separate industry-wide deliberations on net neutrality regulation, a subject on which R Street has not taken any formal position. However, it is incumbent on those who raise such concerns to demonstrate why a combined Comcast-TWC presents any new issues, or heightens any existing issues, that did not already exist with the companies operating separately.

Comcast is already bound by the FCC’s program carriage rules not to privilege its own content. The company also has already pledged that the seven-year net neutrality agreement it consented to when it purchased NBCUniversal in 2011 would also apply to TWC. What’s more, any incentive a combined Comcast-TWC would have to discriminate against particular content providers operating on its platform would, by necessity, be balanced against consumer demand for that same content. This is a lesson already learned the hard way by TWC, which lost 300,000 customers during its blackout dispute with CBS.

Were it the case that a combined company would leave consumers with fewer choices, concerns about discriminatory treatment of content would have more force. But Comcast and TWC do not compete with one another for customers in any market in the country. Moreover, Comcast has stipulated, as part of the terms of the agreement, that it will divest 3.9 million residential video subscribers to Charter Communications. The combined Comcast-TWC would remain the largest provider of pay-television services, but it would control less than 30 percent of the market, with DirecTV and Dish Network – both of which do compete directly with Comcast and TWC – holding 20 percent and 14 percent, respectively. Other services, including the telephone providers that also compete directly with cable and satellite, account for the remaining 36 percent.

As believers in pragmatic, free-market solutions, we believe antitrust action should be limited in scope and focus on demonstrable harm to consumers. We do not believe the issues raised by the proposed Comcast-Time Warner Cable merger meet that threshold. We ask that you allow it to go forward without undue delay.

Respectfully submitted,


R.J. Lehmann
Senior Fellow
The R Street Institute


Eli Lehrer
President
The R Street Institute

As competition between Lyft and Uber grows, questions linger about disruption

August 21, 2014, 8:00 AM

As Americans become more familiar with the concept of “ridesharing,” things are heating up in what the Wall Street Journal last week dubbed the “fiercest battle in the tech capital,” between Uber and its largest competitor, Lyft.

The Journal piece portrays a “bitter war,” featuring “two heavily financed upstarts plotting the demise of the taxi industry—and each other.” The campaign is mostly being waged in the marketplace, with the two firms competing over price, pick-up times, drivers and services offered. But there are also some allegations of dirty tricks:

A Lyft spokeswoman said Monday that representatives from Uber have abused its service in the past several months with the goal of poaching drivers and slowing down its network. Passengers who identify themselves as working for Uber frequently order a Lyft and then ride for only a few blocks, sometimes repeating this process dozens of times a day, she said. … A spokeswoman for Uber denied the company is intentionally ordering Lyft rides to add congestion to its competitor’s service.

Competition is at the heart of capitalism, but some might question the wisdom of devoting so much energy to fighting one another when a common set of opponents lurks: regulators, lawmakers and the special interests who have their ear.

In city after city and state after state – from Pittsburgh to Seattle, and Nevada to Virginia – municipal taxi authorities and public-transit commissions have been cracking down on and shutting down ridesharing services, claiming they violate rules governing the licensing, insurance, vehicle types, payment systems and handicapped accessibility required of for-hire taxi or limousine services. In some places, the services have managed to carve out at least temporary accommodations, but much work remains if transportation network companies like Uber, Lyft, Sidecar and smaller upstarts like Summon and Wingz are to grow and thrive.

The first and most important question will be the TNCs’ contention that they are merely “interactive computer services” – platforms that host others’ listings, rather than publishers of their own content – and thus immune from most liability under Section 230 of the Communications Decency Act of 1996. The argument is that, like dating sites, the TNCs merely match potential riders and available drivers.

It’s still not certain whether the courts will see it that way. Uber already has been sued in a case charging vicarious liability for the behavior of one of its drivers, a charge that would usually apply only in an employer-employee relationship. More recently, an UberX driver was arrested following a fatal New Year’s Eve accident in San Francisco, a case that has become a centerpiece of this year’s California regulatory debate.

Among the questions courts will have to weigh is the extent to which the TNC transactions are held at arm’s length. Uber provides a centralized pricing algorithm for its drivers, including the well-publicized “surge pricing” intended to draw more drivers to areas experiencing service shortages. This contrasts with Sidecar, which allows drivers to set their own prices and lets consumers choose among nearby drivers. Lyft has implemented its own version of surge pricing, but more recently has experimented with the reverse: “happy hour” pricing, with cut-rate fares when a surplus of drivers is on the road. Add to these considerations that Lyft and Sidecar formally regard payments to their drivers as “donations” that are always optional and negotiable.
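
To make the arm’s-length question concrete, here is a minimal sketch of how a centralized dynamic-pricing rule might work. It is purely illustrative – the companies’ actual algorithms are proprietary, and every function name and threshold below is an invented assumption:

    # Hypothetical sketch of centralized dynamic pricing. The real Uber
    # and Lyft algorithms are proprietary; all names and thresholds here
    # are invented for illustration.

    def surge_multiplier(open_requests: int, available_drivers: int,
                         cap: float = 3.0) -> float:
        """Raise fares when ride requests outstrip available drivers."""
        if available_drivers == 0:
            return cap
        demand_ratio = open_requests / available_drivers
        # No surge until demand exceeds supply; scale linearly, capped.
        return min(max(1.0, demand_ratio), cap)

    def happy_hour_multiplier(open_requests: int, available_drivers: int,
                              floor: float = 0.5) -> float:
        """Discount fares when drivers outnumber riders ('happy hour')."""
        if available_drivers == 0:
            return 1.0
        demand_ratio = open_requests / available_drivers
        return max(min(1.0, demand_ratio), floor)

    # 60 requests against 20 drivers triggers a (capped) 3.0x surge;
    # 10 requests against 40 drivers yields a (floored) 0.5x discount.
    print(surge_multiplier(60, 20))       # 3.0
    print(happy_hour_multiplier(10, 40))  # 0.5

The structural point is what matters: under a scheme like this, the platform – not the driver – sets the price, which is precisely the sort of fact a court might weigh against a hands-off characterization of the service.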

Why does this matter? Because the more “tools of the trade” the services provide to their drivers—whether pricing algorithms or GPS devices or even the pink moustaches that adorn the fronts of cars operated by Lyft drivers—the more they potentially undermine their Section 230 defense. This may extend even to steps the firms already have taken to accommodate safety and insurance concerns, including beefing up their screening and background-check processes and purchasing commercial insurance to cover their drivers’ liability.

These are the kinds of thorny issues that could torpedo progress on the regulatory front. It’s obviously essential that TNC firms continue to offer the best services at the best prices, which is the only way to build a constituency that will demand regulators allow the companies to operate. But it also would be wise for the nascent industry to begin thinking about best practices that demonstrate it can agree on at least some common solutions.

Toward that end, it has been good to see the emergence of Peers, a nonprofit dedicated to taking on issues common to the sharing economy. Lyft also has taken the lead by founding the Peer-to-Peer Ridesharing Insurance Coalition, which could provide a needed dialogue with the insurance industry to develop new products that better fit the kinds of risks that ridesharing presents.

As my colleague Andrew Moylan and I argue in a recent paper, the peer-production economy holds the potential to free billions in trapped and underutilized capital and spur economic growth. But even as these innovative firms look to best each other in the market, they also must work together to keep regulators from strangling their industry while it’s still in the cradle.

Don’t intervene in Azeroth: Thieves in MMOs shouldn’t face criminal charges

August 20, 2014, 9:04 AM

While the world reels from the Ebola epidemic, the instability in Gaza and the intensifying tensions between Russia and Ukraine, one British politician seems to have put his finger on a crisis that no one noticed. Mike Weatherley, MP, has proposed that criminal penalties be imposed for the theft of (wait for it) virtual items in the world of Azeroth, home to the popular World of Warcraft role-playing game.

As Weatherley describes it, the law would mean that people “who steal online items in video games with a real-world monetary value receive the same sentences as criminals who steal real-world items of the same monetary value.”

It’s tempting to dismiss this idea as the ramblings of an easily wounded WoW player who just happens to have the capacity for legislation at his back. But Weatherley does point to a significant issue that has arisen in the game world – one that does seem to implicate virtual items as real-world property – even if his proposed response is wrongheaded. Given that this policy area is likely to grow more relevant as online gaming (especially the sort powered in part by real money) grows in popularity, it behooves us to take a look at Weatherley’s proposal.

Let’s start with what Weatherley’s idea gets right, which is that items in video games like WoW do have real-world monetary value. This is true even when they aren’t sold directly for real money, because the virtual currencies used in these games often have an exchange value relative to the dollar that can be calculated and exploited. The practice of “gold farming” in WoW for the purpose of selling virtual gold in online marketplaces for real money has become infamous. So has the case of Anshe Chung, a Second Life character whose real-life player, Ailin Graef, became the first “virtual millionaire” by turning her virtual prostitute character into an in-game real estate mogul.
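
The arithmetic behind that exchange value is simple. As a purely hypothetical illustration – the marketplace rate and item price below are invented, not actual WoW figures – the implied dollar value of a virtual item follows directly from the going rate for gold:

    # Hypothetical illustration: deriving the implied dollar value of a
    # virtual item from an observed marketplace rate. Figures invented.

    USD_PER_1000_GOLD = 20.0  # assumed third-party rate: $20 per 1,000 gold

    def implied_usd_value(item_price_gold: float) -> float:
        """Convert an in-game gold price into an implied dollar value."""
        return item_price_gold / 1000.0 * USD_PER_1000_GOLD

    # An item that trades for 15,000 gold implies roughly $300 of
    # real-world value at the assumed rate.
    print(implied_usd_value(15_000))  # 300.0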

What’s more, the practice of buying and selling rare items with real money has a long history in video game culture. This author, for instance, recalls the thriving trade in “Stone of Jordan” rings in the Diablo II multiplayer community, a trade that exploded with the introduction of World of Warcraft. At this stage, it is beyond doubt that high-value items within online games have actual real-world value; an “epic mount” in WoW, for instance, can fetch as much as $500. Faced with sums like this, the question has to be asked: Why shouldn’t theft of such goods be treated as a criminal act?

Well, because despite their value, these virtual items still exist within a virtual ecosystem whose rules don’t quite line up with the real world’s – an ecosystem that already has its own private property-enforcement mechanisms, and in which the law is singularly ill-equipped to operate. A metaphor that treats the theft of one player’s epic mount as equivalent to stealing a real thoroughbred horse looks persuasive on its face, but when you actually think about how such a theft would be prosecuted, the metaphor breaks down quickly.

In an online game, identity can be far more fluid than in real life. WoW ties players’ identities to their email addresses, meaning that a player who finds himself banned on one account can easily create a new one with a new address. This isn’t a complete barrier to enforcement: games like WoW operate on a “pay to play” basis, which gives law enforcement an additional means to track players through their credit cards, although there are privacy concerns and systemic identity-theft problems associated with doing so. But in free massively multiplayer games like the Guild Wars series, email addresses might be the only clue to a criminal’s identity. It’s also quite plausible that a parent whose credit card was used in WoW would find themselves on the hook for “crimes” their children committed.

Players who do steal or otherwise violate in-game rules are usually held accountable by a much more efficient system than a legal regime: the moderators of the game itself. These moderators are usually successful at weeding out problematic players by banning their accounts or using other technical solutions that would run afoul of the Constitution if employed by law enforcement. Companies like Blizzard, the maker of WoW, are allowed to be draconian in ways that are more appropriately tailored to the setting, and thus more effective than the law-enforcement system.

Recognizing video game items as legally protected “property” raises a whole host of troubling legal questions about what is actually happening in games like WoW. For instance, along with selling items, players also sometimes sell epic-level characters to other players (this author even bought one once), which serves as a means for less-experienced players to gain access to perks within the game that would otherwise be off-limits. But if those characters are legal property, does that make such transactions slavery? Do players who kill each other in-game, especially in games where no resurrection is allowed, face charges of murder or destruction of property? And are “griefers” and trolls to be treated as terrorists? Just how close do the rules of virtual life and real life have to be before the police force gets involved?

Having had my only set of unique armor in Diablo II stolen at age 13, I understand the impulse to seek an authority who can clap an online tormentor in very real irons. However, in this case, the remedy is worse than the disease. A discussion of civil penalties might be more appropriate, but when it comes to the application of criminal law to the world of Azeroth or EVE Online, the only winning move is not to play.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

For insurers, California’s ‘diverse procurement’ is well-meaning and wrong

August 19, 2014, 2:10 PM

Under Section 927.2 of the California Insurance Code, insurers writing $100 million or more in annual premiums in the state are required to submit reports to the Department of Insurance on what efforts they’ve made to procure business from firms owned by racial minorities, women and disabled veterans.

On its face, this “diverse business procurement” policy is laudable: it seeks to empower historically marginalized communities while being minimally invasive and only nominally disruptive to insurers. Deeper analysis yields a different conclusion, however, especially when one considers the unusual degree of government involvement such a policy requires.

How far government should reach into business judgments should depend on the degree of separation between government and private conduct. The closer the private conduct is to the operation of government – or the more dependent it is on government sanction – the stronger the government’s case for substituting its goals for those of the private entity in question.

In the case of California’s diverse business procurement efforts, three tiers of involvement have developed. The first tier is the state government itself, which is well within its proper scope of authority when it establishes its own procurement framework. The second comprises quasi-governmental bodies; the third, heavily regulated private bodies. The distinction between the final two tiers is important for policymakers to keep in mind when they consider whether praiseworthy objectives should be achieved through disruptive means.

In the second tier, California’s government has long obligated private actors to seek to redress social ills by substituting the state’s policy preferences for private business judgment. Supplier diversity programs are designed to improve the financial standing of historically marginalized communities by channeling specific business expenditures (paper supplies, for instance) to firms owned by women, minorities and disabled veterans. The channeling works by obligating regulated parties to report publicly the extent to which their business procurement is “diverse.” Since 1986, firms subject to regulation by the California Public Utilities Commission have been required to report the extent to which they have achieved “diverse business procurement.”

Public utilities were the first candidates for adoption of diverse supplier requirements because of their close regulatory relationship with the state. Utilities are granted quasi-monopoly status because they meter out public services broadly thought of as essential goods. As a result, public utilities do not function in highly competitive markets. As protectors of the public utilities’ monopolies on public goods, policymakers saw little harm in directing them to seek out diverse suppliers at the admittedly nominal expense of ratepayers.

More recently, however, California’s highly regulated insurers have been swept into similar treatment. Quickly drawn comparisons between public utilities and insurers give the mistaken impression that the two are alike. Like public utilities, insurers are heavily regulated. Like utility services, insurance products are purchased by virtually all Californians. But the similarities end – and the meaningful distinctions between the two begin – at the point where consumer choice enters the equation.

The government-protected quasi-monopoly status enjoyed by public utilities and the competitive marketplace in which insurers operate require the two to approach their consumers differently. While public utilities deliver tangible commodities subject to physical scarcity or decay, insurers offer services of a non-physical nature, capable of rapid change and innovation to meet consumer demand. The result is a voluntary insurance market in which market advantages are sought by keeping rates low through innovation and hard-fought proprietary advantage – not through state protection.

Were policymakers to look, they would find that their heavy-handedness works at cross purposes with their stated objective. By imposing their own priorities on the insurance market, California’s policymakers have not only prevented policyholders – many of whom are themselves members of historically marginalized communities – from realizing their lowest possible rates, but have also encumbered the robust procurement diversity programs insurers already operate at a national level by subjecting them to state-specific requirements.


This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Republicans’ lukewarm climate warrior

August 18, 2014, 2:09 PM

From Bloomberg View:

Across the street from the Washington Convention Center, through a narrow door tucked between a bar and what used to be a furniture rental store, up two flights of rickety stairs, Eli Lehrer is sitting in his small, sparsely decorated office, drinking a Diet Coke and explaining how to sell the Republican Party on a carbon tax. After listening to him for an hour, I start to think it might work.

Lehrer is an odd candidate for the job of saving the planet, not least because he doesn’t seem that enthusiastic about it. Where liberals talk about climate policy in near-messianic terms of protecting future generations, Lehrer calls climate change real but relatively unimportant, blames Democrats for making it part of the culture war and points out that carbon dioxide is “not intrinsically harmful to human health.”

In other words, Lehrer, a 38-year-old Chicagoan who runs a think tank called the R Street Institute, seems as if he could talk climate change with most Republicans without tripping any alarms. His bona fides are good: He was a speech writer for Republican Bill Frist when he was Senate majority leader and was later vice president of the libertarian think tank the Heartland Institute, until he quit over a billboard that made questionable reference to the Unabomber, Ted Kaczynski.
