Out of the Storm News
Post-World War II, the United States experienced a simultaneous boom of babies and buildings. To keep pace with the need for new development, homes literally could be ordered out of a Sears catalogue.
The true expense of such homes was not found in the cost of the building supplies, but rather in the labor necessary to assemble them. A Sears home available for $1,000 would, with the cost of construction figured in, run closer to $3,000.
That distinction, between the cost of materials and the cost of labor, is at the core of a current controversy about how replacement costs should be calculated.
Replacement-cost calculations are utterly uninteresting to most people, right up until they experience a loss. At that point, they are surprised to learn that if their homeowners insurance policy entitles them only to the “actual cash value” of certain features of their home (a roof, a deck, etc.), damage claims will subtract from replacement costs any depreciation stemming from the wear and tear the home would have experienced over time.
In 2002, the Oklahoma Supreme Court was prompted by a U.S. District Court to answer this question: “In determining actual cash value, using the replacement costs less depreciation method, may labor costs be depreciated?” That is, when determining the amount to be paid out to satisfy a claim, is it appropriate to distinguish between the materials, which obviously depreciate over time, and the labor used in construction?
The Oklahoma court ruled against such distinctions because “labor is a part of the whole product, it is included in the depreciation of the roof.” At bottom, insurance is designed to return claimants to their pre-loss condition. If a tree lands on a homeowner’s seriously dilapidated porch, the value of that porch (which might effectively be nothing) is all the homeowner is entitled to.
But in the case of, for instance, the owner of the Sears home, the materials and labor were purchased separately. This can be cause for confusion. Recently, courts and even some departments of insurance have embraced that confusion and departed from the Oklahoma court’s reasoning. The counterargument – elucidated earlier this year by a U.S. District Court in Kentucky – holds that while construction materials age, labor does not:
Labor is not subject to wear and tear. Indeed, the cost of labor to install a new garage would be the same as installing a garage with 10 year old materials. In other words, depreciated labor costs would result in under indemnification. As the insurance contract is one for indemnity, depreciating the cost of labor violates the contract.
Of course, it’s true that labor is not subject to wear and tear. But that obfuscates the larger point, which is that any produced item is composed of a mixture of labor and material. It is that item which depreciates, not its component parts, which ultimately are indistinguishable.
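The practical stakes of the dispute can be made concrete with a small sketch. All dollar figures and the straight-line depreciation schedule below are hypothetical, chosen only to contrast the two courts' approaches:

```python
def acv_whole_item(materials, labor, age, useful_life):
    """Oklahoma approach: depreciate the whole item (materials + labor together)."""
    replacement_cost = materials + labor
    depreciation = replacement_cost * min(age / useful_life, 1.0)
    return replacement_cost - depreciation

def acv_materials_only(materials, labor, age, useful_life):
    """Kentucky approach: depreciate materials only; labor is paid in full."""
    depreciation = materials * min(age / useful_life, 1.0)
    return materials + labor - depreciation

# A 10-year-old roof with a 20-year useful life:
# $4,000 in materials, $6,000 in labor (hypothetical figures)
print(acv_whole_item(4000, 6000, 10, 20))      # 5000.0
print(acv_materials_only(4000, 6000, 10, 20))  # 8000.0
```

Under the Oklahoma view, the whole $10,000 roof has lost half its value; under the Kentucky view, only the materials depreciate, so the payout is $3,000 higher for the very same roof.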
The Oklahoma Supreme Court’s reasoning rests soundly on a theory of property not unlike that of 17th century British philosopher John Locke. A home’s roof is not just the materials needed to create it or the labor for its construction. The roof only exists when the materials and labor are mixed.
Were an insurance policy drafted to cover labor and materials separately, it would be reasonable for a claimant to assume that labor would not depreciate. But in virtually all cases, claimants insure their structure as a whole. Put another way, the relative component parts of labor and materials aren’t relevant to the depreciation of an item itself.
A practical consequence of the Kentucky court’s decision not to allow for depreciation of labor in “actual cash value” policies is that like items will be treated differently. For instance, a contractor who got a good deal on materials but overcharged for labor could build a deck deemed more valuable than an identical deck built by a contractor who paid market price for materials and labor, even though the two decks depreciate at the same rate. Again, the value of the item itself is determinative of the value of a claim because the policy, in all likelihood, covers the item itself.
That courts are choosing to ignore the distinction between the antecedent conditions of an item and the item itself is troubling. That the colloquial wisdom upon which such decisions are being made is creeping into quasi-legislative determinations of state insurance departments is more problematic. Depreciation of labor is an affordability mechanism by any other name. Prohibiting it will hurt the very insureds that insurance departments are attempting to assist.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
John Kasich has had a difficult time differentiating himself from the Republican field. Not only are there other, more recognizable moderates, but John Kasich is basically the kind of entity that ghosts in and out of things like debates. He never says anything really right and he never says anything really wrong. In a sense, he’s like the State of Ohio: it’s there. You’ll visit it if you have to drive through it, but you wouldn’t go out of your way.
Yesterday, though, at a New Hampshire “Education Summit” for “The 74 Million” education advocacy group, Kasich came up with a novel and unconventional idea to address problems with education in this country. While other candidates like Bush, Jindal, Christie and Fiorina explained their comprehensive plans to address the need for education reform, John Kasich wants to abolish the teachers’ lounge – because we can’t have people fraternizing on work time.
While some Republicans have called for abolishing the federal Education Department, Ohio Gov. John Kasich on Wednesday set his sights on a smaller target: the teachers’ lounge.
At an education forum here featuring several GOP presidential contenders, Kasich offered many kind words for teachers and teachers unions, but he also chastised the unions’ efforts to block changes in some parts of his state, saying they spend too much time scaring teachers about their jobs.
“There’s a constant negative … They’re going to take your benefits. They’re going to take your pay,” Kasich told former CNN anchor Campbell Brown, whose advocacy news site “The 74 million” hosted the forum along with the American Federation for Children…
“If I were, not president, if I were king in America, I would abolish all teachers’ lounges where they sit together and worry about ‘woe is us’,” Kasich told Brown.
Frankly, if I were king in America, I’d demand that our teachers’ lounges have an endless supply of wine and chocolate, maybe massage tables. Being a teacher is a tough job: you’re an instructor, a caretaker and a substitute parent. And sure, teachers, like any group of colleagues, will gather together to discuss what sucks about what they do. It doesn’t seem like a suuuuuuuper effective way to bust a union, however. After all, you can’t stop teachers from meeting in hallways, after work in restaurants or over a bottle of teacher-affordable alcohol in their off hours.
Kasich does seem to want to eliminate something though, since not eliminating anything puts him far out of line with his peers, most of whom suggest that the primary target for elimination actually ought to be the Department of Education itself. Kasich chastised his fellow conference-goers for wanting to take out an entire governmental authority, even though Kasich himself wanted to cut off the DOE back when he was chair of the House Budget Committee. And unfortunately for Kasich, the Internet is forever.
So what happened? Kasich says his views on certain issues have “evolved,” the standard answer most candidates give when faced with a flip-flop. But it might have more to do with his support of teachers unions in his home state, something he can’t shake. Even yesterday, as he polished up his answer to the New Hampshire forum, the Ohio Teachers Union was singing Kasich’s praises, with one member of the Ohio Federation of Teachers even going so far as to say that if we had to have a Republican president, she’d prefer it be John Kasich.
He must think taking away a teacher lunchroom is a decent compromise position. Or something.
In landmark news, a systematic review of available literature by Public Health England on the health and safety implications of electronic cigarettes concludes that their use is about 95 percent safer than smoking.
The authors conclude that smokers who have tried other methods of quitting without success could be encouraged to switch to e-cigarettes. In addition to encouraging their use as a cessation tool, encouraging switching could help reduce smoking-related disease, death and health inequalities.
The authors also conclude there is no evidence that young people’s experimentation with e-cigarettes has led to increased smoking in this group.
There is still a great deal to be learned about e-cigarettes. But these findings are crucial, given the rise of inaccurate perceptions that e-cigarettes are as harmful as tobacco cigarettes. Adopting this advice could have a profound public health impact for the 42 million Americans who still smoke.
In his ongoing efforts to foster a competitive private flood insurance market in Florida, state Sen. Jeff Brandes, R-St. Petersburg, recently requested help from the state’s Office of Insurance Regulation to obtain data from the federal government that justify the rates the National Flood Insurance Program charges Floridians for their flood insurance coverage.
The information – held by the NFIP’s overseer, the Federal Emergency Management Agency – would include loss-projection models, actuarial figures and actual loss-history data that private companies looking to write flood coverage in Florida could use for their rate-making.
Florida Insurance Commissioner Kevin McCarty replied that his office would request the information. Most notable in his reply, however, was his statement that the NFIP’s rates would be considered “unfairly discriminatory” under Florida law, given the lack of actuarial information available, coupled with the federal program’s admitted rating system based largely on averaging multiple zones together with a “theoretical determination of the probability of flooding.”
Merely averaging together different risks and charging one rate would force those at a lower risk of loss to pay more than they otherwise should, which Florida law considers “unfairly discriminatory.” A Florida insurance company doing this would be sanctioned by regulators; however, because the NFIP is a federal program, rather than an admitted insurance carrier, the state has no oversight over it.
In 2014, Brandes and Rep. Larry Ahern, R-Seminole, sponsored S.B. 542, later passed by the Legislature, authorizing private carriers to offer flood coverage within Florida’s insurance regulatory framework. Earlier this year, the Legislature passed S.B. 1094 – also sponsored by Brandes and Ahern – which increases coverage options for flood insurance beyond the NFIP.
However, because the NFIP has been the sole provider of flood insurance, it alone holds the historical-loss data and other information private companies need to set their rates.
FEMA has been reluctant to release its loss history data from properties covered by the NFIP, citing “privacy concerns.” It’s understandable that NFIP policyholders may not want their claims history made public. However, this is, ultimately, public data that can and should be released to private companies and/or to state insurance regulators. Personally identifiable information can be safeguarded with nondisclosure agreements and stiff penalties for violators. Many private entities exchange confidential data with government agencies every day, and the NFIP should be no different.
Ultimately, it is in the interests of consumers, policyholders and taxpayers alike to shift as many policies as possible away from the NFIP and onto the private market. Doing so will give consumers more coverage options, decrease rates and save taxpayers from having to bail out the NFIP every time there is a major flood event.
From the Project on Government Oversight:
We reached out to Kevin Kosar, a former CRS researcher of 11 years, to talk about the concerns we heard from Hill staffers regarding making formal reports public. Kosar serves as director of the Governance Project at the R Street Institute, which focuses on Congress as an institution.
Concern #1: CRS reports are only meant for Congress, not the public.
Being called Congressional Research Service seems to imply an exclusivity to the formal reports, as if they are only for the eyes of Congress. Perhaps publicly releasing them would create problems or misunderstandings that compel Members of Congress to protect the reports from disclosure, even when the information isn’t classified or sensitive.
Does Congress already release CRS reports to the public? Do CRS researchers expect that their reports will make it to the public?
Kosar: Yes, all the time, and Congress has done so for decades. Consider this 1979 CRS annual report listing dozens of reports that Congress either published or released as committee prints. Thousands of CRS reports are scattered over various government Internet sites, including both House.gov and Senate.gov. CRS does not own the work it produces; Congress owns it, and anyone who works for Congress can release any report he or she pleases. The Senate’s Virtual Reference Desk has a bunch of reports on it, which is posted to help the public better understand how Congress works. So, for example, there are CRS reports on vetoes and legislative process.
The problem, obviously, is that reports are released to the public ad hoc. Some reports appear on the Internet quickly, but others take months or years to dribble out. Meanwhile, DC insiders can easily find the reports they need, or pay for a subscription service like CQ-Roll Call to get the reports. The rest of the public has to cast about thousands of websites in hopes of locating a report, which may well not be the latest and most up-to-date copy of the report. It is an inequitable situation.
Finally, the Internet has been around for 20 years, and these days everyone at CRS knows that their reports will eventually find their way out to the public. Thus, when a CRS expert wants to speak in utter confidence to Congress, they do it on the phone, in person, or via a confidential memorandum.
Concern #2: CRS researchers are not ready to be public persons, scrutinized for their work.
Decision-makers may be hesitant to make CRS reports public because they want to protect the people who work on them. Making these reports public could thrust the authors into the public sphere.
Would researchers be prepared for public attribution for their work? Would they want the attention?
Kosar: One of the allures of working at CRS is that your name goes on the report. CRS is an individualistic organization—the researcher aims to become known as the go-to person on a subject. To achieve that means putting one’s name on the reports. That’s how congressional staffers locate the expert they want, by seeing a name on a report. Additionally, getting promoted to higher grades and pay levels at CRS is heavily dependent on the work one does for Congress. You the CRS expert want Congress to ask you lots of questions, to request you to testify before Congress, and help work on legislation. I should also note that CRS’s promotion guidelines speak of the value of an analyst becoming a “nationally recognized expert” in a subject. When academics and other researchers outside Congress are citing a CRS analyst’s work, that testifies to its quality. And it is also really nice as a CRS analyst to have an academic contact you and say, “Hey, I saw your report on X— it was really good. Would you be interested in helping me with an academic study or coming to present a paper at a research conference?” It is good for CRS researchers to be active members of scholarly communities.
Concern #3: If reports are made public, CRS researchers will get negative attention and complaints that will make their work too difficult.
After all, if their names are on these reports, researchers might face some sort of harassment. Even if information is unclassified and without any sort of agenda, won’t researchers focusing on sensitive topics need to be protected? Why would they want to be associated with what they write?
Might researchers get in trouble for something they say? Or receive negative attention that interferes with their jobs?
Kosar: Everything written by CRS goes through four different stages of review. The reports are scrubbed of anything that might offend anyone. Moreover, they do not make recommendations or push policies. They lay out the facts and the options—which is not something that upsets many people.
In my 11 years there, I never heard of any CRS analyst or reference expert being stalked or threatened for writing a report. I mean, who beats up think-tank experts? Nobody. Do CRS analysts sometimes get grouchy emails from the public or activists? Sure—and the practice is to delete them and go on with one’s day.
Concern #4: Putting CRS’s formal reports online would change the content that CRS generates.
It may be one thing to be read by Hill staffers, but allowing the entire country to access the reports sounds intimidating and could change the content and format of the materials to make them more interesting to the broader audience. Soon CRS analysts may feel pressured to create something akin to clickbait.
If analysts are writing for a public audience, won’t they change the content of their work?
Kosar: No. This speculation is wrong on a few counts. First, the basic premise of the contention is odd. Members of Congress and congressional staff are members of the public, so it’s not as if we are talking about two different tribes of humans with completely different understandings of government. In fact, a significant percentage of legislators are brand new to Congress and benefit from reading primers that introduce them to complex governance topics. That the public also benefits from reading these reports is an extra benefit.
Second, CRS staff are paid by Congress and work for the Congressional Research Service. So, they are unflinchingly loyal and dedicated to writing reports and studies that appeal to Congress. Add to that the fact that CRS’s promotion policy does not account for mentions in blogs or media sites. That’s not part of the promotion process. Furthermore, CRS has internal policies that ensure that reports are not being written about topics that are not of interest to Congress. The agency also tracks congressional downloads of its products so as to better understand what Congress wants and does not want. CRS’s focus, then, is laser-like on the needs of Congress and nothing will alter the basic incentive structure that keeps its focus on Congress’s wants.
The separation of powers is a hallmark of democratic systems. Power is divided among different branches or units of government. The legislature legislates, the executive executes and the judiciary judges. Even in parliamentary systems, where the head of state is a monarch or subservient to the parliament, there tends to be a functional division between courts, agencies and lawmakers. For better or worse, this division affects the popular view of the relationship between public administrators and legislators.
Separation of powers is a concept that cropped up in response to 17th century concerns about absolutist government. Thomas Hobbes argued that citizens had to obey their government no matter what it did. To disobey, he wrote in Leviathan, would plunge humanity back into a “perpetual war of every man against his neighbor,” which is the very state of nature that mankind sought to escape through the creation of government.
However, later theorists drew a different conclusion. Erecting a powerful government created a peril at least as great as the state of nature. Thinkers such as John Locke and Baron Montesquieu argued that despotic government might be avoided if government power was placed in a different set of hands. James Madison, who had a major role in producing the U.S. Constitution, justified the separation of powers thus:
In framing a government which is to be administered by men over men, the great difficulty lies in this: you must first enable the government to control the governed; and in the next place oblige it to control itself.
Marvelous as the separation-of-powers theory is, it comes with implementation costs. Separating the branches encourages rivalry and often contemptuous attitudes, particularly between the legislative and executive branches. Legislators view themselves as the bureaucrats’ bosses and guardians of the public interest. Administrators and civil servants see themselves as apolitical experts who take oaths to dutifully carry out the law. Legislators often disdain bureaucrats, accusing them of being slow, stubborn and unaccountable. Civil servants and administrators, for their part, look upon elected officials as politicos and amateurs at governance. To keep a relative peace, the two sides take the view that lawmakers shall not meddle in administration and bureaucrats shall not advocate policy. “Never the twain shall meet.”
Of course, governance suffers when legislators and administrators keep one another at arm’s length. Consider, for example, the findings of a new study on federal regulation. Congress frequently writes laws directing agencies to issue rules by a particular deadline. Over the course of 20 years, federal agencies missed half of these 1,400 deadlines. That is an eye-popping finding, but is it any wonder? Bureaucrats rarely assist Congress when it is writing laws, so legislators have little sense of what is a reasonable deadline. The Patient Protection and Affordable Care Act (“Obamacare”) had some deadlines that were a mere 90 days after enactment of the law.
The quality of regulation suffers from too strict an adherence to the separation of powers. Congress has no formal role in the development of regulations. When agencies issue final rules, no congressional committee reviews them as a matter of course. Nor does our national legislature vote on regulations, as many state assemblies do. Congress does not even have a formal system for tracking whether its demands for new rules have been fulfilled.
At the federal level, the legislative and executive branches are almost entirely estranged. The president produces a budget, which Congress typically ignores in favor of producing its own. Congress passes a law and then leaves it to agencies to make the rubber hit the road. Each week, the president makes policy by issuing memoranda and the like. If Congress views the president’s actions as contrary to law, then the third branch will referee the dispute.
At a recent gathering of women state legislators, Teri Quimby, who has worked for both Michigan’s legislative and executive branches, suggested elected officials should take field trips to bureaucracies. Doing so would enable lawmakers to better comprehend agencies’ capacity to achieve whatever goals are legislated.
It is a fine idea, but I would go further. More coordination between the branches might reduce the number of policy errors and make the inter-branch relationship less inimical. Bridging the separation between those who write the laws and those who implement them will improve governance. The two rivals—the elected officials with their fresh ideas and sense of the public’s needs and the long-serving, experienced agency wonks—need to work together.
Goodwill is not enough. Bridging the separation of power will require developing processes and forums that force inter-branch collaboration in policymaking. The REINS Act, which the House of Representatives recently passed, would force Congress to vote on certain regulations before they take effect. Congress and the president also might devise a “Kill List” process, whereby they could jointly identify failed programs for elimination.
Surely, there are more ideas out there. I invite readers to send them to me.
Whitney L. Ball, who died Sunday at 52, was one of the true heroes of the conservative movement. Like just about everyone else who heads a conservative organization, I have plenty of nice things to say for the way she worked tirelessly to advance liberty and help build just about every conservative organization in town. (R Street included.)
I’d like to share a more personal story that I hope shows her human side, too. While I had “known” Whitney as a face at conservative gatherings for over a decade, I don’t think I ever sat down with her for an extended conversation until shortly after I co-founded R Street.
We met for breakfast at an Alexandria hotel. After some pleasantries, we launched into a discussion that ended up focusing on the philosophical roots of R Street and the history of the intellectual conservative movement. She was quizzing me intensively, probably testing to see if I really had the chops to run a think tank.
Our food came. The grapefruit on my plate looked fine but tasted awful; I took one bite, probably grimaced a bit and continued a rather intense discussion about Hayek. Since this was a first-ever business meeting, I decided it would be unbecoming to complain about the food. In any case, she appeared not to notice.
However, when a waiter came to refill our drinks, she was firm. Very gently and politely, she said: “I think there is something wrong with my friend’s breakfast; his grapefruit isn’t very good.” The waiter fixed things quickly. I’ll never forget it: In a moment of intense conversation about weighty issues, she had taken the time to care about a tiny annoyance I was facing.
We had breakfast a few more times over the years and I always walked away impressed and energized. She’ll be missed.
From Huffington Post:
Intriguingly, gender doesn’t appear to play a significant role in the study’s findings. For instance, an 18-year-old woman and an 18-year-old man will both pay 18 percent more for their own policies. This is probably the result of two factors — paperwork and risk, says Eli Lehrer, president of the nonprofit R Street Institute.
“A separate policy should, one assumes, (cost) a little bit more just because it requires additional paperwork, mailings and the like,” Lehrer says. “That said, the underwriting risk is more or less the same.”
…Lehrer says, “Young drivers are probably better off being added to their parents’ policy, regardless of where they live.”
WASHINGTON (Aug. 18, 2015) – The option to use carbon fees to comply with the Environmental Protection Agency’s Clean Power Plan could allow some states to offset a significant portion of their tax burdens, according to an R Street Institute policy brief released today.
Authored by R Street Senior Fellow Josiah Neeley, the brief examines EPA data for each of the 50 states to determine the “shadow price” of carbon imputed from emissions-reduction goals called for in the CPP. Neeley then extrapolated from that price and each state’s projected 2030 emissions to determine how much revenue could be generated by using fees, rather than regulatory dictates, to come into compliance.
Texas would generate the most in carbon fee revenues, with $2.5 billion generated per year if the state hits its 2030 reduction limits. That amount is larger than what the state currently collects in taxes on insurance; natural-gas production; cigarettes and tobacco; alcoholic beverages; hotels; or utilities. Carbon-fee collections could be used to offset tax breaks in any number of those areas.
“The revenues that would be generated by carbon fees on an ongoing basis would be sufficient to reduce or eliminate various state taxes in a number of states,” said Neeley. “In Texas alone, insurance and utilities taxes could be phased out entirely, and still leave $45 million to apply toward the state’s $267 million in miscellaneous taxes.”
Neeley cautioned that these estimates do not represent projections about the total cost of the CPP to the wider economy.
“How costly the CPP ultimately proves to be will depend on how each state chooses to go about meeting the required reduction goals,” said Neeley. “The estimates do, however, provide a sense both of how costly meeting the CPP goals via a carbon fee would be, and how much revenue would potentially be available for offsetting tax cuts.”
For full charts of potential costs and revenues across all states, see the policy brief.
Earlier this month, the U.S. Environmental Protection Agency released the final version of its Carbon Pollution Emission Guidelines for Existing Stationary Sources: Electric Utility Generating Units. Known colloquially as the “Clean Power Plan,” the rule sets standards for carbon-dioxide emissions from existing power plants.
The CPP calls for an overall reduction in CO2 emissions of 32 percent from 2005 levels by 2030. However, it applies different standards to each state depending on what prescriptions, in the EPA’s view, are technically feasible. The CPP proposed two alternative standards for each state: a mass-based standard that limits the total amount of CO2 emitted, and a rate-based standard that would be applied to average emissions per kilowatt-hour of electricity.
The final rule does not set standards for Alaska or Hawaii, as the EPA claimed it lacked sufficient technical information for those two states. In addition, the CPP sets no emissions-reduction standards for Vermont, which receives its electricity largely from Canadian hydroelectric power. But each of the other 47 states are required to develop a plan to meet reduction goals, while retaining discretion as to the methods used to achieve those goals.
Of particular interest to those who prefer a market-based approach, the final rule stipulates that the plan “could accommodate imposition by a state of a fee for CO2 emissions from affected EGUs [electric generating units].”
Most economists view a carbon fee as a more efficient way to achieve emissions reductions than regulatory mandates or subsidies. A carbon fee has the additional advantage that it can be paired with equivalent cuts to existing taxes. Depending on the type of taxes involved, making a carbon fee revenue-neutral could largely or entirely offset the economic damage that otherwise would stem from higher energy costs imposed by the CPP.
‘Shadow’ carbon prices
In order to estimate how much revenue a CPP-compliant carbon fee would generate, we looked to EPA modeling on the implicit carbon price needed in each state to achieve the required emissions reductions. For the calculations, we relied on the EPA’s mass-based standards, rather than the rate-based standards, as outlined in the EPA’s state-specific fact sheets.
We combined these “shadow” carbon prices with the state-specific limits on the amount of CO2 that can be emitted under the CPP to estimate the revenue a CPP-compliant carbon fee would generate in each state.
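Mechanically, the estimate is a simple product of the shadow price and the capped emissions. A minimal sketch, using made-up stand-in figures rather than actual EPA numbers:

```python
def carbon_fee_revenue(shadow_price_per_ton, capped_emissions_million_tons):
    """Annual carbon-fee revenue, in millions of dollars.

    shadow_price_per_ton: implicit $/ton carbon price from EPA modeling
    capped_emissions_million_tons: state's CPP mass-based limit (million tons CO2)
    """
    return shadow_price_per_ton * capped_emissions_million_tons

# Hypothetical state: $12.50/ton shadow price, 200-million-ton emissions cap
print(carbon_fee_revenue(12.50, 200))  # 2500.0 -> $2.5 billion per year
```

This assumes the state actually hits its cap; if emissions come in under the limit, the base shrinks and so does the revenue.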
Figure 1: State ‘shadow’ carbon prices for 2030
SOURCE: R Street analysis of EPA data
This shadow carbon price varies considerably across the states, from $26 a ton for Utah to $0 a ton for Delaware, Massachusetts, New Hampshire, New York, Oregon, Rhode Island and Washington state.
Importantly, these calculations assume the carbon fee applies only to emissions from the power sector, rather than being an economy-wide carbon tax. An economy-wide price on carbon would be substantially lower than one applied only to electric-generating plants, as it would apply to a much broader tax base. This briefing makes no attempt to calculate what economy-wide carbon fees each state would need to adopt to meet its CPP reduction goals, although such fees ultimately would generate equivalent revenue.
Carbon fee revenues
The amount of revenue each state would get annually from a CPP-compliant carbon fee is listed in Table 1. Based on the fees that would be collected should the state hit its 2030 emissions targets, the highest annual revenue is generated by Texas, at $2.5 billion, followed by Indiana, at $1.3 billion; Florida, at $1.3 billion; and Ohio, at $1 billion.
Table 1: Projected state-by-state carbon-fee revenues
State | Emissions (millions of tons) | Projected revenues ($M)
SOURCE: R Street analysis of EPA data
Revenues from a CPP-compliant carbon fee would exceed many individual state taxes. Many states could reduce or eliminate state corporate, income, gasoline or other taxes if they adopted a tax-swap approach.
For example, in Texas, revenue from a CPP-compliant tax would be greater than what the state currently collects in taxes on insurance; natural-gas production; cigarettes and tobacco; alcoholic beverages; hotels; and utilities. The fees could offset a 9 percent cut in the sales tax; a 52 percent cut in the franchise tax; a 59 percent cut in motor-vehicle sales and rental taxes; a 64 percent cut in the oil-production tax; or a 75 percent cut in the fuel tax. The insurance and utilities taxes could be phased out entirely, and still leave $45 million to apply toward the state’s $267 million in miscellaneous taxes.
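The offset percentages above come from straightforward division: carbon-fee revenue over current collections for the tax in question. A minimal sketch, using illustrative placeholder figures rather than actual Texas comptroller data:

```python
# Sketch of the tax-swap arithmetic: the share of an existing tax that
# carbon-fee revenue could offset is revenue / current collections,
# capped at 100 percent. Figures are illustrative placeholders.

def offset_share(carbon_revenue, tax_collections):
    """Fraction of an existing tax that carbon-fee revenue could replace."""
    return min(carbon_revenue / tax_collections, 1.0)

# Hypothetical: $2.5 billion in fee revenue against a $28 billion sales tax
share = offset_share(2.5e9, 28e9)
print(f"{share:.0%} of the sales tax could be cut")  # prints "9% of the sales tax could be cut"
```

Smaller taxes can be phased out entirely, with the remainder of the fee revenue applied elsewhere, which is the logic behind the insurance- and utilities-tax example above.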
It should be stressed that these estimates do not represent projections of the total cost of the CPP to the wider economy. How costly the CPP ultimately proves to be will depend on how each state chooses to go about meeting the required reduction goals. The estimates do, however, provide a sense both of how costly meeting the CPP goals via a carbon fee would be and of how much revenue would be available for offsetting tax cuts.
The California Earthquake Authority – the state’s quasi-public earthquake insurance pool – long has had a problem with take-up. Only about 10 percent of homeowners in this most seismically active state actually buy coverage for the biggest catastrophic peril they face.
But new policy options and a sizeable rate reduction have the CEA’s CEO – former North Dakota Insurance Commissioner Glen Pomeroy – very excited. Before a crowd of hundreds of industry representatives and regulators at last week’s National Association of Insurance Commissioners conference in Chicago, Pomeroy outlined the changes he hopes will drive California’s earthquake-insurance take-up rate to a hitherto unknown high.
The CEA has been busy under Pomeroy. In 2014, it worked with Assemblyman Ken Cooley, D-Rancho Cordova, to improve the language of the mandatory offers its members use as their principal outreach tool to the public. The offers, which accompany renewal notices for residential insurance policies, had gone unrevised for decades.
Benefitting from lower reinsurance premiums and the availability of alternative financing mechanisms like catastrophe bonds, the CEA also was able to file for a 10 percent rate reduction last year with the California Department of Insurance. In addition, the CEA sought newly flexible options for structuring earthquake policies. Lower and separate deductibles were proposed, in addition to higher policy limits and mitigation discounts of up to 20 percent. Those changes, now approved, will go into effect at the stroke of midnight as the calendar turns to 2016.
Remarking on the changes, Pomeroy observed that “all of this moves into hyperspace next year when we begin to offer new options.”
But while the CEA counts down its launch into hyperspace, it does so while crewing a ship of humble design. That’s because, as Pomeroy emphasized to the Chicago audience, the CEA was created to forestall a collapse of California’s homeowners’ insurance market and not to actually cover a meaningful number of residents. In other words, the CEA exists to provide a mechanism for red-blazered professionals to find Californians their dream homes, not to rebuild those homes.
The Golden State accounts for $1.6 billion of the nation’s $2 billion in annual earthquake insurance premiums, a testament to the value of the CEA. But the startlingly unserious way California has heretofore approached seismic peril stands in contrast to the dire risk that earthquakes pose.
Addressing the threat posed by seismic vulnerability will require cultivating much greater private stakes in earthquake peril. The ultimate solution to achieve that goal is to require seismically vulnerable properties to maintain earthquake insurance as a precondition for obtaining publicly backed mortgage loans, just as flood-prone properties must maintain flood insurance. Unfortunately, that’s a solution for which neither Democrats nor Republicans have shown any appetite.
The foundational problem with how we account for earthquakes is epistemological: a problem of knowledge. At the conclusion of his remarks, Pomeroy reflected on that problem. He quoted Kathryn Schulz’s recent work in The New Yorker, titled “The Really Big One”:
On the face of it, earthquakes seem to present us with problems of space: the way we live along fault lines, in brick buildings, in homes made valuable by their proximity to the sea. But, covertly, they also present us with problems of time. The earth is 4.5 billion years old, but we are a young species, relatively speaking, with an average individual allotment of three score years and ten. The brevity of our lives breeds a kind of temporal parochialism—an ignorance of or an indifference to those planetary gears which turn more slowly than our own.
In this context, Pomeroy’s allusion to hyperspace is apt. Convincing people to purchase earthquake insurance – convincing them that they want to purchase earthquake insurance – is a matter of bending space and time. It requires connecting the comfortable present with a desperate future. Marketing and Hollywood blockbusters strive to do just that, but it is not without a dose of irony, and certainly cold comfort to Pomeroy himself, that the best promotional tools available to the CEA are earthquakes themselves.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
California’s reaction to the Obama administration’s Clean Power Plan has been a collective mix of self-congratulation and confidence in the state’s ability to comply with the rule’s demands. But while the Golden State largely has moved away from the coal power plants that are the CPP’s principal target, California will not be unaffected by the new regulations.
That’s because California relies heavily on natural gas. Since 2013, more than half of the nation’s added natural-gas generation capacity has come from California. This development has been far more crucial to the state’s much-ballyhooed emissions reductions than the simultaneous expansion of its renewable-generation portfolio. It’s an inconvenient truth that California’s “green” transformation absolutely reeks of natural gas.
Burning natural gas produces fewer carbon emissions than burning coal, but it still produces carbon emissions. Natural gas has always been considered a “bridge” to cleaner energy-generation systems. The CPP effectively blockades that bridge.
Unlike the draft CPP, the final rule projects that renewable sources of electricity, not natural gas, will be used to replace the dirtiest sources of power. Gov. Jerry Brown and Senate President Pro Tempore Kevin de León no doubt will celebrate that development, since both men propound the goal that a full 50 percent of California’s electricity come from renewables by 2030. But their effort to hit that grand target actually will be hindered by a move away from natural gas.
Renewable energy, for all its environmental benefits, fails to deliver the same reliability as natural gas. The reason is simple enough: when the wind isn’t blowing and the sun isn’t beating down, wind and solar power shut down. Natural gas, on the other hand, can produce power on-demand whenever necessary. Short of an enthusiastic adoption of nuclear power or the discovery of a long-awaited power-generation holy grail, burning natural gas is the only way to provide Californians uninterrupted modernity.
In other words, as California adopts more renewable sources of generation, the relative significance of gas-fired generation will increase. This will come at just the moment the CPP curtails the technology’s expansion. It also is happening just as many of California’s gas plants reach their golden years and will require reinvestment to continue operating reliably. Those plants offer irreplaceable capacity that, even under Brown and de León’s vision, will be necessary for the foreseeable future.
California may intend to comply with the CPP’s requirements, but successfully doing so will require an “all of the above” approach that those in state office lament. To keep the lights on and simultaneously reach its environmental goals in a responsible manner, California must avoid thwarting efforts to renew, refurbish or expand natural-gas facilities. Indeed, it may need to actively contravene the CPP and increase its reliance on natural gas.
The U.S. Postal Service reported a third-quarter loss of more than $500 million. If you’re wondering whether this is news you’ve already heard, it is. The USPS has been running deficits for years.
But this was not how the Postal Service was supposed to work. Congress abolished the tottering old Post Office Department in 1971 and replaced it with the USPS. An “independent establishment of the executive branch,” the Postal Service was designed to be self-sustaining. Toward this end, it has more operational freedom than a typical government agency. It can, for example, sell unneeded real estate with relative ease. The agency also has managed, to its credit, to downsize its workforce by 225,000 persons over the past decade.
So why is the agency losing money? Four charts explain the Postal Service’s financial struggles.
Figure 1: Mail volume is down 60 billion pieces since 2007 (Billions of pieces)
Mail volume plunged not long after the Great Recession hit in late 2007. The reason is simple: more than 90 percent of mail is sent by businesses. In the years since, mail volume has not grown. It remains more than 25 percent below its peak, and a high percentage of what remains is advertising mail, which is not very profitable.
Figure 2: Fewer mail pieces means lower revenue ($1B)
Figure 2 tells the tale: between 2007 and 2012, the USPS’ annual revenues fell from about $75 billion to $65 billion. Increased postage rates and carrying more parcels have bumped up the Postal Service’s revenues the past two years, but not by enough. As Figure 3 below shows, the rate increases, which came under a special legal provision, will expire in early 2016.
Figure 3: USPS has not cut expenses quickly enough ($1B)
Postal unions will tell you the USPS is in financial trouble because Congress forced it to pre-fund its current employees’ future retirement health benefits. It’s true these costs are significant, at more than $5 billion per year. Yet even if one wished away these employee compensation costs, the USPS still has not been able to keep revenues above costs. USPS, by the way, has not made a payment to its Retiree Health Benefits Fund since 2010 due to insufficient cash on hand.
Figure 4: USPS debt has spiked ($1B)
The Postal Service had no debt in 2005. Come 2012, it had hit its $15 billion legal debt cap. With too little revenue coming in and expenses too high, debt piled up. Even without paying into its Retiree Health Benefits Fund, the service’s debt grew another $3 billion over the past three years.
With $6 billion in cash on hand, the Postal Service might make the $5.7 billion RHBF payment due Sept. 30. But it probably won’t, as that would leave it operating with perilously little cash on hand. Moreover, the agency has put off capital upgrades and needs to replace its aging vehicle fleet, 140,000 vehicles of which are more than 20 years old.
Congress is struggling to find consensus on postal reform. Some reform proposals would have the USPS try to bolster its revenues by going into non-postal lines of business. Certainly, there is no harm in allowing the USPS to contract with a state to sell fishing licenses at its post offices. But it would be a big mistake to permit the USPS to enter fields such as banking, where the private sector is well-established. And it is highly unlikely that the USPS would earn the billions in revenues needed to right its fiscal ship.
Unfortunately, Congress to date has shown little appetite for reckoning with USPS’ fundamental business challenges: high debt, decreased demand for postal services, and excessive overhead costs. Until it does, it’s highly unlikely the situation will improve.
From the Washington Post:
A new study by R Street, a D.C.-based libertarian think tank, analyzed every regulatory deadline mandated in law by Congress in the last 20 years. Agencies blew through 1,400 required deadlines of the 2,684 that had specific dates attached, according to the research. That’s a less than 50 percent success rate.
Why doesn’t Congress enforce its own deadlines? For one, no one is really checking. And, although it’s the law, there’s not much Congress can do besides sending strongly worded letters and berating officials at public hearings.
“Congress puts things in laws and then washes its hands of it. There’s no system for tracking how many deadlines have been given to agencies, there’s no spreadsheet where they could even keep track of it to know if someone has been late or not late,” said Kevin Kosar, R Street’s governance project director. “They just put the number in there and whatever happens, happens.”
Some regulations that went way over schedule? A 1992 law required new pipeline-safety regulations by 1995. There wasn’t even a public meeting on it until 2005. Federal regulations for catfish inspections, vending-machine foods and hazardous-materials transportation all came in way after deadline. Kosar points to one slow-to-finish regulation on commercial fishing to reduce accidental deaths of bottlenose dolphins.
Many fans of smaller government would probably cheer the slow pace of creating more government regulations. But government’s inaction and dysfunction have consequences. Just the mere threat of a regulation can affect businesses, and the uncertainty of when and how it will be implemented makes it difficult to plan ahead.
“It’s not a healthy long-term habit,” Kosar told the Loop. “I don’t know anyone who would want to work for a boss that gives them random deadlines and then doesn’t pay attention to whether you meet them …it’s operational chaos.”
Solution? For Kosar – who used to work at the Congressional Research Service before quitting and penning a long Washington Monthly article about Capitol Hill dysfunction and its impact on nonpartisan research – it’s ironically more government.
He’s a proponent of a Congressional Regulatory Office, an idea that’s been tossed around for years to no avail.
But as difficult as it is to get rid of an existing federal program, it’s just as challenging to start a new one. Kosar conceded that the “optics” of creating a new government bureaucracy aren’t a great political selling point, especially when it would require funding.
R Street Governance Project Director Kevin Kosar sat down with Dubai’s Al Arabiya news network — one of the largest news sources in the Arab-speaking world — to discuss Donald Trump, Hillary Clinton, Joe Biden and the rest of the 2016 presidential field. You can watch the clip (still in the original Arabic) below.
Hydraulic fracturing – the process of extracting oil and gas resources that requires breaking rock through the high-pressured injection of liquid into the ground, popularly known as “fracking” – has caused an uptick in the number of earthquakes that are occurring across the nation. That was the conclusion of a panel of experts, including one associated with the natural-gas industry, during the Center for Insurance Policy Research session at this month’s meeting of the National Association of Insurance Commissioners in Chicago.
It was not without some irony that that conclusion was proffered at an event hosted by Oklahoma Insurance Commissioner John Doak. His department, along with the Pennsylvania Insurance Department, circulated bulletins to insurers early in 2015 disputing the connection between seismic activity and injection. Oklahoma’s bulletin states:
At present, there is no agreement at a scientific or governmental level concerning any connection between injection wells or fracking and earthquakes…In light of the unsettled science, I am concerned that insurers could be denying claims based on the unsupported belief that these earthquakes were the result of fracking or injection well activity. If that were the case, companies could expect the Department to take appropriate action to enforce the law.
While distinguishing between natural and induced seismic activity is a difficult task, the clear increase in the number of documented earthquakes led the panel to attribute the change to saltwater injection and disagree with the bulletin’s conclusion.
There have been regulatory efforts to ask insurers not to enforce policy exceptions for “triggered earthquakes.” But this could mean paying claims higher than those that would be supported by approved rates and forms. Joseph Kelleher, an attorney with Drinker Biddle & Reath LLP, pointed out that insurers simply hadn’t priced for such expansive liability.
In terms of the induced earthquakes themselves, to date they have not been particularly severe. The panel’s consensus was that the average has been around magnitude 3.0. When asked whether it is possible for fracking to induce a high-magnitude seismic event, the panel equivocated, noting that such events appear unlikely, since none has been observed to date. Nonetheless, Steve Horton of the University of Memphis’ Center for Earthquake Research and Information suggested that caution demands extraction efforts avoid known areas of major fault lines, like New Madrid.
The good news for residents near injection sites, their insurers and those in the gas-extraction business is that it is possible to curtail so-called “induced seismic activity” through better liquid-disposal techniques. Water disposal is crucial, given that gas wells – measured proportionally by their output – are actually water wells. In Oklahoma, the Mississippian Lime oil play averages 9.8 barrels of water for every barrel-of-oil equivalent of gas. That average is indicative of a national trend.
Wastewater from gas extraction often is pumped back into the earth through disposal wells. The more water that is pumped into those wells, the greater the risk of seismic activity. To reduce that risk, the panel suggested recycling and reusing the water. In combination with such technical changes, the panel reported that Arkansas had enjoyed a decrease in seismic activity since it opted to prohibit exploration within 1 mile of known faults.
The insurance industry is in an interesting position when it comes to induced seismic events. It stands to gain from the continued adoption of energy generated from natural gas, because of the fuel’s net benefit to the climate. But policy creep enforced by regulators to cover an increasing number of seismic events endangers both the industry’s underwriting flexibility and the accuracy of its prices.
Eli Lehrer, president of the R Street Institute, a conservative think tank, cautions that policy changes directed at reducing prison time for violent offenders must be handled carefully.
“The reports should be viewed with skepticism,” Lehrer said. “There are certainly too many people in jail today, but if you go back to 1980 or 1975, there were too few. The idea that the policies haven’t done any good is also wrong.”
Are taxis becoming extinct? Is creative destruction a good thing? Are Uber and Lyft drivers full-time employees entitled to benefits or are they independent contractors? Should municipalities cap the number of drivers? Will other industries incorporate characteristics of the sharing economy?
There has been much debate about ridesharing and its implications for the marketplace and the political landscape. Policymakers and candidates have taken different approaches to addressing these questions. In fact, we at R Street have contributed extensively to this discussion of the sharing economy in general.
While talking heads and editorial boards continue to discuss the implications of the sharing economy, it is important to hear the voices on the front lines, the actual drivers (both Uber and taxi).
Recently Zach Weissmueller of Reason TV filmed interviews with Uber and taxi drivers as he commuted around Los Angeles. The 10-minute video includes footage of interviews with three Uber drivers who have very different perspectives on their work and the political cloud swirling around the company. The video also interviews two individuals within the taxi industry.
I think the perspectives manifested in the video reflect what I have encountered using the service in D.C. The first driver described driving for Uber as a fun “part-time hustle,” but wouldn’t recommend it as a full-time job because the driving “really kills your car for only $10 an hour (after taxes).” At the other end of the spectrum was an immigrant who is satisfied with his work as an Uber driver because it enables him to send extra money back to his family in Africa.
Zach also interviewed William Rouse, general manager for Yellow Cab Los Angeles, to discuss the differences between the background check process Uber utilizes and that of Rouse’s own company. The video also documents an encounter with a taxi driver who preferred not to discuss anything with the Reason film crew.
In my time as an urban commuter, using both taxi services and ridesharing apps (both Lyft and Uber), I have encountered drivers with a wide array of personalities, perspectives on the latest developments in commuting options and driving abilities. This video was a helpful introduction to the many different types of drivers out there.
Next time you hail a cab or summon an Uber, take the opportunity to learn about your drivers, asking questions like Zach does in the video. Most of the drivers I have met are happy to talk about their experiences. It’s a fun and informative way to meet people and gain anecdotal perspective on the ridesharing discussion.
Last month, as but one marker of the sudden and somewhat unexpected public outcry to remove from all manner of public life flags and memorials honoring the Confederacy, the Memphis City Council voted unanimously to reverse a decision made by their forebears 110 years ago.
The council determined that Memphis, the city where Martin Luther King Jr. was assassinated in 1968, will remove the memorial to Confederate Gen. Nathan Bedford Forrest that stands currently in the city’s Health Sciences Park (known, until 2013, as Forrest Park) and return the remains of Forrest and his wife, which are contained in the statue’s base, to Elmwood Cemetery, where they originally were laid to rest 140 years ago.
Many Americans might recognize Forrest’s name from the brief reference to him at the beginning of the film “Forrest Gump.” He otherwise is not today the household name that his contemporaries Robert E. Lee, J.E.B. Stuart and Thomas “Stonewall” Jackson have remained. As a military tactician, he may well have been their superior.
He also was a prosperous slave trader, committed atrocities during the war that today we would consider war crimes and, in the war’s aftermath, became the first-ever grand wizard of the terrorist organization known as the Ku Klux Klan.
The council may have been unanimous in its decision, but the broader community of the American South has not been. Within two weeks of the vote, Council Chairman Myron Lowery reported that he’d already received 500 emails on the subject, estimating that “98 percent of [them were] from Forrest’s supporters…living outside of Memphis.”
Tensions have heightened somewhat as plans to remove the memorial move forward, but there is no question that it is going to happen. Earlier this week, the memorial was vandalized, with the slogan “Black Lives Matter” spray-painted on its base. Just yesterday, Thomas Robb – national director of the Knights of the Ku Klux Klan – announced his offer to pay to transport Forrest’s remains to Boone County, Ark., to be buried near Robb’s Christian Revival Center, where he serves as pastor. Perhaps that would be appropriate.
I have lived in “The South” for more than a decade now – first, in Virginia and now, in Florida. But I am not a Southerner. I was raised in the North. My mother is an immigrant and my ancestry in this country on my father’s side does not extend beyond the early 20th century. Just as I cannot, as a white person, truly understand what it is to be confronted by symbols of slavery and white supremacy, nor can I say I know what it is to look at the rebel flag or Confederate memorials and regard them as symbols of “heritage.”
What I can imagine is that, after decades of a culture war consisting of occasional skirmishes punctuated by long periods of stasis, for those whose perspective on Confederate pride is markedly different from my own, the massive shift in public attitudes we’ve seen in recent months and weeks is almost certainly both dizzying and stupefying. Just a decade ago, an earlier measure proposing to rename Forrest Park and remove the memorial was passed narrowly by the Memphis City Council but vetoed by Willie Herenton, the city’s first black mayor. It seemed, for a long time, that’s the way things would remain.
That’s not how things have gone, and all indications are that’s not how things will go in the near future. The flags are coming down. The memorials are being decommissioned. The rallying cry of “heritage” – for so long, given deference as a serious stance for serious people to take in serious public debates– has been, overnight, rendered laughable, a pathetic justification for an impermissible attitude.
By and large, these are developments I cheer. I think symbols matter. I think history matters. I think we as Americans are long overdue for a genuine reckoning with our past, with what it says about us today, with how it shapes the directions we may head tomorrow.
My concern is that the speed with which we are disposing of these relics does not actually permit that kind of reckoning. Confronting the past, after all, must include remembering why the memorials and the monuments were created to begin with. It was not, as some would have it, simply to honor the sons of a nation that went to war and fought for a losing cause. It was, almost universally, to perpetuate a system of continuing terrorism against American citizens, a system which conspired for more than a century after slavery to deny them basic rights to assemble, to buy property, to vote and to enjoy police protection and equal treatment under the law.
The line between erasing misplaced honor and erasing history itself can be awfully thin. In our haste to correct past sins, we might just obliterate it. I have a side in the cultural debate, and I think my side is certainly right, but the debate itself also matters. Take down the memorials, where it is appropriate, but remember that they were there and why they were erected in the first place.
I’ll give the last word on the matter to Nate DeMeo, creator of the amazing podcast The Memory Palace. His most recent episode is about the Forrest memorial, and consists largely of him proposing a plaque that should be placed alongside the statue, wherever it ultimately ends up. The whole story is only about ten minutes long, and absolutely is worth hearing in its entirety, but I’d just like to include his conclusion here:
Maybe it should just say – maybe they should all say, the many, many thousands of Confederate memorials and monuments and markers – that the men who fought and died for the CSA – whatever their personal reasons, whatever was in their hearts – did so on behalf of a government formed for the express purpose of ensuring that men and women and children can be bought and sold and destroyed at will. Maybe that should be enough.
But I want people to know about those Memphians in 1905, who wanted people to remember Forrest and why, who wanted a symbol to hold up and revere, to stand for what they valued most. I want people to know that statue stood in downtown Memphis for 110 years. And to remember that memorials aren’t memories; they have motives. They are historical; they are not history itself.
And I want them to know why it was moved. That in 2015, after Clementa Pinckney and Sharonda Coleman-Singleton and Tywanza Sanders and Ethel Lance and Susie Jackson and Cynthia Hurd and Myra Thompson and Daniel Simmons Sr. and Depayne Middleton-Doctor were murdered in a church in Charleston, South Carolina, there were people in Memphis who were done with symbols…and ready to bury Nathan Bedford Forrest for good.
The following piece was co-authored by Google Policy Fellow Sasha Moss.
Michael Corleone would understand. Just when music companies and their performance-rights organizations (PROs) thought they were getting out from under supervision by the U.S. Department of Justice, the DOJ may be about to pull them back in.
For some time now, the DOJ’s Antitrust Division has been investigating whether to modify the special antitrust consent decrees that govern the two leading PROs: the American Society of Composers, Authors and Publishers (ASCAP) and Broadcast Music Inc. (BMI). These broad settlements, originally reached in 1941, were designed to prevent anti-competitive behavior by the music publishers and to set the rules for how the PROs can operate. This includes licensing on non-discriminatory terms (preventing the PROs from blocking a radio station or music service from playing their songs).
The consent decrees have been modified before; BMI’s was amended in 1994 and ASCAP’s in 2001. But some music publishers argue these agreements are showing their age. The publishers and the PROs hope the DOJ will agree – indeed, they expressly have asked it to – that, here in the Internet Era, digital music doesn’t need so much government intervention. Some suggest the DOJ’s antitrust lawyers have shown sympathy to arguments for a “partial withdrawal” of digital copyrights from the consent-decree framework.
But new arrangements to replace that framework ultimately may pull the labels and PROs back in. Billboard reported recently that the DOJ may be considering revisions that impose an even tighter regulatory scheme. According to the report, the Justice Department circulated a letter letting ASCAP and BMI know it is considering allowing any single co-owner of a “split work” — also known as a “fractional,” “co-authored” or “co-pub” composition — to issue a license for 100 percent of the work. This contrasts with current practice in the music industry, whereby everyone who holds a piece of the copyright must agree to license the work. The music companies have let their resulting unhappiness be known, albeit only off the record.
Not everyone has been so unhappy with the DOJ trial balloon on split works. Billboard quoted streaming service Pandora as saying: “We appreciate that the Department of Justice is taking steps to prevent further anti-competitive behavior in music licensing.” Matt Schruers of the Disruptive Competition Project has framed the reported DOJ inquiry as actively pro-competition. Per Schruers, the music industry has created “artificial gridlock” among its rights-holders by allowing each co-author the power to unilaterally veto, but not unilaterally authorize, the license to use a copyrighted song. This means that a single rights-holder with only a small percentage of ownership in the work may pull the work when a licensing agreement ends, or deny a license to begin with.
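The difference between the two regimes can be sketched as a simple consent rule. The snippet below (Python; the publisher names and consent values are hypothetical illustrations, not drawn from any actual catalog) contrasts the industry’s unanimity practice with the any-one-owner rule the DOJ reportedly is weighing:

```python
# Sketch of the two licensing rules described above. Under current industry
# practice every co-owner must consent ("unanimous"); under the rule the DOJ
# reportedly is weighing, any single co-owner could license 100% of the work.

def can_license(consents, rule):
    """consents: dict of co-owner -> consented (bool); rule: 'unanimous' or 'any_one'."""
    if rule == "unanimous":
        return all(consents.values())   # one holdout vetoes the license
    if rule == "any_one":
        return any(consents.values())   # one consent suffices
    raise ValueError(f"unknown rule: {rule}")

# Hypothetical split work with one holdout co-owner
split_work = {"Publisher A": True, "Publisher B": False}
print(can_license(split_work, "unanimous"))  # prints "False"
print(can_license(split_work, "any_one"))    # prints "True"
```

The "artificial gridlock" Schruers describes is the first case: the holdout can block the deal without being able to authorize one.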
These sorts of unilateral decisions by fractional rights-holders have been costly to services like Pandora Radio. Two years ago, Universal Music Publishing Group, owner of at least fractional rights in 20 percent of the music in the BMI catalog, withdrew its digital rights from BMI, a move it followed by seeking to double the rates it charged Pandora. In another example, a different publisher, BMG, also withdrew its rights; in that instance, Pandora took down all of BMG’s wholly owned works, and Pandora’s customers were cut off from a substantial trove of the BMG catalog.
Is this what Congress intended with its last major revision of the Copyright Act, back in 1976? It doesn’t appear so. Contemporary reports from the U.S. House summarizing the changes conclude:
“Under the bill, as under the present [pre-1976] law, coowners of a copyright would be treated generally as tenants in common, with each coowner having an independent right to use or license the use of a work, subject to a duty of accounting to the other coowners for any profits.”
That’s not always how the split-works licensing model operates today, as the UMPG example demonstrates. To license use of a song, Internet companies may end up having to cut separate deals with each fractional rights-holder. More deals mean more transaction costs, as well as more potential dissenters with the power to scuttle those deals. The process is particularly onerous for new potential entrants to the digital market, and the leverage enjoyed by the major labels and publishers only grows as they continue to consolidate. Today, Sony alone controls nearly half of all royalties collected.
The purpose of copyright is not merely to provide monopoly revenue streams to content companies, but to ensure that creative works actually reach the public. Thus, for the DOJ to clarify obligations under the decades-old consent decrees could make sense. Allowing fractional rights-holders to authorize use of a work unilaterally is one potential avenue to untangle the complex web of rights in music and bring the licensing system more in line with those of other copyrighted works with multiple authors.
To be clear, no one is asking to eliminate the consent decrees, even though all sides officially say they favor competition and the free market. Ironically, those who laud the competition they say would follow from allowing rights-holders to “partially withdraw” digital music rights tend to fear simplification of the system as a whole, precisely because it would make competition among rights-holders more likely.
For instance, they oppose allowing fractional rights-holders to license joint-authored songs on grounds that this would create a “race to the bottom” in digital copyright licensing, lowering prices that could be commanded on the open market. Publishers and PROs thus must find a way to thread the needle in arguing both that the free market commands we let them partially withdraw digital rights and that the free market is lousy when co-authors compete with one another on price.
Any recommended modifications by the DOJ would have to be agreed to by the PROs and then approved by a court. In the meantime, we need a more robust public conversation around how to handle thorny issues like split works. Of course there’s an irreducible tension between (a) the “exclusive rights” held by rights-holders in their “writings and discoveries” (“exclusive rights” just means the power to “exclude” non-rights-holders’ use) and (b) the goal of the U.S. Constitution’s Progress Clause, which gives Congress the power to grant such rights to “promote the progress of science and the useful arts” for rights-holders and non-rights-holders alike.
There are a few things about which almost everyone in this conversation already agrees: markets should be competitive; the public has an interest in copyright; and public policy should meet its Constitutional aim to encourage both creative and technological innovation. We can’t help but wish, in navigating this thicket of thorny issues, we were discovering simpler arguments and simpler solutions.