Out of the Storm News
WASHINGTON (June 20, 2014) – The R Street Institute welcomed today’s passage of H.R. 4871, the TRIA Reform Act of 2014, by the House Financial Services Committee.
The measure, sponsored by Rep. Randy Neugebauer, R-Texas, calls for a five-year extension of the federal Terrorism Risk Insurance Program, a $100 billion reinsurance backstop originally passed in the wake of the Sept. 11, 2001 terrorist attacks. However, the bill includes important taxpayer-protection provisions that gradually shrink the size of the federal program.
“Rep. Neugebauer’s bill strikes the proper balance between ensuring that sufficient capacity exists for U.S. businesses to insure against catastrophic terrorism, while also guarding against government subsidies that would unjustly enrich insurance companies and major commercial real estate developers,” R Street Senior Fellow R.J. Lehmann said.
Under terms of the TRIA Reform Act, the trigger level for conventional terrorism attacks would be raised gradually from the current $100 million to $500 million by the end of 2019. For attacks involving nuclear, chemical, biological and radiological events, all of which must be covered by law under workers’ compensation policies, the program’s current terms would remain intact.
“Reinsurance broker Guy Carpenter recently issued a report finding that multiline terrorism reinsurance capacity is about $2.5 billion per program for conventional terrorism and about $1 billion per program for coverages that include NBCR,” Lehmann said. “Given those figures, and the continuing growth of capacity thanks to the influx of alternative sources of capital, we think the adjustments called for in the House bill are perfectly reasonable.”
The industry also would be asked to increase its co-payment share for losses from conventional terrorist attacks from the current 15 percent to 20 percent, while individual company deductibles would remain at 20 percent of prior-year premiums in a particular line of business. It also would be asked to repay taxpayers 150 percent of funds expended, up from the current 133 percent, up to a floating retention level calculated as the aggregate of individual company deductibles.
Lehmann also praised a provision calling on the non-partisan U.S. Government Accountability Office to conduct a study on the feasibility of charging companies an upfront premium for TRIP’s reinsurance coverage.
“Much like the federal Riot Reinsurance Program of the 1970s, the way forward for federal terrorism reinsurance ultimately is to charge companies an actuarially adequate premium,” Lehmann said. “We can never know how much capacity the private reinsurance sector might be willing to commit to terrorism coverage so long as the government provides it for free.”
If 20th Century design was inspired by American architect Louis Sullivan’s 1896 pronouncement that “form ever follows function,” the key realization thus far of the 21st Century has been that this is merely a necessary – rather than a sufficient – condition for quality designs to flourish.
We have learned, and the market has confirmed, that an object should be designed in accordance not only with how it functions, but with how it should function. Especially in the case of interactive technology – a category that has grown to encompass just about anything – an object should function the way its user expects it to function.
As technology has become more powerful and flexible, the task of matching function and expectations has undergone a change akin to the philosopher Immanuel Kant’s metaphorical Copernican Revolution. For older generations of technology – in which scarce resources limited both what functions were available and the maximum complexity of users’ commands – the steps necessary for users to extract and refine what they could do with a device were explained in thick manuals. The prevailing strategy for more recent generations of technology has been to meet users halfway, competing to efficiently perform functions and effectively implement concepts that users have been led to expect.
Today’s designs, however, are increasingly able to cut out the middleman, more and more closely conforming to their users’ preexisting intuitions and thought processes and less and less asking users to make those thought processes conform to products’ capabilities.
In other words, the key to success in modern interactive design does not lie in “creating” the best design possible. Rather, it begins with doing the best possible job of stripping designs down to concepts and procedures with which the user is already familiar, preferably through everyday use. Where there is no alternative but to require more input from a user, his or her options are laid out in terms the user already can be expected to know. While the fusion of design and utility has not yet been perfectly realized, industry has become more fully aware of both parts of this process and continues to pursue integration in earnest.
This coevolution of design standards and procedures has clashed, and continues to clash, with the structure of U.S. patent law. The first problem is the potential uncertainty that surrounds the scope and strength of a design patent’s protections. Even in the paradigm case of a design feature that has been aesthetically improved beyond what was required to give the feature its functional attributes, there remains the potential for overly broad claims about what aspects of a design qualify under the law as “ornamental.”
Under section 284 of the U.S. Code’s Title 35, triers of fact may award “non-statutory” damages for infringement of a design patent. But those same triers of fact also may err in determining how much of an object’s value comes from the aesthetic appeal of its ornamental features and how much comes from other sources of value, whether ornamental or functional, and whether patented or unpatented.
The risk of error at each stage of the process – from the initial design patent application to the ultimate test of infringement in court – creates at least some incentive for a designer to overstate his or her case. Fortunately, these incentives are similar to the temptations to make overly broad claims about other grounds for patentability. Regardless of what grounds are at issue, the remedy inevitably is better training for examiners and judges in traditional design standards and greater vigilance on their part about those standards’ application.
Ned Andrews is an assistant public defender with the Virginia Indigent Defense Commission and an associate fellow of the R Street Institute. He is a graduate of the University of Virginia School of Law, where he served on the managing board of the Journal of Law and Politics and the Virginia Journal of Law and Technology. He previously received a bachelor’s in philosophy from Yale University. Andrews was the 1994 Scripps National Spelling Bee champion and is the author of the 2011 book “A Champion’s Guide to Success in Spelling Bees: Fundamentals of Spelling Bee Competition and Preparation.”
Legislation regulating so-called “transportation network companies” in California passed the state Senate Energy, Utilities and Communications Committee by a unanimous 8-0 vote earlier this week, amid a row between the TNCs, insurance companies and traditional taxicabs.
This bill, A.B. 2293, stems from a tragic New Year’s Eve incident in San Francisco in which a six-year-old girl was struck and killed in a crosswalk by a driver who was an UberX contractor.
Under the bill, which now moves to the Senate Insurance Committee, TNCs would be required to provide primary insurance for any driver currently logged in to use their service. The measure codifies the California Public Utilities Commission’s proposed minimum of $1 million of coverage.
The measure is sponsored by Assemblywoman Susan Bonilla, D-Concord, who said she presented the bill to fill a “gap” in insurance coverage and consumer protections, create clear definitions of when commercial and personal insurance coverage is primary, and ensure all drivers are adequately covered during all periods of TNC services.
Bonilla’s legislation defines three distinct “periods” of TNC service, running from when a driver turns the application on to when it is turned off:
- Period 1: The driver turns the app on and waits for a passenger match
- Period 2: A match is accepted, but the passenger is not yet picked up
- Period 3: The passenger is in the vehicle
Insurance industry groups argue that the TNCs must be required to provide primary coverage for all three periods. For their part, the TNCs argue that during Period 1, a driver is not active and should not be required to carry the higher coverage standard demanded during the other two periods.
In addition to insurers, the bill also is supported by consumer attorneys, the California Airports Council and San Francisco International Airport, each of which argued for the measure on public safety grounds.
TNCs like Uber and Lyft opposed the $1 million minimum as too high and insisted the bill is anything but a compromise. Many Lyft and Uber drivers showed up to voice their opposition, most arguing that A.B. 2293 would shut down the TNCs and, with them, their livelihoods.
The other source of opposition came from taxi cab associations, who decried the bill for codifying TNCs as different from cab companies and subject to different regulations.
Committee Chairman Alex Padilla, D-Los Angeles, expressed support for the measure overall, although he felt it should be subject to further negotiation. He said he feared requiring $1 million in coverage during all periods might be too high, but felt that was a matter on which the Insurance Committee should rule.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
The last few weeks have brought both good and bad news to supporters of patent reform looking to reduce system abuse.
Hopes for legislative action were dashed when a major bipartisan reform bill that enjoyed the endorsement of President Barack Obama was pulled from the Senate calendar.
Conversely, opportunities remain for the executive branch to intervene directly, as well as for the courts, which recently have been tougher on plaintiffs pursuing patent claims based on suspect patents or outright frivolous assertions. The moment also provides an opportunity to expand the conversation to international trade.
A patent troll generally has one of two goals: to extract a dubious royalty payment or to block market entrance by a potential competitor.
In regard to the former, the troll often attacks a small business or start-up, claiming to hold the original patent on the product or process its target is selling. The start-up, lacking the resources for a long court fight, settles out of court because it’s the better of two bad options. The cost, nonetheless, is passed on to consumers in the form of higher prices.
This practice – albeit in a slightly different form – has spilled over into international trade.
France Brevets, Taiwan’s Industrial Technology Research Institute, Innovation Network Corporation of Japan and South Korea’s Intellectual Discovery are all examples of state-sponsored patent pools. Over time, they have all accumulated thousands of patents, many for products that never made it to market. Their aim is to use weaknesses in patent law to favor companies within their own countries while taking legal action against foreign competitors.
Call this a latter-day version of protectionism: If a product from a foreign company threatens your domestic player, sue, ideally in a place where there are legal weaknesses to exploit.
That results in convoluted litigation such as ITRI’s infringement suit against South Korea’s LG Corp. (and its U.S. subsidiaries) in the U.S. District Court for the Eastern District of Texas – a preferred venue among trolls because it ranks among the highest in the United States in upholding patent claims. This is a further inducement for defendants to settle, even if they have a strong case that the suit is frivolous. And, at the end of the day, consumers pay.
With legislative patent reform dead for the time being, it seems for now the movement to curb this abuse will have to rely on the courts. This means slower movement toward general reform, as court cases often focus on one aspect of the wide range of patent law. But each new ruling in favor of defendants adds to the weight of case law and jurisprudence.
The U.S. Supreme Court dealt a setback to trolling in an early June ruling, when it vacated an appeals court decision in an infringement suit brought by Biosig Instruments against Nautilus Inc., an exercise equipment maker, over heart rate monitors. The Supreme Court said the appeals court, by disallowing only patents that were “insolubly ambiguous,” still left the door open for claims based on vague or indefinite specifications.
For trolls, who in lawsuits often try to stretch broad definitions and descriptions as far as possible, this is a setback: the Supreme Court essentially raised the standard of definiteness that patent claims must meet beyond mere avoidance of insoluble ambiguity, and will force plaintiffs to be more specific about the definitions and functions of a product or device to make an infringement charge stick.
As these and other court decisions add up, the attractiveness of the United States as a venue for patent trolling suits may diminish.
At the same time, the White House can be more assertive in condemning state-sponsored patent pooling. The practice is questionable under current trade agreements, as it involves governments taking an ownership stake in commercial intellectual property. This creates a conflict of interest when the regulator has an interest in the jurisdiction over which it presides. The U.S. Trade Representative’s Office and the Department of Commerce need to be more vigilant in protesting these practices.
Legislation may be on hold for now, but that doesn’t mean pressure for patent law reform should stop. Otherwise trolls will continue to abuse weaknesses in patent law and be a drain on domestic and international economic growth.
WASHINGTON (June 19, 2014) - The R Street Institute welcomed today’s unanimous decision by the Supreme Court in Alice Corp. v. CLS Bank International, which ruled that the patents at issue, filed by Alice Corp. on abstract ideas, are invalid.
“While this decision was more narrow than some had hoped, in that it does not fundamentally overturn the notion that software is patentable, we are encouraged that the court expanded the reasoning it used in Bilski v. Kappos to rule a broader category of abstract ideas are not eligible for patent protection,” R Street Senior Fellow R.J. Lehmann said.
“The U.S. Constitution offers limited monopoly rights to inventors to provide incentive for progress in sciences and the useful arts, but too often in recent years, overly broad and inappropriately granted patents have been used as the basis of litigation that has choked the courts and stifled innovation,” Lehmann added. “This decision offers a step in the direction of reform, but comprehensive action by Congress is still needed to address these problems.”
Dr. Nicholas Vozoris in 2012 reported that “smokers of mentholated cigarettes, and in particular women and non–African Americans, have significantly increased odds of stroke compared with nonmentholated cigarette smokers.” He used National Health and Nutrition Examination Surveys (NHANES) data to calculate odds ratios of 2.3 for all menthol smokers, 3.3 for women and 3.5 for non-African Americans. Dr. Vozoris found no increased risk for hypertension, heart attack, congestive heart failure or emphysema among menthol smokers.
If the Vozoris findings on menthol and stroke were independently confirmed, they could serve as a basis for a ban on menthol cigarettes. Instead, an analysis of NHANES data by Dr. Brian Rostron comes to an entirely different conclusion.
Dr. Rostron found that African-American menthol smokers had a significantly lower risk for stroke (OR = 0.52, 95% confidence interval = 0.28–0.99) than nonmenthol smokers. Dr. Rostron writes:
It is not clear to me how Vozoris obtained his findings, given that I cannot replicate his general results for stroke using the NHANES data and analyses that he specified. Moreover, the absence of observed differences in stroke prevalence among NHANES menthol smokers would suggest that methodological or analytical issues may have affected his results.
Dr. Rostron earlier showed that menthol smokers have lower lung cancer risks than nonmenthol smokers.
Promoted by the Archives of Internal Medicine, the Vozoris study was covered by major news media. The journal, which changed its name to JAMA Internal Medicine in January 2013, is not publicizing the study that corrects the deficient 2012 analysis.
Although medical journals cannot avoid publishing erroneous reports, editors should take all measures to correct the types of mistakes discovered by Rostron.
Following on last month’s ruling by the European Court of Justice that search engines could be forced to take down links that individuals believe infringe their privacy rights, a new ruling from the Supreme Court of British Columbia has ordered Google to remove content not just from its Canadian site, but from all of its global web properties.
While the Canadian case deals with the sale of counterfeit products, rather than privacy, it relies on the same logic to make its sweeping censorship demands.
At the least, the case may confirm widespread fears that the ruling by Europe’s highest court, which applies even to links that are factual and in the public record, could spread a newfound “right to be forgotten” across the globe, opening the door for disgraced politicians, sex offenders, and malpractice-burdened doctors to wipe their slate clean.
But in some ways, the British Columbia decision poses a threat even more dangerous than the fear that courts would pave the way for a fragmented, balkanized Web and lay the foundation for what the Wall Street Journal editorial board warned would be “an Internet with borders.” The Canadian case arguably establishes a precedent of national and even local courts handing down dictums that affect freedom of speech around the entire world.
University of Ottawa law professor Michael Geist explains on his blog:
The ruling in Equustek Solutions Inc. v. Jack is unusual since its reach extends far beyond Canada. Rather than ordering the company to remove certain links from the search results available through Google.ca, the order intentionally targets the entire database, requiring the company to ensure that no one, anywhere in the world, can see the search results. Note that this differs from the European right to be forgotten ruling, which is limited to Europe.
The Canadian court’s ruling favorably cites the ECJ ruling in Google v. González, which I wrote about in detail here, and relies heavily on it in its effort to assert jurisdiction. By the court’s logic, if Google does business in Canada by advertising there, then the Canadian court system has jurisdiction over Google’s global search results.
Of course, the court isn’t simply asking Google to remove specific links. Having determined that “deletion of individual URLs is ineffective” like “an endless game of ‘whac-a-mole,’” the court instructed Google to prevent similar content from showing up anywhere for any reason — indefinitely.
But what happens when a takedown order in one country conflicts with the law in another? This happened in the French court case of Yahoo! Inc. v. LICRA. The case concerned the sale of Nazi artifacts on Yahoo’s auction site, which ran afoul of a French law banning the display of Nazi paraphernalia.
When the French court made the bold claim that it had authority over Yahoo’s servers in the United States, Yahoo asked a U.S. court to block the ruling on grounds it conflicted with its First Amendment rights and to “confirm that a non-U.S. court does not have the authority to tell a U.S. company how to operate.”
While the U.S. court agreed with Yahoo, the company also had substantial business activities in France that could be brought to a halt if it didn’t comply. The end result was that Yahoo banned the sale of these objects, even though the French court’s decision was ultimately reversed on other grounds.
This case foreshadows some of the strange scenarios and chilling effects that could emerge when regional courts claim jurisdiction over the whole Internet. Geist imagines a few of these:
The implications are enormous since if a Canadian court has the power to limit access to information for the globe, presumably other courts would as well. While the court does not grapple with this possibility, what happens if a Russian court orders Google to remove gay and lesbian sites from its database? Or if Iran orders it to remove Israeli sites from the database? The possibilities are endless since local rules of freedom of expression often differ from country to country.
When western democracies find ways to limit content online, they give more heavy-handed governments like Russia’s an excuse to jump on the global Internet censorship bandwagon.
The Russian Public Chamber has already submitted a recommendation to the Russian Parliament calling for the introduction of a right to be forgotten that would affect not only Russian search engines, but also foreign ones like Google and Yahoo. Countries like China and Korea are also seeking to assert their right to censor the global Internet.
It’s hard to imagine why Internet censorship would be the first, best option in any legal dispute, particularly where other legal remedies seem obvious.
People depend on search engines like Google and Yahoo to be an accurate and open gateway to the web. Allowing any country to leverage privacy or other legal claims to limit search engine content globally will leave us with the lowest common denominator for free speech rights. And that could do a lot of damage to our free and open Internet.
Today’s decision by the U.S. Patent and Trademark Office’s Trademark Trial and Appeal Board to cancel six of the Washington Redskins’ trademark registrations on grounds that they are “disparaging to Native Americans” does more than just raise the temperature in a nasty public relations fight between owner Daniel Snyder and various public officials (Senate Majority Leader Harry Reid perhaps most notable among them).
The ruling could also set the stage for other ethnic and national groups to challenge long-standing professional sports trademarks that heretofore have largely flown under the radar.
The fight over the Redskins’ name, which some take to be a racially derogatory slur, is just the most high-profile of what has been a decades-long battle over team mascots. Understandably, given how common Native American names and imagery have been in the American sports landscape, those have been the primary focus, going back to when Stanford and Syracuse universities both dropped their Indian mascots in the 1970s.
The issue truly came to the forefront at the college level in the early 1990s (at least one version of the Redskins trademark case, similarly, dates from 1992). In the past quarter-century, NCAA colleges that have shifted away from Indian-related names include Eastern Michigan, Marquette, St. John’s, St. Bonaventure, Miami of Ohio, Quinnipiac and, most recently and perhaps most contentiously, the University of North Dakota.
Of course, many Native American names and mascots persist. At the pro level, in addition to the Redskins, the National Football League also has the Kansas City Chiefs. In addition, there’s Major League Baseball’s Cleveland Indians and Atlanta Braves, the National Hockey League’s Chicago Blackhawks and the National Basketball Association’s Golden State Warriors. The teams differ in the degrees to which they continue to employ iconography that evokes Native American stereotypes, ranging from the Indians’ highly questionable grinning, red-faced “Chief Wahoo” logo to the Warriors, who essentially dropped any Indian symbolism way back in 1971.
But Indians are not the only ethnic group to have been made into team mascots. Indeed, the practice was once shockingly common. Unsurprisingly, Negro League teams were filled with references to their players’ skin color, with myriad “black” variants of other established team names: Black Barons, Black Bears, Black Bronchos, Black Caps, Black Colonels, Black Eagles, Black Lookouts, Black Senators, Black Sox, Black Spiders, Black Tyrites, Black Yankees and, of course, in a double-dose of offensiveness, Black Indians.
Then there were the Negro League team names that went above and beyond, into new levels of cringe-inducing tastelessness, like the Colored House of David, the New York Cubans, the Jersey City Colored Athletics, the Ethiopian Clowns, the Zulu Cannibal Giants and, perhaps with a hint of irony, the Atlanta Black Crackers.
In basketball’s pre-NBA days, one of the dominant teams was the all-Jewish Philadelphia “Sphas,” which took its name from the South Philadelphia Hebrew Association. The Sphas, who generally were known colloquially as the Philadelphia Hebrews, played in a pro and semi-pro circuit that included a number of all-Jewish teams, including the perhaps even more questionably named Cleveland Rosenblums.
The era of naming teams for ethnic groups has, by and large, faded into an increasingly embarrassing past. It’s safe to say the Anti-Defamation League wouldn’t cotton to a new team called the “Rosenblums,” to say nothing of what the NAACP would think of the “Zulu Cannibal Giants.” But there are still some teams in the major professional North American sports whose monikers originated as ethnic terms. They mostly go unnoticed due either to the obscurity of the term or to the apathy (or even positive embrace) of the affected groups. Nonetheless, in the wake of the Redskins decision, here are four ethnic groups that could conceivably earn their day in court.
The Irish: The best-known and perhaps most obvious of ethnic-related pro sports names happens to belong to the NBA’s most storied franchise, the Boston Celtics. According to possibly apocryphal team lore, founder Walter Brown insisted upon the name, over the objections of public relations staff, declaring: “Boston is full of Irishmen. We’ll put them in green uniforms and call them the Boston Celtics!”
The team’s logo is a slightly watered-down caricature of longstanding stereotypes perpetuated by bigoted anti-Catholic Irish bashers, from the shillelagh to the clog shoes to the derby hat to the pipe and potbelly and leering wink. All that’s missing from the equation are implications that the character is a violent alcoholic, perhaps because that would run too close to violating the trademark of college football’s most-celebrated team, the University of Notre Dame Fightin’ Irish.
The Dutch: The New York Knickerbockers, better known as the Knicks, trace their name to a term used to describe the descendants of the original Dutch colony of Nieuw-Nederlandt, whose progeny would eventually include the American presidents Martin Van Buren and Franklin and Teddy Roosevelt. The term was popularized by Washington Irving’s satirical 1809 novel “A History of New York from the Beginning of the World to the End of the Dutch Dynasty,” written under the pseudonym Diedrich Knickerbocker.
Though there is evidence the term was actually drawn from a real Knickerbacker family (originally Kinnekerbacker), the popular image of the Knickerbocker became inextricably tied with knickered pants that roll up at the knee. The stereotyped character of “Father Knickerbacker,” with his knickers, cotton wig, tricorn hat and buckled shoes, persisted into the 20th Century. Indeed, for their first 18 years of existence, the Knicks’ logo was Father Knickerbocker dribbling a basketball.
English-Americans: It can be argued that the name of the Knicks’ cross-town compatriots, the New York Yankees, is actually the direct inverse of Knickerbocker. Though the precise origin of the term Yankee remains under dispute, by 1758, it was recorded as a slur that native Brits would use to describe their cousins in New England.
Most linguists agree that Yankee is likely of Dutch origin, noting its similarity to the Dutch diminutive name “Janke,” or “Johnny,” and would have been used by the New York Dutch to describe English settlers in neighboring Connecticut. H.L. Mencken famously argued the name was derived from the Dutch Jan Kaas, literally meaning “John Cheese.” Basically, Mencken’s claim was that the term originated as a slur the Dutch used to call the English a bunch of “cheese heads.”
Canadians: Among the National Hockey League’s six Canadian-based teams, there are not one, but two teams named for ethnic terms for our friends from the Great White North: the Vancouver Canucks and the Montreal Canadiens. Though both terms are generally used today to describe pretty much any resident of Canada, they both originated more specifically as terms to describe French-Canadians. The term “Canadiens” predates the nation of Canada, and originally referred to any French-speaking North American.
Canuck is of more recent vintage, and its precise origins (like those of Yankee) are hotly disputed. What is in less dispute is that Canuck long has been used as a slur. Its first published appearance with its current spelling came in 1849, in James Edward Alexander’s “L’Acadie,” to describe “a lusty fellow in a forest road with a keg of whisky slung round him.” By the turn of the 20th Century, the character of Johnny Canuck had emerged as a rival to Uncle Sam, except portrayed as a dim-witted lumberjack. Johnny Canuck (complete with toque, the stereotypical national hat) also served as the Canucks’ original mascot.
Meanwhile, if it turns out that Washington’s football team does need to change its name, the Washington Gerrymanders sounds about right to me.
WASHINGTON (June 12, 2014) - The R Street Institute welcomes today’s introduction of the TRIA Reform Act of 2014, legislation that would make significant, positive changes to the federal Terrorism Risk Insurance Program.
Sponsored by Rep. Randy Neugebauer, R-Texas, the bill would extend the program for five years, but with greater industry liability and less involvement on the part of the taxpayers. The House Financial Services Committee has scheduled a mark-up session for the measure the morning of June 19.
The bill would raise the program trigger for all events other than nuclear, biological, chemical and radiological (NBCR) attacks from $100 million to $500 million by the end of 2019. It also increases the rate at which the industry must repay taxpayers for federal assistance from 133 percent to 150 percent of funds expended. The bill addresses insurance industry concerns by establishing a fixed, 90-day timeline for certification of acts of terrorism.
“This legislation, while it doesn’t go as far as we may have liked, is an improvement over the Senate bill in terms of shifting more risk from taxpayers onto the private sector,” said R.J. Lehmann, R Street senior fellow. “Among other items, it would raise the trigger level for conventional terrorism events to $500 million. These are the sorts of events and the levels of coverage for which private reinsurance is already available, and this change will encourage more firms to enter the market.”
The Senate Banking Committee passed its version of legislation on June 4. It extends the program for seven years. Both bills reduce the federal share of losses from 85 percent to 80 percent.
Lehmann cautioned that even more changes would benefit the program and taxpayers considerably.
“As the House moves forward to consider this legislation, we still would encourage members to consider other changes. In particular, we feel the Terrorism Risk Insurance Program could withdraw coverage for commercial liability insurance,” he said. “As it stands, current policy in that area amounts to a federal subsidy for firms to be reckless in their preparation for terrorism events.”
Those unfortunate enough to have attended California Gov. Leland Stanford’s 1862 inauguration likely had to commute to the Capitol Building as he did – by boat.
Sacramento is, if not the most likely to flood, at the very least, one of the most flood-prone cities in the United States. It sits at the confluence of two large rivers that empty into a massive lowland waterway known as the delta. Since time immemorial, those rivers and the delta have flooded predictably. As weather patterns continue to change, and as climate models predict winter storms of greater intensity and duration, they will do so again. In fact, an eventual flood capable of submerging the capital city is virtually inevitable.
Still, a quixotic drive for growth has taken hold in the Sacramento area. Now, a new spurt of development threatens to place more people at risk.
The site of Sacramento’s newest round of development is just north of downtown in a floodplain named Natomas. The area saw fast growth in the early 2000s but has been under an insurance moratorium since 2006, when construction was halted after FEMA rezoned the floodplain because crucial levees were found deficient.
In an alarming development, that moratorium is set to be lifted. This month, President Barack Obama signed into law the Water Resources Reform and Development Act. The act, among other things, will enable the levees protecting the Natomas floodplain to be overhauled to withstand a 100-year flood. Doing so will end the insurance moratorium by relieving homeowners of the requirement to purchase flood insurance at all.
Sacramento Mayor Kevin Johnson is excited by this prospect, declaring:
Natomas has been percolating and there’s a lot of pent-up energy, and that community is going to boom in a responsible way.
His enthusiasm is misplaced. While overhauling the levees is an excellent development vis-à-vis the existing residents, it should not provide the basis for further development.
To grasp just how risky moving to Natomas is, consider data collected by The Water Institute: on a quote for a policy with $200,000 in building coverage and $80,000 in contents coverage, with a $1,000 deductible, the estimated actuarially accurate premium would be $20,967 annually. Of course, through the magic of the National Flood Insurance Program (NFIP), the actual premium charged for that policy is $353 annually.
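A rough back-of-the-envelope sketch, using only the two premium figures quoted above, illustrates the scale of the implied subsidy:

```python
# Back-of-the-envelope comparison of the quoted Natomas premiums.
actuarial_premium = 20_967  # estimated risk-based annual premium, in dollars
nfip_premium = 353          # annual premium actually charged under the NFIP

annual_subsidy = actuarial_premium - nfip_premium
homeowner_share = nfip_premium / actuarial_premium

print(f"Implied annual subsidy per policy: ${annual_subsidy:,}")
print(f"Share of risk-based cost paid by the homeowner: {homeowner_share:.1%}")
```

On these figures, the homeowner pays less than 2 percent of the estimated risk-based cost of the policy; the remaining roughly $20,600 a year in risk is implicitly carried by the program, and ultimately by taxpayers.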
Clearly, the NFIP maintains rates that are well below what the private market could charge. By doing so, the program provides incentive to develop areas that otherwise would remain untouched. The environmental and human costs of such development are high, if not immediately realized in the form of dramatic rescue scenes. What is realized immediately is a massive transfer of risk from a small percentage of homeowners onto the backs of all U.S. taxpayers.
Nationally, it would be a great service to Americans everywhere for policymakers to phase out flood insurance subsidies. Private markets, while not prepared to take on all of the risk, could play a role in bringing flood insurance rates into conformity with actual levels of risk.
In the case of Natomas, there may be time yet to forestall the construction of thousands of new homes. It will be a few years before the Army Corps of Engineers is able to certify the relevant levees as capable of 100-year flood protection. Until that day, city and county officials should be inundated with information concerning the risks posed to their constituents by a major flood. If those officials will not listen, they should be presented with proposals for boat subsidies.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
WASHINGTON (June 17, 2014) – Alternative approaches to the proposed Food and Drug Administration regulations on e-cigarettes should be explored before final regulations are approved, said the R Street Institute in comments to the agency this week.
Authored by Dr. Joel Nitzkin, R Street senior fellow and public health expert, the proposed alternatives include classifying tobacco and nicotine-only products by how their risk and addictiveness differ from those of cigarettes; providing different marketing, flavoring and toxicological guidelines for each major class of tobacco and nicotine-only product; and not restricting flavoring options for lower-risk products. Most importantly, Nitzkin recommends that the FDA play a lead role in restating the goal of American tobacco control efforts from “a tobacco-free society” to “a smoke-free society, virtually free of tobacco-related illness, death and addiction.” All of this should be done without stifling continuing product development.
“As a public health physician, I strongly support FDA regulation of all tobacco and nicotine-delivery products, provided that regulation is evidence-based, practical and reasonably streamlined in a way that will protect and advance public health,” Nitzkin wrote. “As I see it, if the FDA opts to implement the letter of the Law without administrative discretion to accommodate contrary evidence, FDA efforts will do more harm than good in terms of future rates of tobacco-related addiction, illness and death in the United States.”
Nitzkin cautioned that while the tobacco control law was good public policy, it should not hinder product innovation and competition in the marketplace.
“The law was intended to reduce tobacco-attributable addiction, illness and death, while allowing adults who choose to do so to continue use of tobacco and non-pharmaceutical nicotine-delivery products,” he said. “The law was not intended to eliminate such products from the American marketplace by imposing regulatory burdens so onerous that none but the largest of the ‘big tobacco’ companies could comply.”
The proposed regulations will be the subject of a hearing before the Senate Commerce Committee on Wednesday.
Proposed taxes on “affiliated reinsurance” transactions sought by the Obama administration and some members of Congress would restrict trade and could make it impossible to transfer the costs of the Terrorism Risk Insurance Act (TRIA) to the private sector.
The Facts about TRIA
TRIA is a public/private partnership, in which the federal government provides reinsurance to encourage private insurers to provide coverage they otherwise would not or could not. In return for this backstop, insurers are required to offer coverage. Although TRIA has not been triggered in its 12 years of operation, a major terrorist attack could cause up to $100 billion in unbudgeted federal outlays. Without action, the program will lapse on Jan. 1, 2015.
Both Parties Agree: Reauthorization is Necessary and Will Involve More Private Capital
While Congress is considering legislation to reauthorize TRIA, members have said repeatedly they want the private sector to take on more risk. Some ideas include increasing the industry’s retention, raising the program’s trigger level and shrinking the federal government’s pro-rata payment share. The Senate Banking Committee has passed a version of reauthorization legislation. Both the House and Senate bills would require more private capacity.
Congress Must Reject Reinsurance Tax Proposals
If the reinsurance tax becomes law, international insurers and reinsurers operating in the United States would be denied a key risk management tool every other insurer uses. They would have to either replace affiliate reinsurance with non-affiliate reinsurance or raise more capital. This would force the industry as a whole to reduce the size and scope of its U.S. offerings. Above all, it would make reinsurance more expensive, making it much harder to transfer terrorism risks to the private sector.
Quite simply, the proposed reinsurance tax will make it impossible for Congress to achieve its stated goals of moving more terrorism insurance into the private sector. If Congress wants to have private industry rather than taxpayers take on more terrorism risk, it must reject affiliate reinsurance tax proposals.
TALLAHASSEE, Fla. (June 16, 2014) - The R Street Institute welcomed Friday’s signing of SB 542 by Gov. Rick Scott, approving rules guiding the state’s emerging private flood insurance market that track broadly with principles R Street outlined before this year’s legislative session.
“By signing this bill into law, Gov. Scott has expanded choice to Florida’s consumers who would otherwise be stuck with inflexible, expensive flood insurance coverage from the federal government,” said R Street Florida Director Christian Cámara. “We are particularly thankful to Sen. Jeff Brandes, who sponsored the bill and spearheaded the effort. Since the much-needed reforms to the National Flood Insurance Program, we have seen interest by private carriers to offer similar coverage, often for less. Now, Floridians will be able to reap the benefits of those offerings.”
Most importantly, R Street praised provisions of the legislation that would preclude the state-run Citizens Property Insurance Corp. and the Florida Hurricane Catastrophe Fund from moving into the market for private flood insurance. While Citizens should be commended for moving to address its capital structure with appropriate risk transfer mechanisms, the Cat Fund has failed to do the same and coverage sold by Citizens remains dramatically underpriced. A major hurricane could still cause both entities to levy billions in post-storm “hurricane taxes” on nearly every policy in the state.
“We are pleased the Legislature and the governor chose to avoid placing any more risk on the backs of Florida taxpayers and policyholders,” Cámara said. “Simply shifting the risk from the federal program, which is already $25 billion in debt, to the state would have been a step backward. Allowing private insurance companies to sell flood insurance in a competitive market is a major step in the right direction toward real reforms in flood insurance.”
R Street’s November 2013 report, “A state approach to flood insurance reform in Florida,” can be found here:
This piece was co-written by R Street President Eli Lehrer.
We both love credit unions. The free-market think tank we lead considers them a vital part of America’s financial system. Professionally, we’ve worked with credit unions, their trade associations and fellow credit union members to fight off taxation efforts and raise the cap on member business loans.
Now, however, we’ll be withdrawing almost all of our funds from the credit union where we’ve done business for the past two years and depositing them in a big national bank. We aren’t doing this lightly. In fact, we soon hope to find a credit union that better suits our needs. But in the meantime, we hope we can offer a few lessons to other credit unions that want more small business members.
To an extent, our decision was driven by the things that credit unions just don’t or can’t do. Our organization – which can have as much as $1 million on deposit – didn’t have access to some of the kinds of services we all have on our own personal accounts.
Among other things, we couldn’t easily make electronic check deposits and couldn’t see check images online. When we looked for these features – and we spent almost a month searching – we couldn’t find a single credit union anywhere that both offered them to small businesses and could fit us into its field of membership.
This state of affairs, we believe, exists because of federal laws that cap credit union member business loans. Since credit unions face such onerous restrictions on the size of their loan portfolios, it’s natural and even proper that many underinvest in the IT systems necessary to support small business. In the long term, easing MBL restrictions and declining IT costs will almost certainly solve these particular problems. But this will take time.
We might have continued to put up with the IT inconveniences if we felt our credit union excelled in other areas but, ultimately, we didn’t. We have three pieces of advice for credit unions that want to keep small business clients, even without making big IT investments or lifting the MBL cap:
Take member democracy seriously: We inquired several times about serving on the credit union’s board or a committee. While we certainly weren’t entitled to any sort of governance role by virtue of our advocacy or level of deposits (our credit union is small enough that we are a major depositor), we do think we deserved a reply and perhaps an invitation to meet with some board members. We never got either.
If credit unions are serious about running their affairs on a democratic basis, they need to be at least as responsive to member requests as politicians are to requests from constituents. The ability of a small business to participate in organizational governance is a key benefit that credit unions enjoy over banks; they need to capitalize on it.
Realize that kindness can’t replace competence: Our branch manager and several of the tellers are lovely people. They know us by name, came to parties we threw and sent us holiday cards. But when it came to business-specific requests, they fell down on the job. Over two years, about a quarter of our e-mail messages and telephone calls went unreturned. This is a bad practice in any business and unpardonable for one that’s supposed to exist only to serve members.
Don’t do what you can’t do well: Some aspects of the credit union were indifferently run. One reasonably minor issue with a credit card took dozens of phone calls and several months to resolve. Nobody would take responsibility for dealing with it. By the time we finally reached someone who could, it was too late to resolve. The credit union could have saved itself a black eye if it hadn’t gotten into a business it clearly didn’t know how to run. Bad service is worse than no service.
We’re still committed to the credit union movement. We hope to move our funds back into a credit union before the end of 2014. But even if the MBL cap isn’t lifted, credit unions like ours can do a much better job serving their small business members.
This note is in response to the April 2014 FDA Docket No. FDA-2014-N-0189 requesting comment on proposed regulations for extending the authority of the FDA Center on Tobacco Products (CTP) to a number of tobacco products not covered by the 2009 Family Smoking Prevention and Tobacco Control Act (the Law).
FDA has proposed a set of regulations intended to bring almost all non-pharmaceutical, tobacco-related products under the authority of the CTP. This step requires CTP to demonstrate it has the authority and capability, within the current scope of its jurisdiction, to implement a regulatory structure that will protect and improve public health.
CTP’s performance to date is very much open to question, given its failures to propose regulations governing the manufacturing quality of products already under its jurisdiction; to process thousands of Substantial Equivalence applications; and to communicate practical guidelines to manufacturers for New Product, Reduced Exposure and Modified Risk applications. Substantially increasing the number of products the CTP will regulate could make matters worse.
As a public health physician, I strongly support FDA regulation of all tobacco and nicotine-delivery products, provided that regulation is evidence-based, practical and reasonably streamlined in a way that will protect and enhance public health. Unfortunately, it appears CTP is nowhere near meeting this performance standard. The 24-month grace period built into the currently proposed deeming regulations may help sort out these internal problems, but there is nothing in the proposal that indicates CTP recognizes the problems exist.
Though I have no direct insight into the inner workings of CTP, their paralysis appears related to an unwillingness or inability to acknowledge that approval guidelines will expose the public to some degree of risk. This reluctance is compounded by substantial discrepancies between prescriptions in the text of the Law and what our “evidence base” suggests should be done to protect and improve public health.
As I see it, the discrepancies are so severe that, if CTP opts to implement the letter of the Law without administrative discretion to accommodate contrary evidence, FDA efforts will do more harm than good in terms of future rates of tobacco-related addiction, illness and death in the United States.
I respectfully urge FDA to consider alternate approaches in pursuit of public health objectives. After determining which approaches comport with our best evidence, the agency then could consider which actions would fall within a reasonable scope of administrative discretion, and which, if any, might require technical corrections by Congress.
In the best of all worlds, CTP would suspend further consideration of the proposed deeming regulations until it can demonstrate to Congress, industry and the general public that it is successfully executing its existing responsibilities. Only then should it proceed to release an amended set of deeming regulations for public comment.
Suspecting that this will not be done, I offer the following observations and recommendations.
These recommendations are largely based on the “continuum of nicotine delivery products.” They represent one public health physician’s opinion on how best to streamline the FDA regulatory process; bring it in line with the totality of scientific evidence, as opposed to research limited to each newly proposed product; and pave the way for substantial reductions in tobacco-related harms in the United States.
The tobacco control law was intended to reduce tobacco-attributable addiction, illness and death, while allowing adults who choose to do so to continue use of tobacco and non-pharmaceutical nicotine-delivery products. The law was not intended to eliminate such products from the American marketplace by imposing regulatory burdens so onerous that none but the largest of the “big tobacco” companies could comply.
Unfortunately, the Law includes a number of provisions that directly conflict with the evidence base. Discrepancies between the Law and the evidence base include, but are not limited to, the following:
- Risk to non-users is based on the physical and chemical characteristics of the product, without regard to marketing or social factors;
- The presumption that fruit and candy flavors other than menthol are intended to attract children and teens to tobacco use, and that such flavoring is not required to make the lower-risk smokeless and nicotine-only products palatable to adult users;
- The presumption that any statement of reduced exposure or reduced risk is likely to attract large numbers of teens and other non-smokers to nicotine use;
- Emphasis on chemical analyses as a means of ascertaining the risk posed by a tobacco product, without regard to the lack of certainty regarding how much cancer, heart, lung and other disease risks can be attributed to any given chemical substance;
- Disregard of the American and Scandinavian epidemiologic literature demonstrating major differences in risk between cigarettes and smokeless and other tobacco and nicotine-only products.
Because of these discrepancies, proceeding further with implementation of the Law without due exercise of administrative discretion may do more harm than good, in terms of our collective ability to reduce tobacco-related addiction, illness, and death in the United States.
The Law, and CTP implementation to date, both appear to presume that each newly proposed tobacco product is so unique that research findings on similar products cannot be considered. This presumption so substantially increases the cost and difficulty of New Product applications that none have been fully submitted to date. This presumption also substantially increases the burden of processing such applications and denies FDA use of prior literature for baselines and benchmarks with which to judge such applications. The stakeholders who benefit from this presumption are those in the tobacco control community who would prefer to eliminate all tobacco and non-prescription nicotine delivery products, as well as the cigarette and pharmaceutical companies who would like the Law to protect them from competition from relatively low-risk smoke-free and nicotine-only products. However, the presumption does nothing to protect or enhance public health.
Fourteen specific recommendations are listed at the end of this report. Most, if not all, of them can be implemented without congressional action. Where congressional action is required to bring the Law into better concordance with congressional intent, such action should be requested.
From the Washington Examiner:
Mytheos Holt for the R Street Institute: In what is likely the most bizarre story you will read all week, Seattle resident, trained fighter and self-proclaimed “superhero” (read: costumed vigilante) Phoenix Jones has decided to disband his team, the Rain City Superhero Movement (RCSM), citing an amusing problem: A lot of people seeking to join it aren’t all that … well … super. …
On Tuesday, Republican primary voters asserted themselves in spectacular fashion by wresting the GOP nomination from House Majority Leader Eric Cantor and giving it to quirky economist Dave Brat, who now looks very likely to win the seat in the fall. This is much more than a run-of-the-mill primary upset. Because Cantor was second in command to Speaker John Boehner among Republicans in the House, his defeat has set off a scramble for power, the outcome of which has yet to be determined.
Cantor’s defeat has led to searching questions about what exactly Brat’s victory means. Let’s run through a few different interpretations.
Immigration. One widely held view is that Cantor’s defeat means that immigration reform is dead. There is one problem with this line of thinking: comprehensive immigration reform, as endorsed by the Obama White House and a bipartisan group of senators that includes Chuck Schumer, D-N.Y., and John McCain, R-Ariz., among others, was already dead. The fundamental bone of contention is whether or not unauthorized immigrants should be granted a path to citizenship, provided they jump through various hoops, like paying back taxes and demonstrating English language proficiency, most of which would be impossible to implement.
Grassroots conservatives staunchly opposed a soft amnesty along these lines when it was proposed by the Bush administration, and they continue to oppose it now. They’ve long had the numbers and the influence in Congress to keep legislation to this effect from making it to President Barack Obama’s desk. It’s true that Cantor and other Republicans, including Boehner, had tried to find ways to revive the immigration reform effort, but they weren’t gaining much traction.
Tea Party vs. the Establishment. Though Cantor is now being portrayed as an establishment Republican par excellence, it is important to remember that he had long styled himself as a more conservative alternative to Boehner, who was always careful to cultivate allies to his right. Cantor predates the tea party, and his urbane manner contrasted with the populist style that is a hallmark of the tea party right.
Nevertheless, Cantor was, by and large, a man with whom tea party conservatives could do business, and he was willing to take on the thankless task of leading often fractious House Republicans. It is true that, as Ross Douthat observes, Cantor was seen as a friend of K Street, the lobbying corridor that does so much to shape American politics on both sides of the partisan divide. But it’s only relatively recently that (some) tea party conservatives decided that they wanted Cantor’s scalp. To suggest that Cantor’s defeat is a victory for the Republican right over the party’s squishy centrists is not quite correct.
Change vs. the Status Quo. My preferred interpretation is that Cantor’s defeat represents a defeat for those Republicans who believe that there is nothing wrong with the party that can’t be solved by a charismatic candidate and moving to the left on social issues like marriage equality and immigration reform. National Review senior editor Ramesh Ponnuru laments Cantor’s defeat because he had done so much to tout the work of conservative policy thinkers offering an alternative to the centralized, top-down, big-government policies offered by the left and the coziness with big business and Wall Street that defines too much of the right. He is right to do so. Cantor really did make an effort to open up the domestic policy conversation on the right.
Yet like Douthat, Ponnuru suggests that Cantor’s shift might have been too little too late: had Cantor been quicker to champion Main Street over Wall Street, he might have bested Brat. Instead, Brat’s call for a Main Street agenda resonated with enough GOP primary voters to put him over the top. If Brat’s success doesn’t demonstrate that rank-and-file Republicans are hungry for change, nothing will.
The tea party movement does not represent some irrational, nihilistic force, as its critics both inside and outside of the Republican coalition maintain. Rather, it is a movement founded on the belief that the Republican elite has grown fat, happy and complacent at a time when the country faces serious challenges — economic, fiscal, and social — and that the elite needs to be shaken out of its torpor.
What the movement needs is an agenda: something to be for, not just something to be against. Cantor’s defeat underscores that this longing for change persists, and that it won’t go away until Republicans start seriously addressing the economic stagnation at the bottom and the crony-capitalist corruption at the top of the American economy.
This year, the California Senate Insurance Committee has hosted two informational hearings on earthquake risk, bringing together stakeholders, regulators and thought-leaders.
The fact is that Californians are overexposed and underinsured. The desire to live in a beautiful environment outweighs the certainty of earthquake loss, as the state’s population continues to concentrate itself along the coast in two of the most seismically active areas of the world. The likely impact of a significant earthquake increases apace.
Modeling done in contemplation of the 100th anniversary of the 1906 San Francisco earthquake estimated that, were an identical event to occur today, the total economic loss to the region would be $260 billion. In that event, California would not be facing an insurance crisis. Instead, it would be facing a massive depression.
Leaving aside commonsense questions about any individual’s wisdom or responsibility in moving to dangerous locations, or government’s baffling public policy support of such foolish decisions, we must accept as a reality that people willingly put their lives and property in the path of certain disaster. Is sympathy wasted on the person bemoaning her flood loss after she built her castle on the banks of the certain-to-flood Mississippi?
The California Earthquake Authority – a publicly managed, privately funded, state residual market entity – was established in the wake of the 1994 Northridge earthquake. That earthquake measured 6.7 on the Richter scale, killed 60 people, destroyed thousands of homes, businesses and apartment complexes, and is, to date, still the costliest quake that California has experienced.
Insurers had dramatically underestimated their exposure to a Northridge-like earthquake. The insured loss in Northridge was more than four times the $3.5 billion in earthquake premiums collected by all earthquake insurers in California from 1969 through 1994.
Because California law requires insurers that sell homeowners insurance to also offer earthquake insurance, insurance companies’ response to Northridge was to attempt to reduce their earthquake exposure by restricting the sale of new homeowners policies. Insurers representing more than 93 percent of the homeowners market either reduced their sales of new policies or stopped writing entirely. Lenders, builders and realtors started to howl in economic pain. A state residual market entity, though controversial, was deemed necessary because of the problems caused by Northridge’s effect on the homeowners insurance industry.
Over time, the animating rationale undergirding the genesis of the CEA has changed. Just what was “the problem” that policymakers were seeking to address?
It is often assumed that the CEA was created to increase earthquake coverage, which is a reasonable assumption, given that the Northridge quake preceded the CEA’s creation. However reasonable, that assumption is wrong. Floor analysis from the time makes clear that the CEA was created in an effort to ensure that homeowners insurance remained available, in spite of the specter of seismic catastrophe.
To wit, the CEA’s primary function is not as a guarantor against earthquakes, but rather as a stabilization mechanism in the homeowners insurance market. Expanding the number of Californians with earthquake coverage is an ancillary benefit.
In each of this year’s hearings, a consensus developed that the CEA has been at once a success and a failure. The success was that the CEA has ushered in a new era of stability to the residential homeowners insurance market. The purported failure is that availability has not resulted in most homeowners being protected against earthquake risk.
It is odd that the CEA is indicted for failing to achieve what its creation did not anticipate. In fact, though financially robust, the CEA’s inability to cover a large proportion of Californians should surprise nobody. As far as take-up rate is concerned, one problem is affordability. As far as affordability is concerned, the problem is a high risk and a small pool of insured. For people willing to move into the jaws of disaster, insurance to cover such risks will be costly.
To solve the problem of affordability, both of California’s U.S. senators, with support from the CEA, have tried to put everyone else in the nation on the hook for California’s earthquake coverage. Their proposal would establish a system by which California would pay fees to the federal government in exchange for loan guarantees to the CEA, allowing it to reduce its premium rates and increase its take-up rates. As one would expect, the California Senate has seen fit to endorse just such a proposal. Senate Joint Resolution No. 28 urges the federal government to implement the “Earthquake Insurance Affordability Act.”
Eli Lehrer soundly dispatched the viability of such a federal approach back in 2011. In primis, a loan-based approach does not spread risk adequately to sustain the rates necessary to make the CEA policies practicable to purchase.
Are there other solutions to expanding the number of Californians covered by earthquake policies? Some might say a more desirable alternative would be to require, as a condition of securing financing for real property, earthquake insurance coverage in earthquake-prone areas. Such a mandate is comfortably within existing precedent. For example, federally backed mortgage securitizers Freddie Mac and Fannie Mae already require homeowners insurance and, in flood-prone areas, flood coverage. As long as homeowners policies are linked by law to an earthquake offer, as a matter of long-term strategy, such an approach could be brought to bear.
But given the status quo, and aside from their market distorting infirmities, such insurance mandates are likely to lead to technical complications. For instance, mandating earthquake coverage could cause problems in California since the CEA is statutorily required to stop issuing policies within 180 days of exceeding its financial capacity. The CEA maintains that mandatory coverage would push the agency over the statutory limit. If so, it is clear that mandatory coverage would require a sizable increase of insurance capacity.
With regard to capacity, one school of thought holds that the scale of California’s earthquake risk is simply too big for capital markets to cover and that there is no way of “insuring our way out” of the present situation. R.J. Lehmann makes a compelling case that there is currently no such deficiency and that, even now, the private market is willing and able to take on more earthquake risk. If Lehmann is correct, increasing CEA capacity is not a question of ability, but is, rather, a question of will.
More importantly, the private market could be induced to take on more earthquake risk. A more straightforward approach to expanding coverage while maintaining the homeowners insurance marketplace could be pursued by simply “de-linking” homeowners policies from mandatory earthquake policy offers. This free-market approach would allow smaller insurers to enter the homeowners market, while simultaneously freeing up other insurers to approach the earthquake insurance market more aggressively.
Regardless of how the earthquake coverage problem is resolved, the CEA should be considered a success. It alleviated a dire situation in which insurers were forced to refuse to sell even basic homeowners policies. And yet, the existence of the CEA should also serve as a reminder that California’s penchant for micro-managing insurance practice has had profoundly dangerous, unexpected and distortionary consequences.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
A new research paper has calculated that frequent lawsuits by patent assertion entities—or patent trolls, as they are known pejoratively—have scared off some $22 billion in venture capital investment over the past five years.
Advocates of patent reform long have argued that patent trolls – which buy up and hoard low-value patents with the intent to claim infringement should another company develop a similar product or device – have had a deleterious effect on entrepreneurs and start-ups. The paper, by Catherine Tucker, a professor at MIT’s Sloan School of Business, is believed to be the first to quantify the damage statistically. The research was commissioned by the Computer & Communications Industry Association.
In her 48-page paper, Tucker measured the statistical correlation between levels of patent litigation and venture capital investment in the United States between 1995 and 2012.
We find that VC investment, a major funding source for entrepreneurial activity, initially increases with the number of litigated patents, but that there is a “tipping point” where further increases in the number of patents litigated are associated with decreased VC investment, which suggests an inverted U-shaped relation between patent litigation and VC investment….There is some evidence of a similar inverted U-shaped relation between patent litigation and the creation of new small firms. Strikingly, we find evidence that litigation by frequent patent litigators, a proxy for PAE litigation, is directly associated with decreased VC investment with no positive effects initially.
Among her conclusions is that frequent patent litigators reduced VC investment by $21.8 billion over the past five years, relative to a baseline of $131 billion that VCs invested in start-ups and innovation over the same period. Frequent litigators are defined as companies that file 20 or more patent lawsuits. The direct costs of legal fees and settlements for this type of litigation were between approximately $3.77 billion and $18.9 billion in 2012.
The report puts more sting into the tabling of the Senate’s Patent Transparency and Improvements Act, which, among other points, would have codified a “loser pays” stipulation if a court found a patent case frivolous. One of the reasons patent trolls are so successful is that it is in defendants’ financial interest to settle, rather than risk the costs of a lengthy trial, even if the accused infringer stood a good chance of winning. The patent assertion entity extracts its settlement fee and, at the same time, avoids having the validity of its claim adjudicated. In short, it lives to troll another day. A “loser pays” regime would provide more incentive for a defendant to take the case to court, and mean more risk for a troll when pressing dubious claims.