Out of the Storm News
Henry Ford would be gratified to see how far mass production has come. Entire cars are constructed in a matter of hours. He would be somewhat less impressed by how difficult and costly it has become to replace parts.
Why is it that certain parts – bumpers, for instance – are such a headache to repair? The root of the problem is related to something simple: expectations.
Insurers that sell automobile insurance policies agree to return the covered vehicle to its pre-loss mechanical condition in the event of a claim. What, exactly, such an agreement entails is subject to debate.
By way of example, contemplate a scenario in which your car’s bumper has had an unexpected rendezvous with a parking lot lamppost. The bumper is left misshapen and in need of replacement. Parts like your car’s bumper are considered “crash parts.” These are parts of the vehicle that are non-mechanical, non-structural and non-safety related. That means that crash parts tend to serve a cosmetic function and are made of sheet metal or plastic.
Secondary manufacturers around the world are able to reverse-engineer and manufacture your damaged car’s bumper. Replacement parts makers that have the blessing of the vehicle manufacturers are referred to as “original equipment manufacturers,” or OEM, for short. Other crash part manufacturers must be content with being referred to by the less poetic and negative term “non-OEM.”
Names aside, the bottom line is that non-OEM crash parts typically cost between 20 percent and 65 percent less than OEM crash parts. This cost difference makes non-OEM crash parts extremely attractive for cost-conscious repair. To combat their bottom-line disadvantage, OEM manufacturers maintain that the non-OEM parts are not of the same “like, kind and quality” as OEM crash parts.
The cost/quality debate forms the background to expectations and disputes regarding an insurer’s contractual obligation to return your car to its pre-loss condition. For years, meeting expectations concerning crash part replacement has been a flashpoint between insurers, automobile manufacturers and repair shops.
Since 2005, the Legislature in Sacramento has seen the introduction of 10 pieces of legislation designed to bend the cost curve away from one industry and toward another. The powerful interests aligned on each side have repeatedly watched most of these attempts fail at the first step, in the first committee to hear the issue.
Thus, it was with alarm that insurers reacted to a regulatory decision by the California Department of Insurance (CDI) to create a new auto claims world, using its existing scope of delegated authority, by promulgating regulations that remove the incentive to use non-OEM crash parts.
Regulatory usurpation of legislative matters works real political violence and undeniably is dictatorial in effect. Moreover, the nature of regulatory promulgation allows debate only on the details of a legal product created by government. Regulation can be a premeditated, non-market-based, government-originated effort to create winners and losers by fiat. In contrast, legislative debates are more likely to preserve fairness. For those apparently selected to be losers by the flex of regulatory muscle, the legislative process is understandably preferable.
Yet, before going further, consider again your car’s damaged bumper. When the car goes in for repair, there are a variety of options available for replacing the damaged parts. By law, you have the right to determine where the car is taken for repairs and which specific repairs/parts are used on your vehicle. Thus, you can elect to order replacement parts directly from the factory of the vehicle’s manufacturer. This option tends to be expensive but, in theory, it could be a guarantor of quality.
Given that OEM crash parts are more expensive than non-OEM crash parts, an insurer’s estimate of the appropriate cost of repair may only cover a portion of the cost necessary to install OEM parts. This is because the insurer believes that it can discharge its obligation to return your car to pre-crash condition by using less expensive non-OEM parts.
The CDI characterizes instances in which an insurer’s estimate covers only the cost of less expensive non-OEM parts as “requiring” the use of such parts. The significance of this distinction is borne out by the regulations. The regulations oblige any insurer that “requires” non-OEM parts to guarantee the quality of those parts. The obligation is puzzling, because insurers do not certify, manufacture, replace or otherwise interact in a legally-significant way with parts themselves. Instead, insurers maintain a contractual relationship with their insured to return vehicles to pre-loss condition; a relationship which is only incidentally related to the repair shop/part manufacturer industry’s work on vehicles. Without a concrete nexus to the part itself, insurers are ill-suited to shoulder liability for defects.
By purposely conflating a warranty for the repair with a warranty for the parts themselves, the CDI has heavily burdened insurers, at the expense of their policyholders.
At the time the regulations were promulgated, insurers predicted they would cause a decline in the use of non-OEM crash parts and an increase in the cost of repairs. The regulations have now been in effect for a little over a year; though data bearing out those predictions does not yet exist, their merit is apparent.
For instance, hindering the use of non-OEM parts would be expected to increase the use of OEM parts, in turn increasing the number of vehicles that reach the dollar threshold at which they are declared a total loss. In other words, depending on the value of your car, the prospect of repairing that bumper with an expensive OEM part could mean the car ends up totaled.
To rectify the perceived problem with non-OEM crash part quality, insurers repeatedly have proposed systems by which such parts could be certified as equivalent to OEM quality. These proposals have consistently been opposed by automobile manufacturers and body shops alike. The great irony of their opposition is that it is commonly known that both non-OEM and OEM parts are manufactured overseas, often at the same plant!
Whether intentionally or not, the CDI, having acceded to the agenda of the automobile manufacturers and the body shops, has set a course for higher costs and no discernible increase in consumer protection. It is time for the Legislature to step in and create a uniform system under which crash parts of all kinds are treated identically. Perhaps then a bumper repair would be an easy fix.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
There are about 46 million smokers in the United States, and an estimated 480,000 deaths per year are attributed to cigarette smoking. Despite our best efforts, these numbers have not budged in the last ten years. Electronic cigarettes may be our best hope for changing that.
Before e-cigarettes, the best we had to offer smokers was a set of pharmaceutical-based smoking-cessation protocols that failed about 90 percent of smokers even under the best of study conditions, when results are measured at six to twelve months. The flaws in these “evidence-based” protocols are fairly obvious. They do not satisfy or eliminate the urge to smoke in the majority of smokers, the dose is too low, and the duration of treatment is too short.
E-cigarettes and similar vapor devices compare quite favorably to these methods. E-cigarettes enable the user to inhale nicotine without the witches’ brew of toxic chemicals found in cigarette smoke. They have proven extremely popular among smokers who have been unable or unwilling to quit, and studies show e-cigarettes to be as good as or much better than the pharmaceutical products. In addition, e-cigarettes have been shown to be easier to quit than cigarettes, enabling the now-ex-smoker to finally end his or her addiction to nicotine. While some non-smoking teens may experiment with e-cigarettes, very few will continue their use. As with adults, almost all e-cigarette use by teens is by smokers attempting to cut down or quit.
Further, e-cigarettes contain the same type of nicotine as the pharmaceutical products, and like those products e-cigarettes do not contain tobacco. E-cigarettes’ other ingredients are well known, and consist of propylene glycol (theatrical fog), vegetable glycerin, citric acid, distilled water, and food flavoring. In the absence of FDA regulation (which is the fault of FDA, not the e-cigarette companies), the e-cigarette industry has developed and put into place voluntary standards to ensure the quality and consistency of e-cigarette fluid.
All told, the reduction in risk offered by e-cigarettes relative to traditional cigarettes is about 98 percent. Smokers find the vapor products far more satisfying, and, in many cases, far more effective than the pharmaceutical alternatives.
The case for e-cigarettes is based on solid research and the excellent safety record of the pharmaceutical nicotine-delivery products, which are chemically similar. When it comes to reducing the death toll of smoking, vapor products are the best option we have.
June 26, 2014
Chairman Dianne Feinstein
U.S. Senate Select Committee on Intelligence
Washington, DC 20510
Vice Chairman Chambliss
U.S. Senate Select Committee on Intelligence
Washington, DC 20510
Dear Chairman Feinstein, Vice Chairman Chambliss, and Members of the Senate Select Committee on Intelligence:
We the undersigned write to express our grave concerns with the Cybersecurity Information Sharing Act of 2014 (CISA). Over the last year, the public has learned that the National Security Agency (NSA) and other government agencies have significantly stretched the meaning of statutory provisions of law in order to gather sensitive information on hundreds of millions of Americans. The NSA has, without a warrant, searched for the communications of Americans among those collected under laws authorizing surveillance of persons abroad, and engaged in questionable cybersecurity practices, such as compromise of security standards and failure to promptly inform technology companies about security vulnerabilities in their software. CISA ignores these revelations. Instead of reining in NSA surveillance, the bill would facilitate a vast flow of private communications data to the NSA. CISA omits many of the civil liberties protections that were incorporated, after thorough consideration, into the cybersecurity legislation the Senate last considered. For the following reasons, we strongly oppose this legislation and urge against Senate consideration:
- Militarization of the Civilian Cybersecurity Program: CISA requires that cyberthreat indicators shared from the private sector with the Department of Homeland Security (DHS) be immediately disseminated to the Department of Defense, which includes the NSA and U.S. Cyber Command. This new flow of private communications information to NSA is deeply troubling given the past year’s revelations of overbroad NSA surveillance. It would enhance the NSA’s role in the civilian cybersecurity program, risking militarization of the program, which would diminish transparency and accountability.
- Inadequate Use Limitations: CISA’s inadequate use limitations risk turning the bill into a backdoor for warrantless use of information the government receives for investigations and prosecutions of crimes unrelated to cybersecurity. CISA permits state, local, and tribal governments to use cyber threat indicators to prevent, investigate, or prosecute any crime to which the sharing entity assents. It also allows the Federal Government to use information it receives for an unacceptably broad range of law enforcement purposes, including investigations and prosecutions under the Computer Fraud and Abuse Act (CFAA) and the Espionage Act. Exemption from disclosure law may obstruct transparency regarding law enforcement use of such information. The legislation should contain reasonable use restrictions, similar to those in the July 2012 Cybersecurity Act.
- Failure to Protect Personally Identifiable Information: CISA requires private sector entities to remove personal information that pertains to known U.S. persons before they share cyber threat indicators. In practice, this will provide little privacy protection because private sector entities will not know the citizenship of the person to whom the information pertains. Further, the bill does not require any effort by the government to remove personal information before sharing cyber threat indicators. Finally, the bill does not require that federal privacy rules be in place before information-sharing begins, increasing the risk of improper dissemination of potentially sensitive personal information.
- Overbroad Liability Protection for Countermeasures: CISA defines “countermeasure” broadly, and unwisely provides an affirmative defense when a countermeasure causes damage to an entity’s network or information system, including actions that would otherwise violate the CFAA, the Wiretap Act, and the Stored Communications Act. This invites reckless and careless use of countermeasures that could inadvertently harm bystanders.
- Arbitrarily Harms Average Internet Users: The definition of “cybersecurity threat” is overbroad, and includes “any action” that may result in an unauthorized effort to adversely impact the security, confidentiality and availability of an information system or of information stored on such system. Countermeasures can be employed against such threats absent risk of liability. This could lead to use of countermeasures in response to mere terms of service violations. For example, logging into another individual’s social networking account – even with their permission – typically violates the website’s terms of service, and therefore qualifies as unauthorized access under the CFAA, and could be treated as a “cybersecurity threat.” A provision preventing this harm appeared in the July 2012 Cybersecurity Act and should be included in CISA.
- Infringing on Net Neutrality Policy: Likewise, the July 2012 bill also contained provisions clarifying that nothing in the Act, including overbroad application of the terms “cybersecurity threat” and “countermeasure,” could be construed to modify or alter any Open Internet rules adopted by the Federal Communications Commission. Net neutrality is a complex topic and policy on this matter should not be set by cybersecurity legislation.
Cybersecurity legislation intended to protect national security, financial systems, computer users, and the Internet must not undercut essential privacy rights. Accordingly, we urge that these changes be adopted before this legislation moves forward.
Please contact Greg Nojeim, Director of CDT’s Project on Freedom, Security & Technology, email@example.com, or Jake Laperruque, CDT’s Fellow on Privacy, Surveillance, and Security, firstname.lastname@example.org, regarding any questions.
Thank you for your consideration,
American Civil Liberties Union
American Library Association
Center for Democracy & Technology
Competitive Enterprise Institute
The Constitution Project
Council on American-Islamic Relations
Cyber Policy Project
Defending Dissent Foundation
The Electronic Frontier Foundation
Free Press Action Fund
Government Accountability Project
National Security Counselors
New America Foundation’s Open Technology Institute
PEN American Center
People For the American Way
The concerns posed by this problematic legislation are far-reaching in their effects, and implicate a broad array of issues, including privacy, open government, civil liberties and the integrity of our information technology infrastructure. Many of the undersigned groups share several or all of these concerns as described in today’s letter circulated by CDT, which highlights technology and privacy issues with the bill, and a letter organized by the ACLU, which focuses on serious concerns the bill poses for open government, whistleblower protections and civil liberties. These concerns are complementary and overlapping, as evidenced by the significant number of groups signing onto both letters.
Cybersecurity Information Sharing Act of 2014, Sec. 5(c)(1)(C).
Cybersecurity Information Sharing Act of 2014, Sec. 4(d)(4)(A)(i).
Cybersecurity Information Sharing Act of 2014, Sec. 5(d)(5)(A).
S. 3414, Sec. 704(b), 2012.
S. 3414, Sec. 708(6)(B), 2012.
S. 3414, Sec. 707(a)(10), 2012.
Yesterday, 32 organizations from across the political spectrum, including the American Civil Liberties Union, the Electronic Frontier Foundation (EFF), and R Street Institute, asked Attorney General Eric Holder to explain just how the United States government plans to use the system it’s building and the data contained therein. Specifically, they want the federal government to perform a formal Privacy Impact Assessment (PIA) to follow up on the last such report, done in 2008.
New research published in Multiple Sclerosis Journal and authored by Anna Hedström of Stockholm’s Karolinska Institute of Environmental Medicine confirms that snus users have a significantly lower risk for multiple sclerosis than nonusers of tobacco. Five years ago, I discussed the researchers’ earlier findings on this subject here.
Hedström’s study is based on some 7,900 Swedes with MS and 9,400 controls. Compared with never-users of tobacco, snus users had a lower risk for MS (odds ratio, OR = 0.75; 95 percent confidence interval, CI = 0.63 – 0.90). Hedström also showed an increased effect at higher duration-dose levels of snus. For example, users with greater than 10 packet-years (the number of snus doses per day multiplied by years of use) had an OR of 0.45 (CI = 0.28 – 0.68). Smokers had a modestly increased risk (OR = 1.49, CI = 1.40 – 1.59), a finding similar to that reported in Hedström’s previous study.
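For readers unfamiliar with how figures like “OR = 0.75, CI = 0.63 – 0.90” arise, the sketch below shows the standard arithmetic for an odds ratio and its approximate 95 percent confidence interval from a 2x2 case-control table. The counts used here are purely hypothetical illustrations, not Hedström’s actual data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI for a 2x2 case-control table.

    a: exposed cases      b: exposed controls
    c: unexposed cases    d: unexposed controls
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR), the usual Woolf approximation
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts, for illustration only:
# 200 snus-using cases, 300 snus-using controls,
# 400 never-user cases, 300 never-user controls
or_, lower, upper = odds_ratio_ci(200, 300, 400, 300)
print(round(or_, 2), round(lower, 2), round(upper, 2))
```

An OR below 1 with a confidence interval that excludes 1 (as in the study’s 0.75, CI 0.63 – 0.90) is what supports the claim of a significantly lower risk among snus users.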
Scientific research is methodically unveiling the benefits of nicotine and smoke-free tobacco use with respect to degenerative brain diseases. A finding that nicotine may improve performance in people with mild cognitive impairment has resulted in calls for more research on nicotine’s effect on dementia.
The impact of nicotine/tobacco use on Parkinson’s disease is well documented. An American Cancer Society study provides clear evidence that smokeless tobacco use may be protective for Parkinson’s disease (RR = 0.22, CI = 0.07 – 0.67). In fact, nicotine is being discussed as therapy for this disorder (see here, here and here).
Alzheimer’s disease is the sixth-leading cause of death in the United States and Parkinson’s disease is the fourteenth. The role of nicotine and smoke-free tobacco in reducing risk of or treating these disorders is of significant import.
Last week, the Washington Post detailed a phenomenon which, to many twenty-something residents (or former residents) of the District of Columbia, is all too familiar: young people are leaving Washington proper in droves:
They were once a part of the free-spending group of young people who jolted Washington’s economy. Now older and with more financial strains, they are trying to find a new place in it.
Amid the talk of young newcomers and their fondness for social leagues and artisanal-coffee shops, another reality exists: Many are struggling to keep pace with the city’s rising cost of housing. And as new millennials move into the District, older members of that generation — loosely defined as ranging from 18 to 34 years old — are heading out.
This odd, migratory pattern among younger D.C. residents is undoubtedly a problem, and one with which this author can personally identify, having moved last year from a squat, overpriced studio in Kalorama to a spacious two-bedroom in Arlington, when the combination of lower taxes and split rent became simply too attractive to turn down. For others, such as those cited in the article, similar factors are no doubt at work.
Since the article’s publication, its author, Robert Samuels, has posited a number of potential remedies for the relevant problem. For instance, he has suggested making neighborhoods more walkable, or reforming and expanding the public transit system (which, in WMATA’s case, is decades overdue). Lower crime rates also would be essential for some of the more affordable areas of the city to attract new residents, Samuels notes.
All of these are good suggestions, as far as they go, but there’s one obvious solution that is completely ignored both in the original article and its sequel.
I refer to the century-old Height of Buildings Act of 1910, which restricts buildings in the District of Columbia to the rather diminutive height of 110 feet. While this particular height probably seemed formidable in 1910, today it only recalls a hilariously anachronistic phrase from the Rodgers and Hammerstein musical Oklahoma!:
Everything’s up to date in Kansas City
They’ve gone about as far as they can go
They went and built a skyscraper seven stories high
About as high as a building oughtta grow
Here in D.C., regulators appear to believe eight stories is about as high as a building ought to grow. What seems to concern them less is how high this causes rent to grow. As Danny Vinik noted in The New Republic in May:
Zillow, a real-estate database, rates D.C. as the seventh-most expensive metro area in the country for residential renters. Office rents are third-highest in the U.S., according to commercial real estate firm Cassidy Turley. Increasing the supply of housing would bring down these rents significantly. That leaves more money in consumers’ pockets to spend on goods and services and more money with companies to boost wages or invest. That means more jobs, including many from the construction boom that would result from a relaxation of height restrictions. The increased density would lead to a larger underlying tax base and boost revenue to put toward city services. And, as Matt Yglesias has often written, allowing companies to cluster near each other has significant economic benefits.
One doesn’t have to be in favor of transforming the much-beloved D.C. skyline into a Tokyo-esque hive to see the problem, nor why even a modest increase in the height limit could drive down rents and allow more young people to stay in the city. Such an improvement would be at least as important as, say, updating the city’s public transit.
Of course, as in any policy fight, there are both economic interests and inherent resistance to change to be overcome. On the economic interests front, one can easily see how landlords and homeowners prefer a policy that sees their rents rise every year, irrespective of the age of the tenants who happen to be paying them, or that keeps the value of their homes increasing, irrespective of whether anyone can afford them.
As to resistance to change, one need only look to the fuss that more senior residents of the District raised over the addition of late-night bars and bike lanes to neighborhoods like Cleveland Park. Shortening their beloved skyline, even by a few stories, may be a bridge too far.
Still, D.C. has a choice: embrace the future by letting its buildings grow up along with its younger inhabitants, or simply serve as a temporary stop for those young people on their way to environments with less regulatory meddling. Many residents may be all too happy to choose the second option, but they may need a reminder that economic growth also implies growth within the city. In this case, that growth may have to be vertical.
Earlier this week, the University of California’s Office of the President (UCOP) determined that, as a result of a perceived shortfall in regulation, it would not reimburse faculty members who use transportation network companies (Uber, Lyft, Sidecar, etc.) or other peer-to-peer services like Airbnb while engaged in UC-related business. The message is recreated below:
UCOP’s Office of General Counsel has determined that third-party lodging and transportation services, commonly referred to as peer-to-peer or sharing businesses, should not be used because of concerns that these services are not fully regulated and do not protect users to the same extent as a commercially regulated business. As the market matures and these businesses evolve, the university may reconsider whether reimbursement of travel costs provided by peer-to-peer or sharing businesses will be allowed.
Therefore, until further notice, please do not use services such as Uber, Lyft, Air B&B or any other similar business while traveling on or engaging in UC business.
What or who would drive the president of the University of California (Janet Napolitano) to steer recklessly into a collision with the regulatory decision of another duly delegated California state body that has far greater expertise? Why does Ms. Napolitano even have a position on how TNCs should be regulated?
This is an instructive example of the unexpected reach of state bodies and their ability to choose winners. The UCOP has augustly articulated a legal, but policy-deficient, rationale for its judgment about the sufficiency of the current regulatory regime.
What precisely does the UCOP mean by “not fully regulated”?
Setting aside the other peer-to-peer services: in California, TNCs are regulated by the California Public Utilities Commission (CPUC). The CPUC has promulgated preliminary regulations and insurance requirements for TNCs which, while still being refined, do hold the force of law. In fact, the CPUC has already issued licenses to operate to five separate providers of TNC services. As far as the CPUC is concerned, it has enough of a handle on the situation to allow TNCs to continue to operate.
The debate over the sufficiency of TNC regulation is really about resolving hitherto-latent ambiguities that incumbent industries have emphasized. No doubt, such concerns deserve serious attention from regulators and policymakers alike. But such attention does not signal that TNCs are somehow without “full regulation.”
More frustrating still is that the UCOP has determined, with no or limited subject-matter expertise, that other “commercially regulated businesses” offer consumers a greater degree of protection. Here, the UCOP is conflating different activities. TNCs are under the regulatory purview of the CPUC because they are considered charter-party carriers – a designation populated by limousine services. They are not regulated as taxis are, by local authorities. Since TNCs have a different regulator and a different business model, comparisons between their regulatory environment and the one in which taxis exist are awkward and misleading.
The UCOP, if utterly compelled to act, should at least have done so with an eye on the approach the California Legislature is taking. There is little doubt that, whatever the final product looks like, there will be a meaningful difference between what is required of taxis and what is required of TNCs. The University of California, unfortunately, is doing its part to disregard other state regulators and, apparently, to express a preference for onerous regulation. California-licensed TNCs are lawful vendors. In fashioning its policy judgment, the UCOP erred.
The Supreme Court of British Columbia has ordered Google to remove information from the internet, reports Zach Graves for R Street.
The Canadian case involves the sale of counterfeit products, but it is similar to a ruling by the European Court of Justice last month, which ruled that search engines could be required to remove links that infringe upon a person’s privacy rights.
The decisions are distinct, but both have a wide reach:
- The European decision applies even to factual information in the public record.
- Unlike the European decision, the Canadian decision applies beyond Canadian sites, requiring Google to remove information that can be accessed from anywhere in the world.
The ruling, writes Graves, leads to some important questions. What if a ruling in one country conflicts with another? He notes that China, Korea and Russia are already pursuing ways to possibly censor the Internet.
People depend on search engines for information, and allowing a country to set restrictions on the type of information available on the internet is concerning.
Source: Zach Graves, “The Dangerous Proliferation of the ‘Right to be Forgotten’,” R Street, June 18, 2014.
In recent weeks, more than a few groups – including one on whose board I serve – have denounced various aspects of the Texas Republican Party’s platform.
Indeed, there’s a lot to criticize. In addition to its absurd call for so-called “reparative therapy” for homosexuals and bans on pornography, the platform draft includes nativist language on immigration and an attack on vaccination. There’s also some conspiracy theory garbage opposing Sharia law and the United Nations’ Agenda 21.
A few parts of the platform seem downright sloppy: one provision calls for the repeal of all laws “regarding the production, distribution or consumption of food.” I’m sympathetic to what the writers of this were probably thinking, but taken literally, the provision would make it legal to label jars of baby food as containing “carrots and peas,” even if what they really contained was fermented gerbil vomit.
It also includes some foreign policy planks that somebody must care about but that seem quite out of place for what is, after all, a state party.
For all its real flaws, however, the platform is a pretty decent summation of current streams of thought among the populist, socially conservative right. The current draft calls for the outright repeal of the Patriot Act as well as the National Defense Authorization Act provisions that allow the use of military tribunals for trying terrorists. Both provisions would probably get more votes in the Democratic caucus than the Republican one. Previous iterations of the Texas GOP platform have also called for usury laws—government price controls on interest rates—and mandatory labeling of genetically modified food.
Frankly, much of the platform’s weirdness and its strong populist flavor come from the unusual way that Texas drafts its platform. The platform comes from a drafting committee, just as most other state platforms do. But where most party platforms typically are written by insiders for media consumption, the Texas GOP platform is debated and rewritten by anybody who takes time off and pays the fee to attend the party convention. The result is that it’s a true “grass roots” platform that reflects the feelings of the party’s activists, rather than its officeholders.
People with a pet issue can usually get it in, so long as it isn’t too contentious a topic. And I know this for a fact. In 2012, the Heartland Institute’s then-Texas director, attending the convention in her own private capacity, got some language into the platform on property insurance that I helped her write.
This method of writing the platform serves to tell office-holders what their grassroots are really thinking, rather than serving as a manifesto written by those officeholders. The platform may well pull Texas office-holders in a populist, social conservative direction, but it doesn’t necessarily prove much about how they would govern.
A group of Democrats coming together and voting in the same way would likely call for vastly higher taxes; a straightforward government takeover of health care; a forced conversion to “green energy” that would wreck the economy; an end to secret ballot elections for unions throughout the country; imposition of racial quotas on private employers; government bans on “unhealthy” food; outright confiscation of guns; laws against “hate speech”; Internet censorship; denial of broadcast licenses to “unbalanced” (read: conservative) media; new restrictions on prayer in public; and taxpayer funding for partial-birth abortion. And there would probably be a long Noam Chomsky-inspired rant against corporations thrown in somewhere as well.
Only a handful of Democratic politicians currently in office publicly support any of these things, and most would probably oppose them if asked. Many Democratic voters would probably oppose them too. But all are popular with certain parts of the Democratic base, and a left-wing populist convention would probably write a platform containing all of them.
My point isn’t that the Texas Republican platform is irrelevant: it does reflect the views of a certain portion of the Republican Party and may influence their views. But nobody should mistake it for a manifesto on how Republican officeholders in Texas or anywhere else plan to govern.
This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
It has never been more of a dog’s world. Consider: 45 percent of households own dogs.
With such a high rate of cohabitation, insurers have seen dog-related claims rise. Today, according to the Centers for Disease Control and Prevention (CDC), approximately 4.5 million Americans are bitten by dogs each year. Of that 4.5 million, 885,000 people are wounded seriously enough to require medical attention.
It is undeniable that dogs injure an enormous number of Americans. Having a dog in the home is associated with a higher likelihood of being bitten by a dog, and people with two or more dogs in their homes are five times more likely to be bitten than those without any dogs.
The Insurance Information Institute reports that one dollar of every three paid out in homeowners’ claims was a result of a dog-related claim and that the average cost paid out for such a claim was $27,862.
Whether or not the associated risks and costs of dog ownership are recognized, the popularity of dogs persists.
Insurers, as their business demands, assess the cost of the risk that dogs represent, and have compiled information related to dog-related claims. From this information, some insurers have chosen to adjust their underwriting practices to account for the risk profiles of different breeds. This activity sometimes leads to a policy showdown.
Two states, Pennsylvania and Michigan, have chosen to prevent insurers from distinguishing between the risks that different breeds represent. To forestall what proponents of such bans describe as “breed discrimination,” both states have prohibited underwriting practices that are sensitive to breed. There are two rationales behind such bans: one is emotive and the other is policy-based.
First, such bans are, to a dog-loving populace, intuitively attractive. Let’s face it: though legally property, dogs are so much more to those who love them. For those who love dogs, the term “breed discrimination” holds a great deal of rhetorical power. Semantically, “breed discrimination” sounds odious. And politically, fighting against discrimination is almost always good … right?
Well, not really. Risk classification of all types is, in a literal sense, the “quality or power of finely distinguishing.” In other words, legal discrimination. In spite of the legality and inevitability of risk classification, the issue remains sensitive.
Second, even without access to the proprietary data that insurers may have, underwriting according to breed – in the strict sense – is problematic. Though the CDC attempts to measure the health risks posed by one breed versus another, it concedes the shortcomings of such an approach. Data about attacks are self-reported and prone to inaccuracy. Further, the majority of dogs in U.S. households are not pure in breed. The existence of mutts and customized cross-breeds (for instance, any dog known as a “fill-in-the-blank”-doodle) complicates easy classification.
From the perspective of insurers, who are interested in pricing their products competitively so that clients posing a low risk pay a lower premium, forbidding the use of dog breed data is problematic. In every state, insurers are required to underwrite on an “actuarially justified” basis. This means that only legitimate cost factors may be taken into account. Dog breed data, though imprecise, meets this threshold because it serves as a proxy for risk factors that are statistically correlated with claims.
More specifically, it is known that male dogs bite more frequently than female dogs; that non-neutered dogs are more likely to bite than neutered dogs; and, that chained dogs are more likely to bite than unchained dogs. Further, it is known that larger dogs are capable of causing greater injury than are smaller dogs. Thus, to the extent that some breeds are more likely to possess any number of the enumerated characteristics, some insurers have come to believe that there is a meaningful correlation between that breed and heightened claim risk.
To be certain, insurers that underwrite based on breed are not guided by a normative judgment about the breeds themselves.
Still, many insurance customers will find such reasoning unsatisfactory because, without access to the data that undergirds it, the correlation will, in their view, not rise to a sufficient level of statistical significance. Fortunately, there is recourse available to those customers, and it lies in the realm of the free market.
Some companies, like State Farm, forego breed-sensitive underwriting in favor of adjusting premiums after a dog has demonstrated itself to be a risk. It is not inconceivable that State Farm, by accepting a certain number of losses that it may have otherwise avoided by employing breed sensitive underwriting, has gained a reputational advantage that could well offset the costs and increase market share and profits. State legislatures, instead of succumbing to the temptation to ban breed-sensitive underwriting, should recognize that the market has a solution to unpopular underwriting practices.
With 885,000 people a year wounded seriously enough by dogs to require medical attention, insurers would be justified in classifying dog owners of all types as a higher risk than those who do not own dogs.
If legislatures must pass laws to save dogs and their owners from the effects of breed discrimination, perhaps they could allow insurers a rating exception as they pass their laws. The exception would be a straightforward approach to assessing the risk that a dog embodies: simply weigh the dog. We know that the larger the dog, the more damage it can do. Can scales predict what breed is not allowed to?
This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
Imagine the following: A new form of entertainment enters the market, which engages its players in intense competition, sometimes to the exclusion of their social lives, and which seems to many to be a waste of their energy when they could be accomplishing more useful things.
In fact, several argue the intense competition and warlike elements of this new form of entertainment make its participants prone to violence, and urge that it be removed from polite society in favor of the older, more respectable forms of social entertainment.
I am referring, of course, to chess, a game which is regarded today as an eminently respectable pastime requiring great reservoirs of strategic skill. Chess champions are international celebrities, and often take up (and bolster) political causes, as in the case of Garry Kasparov’s advocacy for the Magnitsky Act. Yet, as io9 documents, there was a time when statements like the following were written without irony about the game:
The great interest taken in this warlike game — the importance attached to a victory — and the disgrace attending defeat, are exemplified in numerous instances handed down to us by various writers, of which the most worthy of notice are the following….
Richlet, in his Dictionary, article Echec, writes, “It is said, that the Devil, in order to make poor Job lose his patience, had only to engage him at a game at Chess.”
Would that we could look at similar accusations against video games with similar derision, especially given that the video game industry recently held its annual event showcasing the coming year in technology and games, known commonly as E3.
Unfortunately, it seems that every time the video game industry dares to show its face in public, a hyperventilating article is never far behind. This year’s E3 was no exception, as the New York Times‘ Nick Bilton fretted about the convention:
But it is hard to argue that there isn’t some level of desensitization after a day spent at E3. At the main entrance of the Los Angeles Convention Center, where the conference was held, people lined up to play the new game Payday 2. In this game, you team up with friends to rob a bank. Killing police is a big part of succeeding.
As I watched people picking off cops and security guards with sniper rifles and handguns, news broke that a real-life shooting in Las Vegas had resulted in the death of two police officers and three civilians (including the two shooters).
I asked Almir Listo, manager of investor relations at Starbreeze Studios, which makes Payday 2, if he felt in any way uncomfortable about making a game that promotes shooting police.
“If you look hard enough, you can find an excuse for everything; I don’t think there is a correlation,” he said. “In Sweden, where I am from, you don’t see that stuff happen, and we play the same video games there.”
After the Sandy Hook shootings in Connecticut, when it became clear that Adam Lanza was a fan of first-person shooters, including the popular military game Call of Duty, President Obama said Congress should find out once and for all if there was a connection between games and gun violence.
“Congress should fund research on the effects violent video games have on young minds,” he said. “We don’t benefit from ignorance. We don’t benefit from not knowing the science.” Yet more than a year later, we don’t conclusively know if there is a link.
In the event that Payday 2 is ever played competitively at the same level that chess is (or, for that matter, that Starcraft is in South Korea), one can only hope that articles like Bilton’s will be held up to a similar level of ridicule.
Where to begin? Perhaps with the fact that Adam Lanza’s favorite game – the one that he was, in fact, said to be “obsessed with” in the Sandy Hook crime report – was the thoroughly non-violent Dance Dance Revolution. Or that Bilton offers no evidence the shooters in Vegas had ever heard of video games, let alone Payday 2. Or perhaps the fact that, like so much of what President Obama suggests Congress fund, there is no need for research on the effects of violent video games on young minds. There have been scores of such studies already, and every one not either funded by anti-video game activists or so vague in its conclusions as to be meaningless shows no significant effect of video games, violent or otherwise, on young minds (or, for that matter, adult ones).
That Bilton, whose coverage of the rest of E3 was relatively balanced, should fall for the chicanery that video games can cause violence – or, as the new and far less menacing cries of alarm would have it, “decrease empathy” – is sad, but not unexpected. Video games are a convenient bogeyman for a society that labels even the Western canon with “trigger warnings” and frets about whether even a single sexist joke could somehow lead to mass acceptance of rape.
Video games are unapologetic in their gore, their violence and their transgressiveness. Despite all that, they are and remain harmless. In this era of oversensitivity, the imperviousness of video games to the mindless censoriousness of our politically correct, morally panicky current culture is a refreshing checkmate in what often looks like a losing battle for free speech.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
WASHINGTON (June 24, 2014) – State and local governments should take steps to curb cigarette use and promote e-cigarettes as a real driver of tobacco harm reduction, said the R Street Institute in a paper released today.
Authored by Dr. Joel Nitzkin, R Street senior fellow and public health expert, “E-cigarette primer for state and local lawmakers” lays out the benefits associated with using e-cigarettes as a means to quitting traditional cigarettes, while promoting controls through regulation to keep all tobacco products out of the hands of minors.
“Adding a tobacco harm reduction component to current tobacco-control programming is the only policy option likely to substantially reduce tobacco-attributable illness and death in the United States over the next 20 years,” said Nitzkin. “Sensible FDA regulation will be needed if e-cigarette makers and vendors are to present the level of risk posed by these products honestly. However, any regulation must be evidence-based, practical and reasonably streamlined in a way that will protect and advance public health.”
As draft FDA regulations begin to make their way through the process, Nitzkin outlines several steps that state and local governments can take in the meantime.
First, state and local governments should fully enforce age restrictions on the purchase of all tobacco products, and consider raising the age restriction from 18 to 21 to remove cigarettes from the high school environment. Second, to encourage users to switch, governments should heavily tax cigarettes, but only lightly tax lower-risk products. Third, governments should consider implementing non-pharmaceutical smoking cessation protocols that could prove more effective for long-term abstinence. Finally, governments should urge tobacco-control leaders to open a dialogue with those in various tobacco-related industries who endorse e-cigarettes as the solution to curbing cigarette use and would welcome the opportunity to partner with those in the public health community in pursuit of shared public health objectives.
Simultaneously, governments should urge the FDA to sensibly regulate e-cigarettes and other lower-risk tobacco products by prohibiting sales to minors, restricting marketing and assuring quality and consistency of manufacture. They should urge the FDA not to impose restrictions on flavoring or nicotine content that would make those products unpalatable to smokers who otherwise would switch.
The paper can be found here:
Cigarettes kill an estimated 480,000 Americans each year. An estimated 46 million Americans smoke cigarettes, the most hazardous and most addictive of tobacco products. Despite our best efforts, these numbers have been consistent, year to year, for more than a decade. Switching from cigarettes to a smokeless tobacco product or an e-cigarette can reduce a smoker’s risk of potentially fatal tobacco-attributable cancer, heart and lung disease by 98 percent or better. This approach is called “tobacco harm reduction” (THR). Adding a THR component to current tobacco-control programming is the only policy option likely to substantially reduce tobacco-attributable illness and death in the United States over the next 20 years. The e-cigarette family of products offers the most promising set of harm reduction methods because of their relative safety compared to cigarettes, their efficacy in helping smokers cut down or quit and their unattractiveness to teens and other non-smokers. They also promise to be less addictive than cigarettes and easier to quit.
This primer provides evidence in favor of e-cigarettes as a THR modality and a review of the arguments against them. Many in tobacco control oppose any consideration of e-cigarettes because of their dislike of the “tobacco industry”; because they fear that THR will attract large numbers of teens to nicotine addiction; because the case in favor of e-cigarettes has not been proven to their satisfaction; and possibly because of likely harm to the major pharmaceutical firms that now support much tobacco-control research and programming. This primer closes with recommendations for actions state and local lawmakers should and should not consider with respect to THR and e-cigarettes.
Demos blogger Matt Bruenig, in an apparently Burkean mood, writes:
The biggest factor in production is not nature, labor, or capital, but in fact accumulated technology and knowledge that comes to us as an unearned inheritance from the past. The marginal productivity of that unearned inheritance accounts for the majority of our economic output. Imagine you held everything else equal in the economy, but then ticked off electricity technology (which nobody alive has produced). By how much would the economy shrink? A ton.
I say Burkean, of course, because of Edmund Burke’s famous passage:
We are afraid to put men to live and trade each on his own private stock of reason; because we suspect that this stock in each man is small, and that he would do better to avail himself of the general bank and capital of nations, and of ages.
Bruenig correctly points out that it is the bank and capital of nations and of ages that account for nearly all of our economic activity. We owe our know-how, our “lower-level knowledge” as Amar Bhide puts it, to those who came before us. We live off of their accomplishments, and can only aspire to add something meaningful to them.
A drastically more open immigration policy makes a great deal of sense, from this perspective. Global wealth is greatly hindered by the fact that nearly all of humanity is stuck in places that do not have a lot of capital in the bank of their nations, so to speak. From a broad point of view, allowing as many people as possible to move to areas where they can participate in greater “accumulated technology and knowledge” enriches the world. From a humanitarian point of view, it most directly lifts the poorest people on Earth out of poverty, and from a selfish point, it is highly likely to enrich the average American.
The great innovators of the 19th and 20th centuries in this country were largely either immigrants or the children or grandchildren of immigrants. Who believes that America would have been better off without a Ford or a Carnegie?
Some fear that opening the door to the bank and capital of our nation will do violence to those institutions, but history has not given us reason to give much credibility to this concern. It certainly did not happen when we had drastically more open borders in the 19th century. Technology and lower-level knowledge are accumulated by the sweat of our brows, and the more people we have to get to work pushing the frontier further, the better off we will all be, to say nothing of those who will inherit our legacy.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
A little more than a week ago, had you walked through the lobby of Tesla Motors – not that I ever have – you would have admired a wall displaying hundreds of patents belonging to the company.
This week, however, it’s bare.
That’s because Tesla founder and serial entrepreneur Elon Musk has removed them “in the spirit of the open source movement, for the advancement of electric vehicle technology.” In a June 12 blog post, Musk declared that Tesla would no longer enforce protection on its patented technologies. Instead, the company plans to open the doors in hopes that other firms will enter, foster innovation and grow the electric vehicle market.
There are those who have scoffed at this news, believing Musk was embarking on a high-profile publicity stunt. It’s certainly true that Tesla and Musk have gotten the media’s attention. But more importantly, this has garnered more attention for the open source movement – a movement that has been stifled by patent trolls and hungry patent attorneys.
Musk elaborates on that point by writing that patents these days serve only to “stifle progress, entrench the positions of giant corporations and enrich those in the legal profession, rather than the actual inventors.” He goes on to say that receiving a patent “really just means that you bought a lottery ticket to a lawsuit.” A lottery that costs millions in attorneys’ fees and court expenses is no winning one, and not one I would want to be a part of.
Indeed, as the dust settles on this news, it does appear that Tesla’s attempt at promoting electric car programs is paying off. The Financial Times reported that BMW and Nissan, two of Tesla’s biggest rivals, are interested in pairing with the company to expand its network of charging stations throughout the United States. Investors seem to agree with the company that the network effect from having more electric cars on the road, and thus more charging stations, is more important than the monopoly rights granted by the patents. Tesla’s stock price soared to its highest point in months over the past week and a half.
Obviously, it’s going to take a lot more than one high-end car company standing up to say that they’re tired of the way the patent system works, especially on the infringement front, but Tesla is hopefully paving the way and opening the door for more major players to see that the open source movement can be a winning strategy for everyone involved. After all, has history ever proven that patents were a positive for innovation?

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
WASHINGTON (June 20, 2014) – Current design patent law provides incentive for frivolous lawsuits and abuse, said the R Street Institute in a policy paper released today.
Authored by Ned Andrews, the paper, “Is interactive design becoming unpatentable?,” lays out recommendations for modernizing the design patent system to allow smaller companies to enter the technological market.
“In order to have the kind of ornamental status that could be the subject of a design patent, an object must possess either some entirely nonfunctional feature or be the result of workmanship that does not contribute in any way to its function,” wrote Andrews. “Current definitions falsely equate the aesthetic merit of functionality with that of applied ornamentation. Thus, some inventors seek design protection for aspects of an object that are, in fact, functional.”
Andrews writes that the system creates an incentive for companies to acquire the patent rights for designs that are as aesthetically or conceptually simple as possible. They then wait for another company to develop a product that resembles the original and then file a claim of infringement, hoping that a manufacturer-defendant will agree to an early settlement.
“The parties that tend to come out on top are the biggest players – the Apples and Samsungs,” he wrote. “This interferes with smaller players’ ability to make headway on a useable portion of their own applications, because they can’t afford to risk a lawsuit from or pay the fees demanded by the trolls or big firms.”
Andrews recommends modernizing the design patent system in a variety of ways. First, impose a simple test: if the device would be less functional if the claimed aspect of the design were absent, the claim in question fails the non-functionality test. Second, courts should limit the findings of design infringement to cases in which the similar aspects of the article’s design perform an ornamental purpose, rather than a functional purpose. Third, both the U.S. Patent and Trade Office and the courts should renew their attention to the criteria of novelty and non-obviousness.
Finally, courts should make standard the practice, in “exceptional cases” of bad faith or misconduct, of awarding reasonable attorney’s fees to the prevailing party in a civil case.
The paper can be found here:
Murray Rothbark is R Street’s distinguished visiting office dog and director of canine policy.
This paper has been accepted for publication in the International Journal of Environmental Research and Public Health.
A carefully structured Tobacco Harm Reduction (THR) initiative, with e-cigarettes as a prominent THR modality, added to current tobacco control programming, is the most feasible policy option likely to substantially reduce tobacco-attributable illness and death in the United States over the next 20 years. E-cigarettes and related vapor products are the most promising harm reduction modalities because of their acceptability to smokers.
There are about 46 million smokers in the United States, and an estimated 480,000 deaths per year attributed to cigarette smoking. These numbers have been essentially stable since 2004. Currently recommended pharmaceutical smoking cessation protocols fail in about 90% of smokers who use them as directed, even under the best of study conditions, when results are measured at six to twelve months.
E-cigarettes have not been attractive to non-smoking teens or adults. Limited numbers of non-smokers have experimented with them, but hardly any have continued their use. The vast majority of e-cigarette use is by current smokers using them to cut down on or quit cigarettes. E-cigarettes, even when used in no-smoking areas, pose no discernible risk to bystanders. Finally, the addition of a THR component to current tobacco-control programming will likely reduce costs by reducing the need for counseling and drugs.
WASHINGTON (June 20, 2014) – The R Street Institute welcomed today’s passage of H.R. 4871, the TRIA Reform Act of 2014, by the House Financial Services Committee.
The measure, sponsored by Rep. Randy Neugebauer, R-Texas, calls for a five-year extension of the federal Terrorism Risk Insurance Program, a $100 billion reinsurance backstop originally passed in the wake of the Sept. 11, 2001 terrorist attacks. However, the bill includes important taxpayer-protection provisions that gradually shrink the size of the federal program.
“Rep. Neugebauer’s bill strikes the proper balance between ensuring that sufficient capacity exists for U.S. businesses to insure against catastrophic terrorism, while also guarding against government subsidies that would unjustly enrich insurance companies and major commercial real estate developers,” R Street Senior Fellow R.J. Lehmann said.
Under terms of the TRIA Reform Act, the trigger level for conventional terrorism attacks would be raised gradually from the current $100 million to $500 million by the end of 2019. For attacks involving nuclear, chemical, biological and radiological events, all of which must be covered by law under workers’ compensation policies, the program’s current terms would remain intact.
“Reinsurance broker Guy Carpenter recently issued a report finding that multiline terrorism reinsurance capacity is about $2.5 billion per program for conventional terrorism and about $1 billion per program for coverages that include NBCR,” Lehmann said. “Given those figures, and the continuing growth of capacity thanks to the influx of alternative sources of capital, we think the adjustments called for in the House bill are perfectly reasonable.”
The industry also would be asked to increase its co-payment share of losses from conventional terrorist attacks from the current 15 percent to 20 percent, while individual company deductibles would remain at 20 percent of prior-year premiums in a particular line of business. The industry would be asked to repay taxpayers 150 percent of funds expended, up from the current 133 percent, up to a floating retention level calculated by adding the aggregate amount of individual company deductibles.
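The cost-sharing arithmetic described above can be sketched in a few lines. This is a simplified illustration only, using hypothetical figures (a $30 billion conventional attack against $25 billion in aggregate industry deductibles); the actual program applies per-company deductibles, line-of-business rules and the rising trigger level, all of which are omitted here.

```python
# Simplified sketch of the TRIA Reform Act's cost-sharing arithmetic.
# All figures are hypothetical; per-company deductibles, line-of-business
# details and the program trigger are omitted for clarity.

def tria_cost_share(insured_loss, aggregate_deductibles,
                    copay_share=0.20, recoupment_rate=1.50):
    """Split a terrorism loss between the industry and the federal backstop."""
    # Losses up to the industry's aggregate deductibles are borne by insurers.
    industry_deductible = min(insured_loss, aggregate_deductibles)
    excess = insured_loss - industry_deductible
    # Above the deductible, insurers co-pay 20% and the program covers the rest.
    industry_copay = excess * copay_share
    federal_outlay = excess * (1 - copay_share)
    # Taxpayers are later repaid 150% of funds expended, capped at the
    # floating retention level (the sum of company deductibles).
    recoupment = min(federal_outlay * recoupment_rate, aggregate_deductibles)
    industry_total = industry_deductible + industry_copay
    return industry_total, federal_outlay, recoupment

# Hypothetical $30B attack, $25B in aggregate deductibles:
industry, federal, repaid = tria_cost_share(30e9, 25e9)
```

Under these made-up numbers, insurers would absorb the first $25 billion plus a $1 billion co-payment, the program would advance $4 billion, and taxpayers would eventually be repaid $6 billion.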
Lehmann also praised a provision calling on the non-partisan U.S. Government Accountability Office to conduct a study on the feasibility of charging companies an upfront premium for TRIP’s reinsurance coverage.
“Much like the federal Riot Reinsurance Program of the 1970s, the way forward for federal terrorism reinsurance ultimately is to charge companies an actuarially adequate premium,” Lehmann said. “We can never know how much capacity the private reinsurance sector might be willing to commit to terrorism coverage so long as the government provides it for free.”
If 20th Century design was inspired by American architect Louis Sullivan’s 1896 pronouncement that “form ever follows function,” the key realization thus far of the 21st Century has been that this is merely a necessary – rather than a sufficient – condition for quality designs to flourish.
We have learned, and the market has confirmed, that an object should be designed in accordance not only with how it functions, but moreover with how it should function. Especially in the case of interactive technology, a category that has grown to encompass just about anything, an object should function the way its user expects it to function.
As technology has become more powerful and flexible, the task of matching function and expectations has undergone a change akin to the philosopher Immanuel Kant’s metaphorical Copernican Revolution. For older generations of technology – in which scarce resources limited both what functions were available and the maximum complexity of users’ commands – the steps necessary for users to extract and refine what they could do with a device were explained in thick manuals. The prevailing strategy for more recent generations of technology has been to meet users halfway, competing to efficiently perform functions and effectively implement concepts that users have been led to expect.
Today’s designs, however, are increasingly able to cut out the middleman, more and more closely conforming to their users’ preexisting intuitions and thought processes and less and less asking users to make those thought processes conform to products’ capabilities.
In other words, the key to success in modern interactive design does not lie in “creating” the best design possible. Rather, it begins with doing the best possible job of stripping designs down to concepts and procedures with which the user is already familiar, preferably through everyday use. Where there is no alternative but to require more input from a user, his or her options are laid out in terms the user already can be expected to know. While the fusion of design and utility has not yet been perfectly realized, industry has become more fully aware of both parts of this process and continues to pursue integration in earnest.
This coevolution of design standards and procedures has clashed, and continues to clash, with the structure of U.S. patent law. The first problem is the potential uncertainty that surrounds the scope and strength of a design patent’s protections. Even in the paradigm case of a design feature that has been aesthetically improved beyond what was required to give the feature its functional attributes, there remains the potential for overly broad claims about what aspects of a design qualify under the law as “ornamental.”
Under section 284 of the U.S. Code’s Title 35, triers of fact may award “non-statutory” damages for infringement of a design patent. But those same triers of fact also may err in determining how much of an object’s value comes from the aesthetic appeal of its ornamental features and how much comes from other sources of value, whether ornamental or functional, and whether patented or unpatented.
The risk of error at each stage of the process – from the initial design patent application to the ultimate test of infringement in court – creates at least some incentive for a designer to overstate his or her case. Fortunately, these incentives are similar to the temptations to make overly broad claims about other grounds for patentability. Regardless of what grounds are at issue, the remedy inevitably is better training for examiners and judges in traditional design standards and greater vigilance on their part about those standards’ application.