Out of the Storm News
If you care about fighting poverty, the best thing you can do is to simply give poor people money. That is the view of some of the smartest, most compassionate people I know, and it explains the new enthusiasm, on the left but also in some quarters on the right, for an unconditional basic income—that is, a cash payment to which all citizens would be entitled, with no strings attached.
On the left, the case for an unconditional basic income rests on the notion that low-end service jobs aren’t the kind of fulfilling and valuable work that we as a society ought to preserve. If everyone is provided with a guaranteed minimum income, the supply of workers willing to do such work will dry up. The poor will be liberated, and free to pursue their deeper desires. On the right, the argument is that a basic income would eliminate the need for armies of caseworkers and other bureaucrats who (supposedly) do little more than meddle in the lives of the poor. Anti-poverty programs like food stamps, Medicaid and housing vouchers would all be thrown on a bonfire, to be replaced by cold hard cash. No more hand-holding, say conservatives and libertarians who favor a basic income. If we’re going to redistribute, let’s do it in the most straightforward way possible.
As you’ve no doubt guessed, I think that no-strings-attached money is a dangerously bad idea and that it will do far more to undermine poverty-fighting efforts than it will to strengthen them. I also think that meddlesome caseworkers are the unsung heroes of the fight against poverty.
As yet, there is no national proposal to greatly increase the flow of no-strings-attached money to poor households in the United States, though the idea has been gaining ground among the pundit class of late, amid fears that robots will soon take all of our jobs.
But my hometown, New York City, is on the cusp of a grand experiment to increase the flow of no-strings-attached money to its poor citizens. If past experience is anything to go by, this experiment will end badly.
Under Mayors Rudolph Giuliani and Michael Bloomberg, New York City dramatically overhauled its approach to fighting poverty. As Robert Doar, who served as commissioner of the New York City Human Resources Administration under Bloomberg, recounts in a recent article for National Review, the cash-welfare caseload in the five boroughs fell from 1.1 million in 1995 to 347,000 at the end of 2013, when Bloomberg left office. Over the same period, the city saw a substantial decline in child poverty, from 42 percent in 1994 to 28 percent in 2008, though the rate ticked back up to 32 percent in 2011 as the lingering after-effects of the Great Recession took their toll. The really encouraging news from the Giuliani-Bloomberg era is that work rates also increased. Among single mothers, for example, the work rate went from 43 percent in 1994 to 63 percent in 2009.
One of the reasons Doar placed such a heavy emphasis on the importance of work rather than, say, training and education programs is that, as he explains, getting real-world work experience is key to helping welfare recipients not only get but also keep jobs over time. Training and education have a place, but they work best as a complement to on-the-job training rather than as a substitute.
But there is a new sheriff in town. New York City’s new mayor, Bill de Blasio, has installed Steve Banks as Doar’s successor, and Banks, as Heather Mac Donald of City Journal reports, takes a very different approach. Banks and de Blasio are firm believers in training and education programs, and they’ve announced their intention to ease the enforcement of work requirements. They will no longer require that food-stamp applicants provide proof of their housing expenses, nor will they ask able-bodied adults without children to look for work in exchange for food stamps.
Is this a badly needed correction from the bad old days of Bloomberg? It is important to understand that, for better or for worse, the Bloomberg administration was very accommodating when working poor applicants sought to enroll in the food stamp and Medicaid programs. A big part of the reason was simply that the city government didn’t set the eligibility rules for these programs, and the federal and state governments had grown more permissive over time. But it also reflected a public philosophy that the billionaire mayor was never very good at articulating—that those who can work but choose not to are different from those who cannot.
This is a distinction that advocates of an unconditional basic income see as pernicious and that those who want to ease up on work requirements see as needlessly punitive. But it is a distinction that makes eminent public policy sense. The welfare reformers of the 1980s and 1990s didn’t call for work requirements because they wanted to punish the poor. They did so because of mounting evidence that worklessness in high-poverty neighborhoods contributed to the entrenchment of poverty and to the social isolation of those living in welfare-dependent households. Drawing welfare recipients into the workforce was seen as the best way to get them on the ladder to upward mobility. Despite massive shifts in the economy that have been particularly hard on less-skilled adults, work requirements have been a success, by and large. Meanwhile, experiments conducted in the 1960s and 1970s by the federal government found that a no-strings-attached basic income reduced work effort and encouraged marital breakup. Given what we’ve learned about the consequences of family breakdown for children, and particularly for male children, in the years since, this is nothing to scoff at.
Moreover, whatever their practical effects, work requirements are central to the moral legitimacy of poverty-fighting efforts in the United States.
In her essay on “Rethinking Welfare Rights,” Amy Wax, a professor at the University of Pennsylvania Law School, identified a deep-seated, widely held set of beliefs among Americans about welfare. While most Americans accept the idea that we as a society have a shared responsibility for the well-being of the poor, they also differentiate between those who deserve help and those who don’t. Those who deserve help are those who make an effort to support themselves and their families to the extent they can. Many people simply can’t earn enough to support themselves, owing to disability, limited skills or a lack of the community ties that enable one to identify and pursue economic opportunities.
And so the role of government, according to Wax, is to help close the gap between what people can earn by doing their best to provide for themselves and what they need to lead decent lives. This gap is real, and there is a distinct possibility that it will grow as our economy and society continue to evolve. To protect programs that close this gap, and to grow them if necessary, it is vitally important that welfare disbursements are perceived as fair. In the long run, those who choose to work will not support welfare programs that appear to offer a better deal to those who do not, nor should they be expected to do so. The failure to enforce work requirements thus undermines the legitimacy of welfare, and it endangers the good that welfare can do.
This notion of “conditional reciprocity” is particularly important in diverse societies. A number of scholars, including the economists Edward Glaeser and Alberto Alesina, have found that more diverse societies are less likely to support high levels of social spending than more homogeneous societies. But more recently, Bo Rothstein, a Swedish political scientist and defender of the welfare state, has found that what really undermines social solidarity and social trust is not diversity, per se, but rather the perception that public authorities are corrupt, dishonest, discriminatory and partial as opposed to clean, impartial and honest. One of the reasons the Danish welfare state enjoys such widespread support, for instance, isn’t just Denmark’s famous ethnic homogeneity: It’s also that Danish work requirements are extremely strict by international standards, and they’ve grown tighter over time. Unemployed Danes must demonstrate their “labor market availability” by searching for jobs, taking jobs at local job centers and taking part in so-called activation programs run by hard-nosed caseworkers. The Danish state is indeed generous, but its generosity comes with strings attached, and that’s how Danish voters like it. We could learn a thing or two from them.
There is far more to say about how we can fix America’s social welfare programs. But before we can expand them or shrink them or modernize them, we must first ensure that they rest on a solid moral foundation. And that, ultimately, is what work requirements are all about.
June 5, 2014
The Hon. Harry Reid
522 Hart Senate Office Building
Washington, DC 20510
Dear Sen. Reid:
We are writing to urge you to advance the Smarter Sentencing Act (S. 1410), a bipartisan, bicameral bill that promotes more effective and just criminal sentencing without reducing public safety.
Advancing this reasonable reform this year is both warranted and necessary. As conservatives, we are deeply concerned about the federal prison system’s size and cost, which have grown enormously since 1980. Federal prisoner costs now consume about 30 percent of the Department of Justice’s budget. Federal prisons operate at nearly 140 percent of their capacity, a condition that makes prisons less safe and less rehabilitative places and leads to recidivism. Instead of returning police power to the states, we are expanding the federal criminal justice system, its reach and its costs. Requiring lengthy prison terms for nonviolent drug crimes has fueled this growth. Half of all federal inmates – more than 100,000 people – are incarcerated for drug offenses. Crime is undeniably serious and demands accountability. But just punishments must be proportionate to the harm caused to victims and society, and they should also restore victims, offenders and communities. Our limited criminal justice dollars are used most effectively when focused on assisting victims and police, and when we reserve prison cells for violent offenders and terrorists.
The Smarter Sentencing Act (SSA) is a modest, incremental approach that impacts only nonviolent drug offenses and does not repeal any mandatory minimum sentences. The SSA reduces the length of mandatory minimum sentences for nonviolent drug offenses by half. Over time, this will reduce prison growth and costs to manageable levels. Long mandatory minimum terms are preserved for drug offenses that result in serious bodily injury or death. The bill also gives judges limited, clearly-defined discretion to sentence low-level, nonviolent drug offenders with negligible criminal records below the mandatory minimum term. Finally, the SSA allows certain inmates sentenced before the effective date of the Fair Sentencing Act of 2010 to petition for sentence reductions consistent with that law. The Fair Sentencing Act, which Congress passed unanimously, reduced an unjustifiable sentencing disparity between crack and powder cocaine offenses. The SSA does not let anyone serving a sentence under the old law get a reduction automatically – courts must do an individualized review and approve each request before a reduction can be granted, thus protecting public safety.
The federal prison population has increased by 3.2 percent annually over the past decade. In contrast, total state prison populations continue a three-year decline, partly because states are using just, effective, cost-saving alternatives like probation and parole to save expensive prison beds for violent offenders. Crime has continued to decline in states that have used these alternatives or reduced or reformed mandatory minimum sentences for nonviolent offenders. A recent poll revealed that 63 percent of Americans think this sentencing reform trend is a good thing. Nationwide, these reforms recognize that lengthy prison sentences for nonviolent offenders can actually be harmful: prolonged incarceration destroys family unity, increases reliance on government aid, hinders reintegration into society and stunts the economic growth of individuals and families. In short, states and the public are getting smart on crime, and the federal government should, too.
The need for this legislation’s moderate but positive improvements is urgent, and we respectfully ask you to advance the Smarter Sentencing Act as soon as possible. Thank you for considering our views.
Coalition to Reduce Spending
National Association of Evangelicals
Justice Fellowship/Prison Fellowship Ministries
Former chair, American Conservative Union; current opinion editor, The Washington Times
City of New York (Retired)
R Street Institute
Heritage Action for America
Americans for Tax Reform
Taxpayers Protection Alliance
The June 2 edition of the journal Pediatrics features a study of e-cigarette advertising on television from 2011 to 2013. The research, led by Jennifer Duke of RTI International in North Carolina, found that exposure among youth (12-17 years old) to television ads for e-cigs increased 256 percent over that period.
The authors cast the industry as villain: “It appears that youth are being exposed to a sustained level of marketing about the benefits of e-cigarettes.” ABC News was less subtle, with a piece titled: “E-cigarette TV ads target kids.”
In fact, the results in Figure 1 of the study, which were not fully described by the authors, show that the advertisements’ effects were mainly on adults. The authors report exposure to e-cig ads in terms of target rating points (TRPs), a standard unit of measurement that combines the proportion of a target audience exposed to an ad with the frequency of that exposure.
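As a rough illustration, the standard advertising-industry definition of the metric can be sketched in a few lines (the 69.4 percent reach and five-exposure frequency below are hypothetical numbers chosen for illustration, not figures from the study):

```python
# A minimal sketch of the target-rating-point (TRP) calculation.
def target_rating_points(reach_fraction, avg_frequency):
    """Percent of the target audience reached, multiplied by the
    average number of times each reached person saw the ad."""
    return reach_fraction * 100 * avg_frequency

# Hypothetical example: reaching 69.4 percent of 12-to-17-year-olds
# an average of five times would yield roughly the study's reported
# youth peak of 347 TRPs.
youth_peak = target_rating_points(0.694, 5)  # ~347
```

Because TRPs multiply reach by frequency, the same score can describe a small audience seeing an ad often or a large audience seeing it rarely.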
Duke and colleagues report that exposure of 12-to-17-year-olds to e-cig advertising peaked in the second quarter of 2013 at 347 TRPs. Young adults (18 to 24 years old), for whom purchase of e-cigarettes is legal almost everywhere, had a peak exposure of 611 TRPs in 2013, indicating much higher exposure than among youth.
The authors fail to note that older adults had far higher peak levels of exposure to e-cig ads. My table contains the estimated peak TRPs for each age group in Figure 1.

Peak exposure to e-cigarette television advertising (TRPs) by age group, Q2 2013

Age group (years)    Population (millions)    Peak exposure (TRPs)
12-17                24                       347
18-24                30                       611
25-34                41                       611
35-54                86                       820
55+                  77                       850
These data show that the 234 million adults aged 18 or older were the primary recipients of e-cigarette advertising.
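The adult total can be verified from the per-group population figures reported in the article (a quick sketch; the group labels follow the study's age brackets):

```python
# Populations (millions) and peak Q2 2013 TRPs as reported in the article.
groups = {
    "12-17": (24, 347),
    "18-24": (30, 611),
    "25-34": (41, 611),
    "35-54": (86, 820),
    "55+":   (77, 850),
}

# Total adult population: every group except 12-to-17-year-olds.
adult_population = sum(pop for age, (pop, _) in groups.items()
                       if age != "12-17")
print(adult_population)  # 234 (million)
```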
Back in 2012, Kyle Bartlow and Mariah Gentry were juniors at the University of Washington’s Foster School of Business, looking to live out the American dream of entrepreneurship. Together, they formed JoeyBra LLC, a company dedicated to making and marketing specially designed women’s undergarments that include pockets on the wings (see the image above). As the pair describes the item on their website:
Inspired by UW’s vibrant Greek system, JoeyBra was created for women who are constantly on the go and struggle to find a place to put their ID, keys, or phones. From their own personal experience, they know that women hate taking purses to dances, bars, or dance clubs. Leaving these items at home can pose a safety risk, but with JoeyBra women will never have to worry losing or damaging their valuables again.
Things got off to a roaring start for JoeyBra. A successful KickStarter campaign raised more than $10,000 in seed funding, which was supplemented by another $5,000 award when the company was named a finalist in the Foster School’s annual business plan competition.
Alas, like many fledgling entrepreneurs, Bartlow and Gentry soon discovered the road to riches was bound to have a few potholes. Theirs came in the person of Charles Robinson, a British man who in 2001 was granted a patent (D448541) by the USPTO for the ornamental design of a brassiere that similarly included pockets on the wings. An illustration of his design is below:
Robinson brought an infringement suit in Virginia against both JoeyBra LLC, and Bartlow and Gentry as individuals, seeking a preliminary injunction to bar them from marketing their wares.
Robinson’s basis for seeking an injunction was that it would cause him “irreparable harm,” even though he had never actually brought a product to market in the dozen years that he held the patent. In March 2013, U.S. District Court Judge Norman K. Moon denied the injunction, in part because Robinson could show no material harm, but also because, Moon wrote, “a brief patent search reveals that a pocketed bra is, in fact, not a ‘very novel element.’”
[T]hough the pocket on Defendants’ JoeyBra product is, like Plaintiff’s claimed design, on the side of the bra (rather than the cup), the size, orientation, and accessibility of that feature appear to be substantially different; as a consequence, and more significantly, the carrying capacity and overall functionality of the allegedly infringing product also appear substantially different. The pocket on Plaintiff’s design appears to be fit for a key, and after twelve years since receiving his patent, he does not have a product on the market. Defendants’ JoeyBra product, on the other hand, holds an iPhone and credit cards, among other items.
Moon also granted Bartlow and Gentry’s motion to dismiss the claims against them personally, on jurisdictional grounds, though he allowed the case against JoeyBra LLC to continue (three of the KickStarter funders lived in Virginia, which was enough to establish Robinson’s jurisdictional claim).
Robinson filed a motion to reconsider, which Moon denied in May 2013. Following that ruling, there was no further action in the case until it was ultimately dismissed (but without prejudice) in March 2014 for “failure to prosecute.”
Having spent two crucial years of their young entrepreneurial career fending off this spurious claim, Bartlow and Gentry attempted to recover their legal fees. Specifically, under Title 35, Section 285 of the U.S. Code, “the court in exceptional cases may award reasonable attorney fees to the prevailing party.”
Alas, in a new order handed down June 3, Judge Moon broke the bad news: just because you won the case doesn’t mean you “prevailed.”
Moon noted that in only one case, Velez v. Portfolio Recovery Assoc., has a federal court determined that a party granted a motion to dismiss on jurisdictional grounds meets the definition of a “prevailing party.”
What’s more, the Fourth Circuit has ruled that a party granted a preliminary injunction is not a “prevailing party,” because consideration of the merits in such cases is “necessarily abbreviated.” The Fourth Circuit has not ruled on whether denial of a preliminary injunction could render one a “prevailing party,” but the Tenth Circuit has ruled that it does not.
Of course, even if JoeyBra and its principals were declared “prevailing parties,” they’d face a whole other hurdle in establishing that their case met the definition of “exceptional.” In a unanimous decision in April in the case Octane Fitness LLC v. ICON Health & Fitness Inc., the U.S. Supreme Court ruled that the exceptionality test for award of attorney fees had set too high a bar, remanding the matter back to the Federal Circuit.
These are, of course, precisely the kinds of issues that were to be addressed by the patent reform bill that Senate Judiciary Committee Chairman Patrick Leahy, D-Vt., has tabled indefinitely. Perhaps one day Congress really will get back to work to resolve them.
Until then, JoeyBra isn’t taking any chances. They’ve applied for a patent of their own.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.
R Street Executive Director Andrew Moylan joined David Williams of the Taxpayers Protection Alliance on TPA’s “Taxpayer Watch Podcast” to discuss the June 1 start of the Atlantic hurricane season and the work done by the SmarterSafer Coalition to bring budget watchdogs, insurance companies and environmental activists together for a common cause. You can check it out here or click the embedded player below.
Residents of 11 Utah cities would be billed as much as $20 a month, as part of a plan to salvage the state’s once-heralded UTOPIA fiber optic network.
UTOPIA, short for the Utah Telecommunications Open Infrastructure Agency, was conceived in 2002 as a local government-managed alternative to commercial cable, telco and satellite broadband, and it has struggled ever since. It lost $19 million in fiscal year 2011. As of late 2012, the agency was $120 million in the red and had fewer than 10,000 customers.
While there have been plenty of municipal broadband failures in the past, this may be the first time a government has actually considered reaching directly into consumer pockets to cover the cost of underperformance. Under the proposal, Macquarie Capital, an Australian investment group, would take over construction and operation of the financially troubled UTOPIA project in return for a share of the profits. The catch, however, is that households in the UTOPIA cities would be hit each month with a special “availability fee” of $18 to $20 to pay for the project’s completion, whether they opt to get service or not. This fee would be adjustable each year.
The terms themselves are measly. The “availability fee” would entitle consumers to only three megabits per second of bandwidth (UTOPIA originally promised one gigabit per second — 341 times as fast) and a monthly cap of 20 gigabytes, or enough for five or six high-definition Netflix movies. This compares to the 12 to 15 Mb/s available for $50 to $60 a month, with unlimited data, from most cable companies. AT&T’s U-Verse and Verizon’s FiOS offer faster connections.
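The comparisons above can be sanity-checked with a little arithmetic (the per-movie file size is my assumption for illustration, not a figure from the piece):

```python
# The "341 times as fast" figure implies the gigabit is counted
# as 1,024 Mb/s rather than 1,000 Mb/s.
gigabit_mbps = 1024
offered_mbps = 3
speed_ratio = round(gigabit_mbps / offered_mbps)  # 341

# A 20 GB monthly cap at an assumed ~3.5 GB per high-definition
# streamed movie works out to "five or six" movies.
cap_gb = 20
movie_gb = 3.5  # assumption: typical HD stream size
movies = cap_gb / movie_gb  # about 5.7
```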
In return for a 30-year public-private partnership, Macquarie would promise to complete UTOPIA’s build-out to 155,000 total residential and business addresses in 30 months. The network will continue to serve as a wholesale backbone for retail Internet service providers, and Macquarie would aggressively promote the network and extend it to any additional city that wants to accept its terms.
Five UTOPIA cities–Lindon, Murray, Layton, Tremonton and Centerville–have scheduled public meetings in the coming weeks to debate accepting the Macquarie proposal. Another three –Murray, Lindon and Orem–are polling residents to gauge sentiment on key elements of the plan, including its proposed $18 to $20 fee. Brigham City is studying the proposal and Midvale and West Valley City already have accepted it. Macquarie’s deadline is June 27.
Utah’s largest city, Salt Lake, in a prescient decision, opted out of UTOPIA participation.
It’s another unenviable position for cities that went the muni broadband route. The UTOPIA 11 are on the hook for the debt, and the best option means hitting up customers for service that’s inferior to what’s on the market now. At the same time, advocates of free-market solutions, and limiting government activity in the commercial sphere, warned that taxpayers were going to bear the brunt of any muni failures.
Even as this week’s news brings word of a new Google effort to launch satellites that will facilitate rural broadband, progressives still insist that government broadband is needed to respond to “market failure” in broadband provision. Really? It seems the only consistent failure in broadband has been municipal project after municipal project.
Treating like cases alike is central to the U.S. legal system. Socially and morally, the advantages of juridical consistency are assumed to be foundational. For this reason, among the first crucial terms a first-year law student learns is “stare decisis,” the Latin phrase for “stand by things decided.”
A prime reason that courts “stand by things decided” is that, in the cases coming before them, they typically encounter standard iterations of well-defined legal themes. It is a rare day when a truly novel legal or factual issue arises. In practice, stare decisis means that, when a court encounters a case implicating issues of fact or law that it has encountered before, it will defer to its existing patterns of thought in lieu of starting from scratch. As a matter of judicial economy, stare decisis is invaluable, but it serves other, perhaps more important, purposes: judicial predictability and private coordination.
Lines of precedent are relied upon by private parties as they plan for the future. Plainly, complying with the law is less of a chore when its application is discernible. Businesses, in particular, rely on the consistent application of the law, because they are among the most likely to be targeted for liability when precedent abruptly shifts or moves.
To draw attention to the problem of a moving legal target and other court-related maladies, the American Tort Reform Association releases an annual list of the nation’s “Judicial Hellholes.” These uncertain jurisdictions, found in red and blue states alike, may not be pure creations of political ideology. Instead, what connects them is that they exist within, and themselves create, expansive and amorphous zones of civil liability. It’s bad enough when poor or ill-considered public policy choices of the electorate and their representatives are the cause. Deplorably, courts are also playing a significant role.
To describe the phenomenon of courts making public policy choices from the bench, commentators have popularized the phrase “judicial activism.” Unfortunately, hyperbole in its use and over-breadth in its application have left the phrase unmoored from easily discernable meaning.
Commendably, Colleen Pero, writing for the American Judicial Partnership, seeks to re-introduce meaning to the phrase by outlining indicia of judicial activism. She concludes that courts are being judicially activist when they deviate from an otherwise predictable outcome in favor of a particular policy outcome. Common examples include the use of non-mandatory authority as the basis of an opinion and the insertion of novel language into a statute to allow the law to support a meaning consistent with a desired outcome.
Courts typically have a degree of freedom in deciding how they will interpret statutes. Some methodological flexibility is unavoidable. But where methodological flexibility significantly reduces consistency and predictability, justice may be distorted. Differing interpretations of different facts may be legitimate, but inconsistent interpretations of similar bodies of law are the very definition of judicial activism. Among all its forms, methodological activism is particularly undesirable from a coordination perspective.
To counteract such judicial unpredictability, states should embrace systems of methodological stare decisis. Such systems cement into law – for the benefit of courts and the public – interpretive road maps that provide a slew of benefits. Predictability is improved, since parties coming before the court understand what to prioritize in their briefs and what types of arguments the court will find compelling. Stability is improved, because parties can better evaluate what sorts of claims are worth pursuing in litigation. Efficiency is improved, because courts are not forced to select from the panoply of interpretive doctrines currently in circulation.
Most importantly, when methodological decisions are given a stare decisis effect, the complexion of the judiciary’s role changes. It shifts away from a law-making task for which it is ill-suited and toward greater predictability in the application of law.
One state, Oregon, has conducted an experiment with methodological stare decisis. In PGE v. Bureau of Labor and Industries, the Oregon Supreme Court instructed its courts to adhere to a strict analytical hierarchy, helping observers predict when, how and what to bring before Oregon courts. The framework directed interpretation to proceed in three steps. First, courts were to look to the text of the statute and its surrounding context. Next, if the text contained unresolvable ambiguity, the court was to look to legislative history. Finally, if understanding could not be gleaned from the first two tiers of scrutiny, the court was to turn to maxims of statutory construction.
The result was that private parties in conflict, the Legislature and lower courts knew broadly what to expect, both in decisional reasoning and, ultimately, outcome. In an article on the topic, Abbe Gluck observed that from 1993 to 1998, the Oregon Supreme Court looked at 137 statutory-interpretation cases, reaching legislative history 33 times and substantive canons only 11 times. From 1999 to 2006, the court applied the statutory-interpretation framework 150 times, reaching legislative history nine times and never reaching the canons of construction. Clearly, a methodological preference for textualism was carrying the day in the period that followed PGE and its progeny.
Naturally, the content of any such interpretive road map is crucial. Some interpretive methodologies are preferable to others from a free-market perspective. Still, for the purposes of achieving the benefits of coordination and predictability, there is no need to come to any particular conclusion about the desirability of one system of statutory interpretation over another, since methodological stare decisis would see the development of methodological precedent. Matters relating to a particular statute would be reliably interpreted according to an attendant methodological precedent.
At its core, methodological stare decisis is an institutional solution to the problem of extreme decisional variations and a reminder that all forms of legal precedent matter. Once an issue is decided – even if it is, from one’s own perspective, decided wrongly – the way that it is decided methodologically becomes ensconced in the body of law, to be referred to and relied upon by others moving forward.
This is not to say that stare decisis is immutable. It is not. Judges consistently revise, rework and reimagine the law as it relates to novel variations in its application. But the limiting function of stare decisis remains essential. At the very least, it serves to stymie rapid transformations that upset the public’s ability to rely upon the law. In essence, it is a procedural mechanism whereby the law expresses a preference for incrementalism.
Society as a whole benefits when risk, even in the form of judicial outcomes, can be assessed and accounted for.
“Congress could have legislated this problem away many times … but obviously that hasn’t happened,” said Andrew Moylan, executive director of the R Street Institute, a conservative think tank that favors taxing carbon over “extraordinarily expensive” regulations…
…The R Street Institute is supportive. The group has been pitching a national “tax swap” that would create a fee on emissions while cutting tax rates on income, as a way to shape climate policy toward conservative principles.
“So basically a state-level equivalent of the [national] revenue-neutral carbon tax as a substitute for regulation,” Moylan said. “We think that’s the right sort of path forward.”
Kelly Ash is California director at the R Street Institute. She previously worked at the Personal Insurance Federation of California as the communications director and managed the political action committee. Before that, she spent five years working for the California State Assembly as a legislative director for various assembly members.
She was born and raised in San Diego, California.
Andrew Moylan of the R Street Institute, an independent think tank, said that contrary to supporters’ assertions, the MFA does not “level the playing field.”
Moylan said it would impose a “different and unequivocally harsher set of rules than exist for brick-and-mortar” transactions.
“Passage of the MFA would mean states could strong-arm remote sellers into complying with the more than 9,600 separate sales-tax rates that exist across the country, not to mention the 46 states with sales taxes that can issue their own unique set of edicts and definitions,” Moylan testified at a March Judiciary Committee hearing.
But, as Reihan Salam has written, the effects may also be more subtly damaging. As wages rise, businesses could simply seek to hire better-educated and skilled employees, some of whom may well live outside the city limits but would suddenly be happy to commute for a fatter paycheck. Seattle’s low-wage workers, meaning those who earn less than $15 an hour, are already more likely to have attended college than their counterparts in cities like Denver and San Francisco. As the pay floor rises, it seems reasonable to suspect that college-educated workers from around the region will take a growing share of jobs that might once have gone to high-school grads.
WASHINGTON (June 2, 2014) – New carbon emissions regulations proposed today by the U.S. Environmental Protection Agency are likely to prove expensive and damaging to a still-fragile economic recovery, according to the R Street Institute.
The regulations, which will be subject to a one-year review period, rely on the spectacularly ill-fitting regime of the Clean Air Act to impose a 30 percent reduction in carbon emissions from electricity generation by 2030. The EPA’s new scheme comes after years of unsuccessful legal and legislative challenges to the agency’s authority.
While climate change is a pressing environmental and economic issue deserving of a public policy response, these overly prescriptive regulations likely will impose enormous costs for relatively modest emissions reductions. R Street has instead urged officials to embrace the power of the free market by utilizing revenue-neutral carbon pricing as a complete substitute for command-and-control regulation.
“Carbon pricing could allow states to kill two birds with one stone,” said Andrew Moylan, executive director of R Street. “They could achieve mandated emissions reductions through a price signal instead of complicated regulation, while utilizing all resulting revenue to eliminate or reduce taxes that are damaging to the economy. This could get EPA off their backs while securing real pro-growth tax reform.”
Moylan also pointed out that Congress’ failure to legislate has helped lead to the imposition of this expensive plan, which the U.S. Chamber of Commerce estimates will cost $51 billion annually. This is simply the latest example of growth in executive power as Congress stands idly by, Moylan said.
“The fact that Congress has proven unable to legislate effectively in this area led directly to the unworkable regulations released today,” Moylan said. “Perhaps now that EPA’s economic threat is more real, Congress will reengage with its duty to guide policies of such enormous consequence.”
Perhaps the surest sign that the decade-long cupcake fad is really and truly over is that even cupcake-related lawsuits are wrapping up tidily.
Most recently, news has come down of a settlement in a design patent infringement suit brought in March 2013 by Kimber Cakeware LLC of Hilliard, Ohio against Rancho Cucamonga, Calif.-based Bradshaw International Inc. Bradshaw is a major homewares manufacturer that markets its devices through a variety of brand names, including Betty Crocker, Dawn, Mr. Clean, Black & Decker and, in this case, Good Cook.
The timeline laid out in the original complaint is somewhat tortured, but as Kimber tells it, the events went down something like this:
In 2009, Robert S. Reiser, who would later serve as one of the principals of Kimber, pitched his batter separator invention – the “Batter Daddy” – to a variety of firms, including Bradshaw. Bradshaw declined the pitch, on grounds that the then-current financial crisis had led them to hold off on developing new products.
In May 2010, Reiser founded Kimber, which would begin marketing an adapted version of the device called “Batter Babies” in December 2010.
In April 2011, Reiser submitted an application for a patent on the ornamental design of his batter separator. The illustration in the design patent application looked like this:
In March 2012, Bradshaw introduced at the International Housewares Show its own product, dubbed the “Sweet Creations by Good Cook cupcake divider.” That product looks like this:
In November 2012, Reiser’s application was approved as design patent D671376.
There is, of course, one major problem, which should be pretty obvious at first glance. Even in the exhibit Kimber included in its complaint, the design of the Good Cook cupcake divider simply does not look anything like the Kimber batter divider:
Bradshaw felt the case was so weak that it not only moved for summary judgment, but also argued in a filing that Kimber should be slapped with Rule 11 sanctions for bringing a frivolous lawsuit without any merit whatsoever.
Bradshaw conceded that, like Kimber’s design, theirs is a batter separator designed to fit inside a standard muffin tin, allowing two different types of batter to be cooked simultaneously. But since the only grounds Kimber would have for infringement is that the design so closely resembled their own as to confuse an ordinary observer, Bradshaw accused Kimber of committing the cardinal sin of design patent suits:
Kimber has wrongly asserted protection for a functional, utilitarian article through its design patent, which is the epitome of what cannot be claimed in a design patent.
LUS Fiber, the municipal broadband system in Lafayette, La., last month received another warning from city auditors, an advisory that appears to have become an annual thing.
Although losses were anticipated during the initial five years of offering retail services to customers, management should carefully monitor the financial results of operations of the communications system. The projections calculated by operating and finance management should be compared to actual results on a regular basis and appropriate measures should be taken to minimize any significant negative variances. Additionally, management should continue to enhance its market strategy in order to increase its revenue base.
Lafayette’s auditors voiced similar concerns in their reports the last two years. In 2012, they punctuated it with a calculation that the $140-million system was costing the city $45,000 a day.
Now, after six years of operation, prospects aren’t much better. The city’s financial reports, provided by a source in Lafayette, show that for the fiscal year ended Oct. 31, 2013, LUS Fiber reported $23 million in operating revenues, compared to $36.7 million that was forecast in its feasibility study. The system incurred a $2.5 million operating loss for the year. According to the original plan, this was to be the point where the operation swung to a profit of $902,000.
The most staggering number, however, is LUS Fiber’s deficit, which stood at $47 million at the end of October, up from $37.1 million the year before.
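For readers who want to check these figures, the gap between plan and performance follows directly from the numbers reported above. A minimal sketch, using only the figures cited in this piece (not data from LUS itself):

```python
# Figures reported for LUS Fiber, fiscal year ended Oct. 31, 2013 (in dollars).
actual_revenue = 23.0e6
forecast_revenue = 36.7e6       # from the original feasibility study
deficit_2013 = 47.0e6
deficit_2012 = 37.1e6

revenue_shortfall = forecast_revenue - actual_revenue   # $13.7 million below plan
deficit_growth = deficit_2013 - deficit_2012            # deficit grew $9.9 million in a year

print(f"Revenue shortfall vs. plan: ${revenue_shortfall / 1e6:.1f} million")
print(f"One-year deficit growth:    ${deficit_growth / 1e6:.1f} million")
```

In other words, the system missed its revenue plan by more than a third, and the deficit is still growing by nearly $10 million a year.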
LUS officials have been trying to put a smiling face on this by noting that the operation is cash-flow positive, which simply means that LUS Fiber is taking in more than it’s spending on a day-to-day basis, but does not factor in its enormous long-term debt liability. They’ve also tried some sales gimmicks, like doubling the amount of bandwidth capacity for an additional $5 a month. This deal goes for everyone except low-income customers, for whom provision of quality high-speed service was a major justification for LUS Fiber’s creation. They remain stuck with a 3 Mb/s connection, about a quarter to a fifth of the speed you now get from cable.
Officials also continue to assure Lafayette residents that profitability, like a Cubs pennant, is just one more year away, and that there is robust subscriber growth ahead. Yet it’s time to seriously question this proposition in light of broadband business trends. Cable customers have been turning to wireless and video on demand for video programming. Citing data from the ISI Group, an equity research firm, BusinessInsider.com reported that nearly 5 million cable TV subscribers cut the cord in the last five years, and that the number of cable TV-only subscribers remaining could sink below 40 million (see graph below).
True, in many cases you still may want a landline broadband connection to get Netflix or Amazon Prime, but the triple-play business model—phone, cable, Internet—looks like it’s toast. Like the cable industry, muni broadband operations like LUS Fiber bet their entire long-term viability on triple play. Phone’s been gone for years. Cable TV is going. Commercial cable companies are scrambling to deal with this market shift, and muni operations are in the same boat. The difference is that, with private companies, the cost falls on shareholders. For municipalities, the cost is paid by taxpayers. In Lafayette, the meter stands at $47 million.
What’s distressing is that today, when the turmoil in the service provider market is so measurable and visible, there are still cities and towns—Westminster, Md.; Princeton, Mass.; Vallejo, Calif., to name three I found here—that think municipal broadband is a sound idea.
For an in-depth look at the challenges municipal broadband faces, see my case study of LUS Fiber, downloadable here.
From the Washington Examiner:
Steven Titch for R Street: The European Union’s highest court has ruled that Internet search engines must give serious consideration to users who request they remove links to any content or article that is personally or professionally unflattering, unfavorable or embarrassing.
This Sunday, June 1, marks the start of the 2014 Atlantic hurricane season. And though experts are predicting a relatively quiet one this year, our dutiful representatives in Congress have done their darnedest to ensure that it would take just one big storm to push a particularly troubled federal agency over the brink of bankruptcy.
The National Oceanic and Atmospheric Administration’s Climate Prediction Center recently unveiled its projection of a near-normal or below-normal hurricane season, with a 70 percent likelihood of eight to 13 named storms, three to six hurricanes and one or two major hurricanes.
Those projections more or less jibe with those released in April by forecasters Philip Klotzbach and William Gray of Colorado State University, who note the impact of a cooled Atlantic and at least a moderate-strength El Niño in their projection of nine named storms, three of which would become hurricanes and one of which would become a major hurricane. Klotzbach and Gray assign a 35 percent chance of a major hurricane making landfall somewhere on the U.S. coastline, compared to 52 percent in an average year.
Of course, it takes just one big storm to knock those living near the coast, as 126 million Americans do, for a major loop. It also would take just one big storm to render the National Flood Insurance Program – the federal agency that writes most flood coverage in the United States – unable to pay its claims.
In fact, the program is only able to pay any claims thanks to billions in loans from federal taxpayers. The NFIP currently owes the Treasury $24 billion, a tally mostly rung up during Hurricane Katrina in 2005 and Hurricane Sandy in 2012. The program hasn’t made a payment against its principal since 2010 and it’s been able to keep up with its interest payments only because interest rates have dropped dramatically in recent years. In 2008, the NFIP paid $730 million in interest to service its then-$18 billion of debt, compared to just $72 million in interest payments in 2011.
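The claim about falling interest rates can be sanity-checked from the figures given. A back-of-the-envelope sketch, assuming debt outstanding of roughly $18 billion in both years (the article gives that figure only for 2008):

```python
# NFIP interest figures as cited above (U.S. dollars).
debt = 18.0e9             # approximate debt outstanding (assumed similar in both years)
interest_2008 = 730.0e6   # interest paid in 2008
interest_2011 = 72.0e6    # interest paid in 2011

rate_2008 = interest_2008 / debt   # implied average borrowing rate, 2008
rate_2011 = interest_2011 / debt   # implied average borrowing rate, 2011

print(f"2008 implied rate: {rate_2008:.1%}")   # roughly 4.1%
print(f"2011 implied rate: {rate_2011:.1%}")   # roughly 0.4%
```

An implied rate falling by a factor of ten is what has kept the program's interest payments manageable; if rates rise again, even servicing the debt becomes a problem.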
Current law allows the program to borrow up to $30.4 billion before it would need once again to go back to Congress, hat in hand. But the bottom line is that the premiums paid in ($3.8 billion last year, on $1.3 trillion of insured property) simply aren’t enough to keep up with claims. A major hurricane hitting any of the significantly populated areas along the Gulf and East coasts could easily cause another $10 billion or so in flood insurance claims, enough to bankrupt the program.
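The solvency arithmetic here is easy to verify with the figures cited:

```python
# NFIP solvency figures as cited above (all in U.S. dollars).
borrowing_cap = 30.4e9    # statutory borrowing authority
debt_owed = 24.0e9        # currently owed to the Treasury
headroom = borrowing_cap - debt_owed      # roughly $6.4 billion left to borrow

premiums = 3.8e9          # premiums collected last year
insured_value = 1.3e12    # total insured property
avg_rate = premiums / insured_value       # roughly 0.29% of insured value per year

major_storm_claims = 10.0e9   # rough claims estimate for one major hurricane

print(f"Borrowing headroom:   ${headroom / 1e9:.1f} billion")
print(f"Average premium rate: {avg_rate:.2%}")
print(f"One major storm exceeds headroom: {major_storm_claims > headroom}")
```

A $10 billion claims event against $6.4 billion of remaining borrowing authority is what "enough to bankrupt the program" means here.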
Congress had a plan to fix this mess. Years in the making, both houses ultimately passed – by wide margins and with strong bipartisan support – and President Barack Obama signed legislation in 2012 intended to set the NFIP on the path to solvency. The plan involved phasing out subsidies for the roughly 20 percent of policyholders who were getting them – more quickly, for vacation and business properties, less quickly for primary homes. It also involved updating and redrawing the program’s long-outdated maps, to ensure those properties most at risk were charged the rates they should be.
Alas, it didn’t take long for Congress to think better of its momentary rush into sane, responsible policymaking. The first complaints about rising rates from coastal policyholders started pouring in while the ink was still dry on the reform bill. Eventually the deafening chorus of complaint – emanating, as it did, from states like Louisiana, Arkansas, Georgia and North Carolina, which will prove crucial to control of the U.S. Senate this fall – was sufficient to prompt action. Earlier this year, Congress set about re-breaking the program it had just attempted to fix, passing a law that, as the non-partisan Government Accountability Office put it, “will address affordability concerns, but may also reduce program revenues and weaken the financial soundness of the NFIP program.”
Entering into any storm season, one always hopes for the best and, particularly, for no major loss of life or limb. But should the storm surge flood in and the flood program collapse, you’ll know where to point the finger of blame.
Fifty-three tobacco research and policy experts from 15 countries today endorsed many of the tobacco harm reduction principles that I have advocated for 20 years. In a widely publicized open letter to Dr. Margaret Chan, director of the World Health Organization, they declared:
Tobacco harm reduction is part of the solution, not part of the problem. It could make a significant contribution to reducing the global burden of non-communicable diseases caused by smoking, and do so much faster than conventional strategies. If regulators treat low-risk nicotine products as traditional tobacco products and seek to reduce their use without recognising their potential as low-risk alternatives to smoking, they are improperly defining them as part of the problem.
Just as I have done before, the experts warn that harsh regulation of e-cigarettes could have the unintended effect of protecting cigarettes:
On a precautionary basis, regulators should avoid support for measures that could have the perverse effect of prolonging cigarette consumption. Policies that are excessively restrictive or burdensome on lower-risk products can have the unintended consequence of protecting cigarettes from competition from less-hazardous alternatives, and cause harm as a result. Every policy related to low-risk, non-combustible nicotine products should be assessed for this risk.
The letter’s signatories also endorse a tax strategy that I have promoted for many years:
The tax regime for nicotine products should reflect risk and be organised to create incentives for users to switch from smoking to low-risk, harm-reduction products. Excessive taxation of low-risk products relative to combustible tobacco deters smokers from switching and will cause more smoking and harm than there otherwise would be.
The letter points to the enormous public health gains that are possible with tobacco harm reduction:
The potential for tobacco harm reduction products to reduce the burden of smoking-related disease is very large, and these products could be among the most significant health innovations of the 21st Century – perhaps saving hundreds of millions of lives.
It is encouraging to see such widespread international support for my long-held positions.
The U.S. Chamber of Commerce, a corporate advocacy group, has charged that regulations would cost $51 billion and 224,000 jobs. The National Mining Association ran advertisements predicting a near-doubling of electricity costs. Those claims were quickly rebutted, but they reflect a deeper fear that the EPA will try to micromanage the cuts rather than trusting companies to find their own solutions. That sort of command-and-control approach “is likely to be a costly disaster,” said Andrew Moylan, a senior fellow at R Street, a free market think tank…
…In addition to cap-and-trade, noted Moylan, states and power companies could also consider a carbon tax like British Columbia’s. Set at $30 Canadian dollars per metric ton, and ultimately reflected in the price of gasoline and fuel, it presently raises $1 billion in annual revenue, which is in turn used to fund business and income tax cuts.
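As a rough check on the British Columbia numbers described above, the revenue figure implies the quantity of emissions being taxed. This is a back-of-the-envelope sketch; the real tax base spans multiple fuels with varying effective rates:

```python
# British Columbia carbon tax figures as cited above.
rate_cad_per_tonne = 30.0     # CAD per metric ton of CO2
annual_revenue_cad = 1.0e9    # CAD raised per year

# Implied taxed emissions, in metric tons per year.
implied_tonnes = annual_revenue_cad / rate_cad_per_tonne
print(f"{implied_tonnes / 1e6:.1f} million tonnes")
```

That works out to roughly 33 million tonnes of taxed emissions a year, recycled into business and income tax cuts.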
We’ll be hosting a Google Hangout with the Mercatus Center at George Mason University, the Electronic Frontier Foundation and the Competitive Enterprise Institute to discuss copyright reform on Thursday, June 5, at 3 p.m. ET.
Copyright has long been a source of division among conservatives and libertarians. While some see creative works as no different from any other property, and thus a natural right, others argue that copyright is different from traditional property, and that special interests have bloated the copyright system to the point where innovation is stifled. How much copyright is too much? At what point does cronyism trump innovation?
Panelists Tom W. Bell, author of “Intellectual Privilege,” Derek Khanna of R Street, Mitch Stoltz of EFF and Ryan Radia of CEI will debate these issues. We invite you to join the discussion by asking questions through the Hangout at j.mp/copyrighthangout or using the hashtag #iphangout on Twitter.
There has been a great deal of heated discussion in North Carolina regarding auto insurance reform during the past several years. The insurance industry is divided and parties are trying to make their case to legislators in the N.C. General Assembly.
It can’t be denied that the average premium for drivers here ranks as one of the lowest in the nation and that the market is not in a state of crisis. It also can’t be denied that our regulatory system is unique nationally and that the rate bureau model is dated. This approach worked in the 1980s, before advances in technology, risk assessment and online resources. We now have usage-based insurance options that give the driver the ability to install a device that monitors driving habits in real time and allows discounts for driving carefully. Technology enables insurers to more accurately price risk on individual characteristics and reward less risky behavior. This is fairer than charging average rates based on large, indiscriminate groupings.
We do not need to completely dismantle the current system, but it does seem reasonable to modify it to make it easier for insurers to operate and to offer discount programs and enhancements commonly used in other states.
As it stands currently, the states with simpler, more modern systems are the first to receive new programs and North Carolina is left for last, or left out. Insurers could be given the opportunity to opt out of the current rate bureau system, provided that the insurance commissioner still has the option to review and approve new programs. This should lead to additional benefits for consumers, as insurers have more flexibility and incentives to compete.
I’ve taught insurance and personal finance courses for two decades – the last 12 at Appalachian State University. Each semester, I assign a project in which the students are required to obtain a quote for their auto insurance. I do not tell them where to get their quote or how to do it; they have to figure that out on their own.
The students are consistently able to complete this assignment and typically have the same general comments each semester. They are surprised how easy it is to get a quote online, over the phone or by visiting an agent. They also ask about commercials they have seen on TV and wonder why they couldn’t enroll in certain programs. “I have good grades, why doesn’t my insurer offer the good student discount in North Carolina?”
I believe the consumer is best served by allowing the commissioner to have authority to review and deny rate changes when appropriate. Standard forms with consistent policy language should continue and also be subject to the commissioner’s approval. There is concern that insurers may try to raise rates, and this could certainly happen for higher-risk drivers. However, if an insurer attempts to raise rates to an unfair level, they will quickly price themselves out of the market, as consumers shop around for the best deal.
I believe many consumers would be better served by revising our system and giving insurers the option to operate as they do in most other states. This would encourage insurers to market additional discount programs and unleash competitive products that would benefit consumers.

Read more here: http://www.charlotteobserver.com/2014/05/27/4936249/nc-auto-insurance-price-controls.html