Feed aggregator

How the Wind Production Tax Credit undermines wind power

Out of the Storm News - October 13, 2014, 8:00 AM

U.S. renewable-energy policy is largely defined by mandates and subsidies that maintain an artificial market for investment and generation. Heavy dependence on government creates significant vulnerability for the sector and risks an eventual collapse of the industry, as we have seen in parts of Europe.

Given the collapse of investment in wind when the credit expired last year, it’s obvious that, at this point, a real and significant market for wind power simply does not exist. As Warren Buffett, who has significant electricity-generation holdings, explained so well: “on wind energy, we get a tax credit… That’s the only reason to build them. They don’t make sense without the tax credit.”

If the private sector won’t build wind turbines without the credit, it’s time for America to rethink its approach to wind power and renewable energy in general. To start, Congress should abandon the idea of reviving the federal Wind Production Tax Credit, because it actually undermines efforts to make wind competitive.

That statement may seem odd to many. We have heard renewable advocates and their political allies argue countless times that America should simply continue mandates and subsidies, including the PTC, until renewables become truly competitive. But this does nothing to address the fundamental reason why investors like Mr. Buffett don’t actually want to invest their own money in wind power. Renewable-energy systems today cannot provide reliable electricity to homes 24 hours a day, or even to factories for eight hours a day.

Grid operators simply cannot count on the wind blowing or the sun shining when electricity demand is high. Because of their intermittency, renewables require other generation – coal, natural gas or nuclear – to back them up. If a utility has to maintain backup generation that can produce power when the wind isn’t blowing, why would that utility need wind turbines that only work part time at all? Perhaps the better question is, why should taxpayers pay for both when only the backup is needed? Would the average consumer buy a car or a washing machine that only worked part of the time?

That is not to argue that wind does not have a future in America’s electricity mix. There are strong public policy reasons why government should work to put wind power on track to become a generation source that the private sector chooses, without the mandates and subsidies. This transition depends largely on a breakthrough in energy storage technology that could give renewables, including wind, baseload attributes (i.e., the ability to produce electricity 24/7). Such a program would cost billions of dollars in research and development, a cost that is too great for government alone. Private-sector investment is also needed, including the billions of dollars at Mr. Buffett’s disposal.

Current policies like the wind PTC actually deter private-sector investment, thereby undermining the goals these well-intentioned policies seek to achieve. The PTC awards wind farm operators a $23 credit for every MWh of electricity they produce, regardless of market demand. This incentive is especially problematic because the wind blows mostly at night, when people are asleep and factories are idle – when demand for power is at its lowest.

Without the PTC, the majority of wind farm operators would turn off their turbines at night to avoid paying congestion charges to the grid. Losing that revenue would create an incentive for operators to invest in storage technology that could store the electricity and allow it to be sold during the day. As long as the congestion charge is less than any tax credit benefit, wind farms will continue to dump their power on the grid, pay the charge and pocket the government-created profit.
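The operator’s incentive described above can be sketched as a simple break-even comparison. This is an illustrative sketch only: the $23/MWh credit is the figure cited in the article, while the congestion-charge values are hypothetical.

```python
# Illustrative sketch of the decision described above: with the PTC, a wind
# operator profits from generating at night as long as the credit exceeds the
# congestion charge; without it, paying any charge is a pure loss.
PTC_CREDIT = 23.0  # $/MWh, the federal Production Tax Credit cited in the article


def run_at_night(congestion_charge, ptc_active):
    """Return True if generating despite low demand is still profitable."""
    revenue = (PTC_CREDIT if ptc_active else 0.0) - congestion_charge
    return revenue > 0


print(run_at_night(congestion_charge=15.0, ptc_active=True))   # dump power, pocket $8/MWh
print(run_at_night(congestion_charge=15.0, ptc_active=False))  # shut the turbine down
```

The sketch makes the article’s point concrete: as long as the charge stays below the credit, generating at a market loss remains privately profitable.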

The success of the wind PTC in promoting investment has enabled the build-out of more than 60 GW of wind power capacity, with another 12 GW in the pipeline. With more than 70 GW of wind capacity, the United States should now have a critical mass of private-sector investment that can be leveraged to support research and development in storage technology. But if the wind PTC is renewed, we can count on wind farm operators to act rationally: why should they invest in a technology that would enable their power to be sold when the market wants it, when they already receive a tax credit that allows them to sell at a loss and still make money?

Advocates for wind should be pushing for an increase in government research and development funds to accelerate the development and commercialization of energy storage technology – a breakthrough that would reduce the vulnerability of renewables to shifts in government policy. They should also seek rational policies that maximize the flow of private sector dollars into storage. However, wind promoters, in particular, don’t want to acknowledge publicly that they need storage technology, because doing so would be an admission that wind technology is not competitive or reliable on its own.

That’s unfortunate. The United States would benefit substantially from competitive wind power and energy storage. Giving more money to the Warren Buffetts of the world to build wind turbines that only work part of the time does little to advance U.S. energy security or air-quality goals. It only creates countless future graveyards of towering, rusting wind turbines scattered across the United States that will eventually cost billions of dollars to dismantle and haul to a waste dump.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Metropolitan Housing: More Space, Large Lots

Somewhat Reasonable - October 12, 2014, 12:09 PM

Americans continue to favor large houses on large lots. The vast majority of new occupied housing in the major metropolitan areas of the United States was detached between 2000 and 2010 and was located in geographical sectors associated with larger lot sizes. Moreover, houses became bigger, as the median number of rooms increased (both detached and multi-family), and the median new detached house size increased.

These conclusions are based on an analysis of small-area data for major metropolitan areas using the City Sector Model. City Sector Model analysis avoids the exaggeration of urban core data that necessarily occurs from reliance on the municipal boundaries of core cities (which are themselves nearly 60 percent suburban or exurban, ranging from as little as three percent to virtually 100 percent). It also avoids the use of the newer “principal cities” designation of larger employment centers within metropolitan areas, nearly all of which are suburbs but are inappropriately joined with core municipalities in some analyses. The City Sector Model small-area analysis method is described in greater detail in the Note below.

Increase in Detached Housing

America’s preference for detached housing was evident across the spectrum of functional city sectors between 2000 and 2010. Overall, there was a 14% increase in detached housing in the major metropolitan areas (those with more than 1 million population). The number of occupied detached houses rose the most in the later or generally outer suburbs (35%) and in exurban areas (24%): increases of 2.8 million and 2.5 million houses, respectively. A smaller increase of 50,000 was registered in the earlier or generally inner suburban areas. Most surprisingly, there was also a small increase (20,000) in the number of detached houses in the functional urban cores (Figure 1).

Smaller Increase in Multi-Family Housing

The increase in detached housing dwarfed that of new multi-family housing (owned and rented apartments); it was six times as large in the major metropolitan areas. Overall, multi-family housing in the major metropolitan areas grew four percent, less than one-third the rate of detached housing. There were slight decreases in the number of multi-family units in both the urban cores and the earlier (generally inner) suburbs. At the same time, there were healthy increases in multi-family housing in the later suburbs and exurbs, where growth rates exceeded the 11% increase in major metropolitan population: 29% in the later suburbs and 14% in the exurbs (Figure 2).

Larger Houses, Larger Lots

Yet overall, houses were getting bigger. The median number of rooms per house rose from 5.3 in 2000 to 5.6 in 2010. Increases in median rooms were registered in each of the city sectors (Figure 3). Nationally, the median size of new detached housing edged up five percent between 2000 and 2010. (By 2013, median new house size had increased another 17 percent to a record 2,384 square feet).

Lots also were getting bigger. Nearly all of the population growth (99%) between 2000 and 2010 was in the later suburbs and exurbs, where population densities are much lower and lots are larger than in the earlier suburbs and the urban core (Figure 4).

The preponderance of urban planning theory over the past decade has been based on the notion that people would increasingly seek houses on smaller lots. For example, Arthur C. Nelson of the University of Utah, relying in part on stated-preference survey data, predicted that demand for housing on conventional-sized lots (which Professor Nelson defines as more than 1/8 acre, smaller than the smallest lot size reported by the Census Bureau) would be only 16% in the major metropolitan areas of California by 2010. In fact, the revealed preference — in other words, what people actually did — was four times the predicted demand (64%) in the conventional-lot-dominated later suburbs and exurbs of California’s largest metropolitan areas between 2000 and 2010. This is despite California’s regulatory and legal bias against detached housing on conventional lots (see: California’s War Against the Suburbs). Outside California, later suburban and exurban detached housing represented 77% of new housing demand over the period.

Planning and Preferences

Urban cores and multi-family housing are favored by urban planning policy. Yet, large functional urban cores (high density and high transit market share, as defined in the City Sector Model, Note below) are few and far between, with only seven exceeding 500,000 population, a modest number equaled or exceeded by approximately 100 metropolitan areas. Overall, the functional urban cores of major metropolitan areas lost more than 100,000 residents between 2000 and 2010, while suburban and exurban areas gained more than 16.5 million. Predictably, the housing forms typical of the later suburbs and exurbs made strong gains. The preferences of planning are not those of people and households.

Note: The City Sector Model allows a more representative functional analysis of urban core, suburban and exurban areas, by the use of smaller areas, rather than municipal boundaries. The more than 30,000 zip code tabulation areas (ZCTA) of major metropolitan areas and the rest of the nation are categorized by functional characteristics, including urban form, density and travel behavior. There are four functional classifications, the urban core, earlier suburban areas, later suburban areas and exurban areas. The urban cores have higher densities, older housing and substantially greater reliance on transit, similar to the urban cores that preceded the great automobile oriented suburbanization that followed World War II. Exurban areas are beyond the built up urban areas. The suburban areas constitute the balance of the major metropolitan areas. Earlier suburbs include areas with a median house construction date before 1980. Later suburban areas have later median house construction dates.

Urban cores are defined as areas (ZCTAs) that have high population densities (7,500 or more per square mile, or 2,900 or more per square kilometer) and high transit, walking and cycling work trip market shares (20 percent or more). Urban cores also include non-exurban sectors with median house construction dates of 1945 or before.

Photo: Northern Suburbs of Minneapolis-St. Paul (by author)

[Originally published at New Geography]

Categories: On the Blog

Taiwan High Speed Rail Near Bankruptcy

Somewhat Reasonable - October 12, 2014, 11:20 AM

The Taiwan government is pursuing a government-led restructuring to keep the system out of bankruptcy (Plan to stop Taiwan’s high-speed rail going bust set for review). Since opening in 2007, this privately financed and operated system has been plagued with ridership well below projections. The Taiwan experience is consistent with research showing that ridership on high-speed rail lines has frequently been over-projected.

Minister of Transportation and Communications (MOTC) Yeh Kuang-shih offered this sobering assessment:

“This is not the best time to address the financial problems, but it is the last window of opportunity. The Taiwan High Speed Rail Corp will definitely go bankrupt if the problems are not addressed by the end of the year. The only other solution would be a government takeover. If the company files for bankruptcy and the government is forced to take over operation of the system, the banks will probably collect on their loans, but neither large nor small investors will get anything back.”

Kuomintang Party legislator Lin Kuo-cheng said that the “debt” and “accumulated losses” mean that the Taiwan high speed rail line is “broke.”

[Originally published at New Geography]

Categories: On the Blog

Obama Condemns Tax Inversions, but Pillages America with His Regulatory Agenda

Somewhat Reasonable - October 11, 2014, 10:31 PM

It’s no mystery why American companies have stockpiled over $2 trillion of overseas earnings in foreign bank accounts. If they bring it to the United States, the IRS would grab 35% of it. That’s the US corporate tax rate – the highest in the developed world, double the average in EU nations.

Medtronic found a creative way to repatriate its cash, allowing it to bring money to the USA subject to just a 12.5% tax. The company acquired Covidien, a smaller medical device firm based in Ireland, and will establish its formal headquarters in Dublin, thereby slashing its tax rate by roughly two-thirds and leaving it with far more cash for plants and equipment, innovation, hiring and retaining workers, and tapping new markets.

Pharmaceutical, biotechnology, healthcare and other companies have concluded or are pursuing similar “tax inversion” strategies. The actions have outraged the White House, “progressive” activists and many Democrats in Congress – except when President Obama’s BFF Warren Buffett engineered Burger King’s acquisition of Canada’s Tim Hortons café and bakery chain.

The President says the practice is “unpatriotic” and “immoral,” calls the companies “corporate deserters,” and says businesses must start acting like “good corporate citizens.” Congressional Democrats have issued similar denunciations and want inversions prohibited or punished. They’re barking up the wrong tree.

The proper solution is comprehensive tax reform. However, Republicans want to address both corporate and individual tax issues, Democrats insist that only corporate taxes be on the table, and Mr. Obama is typically not inclined to do the hard work of forging bipartisan compromises. Instead, he wants his IRS and Treasury Department to review “a broad range of authorities for possible administrative actions” and ways to “meaningfully reduce the tax benefits after inversions take place,” as one Treasury official put it.

Companies, workers and investors are bracing for the coming executive fiats. The diktats epitomize a huge problem that neither Congress nor the courts have been willing to address, but which continues to drag our nation’s economy and employment into the abyss: an out-of-control federal bureaucracy that is determined to control virtually every aspect of our business and personal lives – at great cost, for few benefits, and with little or no accountability for mistakes or even deliberate harm.

Of course we need taxes, laws and regulations, to set norms and guidelines, safeguard society, punish miscreants and pay for essential government programs. No one contests that. The question is, How much?

What we need right now is regulatory patriotism – and Executive Branch morality, citizenship, and fealty to our Constitution and laws. The federal behemoth today is destructive, and unpatriotic.

  • The confiscatory 35% corporate tax rate is embedded in a Tax Code that’s 74,000 pages long, counting important cases and interpretations. It totals some 33 million words (compared to 788,280 in the King James Bible) and is loaded with crony corporatist provisions and complex, indecipherable language.
  • A 906-page, 418,779-word (un)Affordable Care Act that has already metastasized into more than 10,000 pages of complex, often contradictory regulations, with more interpretations and clarifications to come.
  • The 2,300-page Dodd-Frank law has already spawned over 14,000 pages of banking and financial rules.
  • Over 175,000 pages in the Code of Federal Regulations are coupled with more than 1.4 million pages of tiny-type Federal Register proposed and final rules published just since 1993, at the rate of over 71,000 pages per year. Doctors, patients, insurers, businesses large and small – much less average citizens – cannot possibly read, comprehend or follow this onslaught.
  • At least 4,450 federal crimes are embedded in those laws and regulations (with some 500 new crimes added per decade) – often for minor infractions like failing to complete or file precisely correct paperwork for selling orchids or importing wood for guitars. Neither inability to understand complex edicts, nor lack of knowledge that they could possibly exist, nor absence of intent to violate them is a defense, and the “crime” can bring armed SWAT teams through doors and land “violators” in prison for months or years.
  • Production Tax Credits and other sweetheart “green” energy subsidies and grants total some $40 billion a year – for ethanol producers and folks like Tesla CEO Elon Musk and Mr. Tom Kiernan, who is both CEO of the American Wind Energy Association and treasurer of the League of Conservation Voters, which gives millions to mostly Democratic candidates to perpetuate the arrangements.
  • American businesses and families must pay $1.9 trillion per year to comply with these mountains of regulations. That’s one-eighth of the nation’s Gross Domestic Product, nearly as much as all the corporate money now held overseas, and $5,937 a year for every American citizen – far more than the $1.6 trillion in direct economic losses that re-insurer Munich Re attributes to weather-related disasters between 1980 and 2011.
  • $353 billion of these regulatory costs are inflicted by the Environmental Protection Agency alone, say Competitive Enterprise Institute experts who prepared the $1.9 trillion regulatory costs analysis for 2013.
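As a quick sanity check on the per-capita figure in the list above, dividing the $1.9 trillion annual compliance cost by a U.S. population of roughly 320 million (an approximation for 2013, not stated in the article) reproduces the cited ~$5,937 per person:

```python
# Back-of-the-envelope check of the per-capita regulatory cost cited above.
total_cost = 1.9e12    # annual regulatory compliance cost, dollars
population = 320e6     # approximate U.S. population circa 2013 (assumed)
per_person = total_cost / population
print(round(per_person))  # close to the $5,937 figure in the text
```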

Even worse, these criminal complexities and costs are being imposed by increasingly ideological, left-of-center, anti-business “public servants” who target conservatives and are intent on advancing President Obama’s agenda of “fundamentally transforming” the United States. They are determined to redistribute wealth, pit economic and ethnic groups against each other, close down coal-fired power plants, ensure that electricity prices “necessarily skyrocket,” and stop drilling, mining, ranching, fracking and pipelines.

Poll after poll finds Americans focused on jobs and the economy, and on ISIL, terrorism and Ebola. Not so our federal government. Secretary of State John Kerry says climate change is “the world’s most fearsome weapon of mass destruction,” posing “greater long-term consequences” than terrorism or Ebola. For EPA the biggest issues are global warming, “environmental justice” and “sustainable development.”

How is the US economy responding to these policies? Median household income is down $2,000 since Obama took office, while costs of living continue to rise. Despite the subsidies, electricity prices have soared 14-33% in states with the most wind power. Some 45 million Americans now live below the poverty line – a 50% increase over the 30 million in poverty on inauguration day 2009.

While the official unemployment rate is now under 6% for the first time in six years, University of Maryland economist Peter Morici puts the real jobless rate at closer to 20% – which includes the millions who have given up looking for work, those who want to work full-time but must settle for part-time, and students enrolled in graduate school because their employment prospects are so bleak.

The labor force participation rate now stands at 62.7 percent, the lowest level in 36 years, with over 92 million adults not working. Over the past six years, one million more Americans have dropped out of the labor force than have found a job.

Indeed, a hallmark of the Obama recovery is its unique ability to convert three full-time jobs with benefits into four part-time positions with no benefits – and then say unemployment is declining.

It’s hardly surprising that dozens of senators and congressmen who voted with Mr. Obama 90-99% of the time now want to be seen as “moderate independents” – and do not want to be seen with the President.

But as President Obama told Northwestern University students October 2, “Make no mistake, [my] policies are on the ballot, every single one of them.”

He’s absolutely right. So are his economic and employment records. Time will tell how many people remember that when they vote November 4.

Categories: On the Blog

Time for Travel Caution in Dealing with Ebola Outbreak

Somewhat Reasonable - October 11, 2014, 12:00 PM

Breaking news as this article was being written is that Howard University hospital in Washington, D.C. has admitted a patient — a recent traveler to Nigeria — who has symptoms that could be associated with Ebola. Receiving little coverage was a report on Thursday, October 3, that an American freelance television cameraman working for NBC News in Liberia has contracted Ebola, the fifth U.S. citizen known to be infected with the deadly virus.

To date, there has been one confirmed case of Ebola in Texas: Thomas Duncan, a visitor who arrived by commercial air from Liberia on September 20. He died October 8 at Texas Presbyterian Hospital. He was infected with Ebola before he left for the U.S., when he helped carry a convulsing pregnant woman who later died of the virus, as did four of his neighbors. How Duncan was permitted to board the plane on September 19 has now come to light. According to Liberian authorities, Duncan allegedly lied on his airport departure screening questionnaire about whether he had had contact with a person infected with the virus. Liberian authorities had planned to prosecute him upon his return home.

Four days after his arrival in the U.S., Duncan sought treatment at Texas Presbyterian Hospital for non-specific symptoms and was sent home with a prescription for antibiotics. Before returning to the hospital with full-blown Ebola two days later, he had contact with several family members, including five school children who attend four different schools in Dallas. These children are now being monitored. The four individuals who had direct contact with Duncan were quarantined on Thursday, October 2 in the Dallas apartment where he stayed; his sheets and other items he used were sealed and taken away in plastic bags. More recently, the same four individuals were moved to a residence in a gated community.

We can’t know for sure, but dozens, possibly hundreds, of individuals, including medical personnel, were exposed to Thomas Duncan after he developed symptoms. The CDC, which can’t seem to keep track of viable smallpox samples, assures us all is under control. It is tracking possible contacts, but has no plans to quarantine those contacts as a precaution.

Obama is as defensive about African affairs as he is about Islam, especially when the two overlap and reinforce each other. The administration draws a parallel to the SARS epidemic, which “never really was as bad as predicted.” Tell that to the hundreds of thousands of Asians who contracted it. What about the thousands of canceled flights, or face masks mandatory in public throughout much of China? The real toll could be worse than reported, because China filters bad news and deals firmly with leaks. The administration’s response to a potential Ebola epidemic is rife with political correctness.

The Obama administration in 2010 quietly dumped Bush-era plans to enact quarantine regulations supported by the Centers for Disease Control that were designed to prevent travelers from spreading infectious diseases. The regulations were proposed by the Bush administration in 2005 during the height of avian and swine flu fears. The rules would have required airlines to report to federal authorities any ill passengers. They mandated that airlines collect information on international passengers – including email addresses, traveling companions and return flight details – to make it easier to trace passengers in any investigation of a disease outbreak.

Despite the calls for heavy travel restrictions between the U.S. and those West African countries hardest hit by the outbreak, with one advocate even warning against the possibility of “Ebola tourism” by patients seeking better care here, the administration is rejecting calls for a visa ban for West Africans. Some 13,500 people in the three hardest-hit countries – Sierra Leone, Guinea and Liberia – hold visas to visit the U.S.

Meanwhile, the World Health Organization reported on Wednesday, Oct. 1, that the manufacture, financing and distribution of a large-scale Ebola vaccine is not possible until the middle of next year at the earliest. The WHO is expediting Phase 1 and Phase 2 trials on two highly promising experimental Ebola vaccines, hoping to obtain approval next February.

Can we believe Ebola victims are only contagious once symptoms start? If so, Ebola must be unique among viral diseases. Besides, symptoms are not turned on and off like a switch, and the hospital in Texas missed even the most obvious ones. Is it only spread by direct contact with bodily fluids? Perhaps, but bodily fluids are spread by coughs and sneezes too. We don’t catch colds from exhaled carbon dioxide (not even the Climate Change Cult believes that, so far). Furthermore, bodily fluids linger on clothing and other things. How long is the virus viable under those conditions? Duncan was carried into the ambulance, where he continued to vomit. Have those EMTs been quarantined? How many patients and hospital workers did they contact after being contaminated? It’s simple math. If one person infects two others, the spread is, by definition, exponential – and the chart resembles a hockey stick.
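The “simple math” of that last point can be made concrete with a toy geometric-growth sketch. The reproduction number of 2 is the article’s hypothetical, not an epidemiological estimate:

```python
# Minimal sketch of geometric spread: if each case infects r others, the number
# of new cases in each transmission generation grows as r**g.
def cases_after(generations, r=2, initial=1):
    """New cases in each transmission generation, assuming a constant r."""
    return [initial * r**g for g in range(generations + 1)]


# Doubling each generation yields the "hockey stick" shape the article describes.
print(cases_after(10))
```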

Well worth reading is the article Ebola: The Truth About How Viruses Work by Suzanne Hamner (pen name). Consider the final sentence of Hamner’s article:

Officials, right now, appear to be over-confident which can be dangerous because no one at this juncture can say they know “all there is to know” about Ebola.

This underscores the utter madness of sending 4,000 American soldiers to West Africa to help deal with this disease. In their case, contact with the virus will be no “accident”; it is virtually assured. And what of their friends and loved ones when they return? Will they all be quarantined too? Are we absolutely certain that the disease is spread only by direct contact, and not airborne or through other vectors? Are we sure the virus is not contagious after it has run its course, even though it remains present in body tissues for months or years afterward?

Congress must be urged to stop this troop movement before it occurs. Render as much humanitarian aid as is practical, but at arm’s length. Flights between West Africa and the U.S. must be stopped or severely limited, as must flights that transfer through other countries.

History tells us what can happen if we proceed in ignorance. The Plague (Yersinia pestis) ravaged Europe in several waves from the 6th to the 18th century, spread in part by returning Crusaders. Nearly one-third of Europe’s population was lost. The Plague is still endemic in certain areas, including the American Southwest, despite antibiotics.

In truth, Ebola is probably not as virulent as other pandemic diseases and can be treated and controlled by exercising due diligence. However, this diligence is unlikely to occur as long as the White House is in denial. Obama’s fantasy administration isn’t just a game anymore. In the events of this week, we see that the resources used to deal with a handful of Ebola cases in the United States are already strained. Ad hoc efforts at quarantine for relatives of Thomas Duncan in Texas were ignored until enforced by an armed guard. Voluntary compliance is likely to be poor, and cases may go unreported if reporting results in confinement for a month.

The United States is the only Western nation permitting virtually unrestricted air travel from West Africa. This is not a reason to cry “panic,” but a call to exercise firm and effective measures to prevent the spread of a devastating disease and its economic consequences, while exercising compassion for those who are suffering.

[Originally published at Illinois Review]

Categories: On the Blog

Common Core Proponents Blame Their Victims

Blog - Education - October 11, 2014, 10:59 AM

It has become fashionable to blame the effects of nationalizing education on anything but the national curriculum mandates and the tests that accomplish it.

Common Core Proponents Blame Their Victims

Somewhat Reasonable - October 11, 2014, 10:59 AM

It has become fashionable to blame the effects of nationalizing education on anything but the national curriculum mandates and the tests that accomplish it.

Teachers unions have seized on Common Core to undermine testing mandates and teacher evaluation schemes, bemoans Stanford University economist Eric Hanushek. Bad model lessons are undercutting Common Core’s potential, exclaims Robert Pondiscio of the Thomas B. Fordham Institute. Common Core teacher retraining sessions teem with learning theories that research has proven ineffective, complains E. D. Hirsch, founder of the Core Knowledge Foundation. And textbook publishers have twisted Common Core into a resurgence of “fuzzy math,” asserts College Board’s Kathleen Porter-Magee.

In other words, our nation’s 50 million schoolkids enter a storm of curricular chaos this fall, but, like them, Common Core is just a hapless victim. Has Common Core really been hijacked, or has it been a rogue vessel all along?

To answer that question in education terms, consider the current furor among New York educators over whether Common Core supports phonics-based literacy or a content-lite approach known as “balanced literacy.” The two are essentially pedagogical polar opposites, yet both sides claim Common Core justifies their approach.

A look at the standards themselves, as its proponents often demand, suggests this controversy is at least partly Common Core’s fault. Its curriculum mandates are wordy, obtuse, and inaccurate. Try this representative directive, for kindergarten: “Associate the long and short sounds with the common spellings (graphemes) for the five major vowels.” After wading through the blubbery language, an astute reader will ask, “How many ways can there be to spell the five vowels? And are there any minor vowels?” There is precisely one spelling for each of the five, and only five, vowels. So what could this mandate mean?

It’s unclear, and so is the rest of Common Core, as in-depth analysis along these lines from Hillsdale College’s Terrence Moore shows in his book The Story Killers. So no wonder New York teachers, and teachers everywhere, must muddle about, prey to contradictory education theories, in the name of Common Core. The lack of curricular clarity in Common Core has spawned mass confusion. Follow the money: The Common Core beneficiaries are consultants and test developers.

Aside from such complaints, Common Core proponents suggest the curriculum makes for good political arrangements. If it undercuts mediocrity by demonstrating the flabbiness of American children’s mental muscles, or makes U.S. education more efficient and orderly, perhaps all this pain might produce some gain. Or, in the words of Common Core’s biggest financial backer, Bill Gates, “It’s ludicrous to think that multiplication in Alabama and multiplication in New York are really different.”

That’s the real essence of Common Core: a political movement, a neat and tidy scheme to streamline U.S. education through a set of rapid, enormous policy changes rather than undergo the tedious process of convincing people and their elected representatives they should assent to a new way of organizing education. To speed things along, the people who created Common Core requested back in 2008 that the federal government play “an enabling role” and “offer a range of tiered incentives” to get states to sign onto national curriculum mandates and tests.

Once President Barack Obama came into office, he obliged, and then some. Thanks to federal grants offered during the recent recession, 40 state departments of education agreed to accept this complete overhaul of their schools’ curricula and tests more than five months before the actual curriculum requirements were published in June 2010 and two months before even a draft was made publicly available. Taxpayers still await the final version of these new national tests.

Given the speed, secrecy, and arm-twisting of this initiative, the resulting chaos is no surprise. Potential pitfalls and a broad base of support never emerged during public debate, because there was no public debate. What is surprising is that people still insist on blaming Common Core’s victims rather than its perpetrators.

Joy Pullmann is a 2013–14 Robert Novak journalism fellow and education research fellow for The Heartland Institute.

Categories: On the Blog

What We Lose In A World Without Saturday Morning Cartoons

Somewhat Reasonable - October 11, 2014, 9:19 AM

Last weekend was the first without Saturday morning cartoons, and you have government to thank for it.

What killed Saturday morning cartoons? Cable, streaming, and the FCC. In the 1990s, the FCC began more strictly enforcing its rule requiring broadcast networks to provide a minimum of three hours of “educational” programming every week. Networks afraid of messing with their prime-time slots found it easiest to cram this required programming in the weekend morning slot. The actual educational content of this live-action programming is sometimes debatable, but it meets the letter of the law.

This is the sort of shift that, for Boomers, Xers, and Millennials, marks a moment of fond recollection of the Honey Nut Cheerio days that were. In our house, we had a 2-3 hour maximum (based on chore completion), and some combination of superheroes and meta-commentary on superheroes – the Lone Ranger, Batman, Superman, X-Men, Spider-Man, The Tick, Freakazoid, Animaniacs and Looney Tunes – filled it. Others had Jonny Quest, The SuperFriends, The Laff-a-Lympics, The Teenage Mutant Ninja Turtles, The Transformers, G.I. Joe and more.

For many of us, Saturday morning cartoons were also a shared bonding experience with our parents. A Transom reader wrote me to make the point that “My dad usually watched with us and I continued the tradition, watching (Muppet Babies!) with my kids when they were small.  I remember thinking, ‘how cool that Dad wants to watch this show’, and I’m sure my kids appreciated that unique time spent with me.” It also served as a moment in the week where kids could just be kids, as opposed to being shuffled from class to practices to games, prepping that high-achiever resume for future Ivy League applications.

There is an interesting paradox here, though. While we may view the animation of the 1980s and 1990s through the lens of nostalgia, the reality is that much of the material produced in this era was subpar and unimpressive. For the most part, what came out of Krofft, Rankin-Bass, and Hanna-Barbera was just filler, with terrible plotting, acting, and artistry. Today’s animated content is leaps and bounds ahead of what was being done then. The idea of a show like Adventure Time was completely out of the question. And in those days, what was on TV was your only option. So it’s that episode of Spider-Man with the lousy villain you’ve seen a dozen times? Tough. There’s no ability to switch to something you’d rather see. Most Saturday morning shows weren’t Heart of Ice.

Today, the camaraderie of the Saturday morning routine has been exploded by the marketplace. The quality of animation is higher; the availability of options is essentially unlimited. Children today have the ability to pull up a fun, dark, or interesting cartoon anywhere, at any time, streaming through the air a fix of three decades of takes on Batman or any other superhero, a lifetime of homicidal cats and mice, giant robots out the ears. No longer limited to a single slot in the week, they are able to access this entertainment for themselves on-demand whenever they please.

But we should recognize there’s something lost here as well, as there is in so much of the on-demand economy. The Saturday morning cartoons and breakfast cereal meant that children experienced things as they came, together, and could talk about those shared experiences after. It was inherently a community activity, a definitional moment in childhood, with ties of shared reaction to surprising moments and turns of plot, not individualized to the user’s priorities. What replaces it is more responsive to our desires, with more instant gratification, but also more atomized. The loss of that shared experience is a minor thing in the scheme of Burkean collapse – but it is something, and I suspect something more meaningful than we might understand. And we won’t get it back.

[First published at The Federalist.]

Ben Domenech is the publisher of The Federalist and a senior fellow at The Heartland Institute. Sign up for a free trial of his daily newsletter, The Transom.


What Would Milton Friedman Say About Renewable Electricity Mandates?

Somewhat Reasonable - October 10, 2014, 1:34 PM

Photo Credit: Academy of Achievement

In a May 23, 1977 column for Newsweek, titled “A Department of Energy?” the late Nobel laureate economist Milton Friedman wrote:

Do not be misled into supposing that the energy problem is a purely technical problem that engineers can solve. No government engineer is in as good a position as you are to decide whether you would rather use expensive energy for heating your home, driving your car, or helping to produce one or another product for you to buy. No government engineer is in as good a position as the owner of a factory to choose the most economical fuel for his purposes or the cheapest way to conserve energy. No government engineer can replace the market in calculating the indirect effects of energy use or conservation. And no government engineer will enforce the ever more numerous edicts that will come down from on high. That will be done by policemen.

Such a warning is even more relevant today than it was back then, not just because we now do, in fact, have a Department of Energy, but because 29 state laws across the country do exactly what Prof. Friedman said would inflict great harm on the economy. Namely, they force people to consume expensive energy – whether or not they value or can afford it – simply because the government believes it has the technical authority to determine what form of energy is best for us to use.

These laws, called renewable portfolio standards (RPS) or renewable electricity mandates, require utilities to sell or produce a certain percentage of their electricity from renewable sources. Find out if your state has an RPS law here.

The Heartland Institute has written about RPS policy and traveled the country testifying in favor of its repeal for years. Last June, Ohio Gov. John Kasich became the first governor to sign legislation reducing an RPS. Hopefully that will dramatically improve the odds for similar bills in other states to become law in 2015.



Desperate Wildlife Extinction Claims are Part of the Warming Hoax

Somewhat Reasonable - October 10, 2014, 11:57 AM

One thing that those of us who have been longtime observers and debunkers of the lies surrounding global warming and/or climate change have noticed is that the “Warmists” have gotten increasingly desperate after more than eighteen years in which there has been no warming.

As what they call “a pause” continues, they are coming up with some of the most absurd “research” to make their case.  When you consider that not one single computer model produced by the UN Intergovernmental Panel on Climate Change or any of the other charlatans was accurate, one can imagine their sense of panic at this point.

The latest claims were made by the wildlife group WWF, the Zoological Society of London, and other affiliated groups. On October 1st, it was reported that, based on “an analysis of thousands of vertebrate species,” populations had fallen 52% between 1970 and 2010. In 2012 the same group had claimed they had declined 28% over a similar period of time. So now we are expected to believe that within two years’ time a massively larger decline had been detected.

The claims are absurd. I won’t insult you by repeating them. Suffice it to say they did not begin to cover the thousands of species that share planet Earth with humans, but you can be sure that it was humans who got blamed for the alleged declines, along with the usual recommendations that we give up the use of fossil fuels and other aspects of modern life to save some furry creature somewhere.

For years we have been hearing that polar bears have been in decline, but one of the leading authorities on this species, zoologist Dr. Susan Crockford, has a report, “Ten good reasons not to worry about polar bears”, posted on the website of the Global Warming Policy Foundation, led by Dr. Benny Peiser, a longtime critic of those behind the global warming hoax.

Dr. Crockford called polar bears “a conservation success story. Their numbers have rebounded remarkably since 1973 and we can say for sure that there are more polar bears now than there were 40 years ago.”

Over on CFACT’s ClimateDepot.com website, similar claims about walruses were debunked by Dr. Crockford, who noted that mass haulouts (areas where they congregate) of Pacific walrus and stampede deaths are not new, nor due to low ice cover. “The attempts by WWF and others to link this event to global warming is self-serving nonsense,” said Dr. Crockford, “that has nothing to do with science…this is blatant nonsense and those who support or encourage this interpretation are misinforming the public.”

“The Pacific walrus remains abundant, numbering at least 200,000 by some accounts, double the number in the 1950s.”

At the same time I read the article about the wildlife extinction claims, an email arrived from the Sierra Club—the kind they send to thousands who support its agenda—saying “For a mother polar bear and her cubs, the ice is already melting around them. The last thing they need to contend with is an oil spill.” The claim about ice is another lie because Arctic ice has been expanding, not melting, in the same fashion as ice in the Antarctic. The real reason for the email was to protest “two massive drilling leases” and prevent access to Alaskan oil.

The Sierra Club and other environmental organizations have been on the front lines to get the Obama administration to keep the Keystone XL oil pipeline from being constructed. It has been senselessly delayed for six years despite the jobs and energy independence it will provide. One wonders if the top brass at the Sierra Club actually drive cars or do they all just bike to work?

In a similar fashion, in May the Union of Concerned Scientists announced that, thanks to climate change, our national landmarks such as Ellis Island, the Everglades, and Cape Canaveral will be endangered, claiming they face a serious and uncertain future in a world of rising sea levels, frequent wildfires, flooding and other natural events. But sea levels are rising in millimeters, not inches or feet. There have been fewer forest fires and far fewer hurricanes of late. In short, this is just one more desperate Green claim.

If the Greens are so concerned for wildlife, why don’t they protest the wind power turbines that slaughter thousands of birds and bats, and are exempted from prohibitions on the killing of eagles and condors? Because they are hypocrites, that’s why.

Species extinction, like climate change, is a normal, natural aspect of life on Earth. It has nothing to do with human activities. There were no humans around to blame for the Great Permian Extinction when 90% of all life on Earth was destroyed. Global warming periods and abundant carbon dioxide have never been causes of mass extinctions.

Craig Rucker, president of CFACT, the Committee for a Constructive Tomorrow, says the Warmists “actually want us to believe that global warming is responsible for the Ebola virus, the rise of ISIS, and for tens of thousands of walruses getting together to ‘haul out’ on a beach in Alaska. Attributing such things to global warming is among the most shameless tactics in the warming campaign’s playbook.”

© Alan Caruba, 2014

[Originally published at Warning Signs]


EPA Antics on Sea Level Rise–Gina McCarthy’s Miami Trip

Somewhat Reasonable - October 10, 2014, 11:48 AM

EPA Administrator McCarthy is going to be in Miami October 8, during or close to a King Tide, and I suspect she will call it the high tide of the year due to global warming. The reason for the name “King Tide” is given by the Wikipedia excerpt that follows this paragraph. If King Tides are blamed on global warming, this will be another example of the EPA distorting science to promote its damaging policies for the nation.

I believe Hurricane Sandy hit New York in 2012 during a King Tide; it definitely was during a full moon.

“King tides are simply the very highest tides. Conversely, the low tides that occur at this time are the very lowest tides. They are naturally occurring, predictable events. Tides are actually the movement of water across Earth’s surface caused by the combined effects of the gravitational forces exerted by the Moon and the Sun and the rotation of Earth which manifest in the local rise and fall of sea levels. Tides are driven by the relative positions of the Earth, Moon, and Sun, the elliptical orbits of the celestial bodies, land formations, and relative location on Earth. In the lunar month, the highest tides occur roughly every 14 days, at the new and full moons, when the gravitational pull of the Moon and the Sun are in alignment. These highest tides in the lunar cycle are called spring tides. The proximity of the Moon in relation to Earth and Earth in relation to the Sun also has an effect on tidal ranges. The Moon moves around Earth in an elliptical orbit that takes about 29 days to complete. The gravitational force is greatest when the Moon is closest to Earth (perigee) and least when it is farthest from Earth (apogee – about two weeks after perigee). The Moon has a larger effect on the tides than the Sun but the Sun’s position also has an influence on the tides. Earth moves around the Sun in an elliptical orbit that takes a little over 365 days to complete. Its gravitational force is greatest when Earth is closest to the Sun (perihelion – early January) and least when the Sun is farthest from Earth (aphelion – early July).

The king tides occur when the Earth, Moon and Sun are aligned at perigee and perihelion, resulting in the largest tidal range seen over the course of a year. Alignments that are ‘near enough’ occur during approximately three months each winter and again for three months in the summer. During these months, the high tides are higher than the average highest tides for three or four days. The predicted heights of a king tide can be further augmented by local weather patterns and ocean conditions. Winter king tides may be amplified by winter weather making these events more dramatic. In the northern hemisphere, the term king tide is used to describe each of these winter high tide events. On Australia’s East Coast, the highest of each of these periods (i.e., one in winter and one in summer, totaling two per year) are known as the king tides. In this region of the world, the winter king tide usually occurs at night and therefore goes unnoticed. Consequently the summer king tide usually catches the most attention.”

These activities are part of the reason the nation is going into debt at between $1 billion and $1.5 billion per day.

James H. Rust, professor of nuclear engineering and policy advisor, The Heartland Institute


Airbnb regulated into legality

Out of the Storm News - October 10, 2014, 10:26 AM

One of the many oddities of San Francisco is that the city is full of libertarians who love regulation. You can do your own thing, unless you’re a tech bro, a landlord or a big corporation, and then you must be legislated into submission. The city’s housing market is distorted by a series of regulations that seemed like good ideas at the time, such as rent control and tight zoning. Instead of making the city more charming and affordable, they have created tension between those who can afford housing (the very rich, the long-tenured tenant) and those who can’t. The result is a nasty edge to daily life in an otherwise gorgeous city.

The twin pressures of rent control and a booming economy have created occupations that can scarcely be imagined elsewhere, such as the master tenant: this is a person who has a large rent-controlled apartment and who makes a living by subletting rooms at market rate. Sure, the subtenants can complain, but they aren’t likely to in a city where the shortage of housing is a serious issue.

Then there’s Airbnb. The zoning and construction limits that affect the housing market also affect the hotel market. In 2007, Airbnb was formed in this world of semi-anarchy: a service that allowed people to rent out rooms to visitors. The host received more money per night than he or she would from taking on a roommate. The money offset the very high cost of living in SF, and the visitor saved money on hotel bills.

Win-win? Not quite. With no regulation, participating in Airbnb raised questions: could renters rent out space in their apartments without violating their leases? What if the renter moved in with her boyfriend but kept the rent-controlled lease to make a living as a full-time hotelier? Could landlords kick out tenants in order to rent out apartments to short-term guests? Would the hosts have recourse against crazy, violent or thieving guests – or squatters? Likewise, would the guests be protected against difficult hosts? And was the city due taxes for the lodging services? If so, should it go after the hosts, the guests or Airbnb itself to collect?

Excessive regulation led to the creation of Airbnb, and less-excessive regulation may just save it. On Oct. 7, the San Francisco Board of Supervisors passed legislation allowing residents to rent out rooms if they register with the city and hold $500,000 in liability insurance. Also, Airbnb must remit lodging taxes to the city. Airbnb is now legal, and guests and hosts alike, at the very least, know where they stand relative to the law.

Regulation is such a complicated beast. It would be nice to say that there should be no regulation whatsoever, but let’s face it: some people will behave badly unless they are given limits. On the other hand, too much regulation creates its own issues. Rent control is a bad idea; it is an economic transfer from the landlord to the long-term tenant with no social advantages, as the tenants receive the benefit without regard to need. As with any transfer payment, once it’s in place, the beneficiaries form a tight constituency to keep it. No politician has the will to take on an issue like rent control, and there’s no time machine to undo it.

On the other hand, there’s the very interesting phenomenon of creativity acting in response to constraints. Because regulation creates problems, it creates demand for work-arounds to solve them. Airbnb is one example. Another, also from SF, is Uber: restrictions on the number of taxis meant that people who lived in San Francisco’s neighborhoods could not get cabs. The taxi drivers would rather serve tourists than troll for passengers in the Fog Belt. The market for medallions may be limited, but other forms of on-demand transportation solved the problem.

Maybe that’s the secret to economic growth in Northern California. We like to think that a high-tax, high-regulation jurisdiction would be a terrible place to do business, but people are flocking to San Francisco and surrounding cities in the hope of hitting it big. The tight regulations force creative thinking to work around them – and maybe lead to their destruction.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Abusive Tax Policies Are to Blame for Corporations Going Overseas

Somewhat Reasonable - October 10, 2014, 10:15 AM

American companies that reincorporate abroad are not doing so to avoid paying taxes on U.S. earnings, despite the misleading impressions to the contrary left by the rantings of Senators Carl Levin, Dick Durbin, Elizabeth Warren, and others. They are doing it to avoid paying U.S. taxes on earnings in other countries.

The United States is the only industrialized nation that uses a “worldwide” tax system, in which a U.S.-based corporation must pay taxes to our government regardless of where the corporation earns its money. Most of the rest of the world uses a “territorial” tax system, in which a corporation pays taxes only where it earns income.

For instance, Volkswagen and BMW pay taxes to the federal and state governments on income earned in the United States. If they bring that money back to Germany, where they are headquartered, Germany taxes none of it, because the United States has already taxed it. On the other hand, if a U.S.-based company earns income in Germany and wants to bring some of it back into this country, the company must pay federal tax even though the money has already been taxed in Germany.

This is a huge disadvantage to multinational corporations based in the United States and a big reason for corporate “inversions,” the word used to describe U.S. companies reincorporating in foreign countries. Compounding the disadvantage is this: The United States has the highest corporate tax rate in the industrialized world. The federal rate is 35 percent, and most states levy their own corporate tax on top of that.

The Tax Foundation earlier this year noted the combined (state and federal) average corporate tax rate in the United States is 39.1 percent, while the average rate is 25 percent among the 33 other nations in the OECD (Organisation for Economic Co-operation and Development). OECD nations include Australia, Canada (this nation’s largest trading partner), France, Germany, Japan, Korea, Mexico, Sweden, and the United Kingdom.

In a recent interview with Budget & Tax News, Chris Edwards, director of tax policy at the Cato Institute, noted Canada has a net corporate tax rate of 15 percent—less than half the U.S. federal rate of 35 percent—and receives as much corporate tax revenue as a percentage of its gross domestic product as the United States receives.

“We don’t find companies trying to invert out of Canada or Ireland these days, because they have reasonable corporate tax policies,” Edwards said. “Left wingers like Durbin and [Sens. Carl and Rep. Sander] Levin talk about how government is losing money because of these inversions. The government is losing because their policies are inducing companies to move offshore.”

Pete Sepp, executive vice president of the National Taxpayers Union, also noted, “PricewaterhouseCoopers’ annual ‘Paying Taxes’ study shows that for a hypothetical medium-sized firm, the time and cost spent just on tax paperwork puts the U.S. 61st out of 189 countries. Somehow the chant of ‘We’re 61!’ doesn’t seem to have much appeal to a beleaguered business.”

Sen. Carl Levin (D-MI) has introduced a bill that would virtually end the ability of American companies to do inversions. So has Durbin (D-IL), who has seen the news that two Illinois-based companies, including Walgreen Co., the nation’s largest pharmacy chain, are mulling overseas mergers to do inversions. In response he has introduced the “Patriot Employer Tax Credit Act,” a bill his press statement says “would provide a tax credit to companies that provide fair wages and good benefits to workers while closing a tax loophole that incentivizes corporations to send jobs overseas.”

Notice the slam against the “patriotism” of companies that do inversions.

Lack of patriotism and “tax loopholes” are not the problem. The problem is a nation with the highest corporate tax rate in the industrialized world, a government that taxes income earned anywhere in the world, and an outrageously time-consuming and costly system just to pay taxes.

Until those problems are addressed, expect more U.S. companies to try to reincorporate outside the United States, and expect almost no companies outside the United States to try to reincorporate here.

Steve Stanek (sstanek@heartland.org) is a research fellow at The Heartland Institute in Chicago.


Let’s help the strivers

Out of the Storm News - October 10, 2014, 10:13 AM

In 2009, Bryce Harper—then a sophomore at Las Vegas High School and already the best high school baseball player in the nation—made the unusual and controversial decision to forgo his final two years of high school, on the grounds that there was simply no effective competition for him at that level. He passed the GED test and enrolled in the two-year College of Southern Nevada.

Harper’s choice turned out to be the right one. In his only season at CSN, he more than doubled the school’s single-season home run record, was awarded the Golden Spikes award as the best player in college baseball and was the first player taken in the 2010 Major League Baseball draft. Starring for the Washington Nationals, Harper was the National League’s rookie of the year in 2012.

The choice Harper made is not limited to once-in-a-generation athletes. Based on results from some limited experiments, proposals to allow students to finish high school a year early in favor of two-year community college scholarships have a lot to recommend them.

Texas and Utah currently offer small grants for students who forgo a fourth year of high school to enroll in college, while Arizona provides forgivable loans for the same purpose. Connecticut’s Yankee Institute for Public Policy has promoted the idea in conservative circles. But the idea has hardly caught fire, even though it could appeal across party lines, saving taxpayer money while also expanding opportunities for some of those poorly served by the educational system.

Liberals have obvious reasons to like such scholarships. They would provide 14 years of free schooling to students, rather than the current 13 years. They would relieve financial pressures for those who would struggle to pay the $2,700 a year that full-time community college costs, on average. They also would mark a significant public sector investment in professional training, greatly increasing the potential earning power of those who otherwise might receive only a high school degree.

Fiscal conservatives should be attracted by the fact that high school is far more expensive than community college, and even trading two years of the latter for one of the former will usually be a net savings. In Boston, for example, high school costs an average of about $17,000 per year, per student, while the most expensive community college option is only $4,500. In some areas, free community college could avoid pricey duplication of resources. A rural high school might not need to build an advanced placement physics lab if students could get essentially the same instruction at a community college.

Community college scholarships also would bend the cost curve for many who eventually go on to a four-year college, but would need to finance only two years there. This could prove especially helpful to those ambitious strivers who might not be ready or able to complete a four-year degree, but could “ease in” through community college. Those who didn’t complete the degree quickly would still leave with at least some college credit and new skills.

The feasibility of such plans will vary by jurisdiction. In most states, a high school diploma requires four years of class credits. However, in some localities, students may finish school early by compressing their schedules. And local boards of education in some places have broad powers to decide when to award diplomas. In others, students may be able to complete high school and an associate degree simultaneously, by applying community college courses toward high school credit. (This is already pretty common.) In still others, the GED test may be the most efficient way to accelerate the process.

There are potential drawbacks that policymakers must consider. Students who take a chance on free community college would be left with no credential if they dropped out, and community college drop-out rates are very high. One reason community colleges cost less than high schools is that they do less: Class sizes are larger, total class time is more limited, and there are often fewer extracurricular opportunities like sports and theater. Students also are financially responsible for books and other materials that high schools typically provide for free.

But these issues can all be addressed, and the idea of getting high school students to complete college classwork already has broad appeal. In recent years, both the Democratic and Republican national platforms have called for more opportunities to earn college credits in high school. Most high schools have offered at least some advanced placement courses for decades. A full third of the class of 2013 took at least one AP exam, and the overwhelming majority scored well enough for most colleges to award them credit.

Most larger school districts also allow dual enrollment in some college courses already. The Gates Foundation’s Early College High School initiative has helped students in 28 states take college and high school classes simultaneously, sometimes earning an associate degree in the process. (The programs generally take place at special high schools rather than traditional community college campuses.) At least one very well respected freestanding program, Bard College at Simon’s Rock, exists exclusively for students who want to start college after 10th grade. Furthermore, many four-year college admissions offices will consider applications from sufficiently prepared high school juniors already.

Nonetheless, the idea of trading some high school for guaranteed community college scholarships has not attracted much support, and implementation of current programs leaves something to be desired. Arizona provides loan forgiveness only if students complete associate degrees or the equivalent. Students in most of the programs aren’t able to apply the grants to tuition at a four-year school, which limits their appeal. Since the programs don’t usually attract the very best students, who are bound for four-year colleges anyway, they haven’t found as many takers as they might. Not only should the grants be more broadly applicable (including as a way to pay part of the tuition for a four-year college), but the window in which to take them should be expanded to accommodate those who might need to work after high school or simply aren’t ready for college right away.

For most students, a standard four-year high school experience is still probably best. Few students want to miss out on prom, homecoming games, or many of the other senior-year rites of passage. Community colleges, while great resources, aren’t necessarily intended for the very brightest and most ambitious students. As with any choice, some who make this decision might find that they are worse off. But it is an option that could benefit many, and for that reason alone, it’s an idea that deserves a closer look.

Wireless Taxes Growing Out of Control

Somewhat Reasonable - October 10, 2014, 10:00 AM

Wireless tax rates have reached all-time highs. Almost half the states nationwide now impose a wireless tax above 10 percent. According to a new report released this morning by the Tax Foundation, the national average of combined federal, state, and local taxes and fees on cell phone bills has now reached 17.05 percent. Broken down, this historically high rate comprises a 5.82 percent federal rate and an average 11.23 percent state-local rate. Even as revenue earned per wireless phone falls, taxes and fees continue to climb.

In a media release on the study, Joseph Henchman, Vice President of Legal & State Projects at the Tax Foundation, argues that state and local legislators should look beyond wireless taxes for new tax revenue.

“Accessing content on our phones these days is easier than ever before, but paying cell phone bills remains difficult for many,” said Joseph Henchman, Tax Foundation Vice President of Legal & State Projects. “Instead of singling out wireless services with stealth tax increases, state and local governments should seek more neutral and less disruptive sources of revenue.”

According to CTIA, a wireless industry trade group, around 326 million wireless device connections exist in the United States today (this number includes devices like smartphones, feature phones, tablets and personal wireless hotspots). In addition, according to the National Center for Health Statistics, around 41 percent of U.S. households had only wireless phones in the second half of 2013, indicating a move away from traditional landlines.

Scott Mackey of KSE Partners, co-author of the report, argues in the media statement that wireless taxes are regressive and pose a threat to wireless network development.

“Wireless taxes and fees are regressive and have a disproportionate impact on poorer citizens,” said Scott Mackey of KSE Partners and co-author of the report. “Excessive taxes and fees may reduce low-income consumers’ access to wireless service at a time when such access is critical to economic success.”

Additionally, targeted cell phone taxes may slow investment in wireless infrastructure by lowering consumer demand for wireless service. “The reduced demand impacts network investment, because subscriber revenues ultimately determine how much carriers can afford to invest in network modernization,” adds Mackey.

In the report, Wireless Taxation in the United States 2014, the authors examined state, local and federal wireless taxes and ranked the states by their rates. Among the findings:

  • The five states with the highest state-local rates are: Washington State (18.6 percent), Nebraska (18.48 percent), New York (17.74 percent), Florida (16.55 percent), and Illinois (15.81 percent).

  • The five states with the lowest state-local rates are: Oregon (1.76 percent), Nevada (1.86 percent), Idaho (2.62 percent), Montana (6.00 percent), and West Virginia (6.15 percent).

  • Four cities—Chicago, Baltimore, Omaha, and New York City—have effective tax rates in excess of 25 percent of the customer bill.

  • The average rates of taxes and fees on wireless telephone services are more than two times higher than the average sales tax rates that apply to most other taxable goods and services.

  • States favor the taxes because they can raise revenue in a relatively hidden way.

With wireless taxes growing out of control, legislators should take another look at the Wireless Tax Fairness Act, a bill designed to slow the growth of these taxes. The act would put a five-year moratorium on discriminatory state tax increases on wireless phone and data services. Although it would not prevent governments from creating new taxes and fees on all communications, it would bar them from targeting any one service. A five-year freeze would slow the rate of tax increases while allowing time to develop a wireless tax system that is more carefully designed, fair, and non-disruptive.

High wireless taxes drag down both consumers and the wireless market, deterring innovation and infrastructure improvements, while disproportionately affecting minority and low-income populations, many of whom support lower wireless taxes. For example, according to a MyWireless study conducted by McLaughlin & Associates in partnership with Penn Schoen Berland, nine in ten Hispanics believe the wireless tax rate should be the same as or lower than the taxes they pay on general goods and services.

Placing a moratorium on these discriminatory tax hikes would benefit the economy and consumers.

Categories: On the Blog

Is all-or-nothing better than nothing at all?

Out of the Storm News - October 10, 2014, 9:11 AM

The loudest criticism of the ongoing Transatlantic Trade and Investment Partnership (TTIP) negotiations between the United States and the European Union is that they are conducted behind closed doors. On both sides of the Atlantic, there are concerns that the negotiation process’ lack of transparency is inherently undemocratic, ignoring the will of the people and violating national sovereignty. Particularly in Europe, many question the choice to present the European Parliament with an “all-or-nothing” proposition, in which the final version must be voted on without modification.

This is an argument we’ve seen before, in an earlier century over the ratification of a different document: the U.S. Constitution.

Many of the strongest opponents of the Constitution objected to the procedure for ratification, rather than the content of the document, as revealed by the late historian Pauline Maier in her award-winning book, Ratification. Following the Constitutional Convention, state conventions were required to vote “yea or nay” on the final document without any modifications. Opposition leaders, like George Mason of Virginia and Robert Whitehill and William Findlay of Pennsylvania, blamed the demand to “take this or nothing” for converting “men who had hoped to ‘perfect’ the Constitution into its opponents.”

There is no denying that the foundational text for the United States was created through a relatively undemocratic process. As historian Ray Raphael explained in an analysis of Maier’s book:

Without any means for amending the document prior to ratification, the people, in whose name the Constitution was supposedly written, were being asked merely to add their assent to a document not of their own making.

It is an issue that gets at the core of federalist politics, particularly when it comes to trade. Are there instances when supranational agreements should be negotiated above the level of democratically elected national governments, especially in the interest of speed and efficiency? What takes precedence, national or supranational law? Which body adjudicates disputes?

In theory, TTIP is not an unpopular or polarizing concept. According to a Pew Research survey conducted in April 2014, 75 percent of Germans and 72 percent of Americans believed increased trade between the United States and the EU would be a good thing. But the undemocratic negotiating process has soured public opinion on a number of leaked TTIP developments. The controversial investor-state dispute settlement has been targeted as a mechanism for promoting corporate sovereignty. Edward Snowden’s surveillance leaks have provided an excuse for greater data protectionism and the increased regulation of large content and service providers such as Apple, Google and Amazon.

Digital trade should be an area where TTIP could make progress and do good. It lies at the heart of the global Internet economy and is naturally suited for seamless transnational transactions. As the European Commission implements an ambitious plan to realize a “digital single market” in the next six months, TTIP will play a major role in harmonizing digital trade both between the United States and Europe and within the EU by addressing key regulatory discrepancies in intellectual property and data flows. But even provisions that would effectively encourage and facilitate the transnational flow of content online — by removing intermediary liabilities, such as a version of Section 230 of the Communications Decency Act — have been tainted by the secretive nature of the negotiations.

Whether negotiation outcomes would be different with public input is impossible to know, but the perception that the voices of the people are being ignored is almost equally damaging. One of the major strategic goals of TTIP is to promote transatlantic unity. A deal in which nations feel unable to defend their interests will do quite the opposite.

Possibly in recognition of such concerns, the EU just yesterday chose to publish the TTIP negotiating mandates. It is a welcome move, though whether it will assuage transparency concerns sufficiently to complete negotiations before the 2016 U.S. presidential election campaign remains to be seen.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

No, A Carbon Tax Cannot Create Jobs, Jobs, Jobs

Somewhat Reasonable - October 09, 2014, 9:02 PM

A tax on carbon dioxide emissions would destroy far more jobs – and wealth – than it would create, despite the well-intentioned hopes of Forbes.com contributor James Conca.

In a Forbes.com article titled “Can A Carbon Tax Create Jobs, Jobs, Jobs,” Conca argued a carbon dioxide tax would result in a net increase in jobs if the tax revenues were spent wisely. Key to this hopeful prognosis, Conca asserted, is the requirement that a newly imposed tax on carbon dioxide must be revenue-neutral, with carbon dioxide tax collections being offset on a dollar-for-dollar basis by tax reductions in other sectors of the economy.

Conca never explained how merely shifting tax burdens from one sector of the economy to another creates jobs and wealth. Instead, he simply cited three short articles and one longer paper written and published by liberal activists. On important policy issues of the day, however, blindly deferring to self-serving papers written by liberal activist groups, such as the notorious Center for American Progress, is a recipe for disaster. Yes, that is the same Center for American Progress that championed Solyndra and promised Obamacare would lower healthcare premiums, create jobs, and make American families richer.

There are many reasons – economic and otherwise – why a tax on carbon dioxide is a bad idea. Let’s examine just two of the economic reasons.

First, Conca concedes that higher taxes are economically harmful. His solution is to reduce taxes in other sectors of the economy. The problem is that the same liberal activist groups that want to implement carbon dioxide taxes oppose corresponding tax cuts. The Center for American Progress, for example, says carbon dioxide tax revenue should be given to the renewable energy industry rather than returned to the American people.

Curiously, the Center for American Progress fails to disclose that it is funded by the renewable energy industry and that its founder and chairman of the board has had a long and successful career as a renewable energy lobbyist. Conca must first convince his liberal activist allies not to pilfer carbon dioxide tax revenues before he can plausibly argue that those revenues would be returned to the American people. (Good luck with that, by the way, because the Center for American Progress argues very strongly that the renewable energy industry must get to keep the tax spoils rather than government returning the tax money to the American people.)

Second, even in the unlikely event that government returned carbon dioxide tax revenue to the American people on a dollar-for-dollar basis, this would be revenue-neutral for government but not for the American people. The entire purpose of a carbon tax is to raise the price of inexpensive coal and natural gas so high as to become more expensive than carbon-free wind and solar power. However, if the carbon tax fulfills its goal of raising coal and natural gas prices higher than wind and solar prices, energy providers will no longer use coal and natural gas and energy producers will therefore pay little if any carbon tax.

As a result, consumers will pay dramatically higher energy prices but receive little if any compensating tax cuts in return. American families’ net disposable income will drop, which will reduce spending and destroy jobs in all other sectors of the economy. The only beneficiary of this energy-policy Ponzi scheme will be the renewable energy industry. This explains why the renewable energy industry-funded Center for American Progress supports the Ponzi scheme so much.

No credible economists claim that reducing American households’ disposable income will grow the economy and create jobs. Yet taxing carbon dioxide sufficiently to reduce carbon dioxide emissions will, by purpose and design, dramatically raise energy costs in a manner that substantially reduces American household income while generating few corresponding tax rebates. Economically, all that will be accomplished is poorer American families, economy-wide contraction, jobs destroyed in virtually every American industry, and a Solyndra-style transfer of wealth from hard-working American consumers to incompetent, uncompetitive, politically connected renewable energy companies.

It is a nice thought, James Conca, but no, a carbon tax cannot create jobs, jobs, jobs.

[First published at Forbes.]

Categories: On the Blog

Ban on Internet Access Taxes Should be Retained

Somewhat Reasonable - October 09, 2014, 3:41 PM

The Permanent Internet Tax Freedom Act is common-sense Internet policy that is long overdue. Internet access taxes are particularly damaging to the growth of the Internet economy because they place an unnecessary burden on consumers. A permanent moratorium on Internet access taxes would help broadband access and development expand while reducing the need for government broadband spending. The current moratorium is set to expire November 1, but legislation now moving through Congress would extend it permanently. The bill, titled the Permanent Internet Tax Freedom Act (PITFA), was written by House Judiciary Chairman Bob Goodlatte (R-VA) and co-sponsored by 138 Republicans and 76 Democrats.

While most states are currently covered by the moratorium, taxpayers in the seven states still imposing these taxes could see their Internet bills decrease. If passed and signed into law, PITFA would make the ITFA moratorium permanent and force those states to stop taxing Internet access. They have been able to impose the taxes because of a “grandfather clause” in ITFA that allowed states already taxing Internet access to continue doing so. The seven states are Hawaii, New Mexico, North Dakota, Ohio, South Dakota, Texas and Wisconsin.

While the seven states would see a drop in tax revenue, experts do not expect the end of the tax to be a budget-busting problem. According to Stateline, the seven states and their local governments stand to lose about $500 million annually, which, while not insignificant, represents only a small portion of most state budgets. Wireless services are already taxed more heavily than almost all other goods and services; the Tax Foundation found that almost half the states nationwide now impose a wireless tax above 10 percent. Under PITFA, wireless consumers in the seven states would be freed from Internet access taxes, allowing them to expand their Internet services or spend the savings elsewhere in the economy.

Making the Internet access tax moratorium permanent is a necessary step in promoting wider access to the Internet while keeping costs down and eliminating discriminatory taxes. As the Internet has become one of the driving forces behind economic growth across the United States, ensuring affordable access for businesses and consumers is crucial. The Internet Tax Freedom Act Coalition, a group including telecom companies, tax watchdog groups and free market think tanks, sent a letter in June to Chairman Goodlatte supporting his work to pass the Permanent Internet Tax Freedom Act:

Dear Representative Goodlatte,

The Internet Tax Freedom Act (ITFA) Coalition, a group of communications and technology companies, business associations and consumer groups, applauds the House Judiciary Committee for taking the first step to avoid new Internet access taxes on millions of Americans across the country with today’s markup.

We greatly appreciate your continued leadership on this issue, and stand ready to work with you and your colleagues to ensure swift passage of a clean bill to make the moratorium on taxes on Internet access and multiple and discriminatory taxation of Internet commerce permanent before the current Internet tax moratorium expires on November 1, 2014. With strong bipartisan support in both chambers of Congress, these bills should be considered for passage without unnecessary delays to protect American consumers from new taxes on their Internet access.

Again, we thank you for your leadership on this issue, and the ITFA Coalition looks forward to working with you to achieve the goal of making the Internet tax moratorium permanent for all Americans.

Sincerely,

The Internet Tax Freedom Act Coalition

Andrew Lundeen of the Tax Foundation noted in an article on PITFA that no real policy purpose exists for a tax on Internet access. “Additionally, there doesn’t seem to be a good reason to tax internet access in the first place. Governments tend to levy taxes on goods or services as a way of correcting for an externality or paying for the costs of a provided service. The internet does not create any evident externalities and may, in fact, have positive externalities associated with it. Additionally, state and local governments don’t seem to be providing any services associated with internet access.”

In a separate letter to the House of Representatives, Americans for Tax Reform takes the argument even further, pointing out that communication taxes are in many instances far worse than other sales taxes.

“Taxation of communications services is punitive and discriminatory. The average sales tax rate on voice services is 17 percent, and 12 percent on video services, while the average general sales tax rate is 7 percent. PITFA would at the very least prevent targeted taxes on Internet access, and disproportionate sales or other taxes on ecommerce.

“Increased costs hinder continued growth in the digital space. As reported by the FCC’s National Broadband Plan, the largest barrier to consumer adoption and expanded use of Internet based services is cost. Allowing higher costs through Internet access taxes, which increase consumer cost and affect the rate of adoption, undermines America’s economic competitiveness.”

While supporters of increased access taxes have argued that the taxes are needed to fund programs to help expand broadband to underserved areas, broadband coverage is already widely available and these programs may be unnecessary. Internet access taxes place an unnecessary burden on consumers in order to do something the market is already handling quite effectively. The current system is a hodgepodge of state and local access taxes competing against states without a tax. Making the Internet access tax moratorium permanent and ending the grandfather clause would help broadband access and development expand while reducing the need for government broadband spending.

Categories: On the Blog

Heartland Daily Podcast: Ilya Shapiro – Article V Amendment

Somewhat Reasonable - October 09, 2014, 2:30 PM

A new movement is spreading to state legislatures across the nation, attempting to do something which has never been done before: amend the United States Constitution from the grassroots up.

Using mechanisms embedded within the Constitution itself, activists are seeking to enact reforms that have been unsuccessfully demanded of elected officials for decades. However, many people are unaware of how this process works — and fewer still are aware that it is even occurring.

Cato Institute Senior Fellow in constitutional studies Ilya Shapiro recently joined Heartland Institute Research Fellow Jesse Hathaway to explain how the Article V amendment process works, and how it might be used to enact sound fiscal policy at the national level.


Categories: On the Blog