Gasland was many Americans’ first exposure to hydraulic fracturing, and the film sparked anti-fracking organizations around the country. These activist groups used the film in efforts to convince people that fracking is responsible for a whole host of environmental problems, including contaminated water supplies, overuse of water, and even earthquakes.
Despite the theatrics employed in the film — the famous flaming faucet, for example, was caused by naturally occurring methane and had nothing to do with fracking — science has proved that fracking poses no greater risk to the environment than traditional oil and natural-gas development. In some respects, fracking is actually better for the environment than conventional drilling, and people with good sense should challenge anti-fracking activists when they say otherwise.
The flaming faucet convinced many people that fracking contaminates groundwater by fracturing the rock that separates water supplies from oil and gas wells. Scientific research, however, has found it is not “physically plausible” for fracking chemicals to migrate upward to drinking water, because there is simply too much rock (thousands of feet of it) protecting the water supplies.
Confirming this, an analysis released this year — an authoritative five-year study conducted by EPA — found no evidence of widespread or systemic impacts on drinking-water resources. Impacts are in fact rare.
In terms of water consumption, the U.S. Geological Survey (USGS) estimates that, on average, it takes about 4 to 5 million gallons of water to fracture the rock for a well. Although this may sound like a lot, it’s less than ten minutes’ worth of water consumption for New York City, and fracking uses far less of the nation’s water than crop irrigation does. In drought-stricken California, irrigation uses approximately 80 percent of the water, whereas fracking consumes 0.00062 percent.
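The comparison can be checked with a few lines of arithmetic. A minimal sketch, assuming New York City's commonly cited consumption of roughly one billion gallons per day (a figure not stated above):

```python
# Sanity check: how many minutes of New York City's water use
# equal the water needed to fracture one well?
# Assumption (not from the article): NYC consumes ~1 billion gallons per day.
NYC_GALLONS_PER_DAY = 1_000_000_000
FRACK_GALLONS_PER_WELL = 5_000_000  # upper end of the USGS 4-5 million estimate

gallons_per_minute = NYC_GALLONS_PER_DAY / (24 * 60)
minutes = FRACK_GALLONS_PER_WELL / gallons_per_minute
print(f"{minutes:.1f} minutes")  # prints 7.2 minutes
```

Under that assumption, one well's water equals roughly seven minutes of the city's use, consistent with the "less than ten minutes" claim.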
Earthquakes have become one of the general public’s largest concerns about fracking. Science should help allay that concern. The USGS reports hydraulic fracturing has been used in more than one million wells since 1947, yet there have been only three instances in which fracking was directly responsible for tremors large enough to be felt at the surface. This has led scientists to conclude hydraulic fracturing is not a mechanism for causing perceptible earthquakes.
But what about Oklahoma’s dramatic increase in earthquakes? Those quakes are caused by the disposal of oil and gas wastewater into underground injection wells, not the process of fracking itself, an important distinction. An average fracked well does produce between 800,000 and 1 million gallons of wastewater that must be disposed of in underground injection wells. However, fracking wastewater accounts for only a small portion (5 to 10 percent) of total wastewater disposal in the state. Most of the wastewater comes from oil production, which uses no hydraulic fracturing.
This isn’t to say hydraulic fracturing has zero environmental impact; in fact, all human activity affects the environment. But the environmental risks of fracking are manageable and vastly outweighed by the economic benefits.
A new kind of business model connecting customers and providers is cutting out inefficient middlemen and reducing costs. Unfortunately, some governments are trying to undercut these new services at the request of the old-economy companies the newcomers are displacing with their greater efficiency.
Called the “peer-to-peer economy” by economists, this new model directly connects consumers with service providers who are owner-operators of their services, not just employees of a big company. By making that connection between users and a much wider range of providers, everyone wins: the consumer benefits from the use of someone else’s capital good, and the provider receives money for the use of property that would otherwise go unused.
For example, someone seeking to travel to a bar that’s too far away to walk needs transportation. The consumer hails a driver through a convenient phone app, such as the ones offered by Uber and Lyft, because the value of being transported to (and especially from) the bar is greater than the cost he or she will be charged. The driver answering on the rideshare service’s smartphone app has some free time and a car that’s not being used at the moment. The transaction concludes, and both parties benefit from the deal.
The benefits of rideshare services aren’t found exclusively on the microeconomic scale. Rideshare services have an empirically proven positive effect on neighborhoods the government-sanctioned taxicab monopoly has neglected.
Studying a full year’s worth of micro-geographic metadata from Uber trips in New York City and surrounding neighborhoods, Manhattan Institute researcher Jared Meyer discovered Uber’s most enthusiastic growth came not from business travelers going from downtown to the airport, but in the city’s poorer and more ethnically diverse neighborhoods.
In other words, Uber’s entry into the city’s for-hire transportation market empowered consumers in underserved markets to go about their business and engage in more economic activity than they might otherwise have done.
Helping people get where they’re going, in a city whose government-sanctioned taxi industry is well-known for red-lining and discrimination, is undeniably a good thing.
Another benefit of the peer-to-peer economy is the resurrection of what economists call “dead capital.” Dead capital is capital owners are unable to leverage and consumers are unable to benefit from because of regulatory barriers or other constraints.
Bedrooms are a great example of dead capital. As Mercatus Center at George Mason University Executive Director Daniel Rothschild points out, there are about 1.5 bedrooms for every man, woman, and child in the country, meaning there are literally more bedrooms than people in the United States.
“This represents a great deal of capital that people own but aren’t leveraging to earn returns,” Rothschild wrote.
Peer-to-peer companies such as Airbnb not only help consumers using the service, but also have a significant effect on all consumers in the market as a whole.
In his study of the effect of Airbnb’s entry into the hospitality market, Boston University assistant professor Georgios Zervas examined monthly hotel revenue data from about 4,000 Texas hotels dating back to 2003. Zervas found Airbnb actually causes hotel rooms to become less expensive, because “a hotel that exerts no response to a supply shock would exhibit a reduction in occupancy, whereas alternatively, a manager could maintain occupancy levels via a price response.” As a result, he writes, reduced prices are “a net benefit for all consumers, whether they use Airbnb or not.”
Empowering consumers, cutting out middlemen and directly connecting buyers with sellers, and reducing consumers’ costs across the board are all good things. Lawmakers should allow people to benefit from the peer-to-peer economy’s rising tide, instead of trying to hold back the wave of the future.
The federal government could make a sizeable dent in the budget deficit — without raising taxes or cutting spending — if it simply did a better job of collecting the taxes it’s already owed.
Tax deadbeats owe the federal government at least $380 billion. The IRS has proved unable to collect that debt. Between 2000 and 2010, the agency’s tax debt inventory increased more than sevenfold.
Rather than let tax cheats get off scot-free, the feds should explore new ways of collecting back taxes. A provision in the Highway Trust Fund bill currently under consideration in Congress would do just that by deputizing private contractors to collect back taxes. Such a move would reduce the federal budget deficit and lower the tax burden on law-abiding Americans.
The IRS doesn’t have the manpower to collect even a fraction of the back taxes it’s owed. According to one former IRS tax lawyer, the “odds [of avoiding prosecution] are overwhelmingly in the crooked taxpayer’s favor.”
When deadbeats cheat with impunity, law-abiding Americans must shoulder an even greater share of the deficit.
Private collection outfits are perfectly positioned to relieve this burden. They already collect $55 billion of public and private debt annually.
Much of that debt is actually owed to the federal government for things such as fines for littering in national parks and parking tickets issued on federal property. Over the past decade, the Treasury and Education departments have used private collection agencies to recover billions in debts owed to the government.
Employing private collectors to recover unpaid taxes would essentially build upon the status quo.
Thirty-eight states rely on private companies to help collect taxes and other debts. In fiscal 2012, an Arizona program that used private collectors to follow through on court orders and criminal restitution brought in $45.5 million, prompting one key official to call the system “very successful.”
In Philadelphia, an officer of the Chief Revenue Collection Office said, “These funds are essential to support important community services, like public safety, a clean environment and quality public schools. Failure to collect all funds owed to the City jeopardizes much needed services and increases the financial burden on compliant taxpayers and residents.”
Private collection agencies could deliver the funds the federal treasury needs. A 2006 pilot program employing private contractors to collect back taxes brought in $98 million before it was abruptly ended in 2009. That money covered the program’s one-time start-up costs — and still delivered the IRS a surplus of $8 million.
The Joint Committee on Taxation projects that private debt collectors would return $2.4 billion in unpaid federal taxes over the next decade — cash that would otherwise stay in the pockets of dishonest taxpayers.
Much of that money would go to the IRS, which has seen its funding cut by $1.2 billion between 2010 and this year. The provision within the Highway Trust Fund bill would direct one out of every four dollars collected to the IRS to hire personnel to ensure fewer citizens get away without paying their fair share.
The House Ways and Means Committee estimates that the IRS could add more than $100 million a year to its enforcement budget by contracting with private debt collectors. That would enable the agency to add as many as 1,000 new employees.
Critics of private debt collectors claim that because they’re paid on commission, they have an incentive to hound otherwise decent Americans who have simply fallen on hard times.
Such assertions are misguided. Federal law limits private collectors to contacting taxpayers who have already acknowledged they owe money, in order to work out voluntary payment of their debts. If a taxpayer claims financial hardship, the program would require private collectors to refer him or her to the IRS immediately.
In fact, during the three-year pilot program, private collection agencies received higher customer-satisfaction marks than did the IRS.
With budget deficits steadily rising, there’s no reason for the federal government to let hundreds of billions of dollars in taxes go uncollected. Teaming up with private collection agencies can help the feds shore up the public finances — and ensure that honest taxpayers aren’t played for suckers.
Jeff Stier is a senior fellow at the National Center for Public Policy Research.
In this episode of the weekly Budget & Tax News podcast, managing editor Jesse Hathaway is joined by Ayn Rand Institute fellow Don Watkins to talk about the American Dream, income inequality, and Selena Gomez… it makes sense when you listen to the podcast, we promise.
Contrary to what some people may say, Watkins says income inequality is not the problem facing American society. Instead, the problem is political inequality: some people receiving more privileges than others. Whether it be bailouts and corporate welfare for the politically well-connected or welfare for those in poverty, unequal treatment before the law and government is the real source of problems in today’s society.
Instead of worrying about how much money people are willing to give producers of value, Watkins says we should be allowed to give our money—voluntarily—to people and businesses bringing value to our lives, and allowed to not contribute to people we don’t want to interact with. In other words, if we want to pay for a ticket to a Selena Gomez concert, we should be allowed to do so because we enjoy her music, not because someone else is forcing us to attend.
On September 22, 2015, Bruno Behrend, The Heartland Institute’s senior fellow for education policy, debated Troy LaRaviere, principal of Blaine Elementary School, and James Thindwa, the community engagement coordinator for the American Federation of Teachers, on the topic of school choice.
LaRaviere’s argument centered on data, in particular data showing year-over-year student achievement growth and how poverty directly relates to educational attainment. He provided charts showing the poverty relationship from The Suburban Chicago Daily Herald and his own data claiming Chicago Public Schools outperform charter schools in Chicago. LaRaviere’s school is located on the North Side of Chicago in the Wrigleyville neighborhood.
Thindwa’s primary argument centered on poverty and how America cannot fix education without fixing poverty. He said government should “not be picking winners and losers” and that school choice was a system of “winners and losers,” with many students being left behind.
Behrend focused on how the education bureaucracy in the United States stifles educational improvement by preventing competition. He also discussed how the current public school system allows students to fall through the cracks with no accountability, just requests for more money for the system. Behrend said the best vote a parent could make was not for mayor or alderman, but with their money: an education savings account would follow their child to the school of their choice, opening a “vast new array of education opportunities.”
The crowd of 20 gathered at The Heartland Institute was heavily skewed in favor of school choice. Questions from the audience concerned poverty intervention and the proper role of government in education. Examples from multiple districts were raised, including exorbitant salaries for teachers and administrators.
One question distilled the essence of school choice in a few words: Who gets to decide which school the student attends? LaRaviere answered, “The research.”
Behrend countered, “The parents.”
John Chubb, “A Critique of ‘The Public School Advantage: Why Public Schools Outperform Private Schools’”
Patrick Wolf, “Comparing Public Schools to Private”
Education Policy: What Matters?
As mentioned during the forum, the Bughouse Square Debate can be watched at https://www.youtube.com/watch?v=vjMGoxRwm9k.
Illinois and Montana will be early adopters of the Common Core-aligned arts standards, known as the National Core Arts Standards. The National Coalition for Core Arts Standards released its arts standards in June 2014, which were revised to align with the Common Core. These standards can be viewed in their entirety at NationalArtsStandards.org.
The new standards now contain five components:
- Dance
- Media Arts (new)
- Music
- Theatre
- Visual Arts
In today’s edition of The Heartland Daily Podcast, Adam Andrzejewski, founder of OpenTheBooks.com, joins Environment & Climate News managing editor H. Sterling Burnett to discuss the new oversight report on the spending of the Environmental Protection Agency.
OpenTheBooks.com is a project of American Transparency. The website’s objective is to increase transparency by recording “every dime” of federal, state, and local spending.
The new report details the shocking amount of money EPA is spending on weapons, ammunition, and body armor. The agency’s budget is larger than those of many states. As Andrzejewski explains, the EPA has more than 1,000 lawyers and dodges transparency and information requests, preventing proper oversight of the agency’s spending. This podcast is a real eye-opener on the EPA’s waste, abuse, and misallocated funds.
World consumption of ozone depleting substances has been reduced to zero over the last three decades, but the ozone hole is as large as ever. Did humans really save the ozone layer?
In 1974, Dr. Mario Molina and Dr. Sherwood Rowland of the University of California published a paper asserting that chlorofluorocarbon (CFC) pollution from industry was destroying the ozone layer in Earth’s stratosphere. CFCs were gases used in hair spray, refrigerators, and insulating foams. The ozone layer is a layer of atmosphere located between six and 25 miles above the Earth’s surface.
Molina and Rowland postulated that human-produced CFCs migrate upward through the atmosphere to the stratosphere, where ultraviolet radiation breaks down CFC molecules, releasing chlorine atoms. Chlorine then acts as a catalyst to break down ozone molecules into oxygen, reducing the ozone concentration. The more CFCs used, the greater the destruction of the ozone layer, according to the theory.
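The catalytic cycle they described is conventionally written as a two-step reaction in which the chlorine atom emerges unchanged, free to destroy more ozone:

```
Cl  + O3 -> ClO + O2    (chlorine strips an oxygen atom from ozone)
ClO + O  -> Cl  + O2    (the chlorine atom is regenerated)
------------------------------------------------------------------
Net: O3 + O -> 2 O2     (one cycle; a single chlorine atom can
                         repeat this many thousands of times)
```

Because the chlorine is not consumed, even small amounts of CFC-derived chlorine were predicted to destroy large quantities of ozone.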
In 1985, three researchers from the British Antarctic Survey announced the discovery of a thinning of the ozone layer over Antarctica, which became known as the ozone hole. Their observations appeared to confirm the theory of Molina and Rowland, who were awarded the Nobel Prize in Chemistry in 1995 for their work.
The ozone layer is known to block ultraviolet rays, shielding the surface of Earth from high-energy radiation. Scientists were concerned that degradation of the ozone layer would increase rates of skin cancer and cataracts and cause immune system problems in humans. Former Vice President Al Gore’s 1992 book claimed that hunters reported finding blind rabbits in Patagonia and that fishermen were catching blind fish due to human destruction of the ozone layer, but this has not been confirmed.
In an effort to save the ozone layer, 29 nations and the European Community signed the Montreal Protocol on Substances that Deplete the Ozone Layer in September of 1987. Over the next decade, the protocol was signed by 197 nations agreeing to ban the use of CFCs. Since 1986, world consumption of ozone depleting substances (ODS) is down more than 99 percent, effectively reaching zero by 2010.
The Montreal Protocol has been hailed as an international success in resolving a major environmental issue. It has been praised as an example to follow for elimination of greenhouse gas emissions in the fight to halt global warming. But despite the elimination of CFCs, the ozone hole remains as large as ever.
The ozone hole is largest each year during September and October, just after the Antarctic winter. NASA recently reported that from Sept. 7 through Oct. 13, 2015, the ozone hole reached a mean area of 25.6 million square kilometers, the largest area since 2006 and the fourth largest since measurements began in 1979.
The hole remains large, despite the fact that world ODS consumption all but disappeared about a decade ago.
Scientists are mixed on when the stubborn ozone hole will disappear. NASA recently announced that the hole will be half-closed by 2020. Others forecast that it will not begin to disappear until 2040 or later. But the longer the hole persists, the greater the likelihood that the ozone layer is dominated by natural factors, not human CFC emissions.
Bernie Sanders tried to divert attention from his bumbling performance in the recent Democratic Party presidential debate by making false and incendiary accusations that Exxon lied about global warming. Sanders claimed on national television that Exxon’s alleged lies likely broke the law and Exxon should be charged by the Obama administration with racketeering. Unfortunately for Sanders’s rush to create a convenient villain, there is no evidence to suggest Exxon lied about global warming. [Full disclosure: Exxon was once a minor donor to The Heartland Institute, for which I serve as a senior fellow. Exxon last donated to The Heartland Institute in 2006.]
The “Exxon lied” storyline began in July when the UK Guardian published an article titled, “Exxon knew of climate change in 1981, email says – but it funded deniers for 27 more years.” The article quoted a former Exxon employee saying Exxon employed a team of scientists to study the potential warming effects of carbon dioxide emissions. Some Exxon scientists believed global warming was a serious concern while others did not.
Ultimately, Exxon’s top management sided with those scientists concluding humans are not creating a global warming crisis. Nevertheless, Bernie Sanders and his left-leaning media allies are attempting to portray as a “cover up” Exxon’s scientific conclusion and subsequent decision to fund scientists and groups who similarly report a non-alarmist global warming narrative.
Exxon’s top management received differing opinions on global warming. No matter which way they decided the issue, they would be agreeing with some of their consulting scientists and disagreeing with others. To say Exxon management covered up or lied about global warming because some of their scientists agreed with them and others disagreed is like saying the United Nations Intergovernmental Panel on Climate Change (IPCC) is covering up and lying about global warming because some of IPCC’s own scientists disagree with the assertion of IPCC’s top brass that humans are causing a global warming crisis.
The Exxon experience appears to be the story of a corporation that decided to engage in due diligence on global warming issues and ultimately determined global warming activists were overstating global warming concerns. Exxon’s conclusion during the 1980s has subsequently been validated by real-world climate observations. For example, IPCC’s first assessment report, published in 1990, predicted 0.3 degrees Celsius of warming per decade, and perhaps as much as 0.5 degrees Celsius per decade. In the 25 years since 1990, however, temperatures have increased by just over 0.1 degrees Celsius per decade, less than half the IPCC prediction. IPCC also predicted 6 centimeters of sea-level rise per decade; in reality, sea level has risen at less than half that pace. IPCC reports predicted more frequent and severe extreme weather events, yet hurricanes, tornadoes, droughts, and nearly all other extreme weather events have become less frequent and less severe as our planet continues its modest warming. IPCC predicted declining agricultural production, yet global crop production continues to set new records virtually every year. In short, Exxon made a judgment call that “skeptics” had a better handle on climate science than alarmists, and Mother Nature has proven Exxon correct.
The American consumer is resistant to marketing aimed at selling them electric and hybrid vehicles. For the first quarter of 2015, according to the Wall Street Journal (WSJ), Chevrolet sold 1,874 Volts—its electric car introduced in 2010 with “high expectations.” That number might not sound so bad, until you read on to discover that it is equivalent to the number of Silverado pick-up trucks sold in one day.
In another report, WSJ states: “Through June, the market share in the U.S. for hybrid electric cars such as the Toyota Prius and C-Max and for electric vehicles such as the Leaf accounted for 2.8% of industry sales. That is down from 3.6% through the same period in 2014. Volumes of those vehicles fell 22% while overall industry volumes rose, according to researcher Edmunds.com.”
“Recent sales data show that consumers don’t want electric cars,” proclaims Investor’s Business Daily. “And these pitiful electric-car sales,” it adds, “mind you, come despite the very generous $7,500 federal tax credit, along with various state incentives—Illinois offers rebates up to $4,000.”
Manufacturers are slashing prices, offering low-priced leases, and 0% financing. Despite the deals, dealers view selling the existing electric vehicle inventory as a “challenge.” But selling a used electric car, like Nissan’s Leaf, is even harder. WSJ reports: “used Leafs aren’t attracting much demand.” Though Nissan offers leaseholders $4,000 in incentives to buy the used model they are driving, drivers are not snapping up the opportunity. When the leases expire there is little market for the cars and dealers are returning them to the manufacturer.
While demand for electric vehicles has dropped, contrary to logic, investment in them hasn’t. Earlier this year, USA Today said: “Automakers have already invested billions to offer a wide spectrum of vehicle choices and improve fuel efficiency with turbocharged engines, batteries and electric motors, multi-gear transmissions, more aerodynamic designs and lighter materials. Companies have also spent heavily to market eco-friendly vehicles and have no plans to stop developing them.”
“Why,” you might ask, “don’t manufacturers focus on building the cars consumers want?” The answer: government regulations in the form of CAFE standards. CAFE stands for Corporate Average Fuel Economy, the fleet-wide measure manufacturers must meet to sell cars in the U.S.
First enacted by Congress in 1975, the program’s original aim was to reduce energy use, thus preventing over-dependence on foreign oil and improving national security. In 2009, under the Obama administration, the program morphed to include a greater focus on tailpipe emissions, with a two-stage implementation process. Phase One demands a 23% improvement in pollution standards and a CAFE target of 34.1 miles per gallon (MPG) by model-year 2016. Phase Two calls for a further increase of roughly 35% in pollution standards, equivalent to 54.5 miles per gallon by 2025.
While the exact calculations are complicated, these standards are not meant to be met by each vehicle, but by the entire fleet produced by each manufacturer. A company that makes small, fuel-efficient cars, such as Honda, easily meets the requirements, while a company like Chrysler, known for its Ram trucks and American muscle cars, faces an uphill climb. In fact, it is the CAFE standards that made the Chrysler/Fiat marriage attractive, as the Fiat fleet includes a 40-MPG car. It is also what makes the Volt a good option for Chevy.
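The core fleet calculation behind this is a sales-weighted harmonic mean of fuel economy, which is why a high-volume gas-guzzler drags a fleet number down sharply. A simplified sketch (all model mixes and sales figures below are hypothetical, and the real program layers footprint targets and credits on top of this):

```python
# CAFE-style fleet averages use a sales-weighted harmonic mean of MPG.
# All sales figures and fleet mixes below are hypothetical illustrations.
def cafe_average(fleet):
    """fleet: list of (sales, mpg) tuples -> harmonic-mean fleet MPG."""
    total_sales = sum(sales for sales, _ in fleet)
    return total_sales / sum(sales / mpg for sales, mpg in fleet)

trucks_heavy = [(500_000, 17), (100_000, 40)]  # mostly trucks and muscle cars
small_cars   = [(100_000, 17), (500_000, 40)]  # mostly efficient small cars

print(round(cafe_average(trucks_heavy), 1))  # 18.8
print(round(cafe_average(small_cars), 1))    # 32.6
```

Identical vehicles, opposite sales mixes: the truck-heavy fleet lands far below a 34.1-MPG target while the small-car fleet nearly meets it, which is why a manufacturer's product mix, not any single model, determines compliance.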
Manufacturers who don’t comply with the regulations face fines—or they can buy credits. Either way the costs ultimately get passed on to the consumer who dares to purchase a vehicle based on his or her personal preference rather than the fuel-efficient vehicles the government wants automakers to produce.
These government regulations manipulate the markets, creating winners and losers that would not exist in a true free market.
Interesting stories emerge.
One is Ferrari, which, by the nature of its cars, can’t meet the U.S. government regulations. As one report on the topic declared: “Ferraris are beautiful. They are fast. They are nimble. And they are thirsty.” The hybrid LaFerrari gets 14 MPG.
Most readers are not likely to buy one of the 499 LaFerrari cars built, but its story is illustrative of the market manipulation.
Since 1969 Ferrari has been part of the Fiat family, but that will soon change as Ferrari is being spun off to make it an independent automaker. While the sale is reportedly being done “to finance expansion plans,” it will remove the gas-guzzler from the Fiat Chrysler fleet—making meeting CAFE easier. Yet, earlier this year, CEO Sergio Marchionne said: “the U.S. auto industry should ask the U.S. government to push back fuel economy targets.”
While an independent Ferrari will have challenges meeting CAFE without Fiat to help create an acceptable average, another single-focus manufacturer meets the requirements handily — so well, in fact, that it has credits to sell. I am talking about Tesla, the car company the Environmental Protection Agency smiles upon because it produces only electric cars.
Most U.S. car companies—like Fiat Chrysler—want the federal fuel economy mandates to be watered down. Tesla wants the targets to be tougher.
Companies—like Ferrari—that don’t meet the fleet standards can purchase compliance credits. CNN Money reported: “Since Tesla sells nothing but electric cars, it is rolling in the credits and is one of the few sellers.” The Los Angeles Times (LAT) says: “Since 2008, the company has earned more than $534 million from the sale of environmental credits.” It adds: “Tesla has created a brisk market in credits, selling to automakers that either don’t produce electric cars or have made a strategic decision to buy credits and cap their own sales of such vehicles.”
But it is not just Ferrari that will have trouble meeting the 2025 standard. According to LAT, Mitch Bainwol, chief executive of the Alliance of Automobile Manufacturers—which represents companies like General Motors, Ford, Toyota, Fiat Chrysler, and others—said: “While consumers have more choices than ever in energy-efficient automobiles, if they don’t buy them in large volumes, we fall short.”
With the American car-buying public resistant to doing what the government wants them to do, Tesla is receiving a huge windfall from its competitors, while the standards drive up costs for consumers — all for a car that few can afford and that many of those who can don’t like. (On October 20, Consumer Reports pulled its “recommendation” of the Tesla Model S after owners complained about a “range of issues.”)
Addressing the 54.5-MPG for the 2025 model year, Marchionne said: “There is not a single carmaker that cannot make the 54 number. The question is, at what price?”
The author of Energy Freedom, Marita Noon serves as the executive director for Energy Makes America Great Inc. and the companion educational organization, the Citizens’ Alliance for Responsible Energy (CARE). She hosts a weekly radio program: America’s Voice for Energy—which expands on the content of her weekly column. Follow her @EnergyRabbit.
The First Amendment reads (in part): “Congress shall make no law…abridging the freedom of speech….” “Abridge” is legally defined as: “…(T)he making of a declaration or count shorter, by taking or severing away some of the substance from it.” The Founders prohibited the government not just from silencing speech – but from doing anything at all that would in any way reduce it.
Not at all surprisingly, two government speech regulations ended up doing exactly what the Constitution was written to prevent the government from doing. Behold the Federal Communications Commission (FCC)-imposed Fairness Doctrine and Equal Time provision.
These inanities were inflicted on anyone who had a broadcast license on spectrum given to them by the government. (That’s over-the-air television and radio.) These regs left most such recipients regretting mightily not having just paid for the airwaves – so as to have gotten (at least a little) out from under government’s massive thumb.
These two First Amendment assaults were imposed in the name of government-defined-and-mandated speech “equality.” As usual, government fundamentally misrepresents its charter. It is required to ensure equality of opportunity – not empowered to impose equality of outcome. Anyone can say anything they want – that’s Constitutional opportunity. Anyone else receiving a government-mandated equal chance on the same platform – is unConstitutional free speech abridgment. How so? Here goes….
The Fairness Doctrine “was a policy…introduced in 1949 that required the holders of broadcast licenses to both present controversial issues of public importance and to do so in a manner that was – in the Commission’s view – honest, equitable, and balanced.”
How’d that work in practice? So you’re a radio station owner. Your business model and imperative is to program your station – strategically planning each broadcast minute so as to maximize your audience. You of course want the most compelling people on the air.
How can you do this if every yahoo on the planet can – at any time at all – demand access to your airwaves? Yahoos who may not be…the most intriguing, articulate of individuals. And the rampant schedule volatility would render your business totally dysfunctional. You couldn’t program your station even five minutes in advance – as you would be suffering an insufferable parade of Fairness Doctrine-imposed yahoos lining up to access your microphone.
So in the name of equalized speech, the Fairness Doctrine resulted in – zero speech. Station owners would simply not inform their audiences about anything that might invoke the absurdity. How do we know the Fairness Doctrine did this? Because the Ronald Reagan Administration’s FCC voted to stop enforcing it, and months later Rush Limbaugh began his nationally syndicated radio rise – and the talk radio revolution. The results were so obvious – the Barack Obama Administration’s FCC officially removed the language that imposed the Doctrine. Not because they too thought it was awful policy – but because the politics of it were so against them.
Is this Administration finished with abridging free speech? Of course not.
The head of the Federal Communications Commission promised Thursday to enforce his agency’s regulations requiring television stations to give political candidates equal opportunities for airtime.
“The rules are pretty clear. Rules are rules,” FCC Chairman Tom Wheeler told reporters Thursday. “I hope that we have developed a reputation as folks who enforce the rules.”
Actually, you’ve developed a reputation for making up rules without any Congressional authority whatsoever – and then unilaterally imposing them. But we digress. Behold Equal Time:
…U.S. radio and television broadcast stations must provide an equivalent opportunity to any opposing political candidates who request it. This means, for example, that if a station gives one free minute to a candidate in prime time, it must do the same for another candidate who requests it.
The equal-time rule was created because the FCC thought the stations could easily manipulate the outcome of elections by presenting just one point of view, and excluding other candidates.
You think? The near-uniformly-Leftist broadcast news media bias is cataclysmic – which easily manipulates the outcome of LOTS of elections. But Equal Time doesn’t apply to news programs – and the FCC has interpreted “news programs” to include (near-uniformly-Leftist) late night shows like NBC’s Tonight Show.
Which brings us back to this latest imposition of Equal Time:
Hillary Clinton’s appearance earlier this month on Saturday Night Live could trigger the so-called “equal-time” rules, as could Donald Trump’s plan to host the long-running NBC comedy show next month.
Clinton’s appearance elicited no response from the FCC. The moment Trump’s prospective appearance was announced – FCC Chairman Wheeler was suddenly activated. Well, someone should ask the Chairman how or why Saturday Night Live is regulated – while the Tonight Show is not.
That doesn’t necessarily mean Lincoln Chafee will be the next host of SNL—but it could mean that local NBC affiliates across the country will have to give presidential candidates access to equal TV time.
So in the name of equalized speech, Equal Time will also result in – zero speech. These local NBC affiliates face the same programming challenges radio broadcasters do. Like radio broadcasters, they don’t want their schedules constantly shredded by random, incessant, government-mandated (candidate) on-air appearances. So these affiliates will ultimately demand that their parent networks stop giving airtime to ANY candidates. Thus – zero speech.
FCC Chairman Wheeler finally rid us of a Fairness Doctrine that was incredibly damaging to free speech and the marketplace. He should do the same to an Equal Time provision doing the exact same damage. Don’t enforce it – eradicate it.
Oh – and it is now abundantly clear that the FCC long ago ran out of lawful, helpful things to do. So when it is not enforcing ridiculous laws that shouldn’t exist, it is illegally making up regulations – so as to then enforce them. Idle bureaucrat hands…
Congress should put the FCC out of all of our misery – and eradicate it.
In today’s edition of The Heartland Daily Podcast, Lennie Jarratt, project manager for education at the Heartland Institute, joins host Donald Kendal to discuss, among other topics, the difference between school choice and education choice.
Discussions of school reform often turn up several potential solutions, including education savings accounts (ESAs), vouchers, income tax credits, and parent trigger laws. Linked here is a great resource that details some of these programs. Jarratt helps explain how these solutions fit into the bigger picture of creating a school system not dominated by the government.
Jarratt and Kendal also discuss Common Core and how it originated. Jarratt describes why conservatives are skeptical of these national standards and elaborates on how they have the potential to become really dangerous.
The Cornwall Alliance for the Stewardship of Creation has long been a voice of sanity, sound science, sound economics and humanitarian concern for the poor, amongst myriad supposedly evangelical Christian groups who, like lemmings, have joined the climate alarmist march to socialism and the energy and human poverty it breeds.
Cornwall’s leadership within the Christian community on climate and other environmental issues has been unparalleled, from its 2000 “Cornwall Declaration on Environmental Issues,” to its 2006 “A Call to Truth, Prudence, and Protection of the Poor: An Evangelical Response to Global Warming,” to its 2015 “Open Letter to Pope Francis on Climate Change.” Cornwall has consistently reached across the partisan political divide, and even beyond the bounds of the Christian community to people of other faiths, to stress humanity’s special place in the universe as designated by God and humanity’s responsibility for managing nature. It has likewise stressed the need for markets and modern energy systems using fossil fuels to bring millions out of poverty, while sparing the environment the ravages wrought by people living hand to mouth – people unconcerned about the long-term future of the environment in which they live because they are driven by the exigencies of day-to-day survival.
Cornwall has now launched a powerful new tool in its battle against misguided climate alarmism: Greener on the Other Side, a series of videos examining the scientific, economic, and humanitarian implications of government’s embrace of climate alarmism.
These short videos (all less than three minutes in length) range from discussions of whether religious and scientific worldviews are incompatible, to a clearer understanding of the importance of carbon dioxide and its role in climate, to a discussion of how current and proposed climate policies hurt the poor. There are usually two or three videos on each topic, so the ones linked above are just a taste of the topics explored.
This powerful series of videos, which I recommend, will be rolled into a forthcoming documentary. More on the latter to come.
According to a study published in September in an obscure journal, schools are exposing kids to potentially dangerous levels of toxic chemicals from food packaging because of “schools’ efforts to streamline food preparation and meet federal nutrition standards while keeping costs low.”
The study, published in the Journal of Exposure Science and Environmental Epidemiology, was performed by Jennifer Hartle, a Stanford University postdoctoral research fellow, and her colleagues from the Johns Hopkins University Bloomberg School of Public Health. The study design was simply to visit school kitchens and interview food service staff. Not surprisingly, they found that kids were eating fruits and vegetables that had been packaged in cans and plastic, and that small amounts of the chemical bisphenol A (BPA) migrated from the packaging into the food.
Then the investigators “modeled” the exposures: “Exposure scenarios were based on United States school nutrition guidelines and included meals with varying levels of exposure potential from canned and packaged food.”
So what’s the worry? The researchers claim that “even small amounts” of the chemical BPA in food packaging can be harmful by causing “hormone disruption.” However, the FDA (which regulates such substances as “indirect food additives”) and food safety agencies around the world have thoroughly investigated the issue—and come to starkly different conclusions.
The agency’s webpage (most recently updated in June) dedicated to the issue states, “FDA acknowledges the interest that many consumers have in the safe use of Bisphenol A (BPA) in food packaging. FDA has performed extensive research and reviewed hundreds of studies about BPA’s safety. We reassure consumers that current approved uses of BPA in food containers and packaging are safe.”
Did Hartle’s article allege that schools expose kids to levels that exceed the already extremely conservative federal standards? No.
Did it show that the exposure is sufficiently high to have an effect? Again, no.
But sloppy reporting from gullible writers and bloggers might cause one to believe otherwise. Consider this from the Baltimore Sun: “students could be getting anywhere from a negligible amount of BPA up to 1.19 micrograms per kilogram of body weight.” But even in this modeled estimate of the most-exposed child, the higher exposure is still “negligible.” According to the EPA, 50 micrograms per kilo of body weight is a safe intake level. Thus, a more accurate statement would be, “students could be getting anywhere from a barely detectable amount of BPA up to a still-negligible 1.19 micrograms per kilogram of body weight – less than one-fortieth of the amount felt to be safe.”
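The arithmetic behind that dose comparison is easy to check. A minimal sketch, using only the two figures quoted above (the 1.19 µg/kg modeled maximum and EPA’s 50 µg/kg safe intake level):

```python
# Figures as quoted in the text, in micrograms of BPA
# per kilogram of body weight per day
EPA_SAFE_INTAKE = 50.0   # EPA's safe intake level
MODELED_MAXIMUM = 1.19   # highest modeled school-meal exposure cited

# What share of the safe dose does the most-exposed child receive?
fraction_of_safe_dose = MODELED_MAXIMUM / EPA_SAFE_INTAKE
multiple_below_limit = EPA_SAFE_INTAKE / MODELED_MAXIMUM

print(f"{fraction_of_safe_dose:.1%} of the safe intake level")   # 2.4%
print(f"roughly 1/{multiple_below_limit:.0f} of the safe dose")  # roughly 1/42
```

Even the worst-case modeled exposure, in other words, sits more than forty-fold below the level the EPA deems safe.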
The concept that dose is critical to whether toxicity is observed is not rocket science. Large amounts of nutmeg or licorice are notoriously toxic, but the amounts ordinarily consumed are perfectly safe.
So if kids aren’t exposed to levels of BPA that exceed federal (or even hyper-cautionary European) standards, what’s the problem? In a Stanford press release, lead author Jennifer Hartle concedes, “While most students would not consume the maximum amount, those who do would take in more than half of the dose shown to be toxic in animal studies in just one meal.” She – and certainly her more senior coauthors – should know that it is tenuous to extrapolate laboratory animal studies to dose limits in humans when the animal studies are inconsistent with what we know about how effectively humans metabolize BPA in the real world.
But it seems that to Hartle, scientific rigor isn’t really the issue, as evidenced by her rhetorical questions: “If this is an avoidable exposure, do we need to risk it? If we can easily cut it out, why wouldn’t we?” That view is a manifestation of the so-called “precautionary principle,” which implies that we should strictly regulate or ban any product, process or activity until it is shown to be absolutely safe. Well, it’s fine to advocate that we look before we leap. The trouble is that the Hartles of the world want us never to leap.
The problem with the precautionary principle is that it fails to consider the risks of excessive regulation or bans. For example, although exposures to many chemicals are theoretically “avoidable,” such avoidance comes with trade-offs. We could avoid allergic reactions to penicillin by banning it, and eliminate the carnage of high-speed car crashes by setting the speed limit at 30 miles per hour, but. . . you get the point.
BPA used as a coating in canned food prevents botulism and other bacteria-caused illnesses. BPA’s protection of canned goods allows schools to provide more fruits and vegetables, by safely preserving them in cans all year round, and at low cost. A sound scientific approach would call for an estimate of the comparative risks to schoolchildren, with and without the availability of BPA. But none of this seems to matter to Hartle and her collaborators, who are, in effect, looking through the wrong end of the telescope.
That may be why Hartle (as quoted in the press release) wishes us not to think about the scientific or economic tradeoffs or the dictates of sound epidemiology but instead informs us that, “The bottom line is more fresh fruits and vegetables…There is a movement for more fresh veggies to be included in school meals, and I think this paper supports that.” Sounds as though that was the agenda.
The study does teach us an important lesson: Academics and scientific journals are susceptible to platitudinous advocacy-driven science that sheds more heat than light on an issue. And newspapers and (especially) blogs report it uncritically. They all need to do better.
Jeff Stier is a senior fellow and the director of the Risk Analysis Division at the National Center for Public Policy Research. Henry I. Miller, a physician and molecular biologist, is the Robert Wesson Fellow in Scientific Philosophy and Public Policy at Stanford University’s Hoover Institution; he was the founding director of the FDA’s Office of Biotechnology.
What is profoundly troubling is the abject illegitimacy of their premise for more regulation of cable: the FCC’s new arbitrary and capricious definition of broadband, which illegitimately redefined long-recognized, strong broadband competition out of existence with the stroke of a pen.
So what are the signals of more cable regulation? Two speeches from the FCC Chairman, one from the FCC General Counsel, another from the DOJ Antitrust Chief, a variety of Hill and edge-industry entreaties to regulate cable more via new MVPD or ALLVID regulatory proceedings (but of course without regulating favored edge providers), and an explosion of new opposition to the proposed Charter-Time Warner merger (by the exact same cast of characters whose opposition doomed the Comcast-Time Warner merger).
This broad, simultaneous level of focused regulatory chatter and organized activity is not coincidental; it is highly orchestrated and abjectly illegitimate.
Why is more cable regulation abjectly illegitimate?
The FCC’s whole regulatory, legal and PR case for more cable regulation and blocking of cable mergers is built entirely upon an obviously arbitrary and capricious redefinition of broadband that has an implicit political purpose of enabling a PR demonization narrative that cable broadband is a “bottleneck” and an implied monopoly in three quarters of the nation.
Some background is warranted. Near the end of the FCC’s review of the Comcast-Time Warner merger, and just four weeks before the FCC voted on its Open Internet Order, the FCC radically and arbitrarily changed the main tent-pole standard against which both decisions would be measured, without any effective opportunity for those affected to challenge the arbitrariness and capriciousness of this new “standard” in court.
The FCC transformed its main tent-pole standard, the definition of broadband, by an astounding 525 percent, raising it from 4 Mbps to 25 Mbps downstream (when the national average was only 10.5 Mbps). In doing so, the FCC presented scant factual or merit-based reasons for this redefinition, only a policy and PR rationale that 25 Mbps should be the new “table stakes” for broadband competition.
Simply, the FCC did not base this competition standard objectively on what the average consumer uses, needs, or is willing to pay for, only what the FCC subjectively believes consumers should have in the FCC’s vision of a perfect broadband world.
Tellingly, under that subjective and aspirational justification, the FCC could just as well have raised its broadband competition standard by 25,000 percent, to one gigabit downstream and upstream, because that was Google Fiber’s new competitive “table stakes” in about half a percent of America, and that is what the FCC is subjectively pushing municipal government-owned networks to build, in contravention of state laws, in order to supplant privately owned broadband networks.
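For readers who want to verify the percentage figures in the two preceding paragraphs, the arithmetic is a straightforward percentage-increase calculation, sketched here with the numbers as given in the text:

```python
def pct_increase(old, new):
    """Percentage increase going from old to new."""
    return (new - old) / old * 100.0

# FCC redefinition of broadband: 4 Mbps -> 25 Mbps downstream
print(pct_increase(4, 25))     # 525.0 -- the "astounding 525 percent"

# Hypothetical gigabit standard: 4 Mbps -> 1,000 Mbps
print(pct_increase(4, 1000))   # 24900.0 -- roughly the 25,000 percent figure
```

Note that a 525 percent increase means the new threshold is 6.25 times the old one, not 5.25 times; the increase is measured relative to the original 4 Mbps baseline.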
It is also telling that both the FCC and DOJ like referring to this new broadband competition standard as “table stakes,” which is well-known as a subjective gambling concept, not an objective legal standard defensible in court.
Using the ever-evolving and subjective “table stakes” concept as a competition standard is a lot like the arbitrariness and capriciousness one can suffer driving through a remote small town that abruptly changes the speed limit and operates kangaroo-court justice: the town is confident it is effectively cop, judge, and jury all in one, and the unwitting traveler is effectively entrapped in its “jurisdiction” with no practical opportunity to appeal.
As for the DOJ, it would never want to defend such a subjective, arbitrary, and capricious competition/antitrust standard in federal court, because it couldn’t withstand the rule of reason, let alone the laugh test. Any respectable appellate judge would laugh it out of the courtroom for trying to defend such an outrageously unfair and unjust “standard” in a federal court of law.
In addition, this end-result-driven standard was obviously set so that the FCC could exclude from the broadband market four national wireless broadband providers – Verizon, AT&T, T-Mobile, and Sprint – each of which offered 5-10+ Mbps of broadband service to consumers, and thus deny that wireless broadband is a competitor to wireline cable broadband.
The FCC also apparently rigged this competition definition process to deny any effective due process opportunity to legally challenge it in a court of law.
The FCC’s Broadband Competition Standard is also Ridiculous and Absurd
The FCC knows full well that the nearly three-quarters of Americans who use a smartphone can functionally do most everything on a mobile smartphone, tablet, or laptop that they can do on wireline broadband.
It is ridiculous and absurd that the FCC ignored the fact that a couple of hundred million Americans routinely use only wireless broadband for all their Internet needs when they are physically away from their home or work desktop or TV.
It is ridiculous and absurd for the FCC to rule that what over two hundred million Americans use every day in streaming video, shopping, doing work or homework, on a tablet, laptop, or smartphone — is somehow not broadband!
It is ridiculous and absurd that the FCC ruled wireless broadband could not compete with wireline broadband when for over two decades the FCC has recognized that wireless DBS (DirecTV and Dish) has been the major competitor to wireline cable, taking roughly two-fifths of the cable market. And for over a decade the FCC’s own competition reports have routinely tracked and recorded that wireless is a video and broadband competitor to cable.
It is ridiculous and absurd for the FCC to effectively argue that wireless broadband does not compete with wireline broadband when: 1) Comcast and other cable companies have exercised their contractual rights to resell Verizon’s wireless broadband service to compete with telco and wireless broadband competitors; 2) AT&T-DirecTV is offering wirelessly most everything that a wireline broadband provider offers; and 3) Verizon, Amazon, Google, Microsoft, Hulu, Netflix, etc. are increasingly offering their own cable-like, video streaming offerings wirelessly.
The FCC’s policy stance here, that wireless broadband doesn’t compete with wireline broadband, is ridiculous, absurd and utterly indefensible on the facts and merits — if it ever could get before a court of law.
In sum, more FCC cable regulation would be abjectly illegitimate because it depends entirely on the FCC’s arbitrary and capricious 25 Mbps broadband competition standard, a standard designed with ends-justify-the-means thinking to exclude wireless broadband competition from the market and to manufacture a rigged monopoly-broadband PR frame justifying utility and monopoly regulation of broadband and cable.
Finally, the FCC adds insult to the injury of this ridiculous and absurd travesty of justice by perpetrating a capricious and fraudulent bait-and-switch.
For over a decade the FCC baited cable and other broadband providers to prove that the broadband market was competitive by investing hundreds of billions of dollars in much faster broadband networks. When cable and other broadband providers did exactly what the FCC wanted, the FCC then arbitrarily switched the rules of the game and punished them for their FCC compliance and good deeds.
How has the FCC so lost its way?
Scott Cleland served as Deputy U.S. Coordinator for International Communications & Information Policy in the George H. W. Bush Administration. He is President of Precursor LLC, a research consultancy for Fortune 500 companies, and Chairman of NetCompetition, a pro-competition e-forum supported by broadband interests.
What’s really scary about October is that it’s nearly time for another round in the seemingly endless series of annual climate-treaty conferences. This year’s conference, the 21st, will take place in Paris just one month after All Hallows’ Eve and All Saints’ Day.
The climate treaty under negotiation is like a vampire from a bad old horror film. Every time you think it’s dead, it rises from the grave. This vampire is not sucking blood, but money and resources from taxpayers and needy people around the world. It’s time to put a stake through its heart and cut off the head of this climate-treaty monstrosity once and for all.
Congressional Republicans are working to do just that. Politico reports that Neil Chatterjee, a top aide to Senate Majority Leader Mitch McConnell, has been making the rounds at foreign embassies alerting the countries’ diplomats that Republicans intend to fight President Barack Obama’s climate agenda until the end of his term and beyond.
In meetings with ambassadors representing both developed and developing countries, Chatterjee has reiterated McConnell’s message: “Proceed with caution before entering into a binding, unattainable deal” with Obama. McConnell has pointed out that two-thirds of the U.S. federal government — Congress and the Supreme Court — have not signed off on Obama’s plans.
Without Senate ratification, any climate agreement coming out of Paris, just like Obama’s executive orders and climate regulations, can be undone by his successors. Republicans have already made it clear that the Senate will not ratify any agreement Obama makes requiring either steep, economy-killing, greenhouse-gas emission reductions or climate payoffs to developing countries.
A stake will be driven through the heart of the Paris treaty negotiations if Obama shows up with empty pockets, unable to make good on his promise to fund the U.S. portion of the Green Climate Fund (GCF). The GCF was an idea developed by the United States to provide $100 billion a year by 2020 to help poor countries make the transition to clean-energy technologies and adapt to climate change, and to compensate them for climate harms allegedly imposed on them by developed countries’ use of fossil fuels.
Developing nations have indicated that they will not sign on to any climate deal in Paris until the fund’s coffers are filled. Reuters reported that Ronny Jumeau, U.N. ambassador from the Seychelles, a member of the Association of Small Island States negotiating bloc, issued this warning: “Obama cannot come to Paris and not put money on the table. He’s got to put his money where his mouth is.”
Congressional Republicans, however, have vowed to deny Obama’s spending request for a $500 million down payment on the initial $3 billion he promised for the GCF. The GOP will probably make good on its vow, as Obama and Congress are already battling over budget priorities and funding the government for the coming fiscal year. In addition, the U.S. House passed an appropriations bill this summer that directly prohibits the United States from funding GCF. As a result, it’s highly unlikely that Obama will have any money to give away by the time treaty negotiations end in Paris.
Without that money in hand, the climate deal is bound to unravel. French president François Hollande said in September, “If there’s not a firm commitment to financing, there will be no accord, because the countries of the [global] South will reject it.”
Republicans are also working to stymie Obama’s domestic climate efforts and thus, by extension, undermine his negotiating authority in Paris. Just hours after Pope Francis challenged Congress to take “courageous actions” to tackle climate change, House Republicans took up a bill to block the government from measuring the carbon dioxide emissions from construction projects.
In addition, it appears that Republicans are considering using the Congressional Review Act to overturn Obama’s recently finalized Clean Power Plan. This would undercut Obama’s commitment to cut emissions of greenhouse gas by 28 percent below 2005 levels by 2025.
Republicans are also considering a resolution expressing their opposition to any climate deal coming out of Paris. This action would be similar to the 95–0 vote the Senate took in 1997 to repudiate the Kyoto climate agreement the Clinton administration was negotiating at the time.
Obama would almost certainly veto the Clean Power Plan bill and ignore the GOP resolution as nonbinding, but the congressional opposition to Obama’s plans would further undermine his weak negotiating position by demonstrating that there’s virtually no chance Obama can deliver on any promised emission reductions or funding commitments he makes in Paris.
If what emerges from the Paris conference is an empty shell of a climate treaty, there’s little chance that any future international treaty will manage to restrict U.S. fossil fuel use and redistribute wealth from the poor in rich countries to the wealthy in poor countries. Unfortunately, like the late-night reruns of B-movie horror films that return each October, climate-treaty negotiations will probably be shocked back to life eventually, this time in 2016, bringing all manner of doom and chaos in their train.
In today’s edition of The Heartland Daily Podcast, Dr. Gil Ross, medical doctor and senior director of medicine and public health, joins Research Fellow Isaac Orr to discuss some of the latest alarmist claims against the process of hydraulic fracturing.
Does fracking cause premature births? That’s what the newspaper headlines are claiming, but Dr. Ross and Isaac Orr take a closer look at a study conducted by Johns Hopkins University and published in Epidemiology, exposing its fatal flaws.
Dr. Ross explains why the methodology used to conduct the study is faulty, rendering its conclusions meaningless. Additionally, Dr. Ross and Orr discuss ways in which listeners can detect other faulty studies in the future.
If you don’t visit Somewhat Reasonable and the Heartlander digital magazine every day, you’re missing out on some of the best news and commentary on liberty and free markets you can find. But worry not, freedom lovers! The Heartland Weekly Email is here for you every Friday with a highlight show. Subscribe to the email today, and read this week’s edition below.
LeftExposed.org Profile of the Week: Natural Resources Defense Council
LeftExposed.org is a new Heartland Institute project devoted to creating accurate profiles of prominent individuals and organizations on the political Left with a special focus on groups in the global warming (a.k.a. “climate change”) debate. Project Manager Emily Zanotti and principal researcher Ron Arnold have written a devastating exposé of the Natural Resources Defense Council, a New York City-based environmental power and activist group. Zanotti and Arnold document the organization’s founding, funding, and latest scandals. READ MORE
Reforming Civil Asset Forfeiture Laws in Oklahoma
Matthew Glans, Heartland Research & Commentary
Oklahoma has terrible laws on civil forfeiture – a controversial legal process through which law enforcement agencies take personal assets from individuals or groups merely suspected of a crime or illegal activity. The law essentially considers citizens guilty until proven innocent, forcing people to prove they were not aware their property was being used illegally. A proposal by state Sen. Loveless now seeks to reform this twisted system in Oklahoma. READ MORE
Congress Ready to Drive a Stake through the Climate Vampire’s Heart
H. Sterling Burnett, National Review
Starting in a few weeks, all eyes will be on the climate conference in Paris, the last big push by climate alarmists for something more than President Barack Obama’s executive orders. Obama is pushing for an international agreement committing the United States to reducing its greenhouse gas emissions and transferring money to developing nations. Congressional Republicans, however, are letting world leaders know any promises made in Paris will face an unfriendly Congress and Supreme Court back home. READ MORE
Featured Podcast: Howard Baetjer: Government Regulation vs. Market Regulation
Towson University economics lecturer Howard Baetjer joins Budget & Tax News Managing Editor Jesse Hathaway to discuss the power of the free market to regulate the prices and quality of goods and services. When most hear the word “regulation,” they think of bureaucrats and red tape. But as Baetjer explains, regulation can refer to the voluntary actions of people cooperating to solve problems. Regulation does not require the restrictive hand of government. READ MORE
Heartland’s New Event Space Is Open for Business!
The Heartland Institute’s beautiful new event space is open, and we have several great events already lined up. Heartland is dedicated to bringing you the best content the liberty movement has to offer with debates, lectures, book talks, and luncheons. Upcoming events include a book signing with Peter Ferrara, author of Power to the People, and a panel on women in politics. Register for an upcoming event today! And if you require space for your own liberty-centered event, let us know! We can host groups up to 77 people. READ MORE
New Ozone Rule Is Unnecessary and Costly
H. Sterling Burnett, The Hill
On October 1, the Obama administration imposed a new limit of 70 parts per billion on ambient ozone levels that states and counties must meet by 2037. EPA estimates the new rule will be among the most expensive in history, costing more than $1.4 billion per year. It all seems unnecessary, given government’s own data show air quality has improved and will continue to do so without adoption of new regulations, dropping current levels of risk to near zero. READ MORE
Florida Legal Battle Reveals Dangers of ‘Certificate of Need’ Programs
Justin Haskins, Consumer Power Report
Few government policies are as destructive or as poorly understood as certificate of need laws requiring medical facilities to get permission from current businesses, their future competitors, in order to purchase new equipment, offer medical services, or expand or build a medical facility. In Florida, an ongoing legal battle between hospitals has helped illustrate why these onerous government mandates should be rejected in favor of reasonable, free-market reforms that encourage competition and reject corporate favoritism. READ MORE
Liberal Arts Can Survive in Vocational Schools
Joy Pullmann, School Choice Weekly
The Pioneer Institute has a new paper discussing how to promote vocational education without sacrificing the liberal arts. While some conservatives and Republicans look at liberal arts as either a waste of classroom time or part of a larger political indoctrination scheme, these courses do have an important role to play. A true liberal education cultivates good judgment, which is, according to America’s founders, crucial for a self-governing society. This new paper suggests liberal arts can exist in the more market-driven vocational training system. READ MORE
Bonus Podcast: Jessica Sena: Those Fracking Feds Pt. 1
Jessica Sena, communications director at the Montana Petroleum Association, joins Research Fellow Isaac Orr to discuss the continuing federal government intrusion affecting hydraulic fracturing. In part one of this two-part podcast, Sena gives an energy insider’s view on all the latest developments in fracking, discusses new federal energy regulations, and takes up the Endangered Species Act. LISTEN TO MORE
Lemon Grove, California Snuffs Out E-Cigarettes
Gabrielle Cintorino, The Heartlander
Beginning October 1, using e-cigarettes in privately owned businesses, such as bars and restaurants, or public areas, such as the city’s dog park, carries up to $500 in fines and penalties. Dr. Gilbert Ross, senior director of medicine and public health at the American Council on Science and Health, said, “Given the complete lack of evidence that e-cigs, their vapor, or anything about them poses a health risk, this measure is merely an attempt by big government to suppress an adult activity for no valid reason whatsoever.” READ MORE
Why We No Longer Trust the Government’s Food Guidelines
Jeff Stier and Julie Kelly, Pundicity
Over the past 30 years, the Dietary Guidelines for Americans have become as bloated as the nation’s collective waistline, serving up a thick brew of revolving-door nutrition advice, confusing messages, and perhaps even politically influenced eating recommendations. From 1985 to 2015, the guidelines have grown from 19 to 571 pages of politically biased findings, reaching into topics including labor concerns and tax policy. READ MORE
This piece assembles the evidence that Google’s benign PR explanation and stock-enhancement justification for its Alphabet holding-company restructuring may be the truth, but apparently is not the whole truth and nothing but the truth about the structural antitrust and privacy risks ahead that it clearly foresees but is not disclosing.
What we have learned in the last two months is that Google is much more worried than it says about the risks it faces from a variety of real structural changes it may have to make, in the months and years ahead, in its core business overseas, where the vast majority of Google’s users are and from where over 50% of its revenues come.
Google apparently foresees a coming brand winter of antitrust/privacy structural enforcement actions hanging over Google around the world. In preparation, Google has stockpiled some new investor goodwill by shrewdly mega-goosing its stock price in its July 2Q15 earnings by respecting key investor concerns for the first time, and by breaking out the financials of Google’s big “moonshot” investments to try to garner some of the high-valuation pixie dust that hot innovation startups attract in Silicon Valley.
Think about this: If a global brand faced no real reputation or growth risks going forward, what company would rationally consider risking the world’s #2 brand (worth $120 billion) by confusing and distracting users, investors, and the media with a new superseding “Alphabet” brand that will never be a verb? What CEO or brand expert would choose this enormous “Alphabet-Google” brand-name dissonance over “Google” alone, unless they were sure they needed to get ahead of, and mitigate, a big upcoming, uncontrollably bad global risk to the Google brand?
Let’s consider what the new evidence tells us.
About two weeks after announcing Alphabet, Google submitted its legal defense against the EU’s search-bias antitrust charges, in which it said the EU’s conclusions, that Google was dominant and abusing its dominance, “are wrong as a matter of fact, law, and economics.” Effectively, Google’s antitrust legal argument is that Google is not an “essential facility” and thus cannot be unbundled. This concede-nothing stance is indicative of a defendant that knows it faces a difficult, protracted legal battle and expects no EU interest in a fourth round of settlement negotiations with Google.
Google must have learned privately months ago what the EU’s antitrust chief, Margrethe Vestager, signaled publicly at an American antitrust conference October 2: “What I can say is this: The more structural the remedy, the better.” Other types of concessions “generally present more risks… can be particularly difficult to monitor [and] are also in place only for a defined period of time,” and the preference for structural remedies applies “across all sectors,” per WSJ reporting.
Since negotiated settlements are for “a defined period of time,” this indicates the EU is on a path to some sort of structural/unbundling remedy involving permanent EU supervision. Remember also that last November the European Parliament passed a resolution, 384-174, calling “on the Commission to consider proposals aimed at unbundling search engines from other commercial services…”
In April, the EU also opened a formal investigation into whether Google-Android has “illegally hindered the development and market access of rival applications and services by tying or bundling certain Google applications and services distributed on Android devices.” If Google is found guilty of abusing its dominance here as well, the remedy would likely mandate that Google unbundle its apps from Android.
On privacy, last week the EU’s High Court invalidated the fifteen-year-old U.S.-EU Data Safe Harbor, and France, taking the lead for the EU, rejected Google’s implementation of Europe’s High Court ruling requiring Google to honor a European Right to Be Forgotten. In addition to those major privacy actions, top EU member countries are also seeking to require that Google offer an opt-out from the consolidation of users’ private data across services, and potentially to require storage of European private data on European soil, under clear European legal jurisdiction.
The EU is far from the only source of international structural antitrust/privacy risks for Alphabet-Google.
In Russia, last month antitrust authorities found Google guilty of abusing the dominance of its licensed Android operating system by tying/bundling search and other apps with Android, and this month the same authorities ordered Google to unbundle its apps from Android by removing the contractual tying arrangements by next month. In June, Russia mimicked the EU and proposed a strict Russian Right to Be Forgotten. In addition, this October the Moscow City Court banned Google from using bots to read Russian users’ personal emails, which is effectively illegal wiretapping.
In China, we learned last month that China’s Internet czar is requiring Google and other top American tech companies to provide “secure and controllable” backdoors/encryption key access to their systems and even potentially hand over source code – if they want to provide products and services in China.
In sum, Alphabet-Google knows it faces a difficult gauntlet of antitrust investigations into search and Android, in addition to many investigations into privacy violations that could pressure Google to localize data in different countries.
All of this presents ongoing risks to the Google and Android brands with a large percentage of Google and Android users overseas, over a long period of time. Being branded by antitrust authorities as dominant and abusing its dominance to harm consumers and innovation could threaten to devalue the Google global brand, explaining in part why Google has been demoted to just an Alphabet company, albeit its largest.
As I have written for a couple of years, the Snowden NSA revelations marked the peak of America’s dominance of the Internet. Since then the rest of the world has been actively de-Americanizing the Internet with the unwitting help of U.S. policymakers. For Alphabet, the company formerly known as Google, this means much of the rest of the world will be increasingly sovereign-izing Alphabet-Google, requiring it to respect their varying sovereign rules of law.
In a nutshell, Alphabet-Google apparently foresees that its formerly maximally efficient, unencumbered infrastructure and operating model is going to become less efficient and more encumbered due to its serial and systemic abuses of dominance and of data protection.
If the Google Era was largely about the world submitting to Google’s dictates, the Alphabet Era apparently may become more about Alphabet-Google increasingly submitting to sovereign nations’ rule of law, at least outside the U.S.
Scott Cleland served as Deputy U.S. Coordinator for International Communications & Information Policy in the George H. W. Bush Administration. He is President of Precursor LLC, an emergent enterprise risk consultancy for Fortune 500 companies, some of which are Google competitors, and Chairman of NetCompetition, a pro-competition e-forum supported by broadband interests. He is also author of “Search & Destroy: Why You Can’t Trust Google Inc.” Cleland has testified before both the Senate and House antitrust subcommittees on Google and also before the relevant House oversight subcommittee on Google’s privacy problems.
Despite the UN’s push through its Agenda 2030, approved by member nations this past August; President Obama’s declaration that global warming is the most pressing issue this country faces; and the brainwashing of schoolchildren through the Common Core science curriculum to believe man is responsible for global warming, there are still global warming skeptics who have not been intimidated into remaining silent about what is now a contentious worldwide issue.
One such individual is Steve Goreham, executive director of the Climate Science Coalition of America, a speaker and author on environmental issues, a former engineer and business executive, and a father of three. He is a frequent invited guest on radio and television, as well as a freelance writer and a policy advisor to The Heartland Institute. Goreham has authored two books on climate change: “Climatism! Science, Common Sense, and the 21st Century’s Hottest Topic,” published in 2010, and “The Mad, Mad, Mad World of Climatism,” published in 2012. He is now working on his third book.
Steve Goreham was the featured speaker at an event sponsored by the North Shore Tea Party, held at the Northfield Community Building on Tuesday, October 13, 2015. His topic: “Energy and Climate Change: The Rest of the Story.” As Goreham related by way of introduction, in light of all the manure produced by the horse-drawn carriages (taxis) that crowded cities such as New York City, automobiles were considered the anti-pollutant when they replaced those carriages.
Goreham then described the 20th century as a “Golden Age for Mankind,” when income increased sevenfold from the preceding century, along with an increase in life expectancy. Why? Both can be attributed to the low cost of energy that ushered in the “Golden Age.”
Global Warming Theory
The I = P × A × T equation was developed in the 1970s during a debate among Barry Commoner, Paul R. Ehrlich, and John Holdren. Commoner argued that environmental impacts in the United States were caused primarily by changes in its production technology following World War II, while Ehrlich and Holdren argued all three factors were important and emphasized in particular the role of human population growth. Such was the thinking behind the I = P × A × T hypothesis: that environmental Impact increases with Population, Affluence, and Technology.
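The multiplicative structure of the I = P A T identity is easy to sketch in code. The numbers below are purely hypothetical, chosen only to show how the three factors trade off against one another:

```python
# A minimal sketch of the I = P x A x T identity.
# All input figures are hypothetical, for illustration only.

def impact(population: float, affluence: float, technology: float) -> float:
    """Environmental impact as the product of population, affluence
    (consumption per person), and technology (impact per unit of
    consumption)."""
    return population * affluence * technology

# Hypothetical baseline: 300 million people, 2 consumption units per
# person, 0.5 impact units per consumption unit.
baseline = impact(300e6, 2.0, 0.5)

# Commoner's emphasis: hold P and A fixed and double the technology factor.
tech_doubled = impact(300e6, 2.0, 1.0)

# Ehrlich and Holdren's emphasis: doubling population has exactly the
# same effect on impact as doubling the technology factor.
pop_doubled = impact(600e6, 2.0, 0.5)

print(baseline, tech_doubled, pop_doubled)
```

Because the identity is a pure product, no single factor is privileged mathematically; the debate was over which factor changed most in practice.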
The “Environmental Kuznets Curve” proves otherwise: As wealth increases in a nation, its people will address the pollution issue. The air and water in this nation are much cleaner than they were three decades ago.
The theory of man-made global warming includes these four components:
- The greenhouse effect
- Rising atmospheric CO2 levels
- Rising global temperatures
- Computer model projections of future temperatures
Obsession over Carbon Dioxide
Proponents of man-made global warming obsess over the level of CO2, a greenhouse gas, in the atmosphere.
First of all, carbon dioxide is not a pollutant. CO2 is an odorless, harmless, invisible gas. We inhale air with little CO2 but breathe out 4% CO2. Furthermore, greenhouse gases account for only 1% to 2% of Earth’s atmosphere. Nature’s most abundant greenhouse gas is water vapor. Water vapor and clouds account for 75% of the greenhouse effect; natural sources (ocean activity, volcanoes, plants, and animals) are responsible for 24%; and only 1% of greenhouse gases are man-made, making mankind responsible for about 1 part in 100!
Despite the push (and the billions spent) for renewable energy to replace carbon-based sources such as coal, gas, and oil, the reality is that coal, gas, and oil still dominate as power sources. Here is U.S. energy consumption by fuel source in 2014: Nuclear supplied 8.5% and hydro 2.5%. Other renewable energy sources, lumped together, supplied a mere 7.3% of energy needs, broken down as follows: biomass (including ethanol), 4.8%; wind, 1.8%; geothermal, 0.2%; and solar, 0.4%.
Climate Madness in Europe and California
Through his slide presentation, Goreham spoke about climate madness in Denmark, where there are 5,000 wind towers with an average combined output of 1.3 GW (one gigawatt is one billion watts, or 1,000 megawatts). All 5,000 wind towers in Denmark could be replaced by one conventional power plant!
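A quick back-of-the-envelope check, using only the figures quoted above, shows how little each tower contributes on average:

```python
# Back-of-the-envelope arithmetic on the Denmark wind figures above.
towers = 5_000
avg_combined_output_w = 1.3e9  # 1.3 GW average output across all towers

# Average contribution per tower, converted from watts to kilowatts.
avg_per_tower_kw = avg_combined_output_w / towers / 1_000
print(avg_per_tower_kw)  # 260.0 kW average per tower
```

That 1.3 GW combined average is roughly the output of a single large conventional plant, which is the comparison the presentation draws.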
In Germany there is solar and wind power mania despite limited sunlight in parts of Germany. German Chancellor Angela Merkel made a massive commitment to ‘renewable’ energy and has gone further down the ‘renewables’ path than any country in the world. Now it’s paying the price.
A situation in the UK could qualify for an entry in “Believe It or Not”: The Drax Power Station in the UK is being powered by wood from the USA! Because wood burning is not classified as polluting, it is allowed in England. Half of the Drax Power Station, which has an output of 3,960 megawatts, was converted to burn wood chips at a cost of $1.1 billion, and the U.S. now ships 7.5 million metric tons of wood per year to the plant. Even though the plant receives a subsidy of about $1 billion, the electricity Drax Power Station produces costs double what it did before the conversion from coal to wood.
This truism holds in Europe, and it will hold for this nation if the solar and wind craze succeeds:
The more a country depends on wind or solar power for its energy, the more a country will pay for power.
Europe can no longer afford its green revolution. Accordingly, subsidies to invest in and develop green energy sources are being cut.
It is not surprising that the state of California is promoting the same renewable energy sources Europe now perceives as unworkable and expensive. The California Valley Solar Ranch covers an area of 1,000 acres, 100 times the area of a gas-fired plant. Its average output of 57 MW amounts to 10% of what a gas-fired plant would produce. Although the California Valley Solar Ranch received $1.4 billion in federal and state subsidies, its price per kilowatt-hour is 15 to 18 cents, three times the current California wholesale rate.
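The quoted figures imply a couple of numbers the passage leaves unstated. Working them out from the text (note these back-derived values are inferences from the quoted ratios, not figures from the presentation):

```python
# Figures implied by the California Valley Solar Ranch numbers above.
solar_avg_output_mw = 57
share_of_gas_plant = 0.10  # "10% of what a gas-fired plant would produce"

# Implied output of the comparison gas-fired plant.
implied_gas_plant_mw = solar_avg_output_mw / share_of_gas_plant
print(implied_gas_plant_mw)  # 570.0 MW

# If 15-18 cents/kWh is three times the wholesale rate, the implied
# California wholesale rate is 5-6 cents/kWh.
implied_wholesale_low = 0.15 / 3
implied_wholesale_high = 0.18 / 3
print(round(implied_wholesale_low, 2), round(implied_wholesale_high, 2))
```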
The Role Played by History
History tells us many things, one of them being that weather and climate are not the same thing. During the Medieval Warm Period (900–1300 AD), the Vikings settled in Greenland. The settlement grew to 5,000 people by 1300 but died out by 1408 as the climate cooled. The Little Ice Age (1300–1850 AD) brought shorter growing seasons and famine in Europe, and the Thames froze over at London, where fairs were held on the ice.
In the 20th century, the threat of an approaching mini ice age was widely reported in the 1970s. Less than thirty years later, the American people were told 2000–2009 represented the warmest years on record. How could this be so when there has been no global warming for the last 18 years, since 1997? To be questioned are the 102 climate models and their faulty projections. Consider also that government record-keeping of temperatures dates back only 100 years.
Do 97% of Scientists Believe in Global Warming?
In 2007, the following Global Warming Petition was signed by 31,478 American scientists, including 9,029 with PhDs:
“…There is no convincing scientific evidence that human release of carbon dioxide, methane, or other greenhouse gases is causing or will, in the foreseeable future, cause catastrophic heating of the Earth’s atmosphere and disruption of the Earth’s climate. Moreover, there is substantial scientific evidence that increases in atmospheric carbon dioxide produce many beneficial effects upon the natural plant and animal environments of the Earth.”
How, then, did the figure of 97% of scientists agreeing with man-made global warming originate, a figure meant to imply the science is settled and can no longer be questioned?
It came from an American Geophysical Union (AGU) climate survey sent to the most complete list of earth scientists its authors could find around the world, as listed in the 2007 edition of the American Geological Institute’s Directory of Geoscience Departments. Two questions were asked:
1. When compared with pre-1800 levels, do you think that mean global temperatures have generally risen, fallen, or remained relatively constant?
2. Do you think human activity is a significant contributing factor in changing mean global temperatures?
Of the 10,257 surveys sent, 3,146 were returned. Of those returned, 77 responses were determined to have come from climate scientists. Of those 77 climate scientists, 75 had answered “Yes” to question two, so it was concluded 97% of scientists accept man-made global warming.
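The arithmetic behind the headline number is worth reproducing, because it shows how small the underlying sample was:

```python
# Reproducing the arithmetic behind the widely cited 97% figure,
# using the survey numbers quoted above.
surveys_sent = 10_257
surveys_returned = 3_146
climate_scientists = 77
answered_yes_to_q2 = 75

# Response rate for the survey as a whole.
response_rate_pct = 100 * surveys_returned / surveys_sent
print(round(response_rate_pct, 1))  # 30.7

# The headline figure: 75 of 77 climate scientists.
consensus_pct = 100 * answered_yes_to_q2 / climate_scientists
print(round(consensus_pct, 1))  # 97.4

# The same 75 "Yes" answers as a share of all returned surveys.
share_of_returned_pct = 100 * answered_yes_to_q2 / surveys_returned
print(round(share_of_returned_pct, 1))  # 2.4
```

So the 97% rests on 75 of 77 selected responses, about 2.4% of all surveys returned.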
A newer 2013 study found only 36% of geoscientists and engineers believe in AGW.
Al Gore’s War
Al Gore is well known for his war against hydrocarbons, organic compounds of hydrogen and carbon atoms found in crude oil and natural gas. In his prize-winning 2006 documentary, “An Inconvenient Truth,” Gore sought to make global warming a recognized problem worldwide.
Even before Al Gore’s 2006 war on hydrocarbons, there was Amory Lovins, who has been writing misguided books since the 1970s. Lovins’ most recent book with the Rocky Mountain Institute, “Reinventing Fire: Bold Business Solutions for the New Energy Era,” was published in October 2011. Notably, its foreword is by John W. Rowe, at the time chairman and CEO of Exelon Corp. President Bill Clinton described the book as “a wise, detailed, and comprehensive blueprint.”
It would be foolish to go the way of Europe when it has become apparent to European leaders that renewable energy sources are responsible for soaring energy prices. Yet this nation is being told renewable energy sources represent the future. How is this so? It is a fact that CO2 is not a pollutant, even though EPA declared it to be one under the Clean Air Act in 2009. Then too, mankind is responsible for only about 1 part in 100 of the greenhouse effect in Earth’s atmosphere. Yet Illinois and other states have mandated unrealistic and unattainable goals for how much energy is to be produced from renewable sources in the not-so-distant future. Illinois’ renewable mandate of 25% by 2025 is not on track. Here are the individual state Renewable Portfolio Standards and Goals, updated October 2015.
It is low-cost energy that drives our economy. Without low-cost energy (and there is the miracle of shale), not only would electricity rates soar, but the price of all goods would rise, and blackouts could result from insufficient energy supply. In other words, the American people would experience a reduced standard of living.
Crucial is the UN Paris climate conference to be held from November 30 to December 11. The governments of more than 190 nations will gather in Paris to discuss a possible new global agreement on climate change, aimed at reducing global greenhouse gas emissions and thus avoiding the threat of dangerous climate change.
Educate yourself and challenge the consensus over what many consider a hoax by the elites to gain power and control over the masses.