
Fred Dicker Live: Hydrofracturing

 

James writes:

This morning I talked with Fred Dicker, dean of the Albany press corps, on AM 1300 about hydrofracturing in Pennsylvania and New York and speculated on the reasons for downstate opposition to it. I am at work on a story about hydrofracturing for an upcoming issue of City Journal.

Be sure to tune in also for some nice words about The New Criterion and Hilton Kramer. The broadcast is below.


James Panero touring a drill rig in Susquehanna, Pennsylvania, the first step in natural gas extraction by means of hydrofracturing. Similar reserves exist twenty miles north across the New York state line, but Albany has so far blocked such gas development.


The Culture of the Copy

THE NEW CRITERION
January 2013

The Culture of the Copy
by James Panero

On the printing press, the Internet & the impact of duplication.

 

We now live in the early part of an age for which the meaning of print culture is becoming as alien as the meaning of manuscript culture was to the eighteenth century. “We are the primitives of a new culture,” said Boccioni the sculptor in 1911. Far from wishing to belittle the Gutenberg mechanical culture, it seems to me that we must now work very hard to retain its achieved values.
—Marshall McLuhan, The Gutenberg Galaxy, 1962

Technological revolutions are far less obvious than political revolutions to the generations that live through them. This is true even as new tools, for better and worse, shift human history more than new regimes do. Innovations offer silent coups. We rarely appreciate the changes they bring until they are brought. Whether or not we become the primitives of a new culture, as the Futurist Umberto Boccioni observed, most of us still live behind the times and are content to do so. We expect the machines of the present to fulfill the needs of the past even as they deliver us into a future of unknowns.

World-changing inventions almost always create new roles rather than fill old ones. “It’s a great invention, but who would ever want to use one?” was the classic response to the telephone, variously attributed to Ulysses S. Grant or Rutherford B. Hayes but probably said by neither of them. Life-altering technologies often start as minor curiosities and evolve into major necessities with little reflection on how they reform our perceptions or even how they came to be.

In the eighteenth century, Edmund Burke could see the significance of the French Revolution while observing its developments in real time. Yet “in the sixteenth century men had no clue to the nature and effects of the printed word,” writes Marshall McLuhan in The Gutenberg Galaxy, his 1962 book on the printing revolution and the dawning of the electronic age. It wasn’t until nearly 200 years on that Francis Bacon located the printing press alongside gunpowder and the compass as changing “the whole face and state of things throughout the world.” Writing in his 1620 book Novum Organum (“New Instrument”), Bacon maintained that “no empire, no sect, no star seems to have exerted greater power and influence in human affairs than these mechanical discoveries.” In the nineteenth century, Victor Hugo called the invention of printing the “greatest event in history” and the “mother of revolution.” Political revolution began in this technological upheaval.

An argument can be made, and so I will make it here, that the invention of the Internet is the under-recognized revolution of our time. The world-changing technology of the Internet, of course, is already apparent and barely needs retelling. The Internet is more significant than the telephone, the television, the transistor, or the personal computer because it subsumes all these prior inventions into a new accumulation that is greater than the sum of its parts. As the network of networks—the “inter-network”—the Internet is a revolution of revolutions.

Yet while we appreciate the Internet’s technological wonders, the cultural landscape it leads to is less explored. We acknowledge the Internet’s effect on information but are less mindful of its influence on us. Even as we use its resources, most of us have no understanding of its mechanics or any notion of the ideas, powers, and people that led to its creation.

One way to situate the Internet is to see it as inaugurating the next stage of copy culture—the way we duplicate, spread, and store information—and to compare it to the print era we are leaving behind. New technologies in their early development often mimic the obsolete systems they are replacing, and the Internet has been no different. Terms like “ebook” and “online publishing” offer up approximations of print technology while revealing little of the new technology’s intrinsic nature.

Just as the written word changed the spoken word and the printed word changed the written word, so too will the digital word change the printed word, supplementing but not replacing the earlier forms of information technology. Speaking and writing both survived the print revolution, and print will survive the Internet revolution. The difference is that the Internet, with its ability to duplicate and transmit information to an infinite number of destinations, will increasingly influence the culture of the copy.

“What the world is today, good and bad, it owes to Gutenberg,” wrote Mark Twain. “Everything can be traced to this source, but we are bound to bring him homage . . . for the bad that his colossal invention has brought about is overshadowed a thousand times by the good with which mankind has been favored.”

The Gutenberg revolution occurred around 1440 in near obscurity. The life of Johannes Gutenberg, the German metalsmith from Mainz, is largely unknown. The exact nature of the invention that he first unveiled in Strasbourg remains a source of debate. Even as the technology of book printing spread through Germany and Italy, Gutenberg died a financial failure. His recognition as the inventor of typography only came at the start of the sixteenth century, over three decades after his death.

Gutenberg did not invent every component that gave rise to the printed page. His innovation, as commonly understood, was to put existing technologies together in a press that used oil-based ink and movable type to stamp Roman letters arranged in rows onto a page. Gutenberg’s expertise in metalwork helped him develop a metal alloy for the letter punches that could withstand the pressures of the printing process. He also devised a simple hand mold for casting fresh type. This not only led to the rise of a book’s standardized font but also enabled the reproduction of the printing machine itself.

The rapid development of print culture in Europe occurred across two trajectories at once. Each printing press could produce hundreds and soon thousands of pages a day, just as the printing machines themselves could be duplicated. In the 1450s, the greatest early demonstration of the new technology was the production of the Gutenberg Bible. Copies of Gutenberg’s rare original Bibles are today considered among not only our most valuable printed books but also the most beautiful. Thirty years after this initial run—a start-up operation that landed Gutenberg in court with his disgruntled investors—there were 110 printing presses in operation across Europe, with fifty in Venice alone. By 1500, European presses had already produced over twenty million books. A century after that, the number was 150 to 200 million copies. The printing press made bestsellers out of writers in their own lifetimes. Erasmus sold a remarkable 750,000 copies. Luther distributed 300,000 printed tracts.

The rise of print culture had countless benefits, but it also overturned many of the achievements of the manuscript culture it replaced. The great proliferation of printed books meant that scholars no longer had to seek out rare copies of written material, but literature also no longer enjoyed the protection of a scholarly class and a culture of scholasticism went into decline. As the sixteenth century saw a renewed interest in ancient writing, due to the wide reproduction of classical works, Latin also lost ground as the lingua franca of high culture. An increasingly literate middle-class public, unfamiliar with Latin, sought out books in their own vernaculars, which helped give rise to new national identities. As reading became a silent activity to be accomplished alone, the printed book challenged the oral tradition. Likewise grammar and syntax were regularized to illuminate sense rather than stress.

The printed page made literature individual. Before the printing press, the medieval student would not have regarded the “contents of the books he read as an expression of another man’s personality and opinion,” writes E. P. Goldschmidt. “He looked upon them as part of that great and total body of knowledge, the scientia de omni scibili, which had once been the property of the ancient sages.” The manuscript era belonged to the scribe—the one writing out the manuscripts. The print era belonged to the author, because a book could now be set just as the author intended. The printed book, in fact, distinguished finished and completed work from drafts and papers in a way that exclusively written technology could not. Printed matter powered ideas with new range and fixity.

The development of the Internet was a more collaborative process than the invention of the printing press, but the two events share many similarities, including an initial disregard for the figures who made them possible. Leading up to his 2000 Presidential run, Al Gore said in an interview that he “took the initiative in creating the Internet,” based on his sponsorship of legislation that removed commercial restrictions on Internet traffic. This statement was widely recast into a claim that Gore had “invented” the Internet. What this absurdity revealed was that, even if a politician had not invented the Internet, almost no one knew who did. This fact remains true even as the Internet continues to expand through the products of widely celebrated but ultimately less significant industrialists and developers.

Starting in 1960, the American computer scientist J. C. R. Licklider was among the first to speculate on the potential for close “man-computer symbiosis” and the possibility of an “intergalactic” network to foster “on-line man-computer communication.” Marshall McLuhan likewise observed the dawning of a new electronic age that is “not mechanical but organic” and would be navigated by “surf-boarding” from one point to the next.

At the same time, the U.S. Department of Defense was identifying a far more practical need for networked computer communication. In the event of a nuclear strike, traditional circuit-based communication systems that required fixed lines would be rendered inoperable. The Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense therefore set about developing the technology for a networked information infrastructure with multiple communications pathways, one robust enough for “survivable communications” to be maintained even if portions of the network were destroyed. The computer network that resulted was ARPANET, the progenitor and first component of the Internet, switched on in 1969.

The key development that made networked routing possible was what became known as packet switching. The notion of sending information in distinct bursts that could be routed and rerouted through a networked system was conceived independently in the early 1960s by Paul Baran at the RAND Corporation and Donald Davies at the National Physical Laboratory in England. Packet switching was soon implemented by Lawrence Roberts, the project manager at ARPA’s Information Processing Techniques Office, through mathematical models developed by Leonard Kleinrock, first at MIT and later at UCLA.
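To make the idea concrete, here is a toy sketch in Python of the packet model described above: a message is divided into numbered packets, the packets may arrive in any order after traveling separate routes, and the receiver restores the original order from the sequence numbers. This is only a schematic illustration of the concept, not Baran’s or Davies’s actual design; the packet size and message are arbitrary.

```python
# Toy illustration of packet switching: split a message into numbered
# packets, let the "network" deliver them in arbitrary order, and
# reassemble them at the destination by sequence number.
import random

PACKET_SIZE = 8  # bytes of payload per packet (arbitrary for the demo)

def packetize(message: bytes) -> list[tuple[int, bytes]]:
    """Split a message into (sequence_number, payload) packets."""
    return [
        (seq, message[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(message), PACKET_SIZE))
    ]

def network(packets):
    """Simulate independent routing: packets arrive out of order."""
    routed = packets[:]
    random.shuffle(routed)
    return routed

def reassemble(packets) -> bytes:
    """Restore the original byte order using the sequence numbers."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Packets may take separate paths through the network."
assert reassemble(network(packetize(message))) == message
```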

Few of the Internet’s founders, mostly academic computer scientists, have become rich or even known for their early inventions. In 2011, the death of Apple co-founder Steve Jobs received worldwide recognition, but few noted the passing of Baran, a father of the Internet whose contribution to history will ultimately be more consequential than the development of the iPhone.

The history of the Internet is not a “story of a few heroic inventors,” writes Janet Abbate in her book Inventing the Internet. “It is a tale of collaboration and conflict among a remarkable variety of players.” Yet if any one invention could be considered the Internet’s Gutenberg moment, it was the development of the Transmission Control Protocol and Internet Protocol, together known as TCP/IP.

In the early 1970s, Vint Cerf and Bob Kahn came together at ARPA to solve the problem of inter-network communication. The question was how to create packets of information that could be sent not only over the single ARPANET network but also from network to network, without regard to where they came from, where they were going, or what they passed through. “Some were larger, some went faster, some had packets that got lost, some didn’t,” says Cerf of this variable landscape. “So the question is how can you make all the computers on each of those various networks think they are part of one common network despite all these variations and diversity.”

TCP/IP was their answer, a dual protocol in which IP deals with addressing and forwarding and TCP contends with flow control and error correction. Together TCP/IP became the backbone for the inter-networked communication upon which today’s Internet expands and communicates. More than a mere technical innovation, TCP/IP, like the printing press, is where several technologies came together in one revolutionary development. “It made possible Wi-Fi, Ethernet, LANs, the World Wide Web, e-mail, FTP, 3G/4G,” in the words of Wired magazine, “as well as all of the inventions built upon those inventions.”
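That division of labor is visible in any modern networking API. Below is a minimal sketch in Python using the standard socket module: resolving a host name yields the IP-layer address, and requesting a SOCK_STREAM socket asks the operating system’s TCP implementation to supply ordering, flow control, and retransmission over the packets that IP forwards. The host name is purely illustrative.

```python
# A minimal sketch of the TCP/IP division of labor, using Python's
# standard socket module. "example.com" is an illustrative host.
import socket

host = "example.com"

# The IP layer deals in addresses: resolve the name to an IPv4 address.
ip_address = socket.gethostbyname(host)
print(f"{host} resolves to {ip_address}")

# SOCK_STREAM requests TCP: the operating system's TCP stack supplies
# flow control, ordering, and retransmission (the duties the essay
# assigns to TCP), while IP routes the individual packets.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(5)
    sock.connect((ip_address, 80))
    print("TCP connection established to", sock.getpeername())
```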

This technology allowed the Internet to become infinitely modular and adaptable. Because no one entity controlled its pathways, TCP/IP left the Internet open and provided the world’s greatest conveyance of unregulated information. “The design of the internet architecture is captured in the TCP and IP protocols,” says Cerf, who is now the “Chief Internet Evangelist” for Google. “It confers equality on all interlocutors on the network (a supercomputer is treated as equal to a laptop from the protocol point of view).”

In 2005, Cerf and Kahn received the Presidential Medal of Freedom for the invention of TCP/IP. “The Internet is proving to be one of the most powerful amplifiers of speech ever invented,” Cerf has written about the technology he shaped. “It offers a global megaphone for voices that might otherwise be heard only feebly, if at all. It invites and facilitates multiple points of view and dialog in ways unimplementable by the traditional, one-way, mass media.”

The story of the Cold War began with the nuclear bomb and ended with the Internet. Both were military developments. Yet unlike the unfulfilled promise of peacetime nuclear energy, the Internet has quickly evolved from a tactical weapon to a strategic instrument of world-wide importance. By promoting the spread of democratic ideas across unregulated networks, the Internet is proving to be an even more effective weapon against totalitarianism than nuclear deterrence. This cultural potential is the reason “we must dedicate ourselves to keeping the network unrestricted, unfettered and unregulated,” argues Cerf, who has campaigned against giving the keys to the Internet over to foreign powers. “We must have the freedom to speak and the freedom to hear.”

The peacetime dividends of the Internet pay out as each new invention and each new network tie into it. In 1990, Tim Berners-Lee invented the World Wide Web, the public face of the Internet, and made the first successful communication between a server and a client running his research team’s new Hypertext Transfer Protocol (HTTP). Social media programs like Facebook and Twitter, search algorithms like Google, weblog interfaces like Blogger, e-commerce sites like PayPal, Voice over Internet Protocol (VoIP) services like Skype, and video-streaming sites like YouTube all emerged thanks to HTTP, which in turn operates through TCP/IP.
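That layering can be shown directly: an HTTP request is nothing more than structured text written onto a TCP/IP connection. A brief sketch follows, again with an illustrative host; a real client would use an HTTP library rather than a hand-written request.

```python
# A hand-written HTTP/1.1 request sent over a plain TCP socket,
# showing that the Web's protocol rides on a TCP/IP connection.
import socket

request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(request.encode("ascii"))  # HTTP text enters the TCP stream
    response = b""
    while chunk := sock.recv(4096):        # TCP delivers the bytes in order
        response += chunk

# Print just the status line, e.g. "HTTP/1.1 200 OK"
print(response.split(b"\r\n", 1)[0].decode("ascii"))
```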

With each new stage of copy culture, the ease of duplication is countered by the increasingly complex technology required to produce and use the copies it creates. Just as Twain wrote that the bad of the printing press was “overshadowed a thousand times by the good,” the Internet age presents its own problems even as it solves countless others.

Through inks, pens, archival writing surfaces, and the required literacy of both the writer and the reader, manuscript culture replaced the simplicity of oral culture. Print culture turned the reproduction of the word into an even more specialized field, yet the information in printed books could still be accessed by any literate person with nothing more than the light to read by.

Not so for digital information. While the Internet has leveled the relationship between producer and consumer—publication is no more difficult than acquisition—both tasks now employ a host of technologies to support them. Access to Internet-based information requires personal computer interfaces, routers, digital storage devices, broadband connections, and electricity. If any one of these technologies fails, the Internet becomes useless. An old book can be as readable as the day it was printed, but digital media from a mere decade ago can become unusable, with unreadable formats and corrupted data.

To be sure, the Internet has given us access to literature as never before. To lament the decline of printed text as a more rarefied medium in a digital age mimics the complaints of those Renaissance elites who favored manuscripts and turned their noses up at middle-class print culture. As digital interfaces improve, it becomes harder to argue that print is an altogether preferable medium to Internet-based text. Printed books are unclean technologies. They are heavy and flammable. The paperback edition was certainly a great invention of early twentieth-century publishing. Yet is the tiny text of a paperback, sandwiched between flimsy covers, preferable to a book downloaded for free from an initiative like Project Gutenberg, the oldest digital library, founded by Michael Hart in 1971 to collect out-of-copyright books, and read on a digital tablet or mobile phone?

The rise of the Internet will no more destroy literature than did the invention of the printing press. On the contrary, the Internet’s new copy culture will almost certainly increase literacy and the spread of democratic ideals, furthering the legacy of the printing press. Currently 2.4 billion people worldwide use the Internet across an estimated 2 to 3 billion connected devices. Seventy-eight percent of the U.S. population is now connected, which is not surprising considering the U.S. origins of the Internet, but half a billion people in mainland China are connected as well, albeit with government interference (known as “The Great Firewall of China”). Just as printed books gave rise to new libraries, and new libraries became new universities, the Internet also has the power to transform education and end-run corrupt institutions by delivering the library, the newspaper, and the classroom to any corner of the world.

Yet a great challenge still exists in the way the Internet records and stores information. A published book is a fixed and polished record of a moment in time. The Internet always operates in the present. Aside from web portals like the “Wayback Machine,” which can provide historical snapshots of webpages, the Internet has no past. With “time stamps” and footnoted “corrections,” web culture has attempted to import the rules of fixed publication, but the Internet still treats all information the same. Any information on the Internet can be updated, changed, or erased at any moment. On the plus side, the mutable quality of Internet-based information has permitted the rise of great user-maintained databases such as Wikipedia. In this way the Internet mimics scribal culture more than print culture: New readers add new insights, and the information the Internet contains is forever evolving.
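The Wayback Machine mentioned above does expose its historical snapshots programmatically. As a hedged sketch: the Internet Archive offers a public “availability” endpoint that returns the archived capture of a page closest to a requested date; the endpoint and JSON fields below follow the Archive’s published documentation and should be checked against it.

```python
# Query the Wayback Machine's availability API for the snapshot of a
# page closest to a given date (endpoint and JSON layout as documented
# by the Internet Archive; network access required).
import json
import urllib.request

query = (
    "https://archive.org/wayback/available"
    "?url=nytimes.com&timestamp=20030101"
)
with urllib.request.urlopen(query, timeout=10) as response:
    data = json.load(response)

closest = data.get("archived_snapshots", {}).get("closest", {})
print(closest.get("timestamp"), closest.get("url"))
```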

On the downside, Internet-based information is infinitely more fugitive than printed matter. In order to eliminate the information in a book, each copy must be rounded up and destroyed. For Internet-based information to go down, only the data hosts need be eliminated. Unlike letters sent in the mail, emails are often poorly archived, challenging our ability to preserve important correspondence. As more and more data enters what is known as the Internet cloud and no longer sits on personal storage devices, a centralized loss could be catastrophic.

Such concerns are among the reasons why we should not rejoice in the demise of print culture even as we embrace the Internet’s possibilities. Online resources like Google Art Project reveal the Internet’s great cultural potential by giving us access to visual artifacts as never before. Google’s high-resolution scan of The Harvesters by Pieter Bruegel reveals nuances in the painting that even a close physical inspection cannot show. Still, few would claim that the Metropolitan Museum should move this painting to storage. Likewise with the integration of digital maps, satellite imagery, 3-D rendering, and Street View photography, which can provide an unparalleled overview of a landscape: no one would pretend an Internet-based map is a substitute for visiting a real place and time.

So, too, for printed matter. In the coming years, the ease of duplication, storage, and transmission will tempt institutions to economize their use of printed books. In “Reproductions for the Plebes,” an essay published in these pages in June 1984, Hilton Kramer warned about a proposal put forward by Edward C. Banfield, a professor at Harvard University, for museums to sell off their original works and replace them with passable facsimiles. Calling it “ghastly,” Kramer attacked Banfield’s idea as “antidemocratic as well as anti-aesthetic.”

A version of Banfield’s theoretical notion is now a real idea, called the “Central Library Plan,” being put forward by the New York Public Library. At stake is the future of the library’s 42nd Street headquarters, completed in 1911 by the firm Carrère and Hastings and recently renamed the “Stephen A. Schwarzman Building” in exchange for a $100 million donation from the private equity investor. As one of the world’s flagship institutions, the NYPL is moving in a direction that is sure to influence all of library culture.

Claiming to appeal to a democratic mandate, the library will seek to “open this iconic building to millions more users—scholars, students, families, job seekers and more.” This will be done by removing much of its non-circulating book collection to a storage location in New Jersey, demolishing the seven floors of stacks that support its main Reading Room, and replacing this area of the building with social space and computer terminals. At the same time, two significant branch libraries nearby will be closed and integrated into the main building’s newly hollowed-out core.

The New York Public Library has a history of undertaking egregious action in order to supplement its bottom line. In 2005, the library sold a painting that was a significant part of its patrimony, Kindred Spirits by Asher B. Durand (1849), for a $35-million payout. From the top down, the NYPL behaves like a government bureaucracy, not the guardian of one of the most important archives in the world.

It is therefore hard to fathom the true motivations behind the Central Library Plan. Talk of access and progressive thinking may just be cover for mere cost savings and the freeing up of valuable real estate for its developer board members to pursue. Yet supposing the library has the public’s best interests at heart, such a move still demonstrates little understanding of the power of print or how libraries should transition into the Internet age.

The digitization of books, a great undertaking, argues against new public space, as Internet-based research can be done from anyplace with a connection. As the culture of the copy shifts away from print media, the preservation and accessibility of printed artifacts becomes an even more vital and pressing concern, just as the rise of print culture did not make ancient manuscripts any less important. The New York Public Library is already the most democratic of institutions, because anyone can access its singular print resources regardless of academic accreditation. Although pursued in the name of democracy, to make these resources any less accessible would be “antidemocratic as well as anti-aesthetic,” as Kramer rightly labeled the Banfield proposal years ago.

Libraries should lead the charge in advancing the values of print culture, just as they need to consider the ways that Internet-based information should be archived and preserved. Print media fills in for the vast limitations of Internet media—serving as its ultimate backup and giving fixity to information. As we drive technology forward, an equally important task is to preserve the best of what’s left behind. We are living in the Internet’s revolutionary generation. The decisions we make now will affect culture for many years to come.


The Hudson River Destruction Project

WILLIAM WALDRON/GETTY IMAGES

CITY JOURNAL
Spring 2011

The Hudson River Destruction Project
by James Panero

How the EPA is harming nature and ruining communities

Visit Fort Edward, 200 miles up the Hudson River from New York City, and you’ll find the waste hard to miss. That isn’t because General Electric once used polychlorinated biphenyls, the chemicals known as PCBs, to manufacture electrical equipment at two local plants. Rather, the waste on display in Fort Edward—now boasting a 110-acre “dewatering” facility built on once-fertile farmland and dozens of ugly barges bobbing on the river—is the wastefulness of the Environmental Protection Agency, which is imposing a costly river cleanup that is both unnecessary and environmentally destructive.

By ordering a dredging operation along 40 miles of the Hudson, the EPA has created a disaster of governmental proportions in this quiet upstate community. For six months in 2009, floating clamshell diggers shoveled day and night, pulling sludge from the river bottom around Fort Edward and depositing it onto barges. Six days a week, 24 hours a day, these barges, containing a total of 286,000 cubic yards of sediment mixed with old PCBs, were offloaded into that massive dewatering facility. There the soggy material was treated and squeezed in giant presses. The cakes of compacted sludge were then moved by truck onto 81-car trains, parked on a new spur of the Canadian Pacific Railway extending into the site. Five of these trains were in constant rotation, circulating the 4,400-mile round trip between the facility and the final dump site in Texas.

It was a Herculean attempt at remediation but one that actually increased PCB levels in the Hudson for a time; it also wreaked havoc on locals’ lives and imposed huge costs on General Electric. And all this work was only “Phase I” of the EPA’s plans. The government is now compelling GE to spend billions of dollars on Phase II, an even larger and longer operation. Dredging will recommence this spring.

The mighty Hudson once secured New York City’s commercial dominance, linking it to Canada, the Great Lakes, and the American heartland via the Erie Canal. For centuries, the river also served as the drainpipe for companies in the Empire State—more often than not, with the government’s blessing. From 1947 until 1977, General Electric’s plants at Fort Edward and nearby Hudson Falls discharged up to 1.3 million pounds of PCBs—the overflow waste of production—into the Hudson, and they did so with the full approval of state and federal agencies, which issued GE all the necessary permits.

This complacency wasn’t surprising, because PCBs had long been regarded as miracle compounds. Developed as a by-product of gasoline refinement and licensed by the Monsanto Company in 1929, PCBs were oily substances that conducted heat but were also fire-retardant. They were mixed into everything from road pavement and carbonless copy paper to household caulking and insulation. Because of their fireproof properties, the power industry found PCBs especially useful as safe coolants for electrical generation and distribution. The chemicals therefore replaced organic, more volatile oils as insulators for electrical components—for example, in the cooling liquids found in those metal cylinders that you see atop telephone poles. The rapid, safe expansion of electrical transmission, which brought prosperity and lifesaving energy to all corners of the United States, took place in a bath of PCBs—sometimes, in fact, through components manufactured at the two GE plants on the upper Hudson.

But the chemicals’ renowned stability also rendered them an environmental hazard. PCBs break down slowly in nature. Soluble in oil but not in water, they can “bio-accumulate” in animals and be passed up the food chain, probably posing health risks to people who ingest them in high enough quantities. But the exact nature of those risks has never been identified. A recent New York Times description pushes the perils of PCBs as far as the fact-checkers allow: “In high doses, they have been shown to cause cancer in animals and are listed by federal agencies as a probable human carcinogen.” So the direct human-cancer link of PCBs is unproven, and the description “probable human carcinogen” comes from the federal agencies that, as we will see, have a vested interest in maligning the chemicals.

Congress banned the manufacturing, sale, and distribution of PCBs in 1976. A year earlier, New York State’s commissioner of environmental conservation had sued General Electric, arguing that state law prohibited the company’s discharge of PCBs into the river regardless of the permits that the state had issued. In the landmark settlement adjudicated by Abraham Sofaer, at the time a professor at Columbia University and now a senior fellow at the Hoover Institution, GE and New York divided responsibility on how they would clean up the remaining PCBs: GE undertook the remediation of its plants, and New York—because it had, after all, approved the original discharges into the Hudson—would deal with the PCB sludge in the river. The settlement specifically stated that GE would not be liable for any future river cleanup.

The company met its mandate well, scrubbing its plants clean and even digging out an ingenious network of tunnels beneath the bedrock of one of its plants to capture every last ounce of PCBs that had seeped into the ground. Meanwhile, the Clean Water Act of 1972 had already begun regulating the discharge of pollutants into American waterways. As the waste pipes were shut off along the Hudson’s banks and sediment began to cover the deposits of PCBs and other chemicals spread out along its bottom, the river began to clean itself, and the recovery of its water became an environmental success story. The federal standard for PCBs in drinking water is capped at 500 parts per trillion; the river now regularly flows with 30 to 50 parts per trillion in the upper Hudson and a tenth of that downriver. The river became cleaner of other pollutants as well. Fort Edward locals remember a time when the Hudson was tinted the color of whatever pigment a nearby paint plant was processing and discharging; today, the water is safe enough to swim in. Some towns along the river even began relying again on the Hudson for their municipal tap.

New York didn’t hold up its end of the 1976 decision as well as GE did. When the state’s Department of Environmental Conservation first tried to clean up the Hudson PCBs in the 1970s and 1980s, it went looking for a convenient dump site for dredged-up pollutants. It eventually settled on a 100-acre dairy farm located near the Champlain Canal, which would allow for easy transportation of the sludge. Sharon Ruggi still lives on the farm, where her husband was born in 1935. One “supper time in October” of 1985, she recalls, state regulators showed up and sat down at the kitchen table. They laid out their papers—agreements to sell—and told the Ruggis to sign. If the Ruggis resisted, the agents warned her, the state would seize the property by eminent domain—but just the farmland. The Ruggis would be left with their house, rendered worthless by its sudden proximity to a toxic dump site.

Despite the threats, Ruggi showed the regulators the door. She then became a full-time activist, joining a farmer-led anti-dredging group called Citizen Environmentalists Against Sludge Encapsulation (Cease). She notified her town about the regulators’ heavy-handed tactics. She wrote to her representatives and testified before Congress about the negative impact of a large-scale PCB cleanup. And she won the day. Without its dump site, New York State had to back off from its cleanup commitment.

But New York had a brilliant idea: passing the buck right back to GE, despite the terms of the settlement, through the new federal law known as Superfund. Officially called the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, the Superfund legislation empowers the Environmental Protection Agency to pursue whatever chemicals it deems unsafe and to force the “responsible party” to foot the bill for a cleanup, regardless of whether that party was a willful polluter or a good citizen discharging waste with the government’s approval. (Usually, the “responsible party” winds up paying after years of wasteful litigation: one-fourth of Superfund expenses go to “transaction costs,” fees to lawyers and consultants whom even the New York Times once described as “federal officials who spun through Washington’s revolving door to trade their Superfund expertise for private gain.”)

And so in 1984, New York got the EPA to declare the entire 200 miles of Hudson from Fort Edward to New York City a Superfund site. But the EPA also at first decided against dredging the river bottom, deeming it a risky, invasive approach that might stir up more PCBs. In 1989, however, New York appealed the decision, and 13 years later—the wait time alone testifies to federal inefficiency—the EPA finally agreed, calling on GE to conduct extensive dredging.

Its reasons were novel. The concentration of PCBs in the river water had dropped to safe levels, after all. So the EPA, searching for another justification for pursuing massive remediation, settled on PCB accumulation in the river’s fish. PCBs in river water, plants, and sediment could pass in incremental amounts to the fish around them (through ingestion and respiration) and then pass to the people who eat the fish, the EPA reasoned. But here, too, the river was showing stark improvements. In 1975, before the chemicals were banned, the concentration of PCBs in Hudson fish averaged 17.39 parts per million and could go as high as 50.7 parts per million, according to John Cronin, an environmentalist who worries about the dimensions and impact of the dredging project. By 2007, the mean concentration was 0.89 parts per million—well below the two parts per million that the Food and Drug Administration has set for commercially sold fish—and the maximum was 3.56.

Through the calculus of bio-accumulation, however, the EPA has learned to claim that even infinitesimal amounts of PCBs in the environment are major health concerns. A potential exists, says the agency, for PCBs to build up through gradual ingestion, even if that would require a superhuman consumption of a single food source for years on end. This was the argument that finally allowed the EPA to compel the multibillion-dollar cleanup of the Hudson by GE. As Hudson fish were already approaching acceptably safe levels for moderate consumption, the EPA set a new target of 0.05 parts per million in the river’s fish. Such numbers, argued the EPA, would allow for “unrestricted consumption” of Hudson fish by what the agency called “subsistence fishers.” It would be an undeniable achievement to restore the river to its antediluvian glory, with fish safe to pluck and eat at every meal. And the way to achieve that goal, said the EPA, was a massive dredging of the river bottom.

At what cost would such a pristine state be achieved? The dredging in Phase I alone cost General Electric about $500 million. If GE had contested its obligations to dredge, Superfund would have allowed the EPA to conduct the cleanup itself and then collect four times the cost from the company. “If it costs the state $1 billion, we could collect $4 billion, so that’s a pretty heavy stick,” says David King, director of the EPA’s Hudson River field office.

In addition to the $500 million, GE says that it has paid the EPA another $90 million so far to cover the agency’s oversight of the cleanup. In other words, the Superfund program produces windfalls for the government agencies that enforce it at both the federal and state levels. By mandating that GE dredge the Hudson, regulators who oversee the project can submit their own expenses to the company for reimbursement. Indeed, “what propelled the PCB case to the forefront is not just the toxicity of PCBs but also the significant financial resources of General Electric,” Cronin wrote in the New York Times. Superfund only works, needless to say, when there is a viable company to pay for it. (The Hudson site is one of 50 or so Superfund obligations that GE currently faces throughout the country.)

The cost of the EPA’s quest wasn’t just financial. Strolling through Julie Wilson’s daylily garden in Fort Edward last fall, I almost forgot the enormous dewatering facility that the federal government had located next door. This area of farmland, with Vermont’s Green Mountains rising in the distance, can be particularly radiant. Nearby, a steady stream of sailboats with lowered masts floated south from Canada through the last locks of the Champlain Canal into the Hudson. Thanks to regular watering, a mountain of chemical-laden dirt, dredged from the Hudson and still awaiting pickup just over the rise behind Wilson’s flowerpots, was releasing acceptably low levels of dusty contaminants in my direction.

When the facility was in full operation during Phase I, life for Wilson was quite a bit worse. Dredging is a dirty business. Because the river bottom was being disrupted, PCB levels in water, air, and fish all rose dramatically and exceeded federal limits. By every measure, the health of the river and the surrounding community deteriorated, at least temporarily, through the EPA’s intervention. The messiness of the operation was a necessary evil, the agency maintained, the collateral damage of doing good.

Such assurances mean little to Wilson, now 72, as she contemplates the start of Phase II. Even before the processing facility went into high gear, when the neighboring farm was stripped of its topsoil to make way for the construction of the dewatering facility, she had to confront clouds of dust. Her asthmatic daughter still can’t visit on bad days. As he was dying of cancer, Wilson’s husband, James, had to leave the homestead, overcome by the commotion. “There were so many noises, clanging and banging and shouting, motors and unloaders and dump trucks dropping rocks,” Wilson tells me. “You have no idea what it is like. Twenty-four hours a day. It can drive you crazy. The stress level can affect almost every function—cardiac, gastrointestinal, and elimination.” The beeping of the vehicle backup alarms, she says, was the worst.

Wilson’s property value is now down 50 percent. Keeping clients interested in her flower business has also been difficult. “I tried to do garden tours until I could no longer compete with the noise. When you have to raise your voice to shouting, you lose the effect of the tour.” She adds that birds and other wildlife have abandoned her property. “I have such a love of the land here that when I see the site over there, I could just weep.” The sentiment puts her in an unusual position. What do you do when the organization responsible for destroying your environment is none other than the Environmental Protection Agency?

Little stands in the way of Phase II; certainly the EPA itself isn’t likely to cancel the project. Under administrator Lisa Jackson—“the agency’s most progressive chief ever” and “one of the most powerful members of Obama’s Cabinet,” according to an admiring Rolling Stone profile headlined “Eco-Warrior”—the EPA has been flexing its regulatory muscle as never before. Because of its own “endangerment finding,” the EPA is attempting to regulate carbon dioxide emissions under the Clean Air Act, a move that could have a profound effect on American industry. The agency has also been raiding New York City public schools in search of PCBs in fluorescent lighting; it recently called for a remediation plan that could, the city initially said, cost up to $1 billion. The EPA is even attempting to impose regulations on the dairy industry by arguing that the Spill Prevention, Control and Countermeasure program, designed in 1970 to prevent oil discharges in waterways, also applies to milk fat spilled on farms.

The agency’s regional administrator in charge of evaluating the Hudson dredging project, Judith Enck, is another eco-warrior. Before taking on her federal post, Enck was head of a New York environmentalist lobby tasked in part with pursuing PCBs. One wonders if an activist—someone who has spun through that “revolving door” described by the New York Times—can be a judicious regulator of a multibillion-dollar project.

The regulators also have a formidable (and tax-exempt) public-relations wing. In 1966, the folksinger Pete Seeger built an antique-style sloop, the Clearwater, to ply the Hudson’s waters and draw attention to its contamination. Since then, Seeger’s environmental group, also called Clearwater, has been joined by Riverkeeper, Scenic Hudson, and the Natural Resources Defense Council, all of which raise funds by preaching the evils of PCBs.

Nor will GE itself be able to resist the EPA’s plans. Jack Welch, the company’s chairman and CEO from 1981 to 2001, occupied a middle ground, cleaning up the plant sites but arguing that extensive dredging would cause more harm than good. When Jeffrey Immelt, these days a top Obama economic advisor, succeeded Welch, however, he rebranded the company with the term “ecomagination” to highlight GE’s innovations in green technology. A year later, GE signed on to the EPA’s decision to dredge the Hudson, and in 2005, it filed a consent decree in court to undertake the project. The company did quietly contest the rollout of Phase II, on the grounds that PCB resuspension in the river water during Phase I far exceeded the EPA’s own standards. But just as it pushed down its targets for PCB concentration in fish in order to compel the cleanup, the EPA reset its standards for resuspension, allowing PCB levels in river water to spike above federal safety levels during dredging.

After GE gave me a tour of the dredging operation, I found it difficult to doubt the company’s commitment to the project. Out on the Hudson, our pontoon boat passed by the long row of barges tied up and waiting for the start of Phase II. Downriver, we approached a vessel collecting core samples of sediment to be sent off for an analysis of contamination depth—one of 50,000 data points taken along the waterway. GE divers were rebuilding the pulled-up river bottom, an underwater ecosystem destroyed through the EPA’s mandate, by painstakingly restocking it with 70,000 individual plants, mainly wild celery and American pondweed harvested from local sources.

Once ashore, I looped around to the dewatering facility bordering Julie Wilson’s property. The site was empty and resembled an airless lunar base, with a manicured pile of PCB-laden sediment at the center. The facility’s main task at the time I visited was collecting and processing the rainwater that falls on the site. Not a drop here enters the earth. A sheet of plastic runs beneath the entire facility, collecting the water and feeding it through the same colossal filters used during active dredging to “polish” the water squeezed out of the dredged material.

When Phase II begins, General Electric will again employ 500 workers here and on the river. Once more, Wilson will watch as GE excavates tons of river muck, now buried under 30 years of sediment, and stages it for processing and transportation next to her residential neighborhood. “I view it as creating a new environmental disaster,” Ruggi says, and history suggests that she may be right. In one early dredging attempt, New York State created a PCB dump site at the tip of Rogers Island, just downriver of the plant. That area has now become its own toxic hazard requiring remediation.

“Government looks very good taking corporate USA to task,” Ruggi adds. “It makes great headlines. The sad part is the health of the Hudson loses out. We grow up thinking the government works for us. To come to the realization that it can work against us is shocking.”
