In 1931, historian James Truslow Adams wrote that the American dream is one in which “life should be better and richer and fuller for everyone, with opportunity for each according to ability or achievement”, regardless of social class or circumstances of birth. But what does that actually mean?
For some, the vagueness of the concept makes the American dream difficult to quantify. A more specific metric would offer a clearer picture of whether the opportunity for prosperity, success, and upward social mobility is actually available to all.
TechCrunch.com journalist Kim-Mai Cutler delivered a presentation at Earthsharing.org’s BIL Oakland 2016: Recession Generation event on July 9, in which she focused on the intersection between opportunity, technology, and land. To address this intersection, she referenced the research of Stanford University economist Raj Chetty.
Chetty analyzed the family income records of 40 million children over the past 20 years and calculated the likelihood of a child born into the poorest 20 percent (lowest quintile) of society reaching a higher quintile in income. Isolating geography as a determining factor, Chetty found that, for example, the city of San Jose provides the best opportunities for a poor child to reach the 80th percentile in income distribution, compared to all other cities across the country. This is shown in Figure 1.
Despite this, Figure 2 shows a trend reflected statewide and across the United States wherein median wages are increasing, but poverty is also on the rise, and homeownership is falling.
This trend in Santa Clara County flies in the face of conventional thinking, whereby poverty should decrease as incomes and opportunities multiply. If people are making more money, yet are less able to purchase a home, home prices must be rising faster than wages.
Similarly, apartment rent is skyrocketing. There is a lot of job growth, which would tend to indicate that labor is in demand and incomes will rise, but most of the new jobs do not pay well – most pay less than 50 percent of the area median income (AMI), as seen in Figure 3.
To add insult to injury, Figure 4 shows that many lower-wage workers fall well short of average asking rents, and are therefore unable to work and live in the same area. These people must either cohabitate or commute long distances in order to secure housing that they can afford.
These are direct consequences of Proposition 13, which greatly limits property taxation in California. Proposition 13 sets the rate at which a parcel of real estate can be taxed, caps how much that tax can grow annually, and restricts when the parcel’s value can be reassessed. Over time, this has created severe market distortions, because developers have little incentive to build additional affordable housing. This ultimately limits housing supply, forces workers to commute farther from urban centers, and leads to additional sprawl.
How does this all affect upward mobility? For starters, family commute times correlate with a child’s future success and earnings. Figure 5, from Chetty’s study, shows that a transit time of 15 minutes or less significantly correlates with a child’s upward mobility.
If the American dream is precipitated by upward mobility from one income quintile to the next, it is becoming an unattainable dream for an increasing percentage of the population. Without significant policy change, it will become impossible for many families to escape wage slavery.
Remedies do exist – some to resolve the problem altogether, and others to mitigate it. Metro San Francisco has seen a significant growth of working professionals choosing cohabitation, as well as the tiny house movement of 100-400 square-foot spaces. Unfortunately, these behaviors do not address the structural inequities and land misuse created by the current policy environment and Proposition 13.
With this in mind, it would be sensible for new housing construction in the Bay Area to occur where economic activity is most concentrated, namely downtown San Francisco. Downtown areas tend to have the greatest land values, but traditional strategies for construction in the city center tend to be very expensive, politically treacherous, or otherwise ineffective. While cohabitation and tiny houses might make the area more affordable for a few, government must incentivize urban development in high-demand areas to effectively turn the tide of this crisis. To this end, the city and state must consider a Land Value Tax.
The economist Henry George documented this phenomenon of market exclusion 137 years ago in his seminal work Progress and Poverty. George demonstrated how rent increases faster than wages, and to expedite new construction, he recommended eliminating taxes on work and consumption and shifting the source of revenue to Land Value Taxation. His idea was to encourage landowners and developers to increase residential and commercial space in order to pay the Land Value Tax, while generating a respectable return and providing value to others. Land Value Taxation naturally becomes even more effective wherever land values are higher, like the urban core of cities. Implemented in cities, Land Value Taxation leads to a substantial increase in both living and working space.
California faces a unique challenge due to the limits imposed by Proposition 13, and overcoming this would require a difficult voter-approved constitutional amendment to completely overhaul the property tax system. State legislators and regional and city planners would be remiss not to consider a Land Value Tax, which has had demonstrated success in increasing residential space in the United States and abroad.
Images: Keynote presentation by Kim-Mai Cutler at BIL Oakland: Recession Generation 2016
“He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.” -Thomas Jefferson
The Angelina Jolie Effect
In 2013, Angelina Jolie shocked Hollywood by announcing her decision to undergo a preventive double mastectomy. She cited a hereditary risk of breast and ovarian cancer and what she had been told was a 65 percent chance of breast cancer due to a mutation in her BRCA1 gene.
The discovery that certain mutations of the BRCA1 and BRCA2 genes increase the risk of breast and ovarian cancer was made in the 1990s. The company that developed the BRCA analysis test claimed that a mutation in either gene could raise the risk of developing breast cancer by age 70 to as high as 87 percent, and of developing ovarian cancer to 63 percent.
The ensuing publicity caused a surge in genetic testing, in what has been named the ‘Angelina Jolie effect’. But the cost of a BRCA test is prohibitive, at more than $3,000 in the United States. Jolie wrote in an op-ed that this was a huge obstacle for many women seeking tests for breast cancer, a disease that kills almost half a million people around the world each year.
When Myriad Genetics discovered the ‘breast cancer genes’ in 1994 and 1995, it managed to acquire 20-year patents for the very genes themselves, as well as any current and future methodologies for examining them. This monopolization was a boon for shareholders, and in 2013 the BRCA analysis test brought in 75 percent of Myriad’s total revenue of $613 million.
Should Biological Phenomena be Ownable?
Conversations about property rights typically involve things that people have built, bought, or otherwise created throughout their lives. But as technology challenges our fundamental understanding of biology and ourselves, we are faced with a decision about whether to update our institutions to reflect new opportunities for ownership in nature.
In 2009, a group of organizations including the Association for Molecular Pathology and the American Civil Liberties Union filed a lawsuit challenging the BRCA gene patents, arguing that they amounted to patenting human life, robbed every person of a piece of self-determination, and violated basic human dignity.
The case was supported by testimony from many women who had been disadvantaged or put at risk by patent restrictions, from being denied a second opinion on test results, to being unable to afford testing, to having insurance claims rejected by Myriad. After a four-year legal battle, the Supreme Court ruled in 2013 that human genes cannot be patented in the U.S. because DNA is a “product of nature”.
This ruling annulled the patents related to more than 4,300 human genes, stripping monopoly status from Myriad Genetics and dozens of other companies and institutions that had profited from them. “Myriad did not create anything,” Justice Clarence Thomas wrote in the majority opinion. “To be sure, it found an important and useful gene, but separating that gene from its surrounding genetic material is not an act of invention.”
What the landmark ruling didn’t cover, however, were methods for testing BRCA genes, possible new patents on those methods, or the patentability of synthesized DNA. Myriad’s two-decade monopoly has left it with a massive database of genetic data, preserving its dominance over any competitor in risk-factor analysis for the BRCA genes.
The main importance of the Court’s decision was establishing this boundary between innovation and appropriation of biological phenomena. In the same way that natural resource extraction methods can be patented and monopolized, so too can techniques for analyzing and repurposing genetic material. But the mere existence of compounds in nature should not be ownable free and clear; at most, exclusive rights should come with a duty to put these natural opportunities to use, opportunities that hold the potential to free us of a great deal of suffering and unleash human potential.
It’s not just genes that have been captured for exclusive license and rent-seeking. Consider Joseph Merrick, a so-called ‘freak of nature’ known as the Elephant Man. He spent most of his short life in circuses, where many entrepreneurs made a great deal of money exploiting Merrick’s condition. Until recently his bones were on display at the Royal London Hospital museum, and there is no evidence to suggest he consented to this.
The most famous case of this sort of appropriation is that of Henrietta Lacks, an African-American woman whose cancer cells were harvested in 1951 and used to create an immortal cell line for scientific experimentation. In the process of radium and x-ray therapy, tissue was removed from her tumor and secretly sent to a lab at Johns Hopkins to be grown in test tubes.
Lacks died at the age of 31, leaving behind a husband and five young children. The family never received any financial support, and found out by chance that their mother’s cells (called HeLa cells) have been used in ongoing research. HeLa cells were used in developing the polio vaccine, were sent into space, and have been used for cloning, gene mapping and in vitro fertilization.
The practice of patenting materials found in nature, in people, or in cultural tradition is given the derogatory term ‘biopiracy’, and agrochemical and biotech company Monsanto offers an illustration that once again distinguishes innovation from merely appropriating what freely exists in nature. In 2016, the European Patent Office revoked a Monsanto patent for a virus-resistant gene found in Indian melons. Monsanto introduced the resistance to other types of melons and managed to patent this as its own invention. But the gene responsible for this resistance was discovered in 1961, and plants containing it have been publicly available since 1966. Conversely, Monsanto has won many of its own lawsuits against farmers who infringe the patents on its seeds.
Monsanto and other institutions have appropriated these materials without obtaining consent, then turned around and charged the same people monopoly prices for the right to use them. And while cultural remuneration is tricky, privatizing these cultural products has sometimes resulted in important advances in medicine and other fields. In the context of patents, however, there has more often been a very real drag on scientific and social advancement, as patent holders merely speculate on their claims. This forces real innovators to pay large sums of economic rent, or to contort around existing patents, simply to add to the intellectual stock of humanity.
For example, there are hundreds of patents on Agrobacterium techniques alone; Agrobacterium has been the most common vector used by companies like Monsanto to splice genetic code into plants, and the patent thicket exists largely because of the risk of infringement. Researchers have come up with brilliant workarounds for these problems, but developing new ways to do the same things carries huge opportunity costs. For scientists, it is a purely bureaucratic hurdle, not a chance for real scientific advancement. Thankfully, tools are being developed to help reduce this confusion, but they are not enough to encourage entrepreneurship without an army of lawyers.
In the mid-’80s, molecular biologist Dr. Richard Jefferson pioneered a genetic research technique that helped illuminate where genes are expressed in plant tissue. He distributed this helpful technique immediately to more than 1000 labs around the world.
Jefferson said in an interview that the litigious way in which genetic patent issues tend to be resolved is not constructive, and that both parties “end up trying to promote their particular worldview based on a lack of evidence on either side”.
“So you’ll have businesses who will pound their wingtips on the table and say ‘we must have exclusive licenses, and… on the other side, you might have civil society or thoughtful social policy engagement that says ‘it’s all wrong, you shouldn’t do it that way, everything should be free’, but they may well not be aware of the very complex natures of risk mitigation businesses have to encounter,” he said.
“There’s no real evidence base that can guide real problem-solving for policymakers or for practitioners.”
One company might be better off if techniques for analyzing genes can be monopolized, but it is likely that the market for innovation and society as a whole would be better off if these medical techniques were somehow available to all. These returns to society could manifest as wealth creation, scientific innovation, and better health outcomes.
Open source success stories in the technology world – including operating systems, programming languages, and web browsers – have not offered direct profit to their communities of creators, but they have provided social value and a means to create wealth. Jefferson wrote in 2006: “Many ask, ‘How do you make money in open source?’ The answer: you make money not by selling open source, but by using open source.”
There are valid reasons both for patents as well as open source. However, might there be a synthesis, a solution that would give us the best of both worlds?
Incentives are Holier than Property
Friends of Earthsharing.org, Guido Núñez-Mujica and Joseph Jackson, had a great idea for helping poor people in remote areas of Latin America. They wanted to create a light, portable machine for copying DNA via the polymerase chain reaction (PCR), so it could be used for all sorts of purposes – in this case, testing for tropical diseases. A standard PCR machine is fairly heavy, at least as far as jungle treks go, so a light mobile version could have helped many people get tested and then obtain treatment. However, because someone had patented the mere idea more than 25 years earlier, and done nothing with it, they could not patent it themselves. This vastly reduced the pool of investors, owing to the increased threat of competition.
Even if others have independently thought of the same idea on their own, they are restricted from using it by an existing patent. Such ideas should not belong exclusively to the person who merely filed the patent first, at least not in an absolute way. We can, for instance, say that a patent affords the holder the opportunity to invest more into creating their idea, but that right should be coupled with an incentive to use their monopoly privilege for productive purposes.
An innovative solution to this problem parallels that of 19th-century economist Henry George, who wanted to incentivize landlords owning prime real estate to make their land available to others. He proposed a tax on the value of urban land to spur landlords to use prime locations productively.
Where landlords hold monopoly privilege over a particular geographic location, Myriad Genetics and Monsanto had, and to some degree still have, monopoly privilege over specific ‘nucleo-graphic’ regions of DNA, untouched by the artifice of human innovation. Just like landlords who leave prime urban lots vacant for years, patent owners should pay increasingly more to exclude others from developing ideas that would benefit humanity.
Patent Value Tax
Patents are important because exclusive usage rights can provide a predictable environment that encourages production. Banks can feel confident providing loans. Inventors can feel more confident that someone won’t simply copy their work and get away with it. Patents also matter because they ensure that discoveries are publicly documented.
But as previously mentioned, patents also have drawbacks. Patent trolls use patents for idle speculation, holding valuable ideas for ransom. Patents contribute to a climate of high liability for new inventors, because with so many patents it is impossible to know when violations occur.
To ensure patents are only held by people who intend to use them, and only while they are intending to use them, a tax incentive system could be very helpful.
Patent values could be self-assessed by the inventor and changed at any time. The tax rate would gradually rise the longer the patent is held, applied to the self-assessed value. If a patent holder found the taxes too onerous, they could simply lower their assessment, or relinquish the patent into the public domain. Anyone could place a bid higher than the self-assessed value, initiating an auction whose proceeds go to the current holder.
Auctions would be open to anyone, including the government, which could buy a patent and release it into the public domain. This would provide a vehicle for using the democratic process to incentivize scientific research.
Some may argue that patents are nothing more than a right to sue for violation, and do not encourage innovation. This is particularly true today considering that many technologies require a combination of existing technologies, involving multiple patent holders who are often in it to speculate. However, this dynamic would vanish if patents had high holding costs and could be publicly auctioned at any time. Patent holders would have an incentive to work with others quickly because holding onto a patent would be like holding a very expensive hot potato.
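The self-assessment, rising-rate, and forced-auction rules described above can be sketched in code. This is a minimal toy model, not a policy proposal from the article; the class name, rates, and the rule that a winning bid resets the assessment are all illustrative assumptions.

```python
# Toy sketch of a self-assessed patent value tax with open bidding.
# Rates and rules are hypothetical placeholders.

class Patent:
    def __init__(self, holder, self_assessed_value,
                 base_rate=0.01, rate_step=0.005):
        self.holder = holder
        self.value = self_assessed_value   # holder may change this at any time
        self.base_rate = base_rate         # first-year tax rate
        self.rate_step = rate_step         # annual increase in the rate
        self.years_held = 0

    def reassess(self, new_value):
        """Holder lowers (or raises) the assessment to adjust the tax burden."""
        self.value = new_value

    def pay_year(self):
        """Tax owed this year; the rate rises the longer the patent is held."""
        rate = self.base_rate + self.rate_step * self.years_held
        tax = self.value * rate
        self.years_held += 1
        return tax

    def bid(self, bidder, amount):
        """A bid above the self-assessed value forces a sale at that price;
        the proceeds go to the current holder."""
        if amount <= self.value:
            return False
        self.holder = bidder
        self.value = amount        # new holder's assessment starts at the bid
        self.years_held = 0
        return True


p = Patent("inventor_a", 100_000)
p.pay_year()                       # year 1: 1% of the assessed value
p.pay_year()                       # year 2: 1.5%, and rising each year
p.bid("firm_b", 120_000)           # forced sale at the bid price
```

The "hot potato" dynamic discussed below falls out of the rising `rate_step`: an idle holder either pays ever more, lowers the assessment (inviting a buyout), or relinquishes the patent.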
Patents as a Privilege
Founding Father and third President of the United States, Thomas Jefferson is the earliest authority on American patent law, and his view of patents was skeptical except where they served the public good. He was generally opposed to any kind of monopoly, and believed that ideas were both unstoppably contagious and not fit to “be a subject of property”.
“Society may give an exclusive right to the profits arising from them, as an encouragement to men to pursue ideas which may produce utility, but this may or may not be done, according to the will and convenience of the society, without claim or complaint from anybody,” he said.
The attachment of property rights to biology, and to ideas generally, should be treated with great care, both because the natural world was not created by any one of us, and because exclusive rights to innovate need to come with a duty to use the necessary natural resources well. It is not for us to plant our flag and claim ‘this is mine!’ but to consider ourselves stewards, with a duty to use natural resources in ways that will ultimately improve our lives and the lives of others.
The San Francisco Bay Area is in the midst of a severe housing affordability and displacement crisis, the result of years of inadequate public policy, a clash of generational attitudes, and ubiquitous obstruction of new housing projects. At the BIL Oakland: Recession Generation conference, hosted by EarthSharing.org on July 9, a panel of four housing advocates shared their thoughts on where to go from here.
Zac Shore, Stephen Barton, Alex Lofton and Tim Colon described a multi-faceted crisis requiring concurrent and complementary solutions.
Zac Shore is the director of development for Panoramic Interests, a construction company focused on affordable student housing, workforce housing and homeless housing in San Francisco.
The company has a modular construction ethos that crystallized when they traveled to the U.K. and witnessed the construction of 190 apartments in eight days using shipping containers.
“When we saw that, we were convinced, and now we’re starting to build with it on a large scale in San Francisco.”
Panoramic Interests has built hundreds of apartments for students and workers, and is now beginning to build for the homeless. Shore cited demonstrable cost savings associated with housing the homeless, cutting down on chronic use of emergency services and offering an economic incentive alongside the humanitarian one.
Stephen Barton represented the Bay Area Community Land Trust and the Committee for Safe and Affordable Homes. Barton has a PhD in city and regional planning from the University of California, Berkeley, and was director of the Housing Department and deputy director of the Rent Stabilization Program in Berkeley, California before retiring recently. He has written widely on housing policy and co-authored Common Interest Communities: Private Governments and the Public Interest.
Barton argues that new construction alone cannot solve the Bay Area’s housing crisis.
“It’s not to say that increasing the housing supply is not important, because it’s desperately important,” he said. “But of course we have Prop. 13 here in California and its progeny designed to protect real estate investors’ windfall profits, and of course encouraging land speculation because people who own vacant and under-utilized land hardly pay anything in taxes.”
Using taxes to treat rental property like a business rather than personal real estate would be a step in the right direction, “to recapture through taxation the value that we and those who came before us have created,” Barton said.
“If you applied a two percent tax to rental property in the whole Bay Area, you would raise $500 million a year and it could lead to construction of as many as 50,000 affordable apartments.”
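Barton's figures imply a rough back-of-the-envelope calculation. The inferences below (the size of the tax base, and a per-apartment financing figure) are mine, not from the talk:

```python
# Back-of-the-envelope check of Barton's quoted figures. The implied tax
# base and per-apartment numbers are inferences, not from the talk itself.

tax_rate = 0.02                  # "a two percent tax to rental property"
annual_revenue = 500_000_000     # "$500 million a year"
apartments = 50_000              # "as many as 50,000 affordable apartments"

# $500M at 2% implies roughly $25 billion of taxable rental property.
implied_tax_base = annual_revenue / tax_rate

# Spread across 50,000 apartments, the revenue works out to $10,000 per
# apartment per year, e.g. to service construction financing.
revenue_per_apartment = annual_revenue / apartments
```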
“About half of the rent that tenants pay in the Bay Area is not, in fact, necessary to profitably operate and maintain the housing once it’s been built and the construction costs are amortized. Instead, it’s basically an admission charge – ‘welcome to the magic kingdom, here’s how much you have to pay to be here in the Bay Area’.”
Alex Lofton is a co-founder of Landed San Francisco, a community-based brokerage organization that raises capital from investors interested in local real estate, and uses that money to support first-time homebuyers with down payments.
“Our whole system is set up on the intergenerational transfer of wealth: you’ve got to ask your mom or your dad, or brother or sister, or grandparents to help you buy your first house, especially in expensive places. So we just say ‘Why can’t there be other options than mom and dad…to borrow that money?’”
“You live in a place like this and you question if you’ll ever become an owner…the leap from renter to owner is just impossible.”
While affordability was the main problem with Bay Area housing, requiring greater supply and higher incomes, another way forward was thinking about the concept of ownership differently, and coming up with creative ways for whole communities to help people get started in the property market.
“There isn’t a silver bullet, it does take a lot of solutions.”
Tim Colen, at the time of the conference, was executive director of the San Francisco Housing Action Coalition, an organization promoting well-designed and well-located housing. Prior to this, he was president of the Greater West Portal Neighborhood Association, and spent 25 years working as a geologist.
San Francisco is cursed by a red-hot economy, with highly skilled workers flooding into a city that has a history of under-producing the housing it needs.
“We have chosen policies for the last two or three decades that have led us to this position where our population is growing by about 10,000 residents per year… a city that has a historic production rate [of houses] somewhere around 1700-1800 units a year.”
“It’s already a city that’s become hostile to the young, young families, seniors, immigrants, the artists, the weirdos, the hippies, everybody. It’s going in the direction of becoming a luxury resort with a certain amount of housing we can afford to subsidize.”
In Sacramento, Democratic Governor Jerry Brown has taken a bold step by introducing “by-right housing”, whereby if developers meet certain conditions, new builds cannot be obstructed.
“It’s the first tool we’ve seen in ages that says ‘you can’t appeal projects to death anymore’,” Colen said.
The dominant conversation around housing has been one of intergenerational change and the desire of previous generations to keep things the way they are, Colen said; this has tipped the balance of power toward those who say no to development and drive up construction costs.
“We’re strangling ourselves,” he said. “There is not enough money in the world to subsidize our way out of this problem.”
This panel discussion highlights a struggle between established residents and newcomers, who should be joining forces against an entirely different threat. Renters are being squeezed out of the Bay as prices surge, while would-be newcomers, many of whom are tech workers, are kept out by the same phenomenon. Both blame each other, yet it is landowners who are making a killing off the skyrocketing costs for space in the Bay Area.
Yes, tech workers drive up the cost of land, but freezing new construction also makes apartment rents artificially high. Both groups are right, but it is unfettered and untaxed landlordism that is the real problem.
There is a way to help protect those in danger of being forced out of the Bay, while also giving access to newcomers in innovative industries: tax the rising value of land and reduce taxes on working and exchanging. A citizen’s dividend paid out of the revenue from a land value tax – what some call a basic income – could be given to everyone to spend as they wish. People could use this money to subsidize their apartments, while construction boomed in downtown San Francisco and elsewhere in the Bay. With more people able to fill new units in central locations, pressure would come off areas even slightly outside the central business district. This in turn would slow the rise in rents, while putting more money in vulnerable people’s pockets to secure housing.
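The mechanics of a land-value-tax-funded dividend are simple to sketch. The figures below are hypothetical placeholders, not estimates for the Bay Area:

```python
# Toy sketch of a citizen's dividend funded by a land value tax.
# All figures are hypothetical, not Bay Area estimates.

def citizens_dividend(total_land_value, tax_rate, population):
    """Collect a tax on aggregate land value and split the revenue equally."""
    revenue = total_land_value * tax_rate
    return revenue / population

# e.g. $800B of assessed land value, taxed at 1%, shared by 8M residents
# yields a yearly per-person dividend that could go toward rent.
per_person = citizens_dividend(800_000_000_000, 0.01, 8_000_000)
```

Because the tax base is land value rather than wages or sales, the dividend grows precisely where and when land appreciates, which is the redistribution the paragraph above describes.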
BIL: Oakland 2016 Recession Generation was an Earthsharing.org conference in Oakland, California on July 9th. Foresight Institute president Julia Bossmann presented an argument for moving toward a post-work society, and the changes both economic and social that would be required to achieve this.
Bossmann pointed to the advantages machine minds offer researchers: “They have theoretically unlimited memory, they have a way faster speed of reading, they can find insights and facts from all across and then draw connections and find patterns. So now that we may have reached the limit in medical research – that one human mind may not be enough to figure it all out – having a machine mind may open the floodgates to finding out much more.”
Bossmann’s scenario of a post-work society presents significant economic challenges, with a disruption of millions of jobs across the professional spectrum. Truck drivers could be an early casualty, but many others earning an income by selling their time and labor stand to lose their current employment due to automation.
“How would a human even compete with someone who can drive for thousands of hours at no end and not ask for a salary?” Bossmann says.
In general, a person’s income is derived either from time, or from ownership of assets like land and other property. Bossmann states that “once the time goes away, the only thing left is ownership. And we all know that ownership is not distributed in a way that all of us could just live on that alone; in fact, most of us need to sell our time to live”. A radical shift in how we think about ownership is required if society is to remain prosperous, Bossmann says.
As artificial intelligence progresses, those who own the valuable sites where A.I. research takes place, especially in Silicon Valley, will grow disproportionately wealthy through the appreciating value of their land: the rents they can charge, the prices at which they can sell, and so on. They will become wealthier not by doing the research and development themselves, but simply by owning valuable space in areas doing R&D. Regardless of Bossmann’s predictions about the rate of A.I. progress and its replacement of human labor, a greater proportion of the wealth created will continue to go to owners of prime land.
Those who own prime locations already have a large advantage over wage earners, simply by their ever-appreciating real estate values. We have seen a huge explosion in labor-saving devices, wealth production, and wealth inequality in the last two centuries. These gains disproportionately go to the owners of property. So, there is already a need to share the returns from owning natural resources like land.
This need to redistribute the benefits of land ownership becomes even more obvious in Bossmann’s prediction of the future, which assumes no A.I. winters or ceilings, no comparable human intelligence augmentation, and that the law of comparative advantage (between humans and robots) no longer holds. In such a scenario, obedient robots would produce enormous amounts of wealth, and that wealth would flow entirely to the humans who own the natural resource inputs A.I. requires. People who owned no land, and received no dividend or basic income of some kind, would simply have no income.
Henry George, a prominent political economist and author from the late 19th century, argued that gains derived merely from the ownership of land and other natural resources should be considered the property of everyone, not just the title-holders. A system of land value taxation would be a pragmatic way of shifting the burden of raising public revenue from workers to landowners. It would be the obvious choice for funding a basic income that would protect people from unemployment now, and facilitate any kind of post-work society.
“Once we have figured out this dilemma, and we have machines that will do most of the work on the planet… we will look back and think that it was barbaric that people had to sell most of their living time on this planet, doing things they didn’t want to do,” Bossmann says. But reaching an economic consensus is not all that is required to reach a prosperous post-work society.
“Many of us define ourselves by our jobs, what we do for a living, how much money we make, all these things are important to so many of us. Are we willing to give up this kind of thinking for something better?”
Julia Bossmann is president of Foresight Institute, a think tank promoting transformative future technologies, and founder of Synthetic, a startup building A.I. of its own. Bossmann is a McKinsey Fellow, Singularity University GSP graduate and master of science in neuroscience and psychology. She lectures on Artificial Intelligence, hard technology, innovation, the future, and technology transforming society.
Photo: “Artificial Intelligence” by Tej3478. Licensed under Creative Commons.
BIL: Oakland 2016 Recession Generation was an Earthsharing.org event which took place on July 9th in Oakland, California. Keynote speaker Chuck Marohn presented his experiences as an engineer, city planner, and founder of the non-profit Strong Towns to explore the problems with large, specialized systems of government, and the case for localization.
In a world where city planners and engineers must work within a narrow vision on the same sorts of projects, Marohn says there is a disconnect that only leaves space for endless repairs and fix-ups, and very little room for real creative thinking or new technology. With many cities struggling financially or going broke, Marohn makes a case for innovation that not only can increase the productivity and self-sufficiency of a town, but can improve the lives of all who live there.
Marohn suggests that while big governing organizations tend towards specialists as decision-makers, localized government is most effective when generalists are in charge. People who can make connections with others, seek out the experts on any given subject, and bring together combinations of skills will be the most successful leaders.
“The large systems that we have created – really a byproduct of the things that happened in the Depression and World War II – allowed us to accomplish a lot of things in a very short period of time, but come with their own fragility, their own kind of disconnectedness.”
“You can see in things like the Brexit vote, you can see in things like the conversation we’re having in our election cycle… you can see this disconnect between the large systems we have to govern ourselves, the large systems we have to run our economies, and the way we actually live our everyday lives.”
He has also advocated for shifting from the traditional property tax to a land value tax. He explains:
“The property tax system punishes investments that improve the value of property. The land tax system… punishes property that is left idle.”
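Marohn’s contrast can be made concrete with a bit of arithmetic. The lot value, house value, and tax rate below are invented purely for illustration:

```python
# Hypothetical figures: a $100,000 lot, comparing an owner who builds a
# $200,000 house with a speculator who leaves an identical lot idle.
def property_tax(land_value, improvement_value, rate):
    """Traditional property tax: levied on land plus improvements."""
    return (land_value + improvement_value) * rate

def land_value_tax(land_value, rate):
    """Land value tax: levied on the land alone; improvements go untaxed."""
    return land_value * rate

LAND, HOUSE, RATE = 100_000, 200_000, 0.01

# Under the property tax, building the house triples the owner's bill...
assert property_tax(LAND, HOUSE, RATE) == 3 * property_tax(LAND, 0, RATE)

# ...while under a land value tax, builder and speculator pay the same,
# so holding land idle no longer enjoys a tax advantage.
assert land_value_tax(LAND, RATE) == property_tax(LAND, 0, RATE)
```

The point is structural, not the particular numbers: the property tax grows with every improvement, while the land value tax is the same whether the lot is built on or left idle.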
Charles “Chuck” Marohn works as a licensed engineer in the State of Minnesota. He is a member of the American Institute of Certified Planners and founder and president of Strong Towns, a national media organization that supports the development of resilient cities, towns, and neighborhoods. Marohn holds a Master’s degree in Urban and Regional Planning from the University of Minnesota and a Bachelor’s degree in Civil Engineering from the University of Minnesota’s Institute of Technology. He is the author of Thoughts on Building Strong Towns. Volume I and A World Class Transportation System.
BIL: Oakland 2016 Recession Generation was an Earthsharing.org event which took place on July 9th, 2016 in Oakland, California. Keynote speaker, Robin Hanson, shared a fascinating vision of the future in which cheap, replicable robots are able to do most human work, and the implications of such a possibility.
Hanson presents an idea divergent from the two he says are most prevalent in the world of artificial intelligence: either slow, ongoing developments in AI research over the coming decades, or some “grand new theory” that hasn’t yet been discovered.
“The third scenario is where we port the software that’s already in the human brain,” Hanson says.
“If we have good enough models for how each of the cell types work, we have a good enough scan of a particular brain, we have enough cheap, fast computers, then we can make a model of that particular person’s brain on those computers; and if it’s cheap enough, you could run that simulation cheaper than you could rent the human, that changes everything.”
He thinks this means “humans retire” and become completely replaced in the labor market by these emulated brains. However, he says humans “start out owning everything” and “their investments double as fast as the economy, i.e. every month.” So he thinks this means that humans who have access to wealth, and he mentions real estate in particular, will profit tremendously. He implies that those who don’t have wealth will suffer.
This parallels a lot of the discussions we usually have at EarthSharing about the need to fairly share the fruits of nature, so that we can all benefit from technological progress. Even these far-future forecasts aren’t, ultimately, so different from ages past. In the Gilded Age, we had industrialists profiting enormously off resource wealth and land during a time of rapid technological growth.
What this discussion shows is that no amount of technology can be relied upon for solving the problems of political economy. Poverty, in particular, cannot be solved without economic justice.
This past July, Earth Sharing organized an event in Oakland, California entitled BIL Oakland 2016: The Recession Generation. The aim was to help millennials navigate the uncertainties of economic life in the aftermath of the financial crisis. One of the speakers at the event was Kim-Mai Cutler, a technology reporter and columnist for TechCrunch, best known for her work on the intersection of technology and culture in the Bay Area. Cutler has worked for Bloomberg, VentureBeat, and the Wall Street Journal. In the talk below, she discusses what history can teach us about the Bay Area housing crisis.
Special thanks to Robert Schalkenbach Foundation, BIL, Cohousing California, The Henry George School, Edward Miller, Frank Ortiz, Alex Wagner Lough, Raines Cohen, David Giesen, Alodia Arnold, Christine Peterson, Christy Fair, Patricia Mikelson, Betsy Morris, Nate Blair, and all of the speakers and amazing volunteers! If you don’t see your name added here or at the end of the video, we apologize. Please send us a note and we will add you. We just wanted to release the video as quickly as possible.
As the wind power industry grows, those 100-foot pinwheels are becoming an increasingly familiar part of the landscape. They could soon, however, be a thing of the past. Vortex Bladeless, a Spanish company, is proposing a radical new way to generate energy from the wind. The bladeless turbines, elongated upside-down cones that they say look like “asparagus,” not only look completely different from conventional turbines but harness wind energy in an innovative way.
The basic idea of the Vortex is similar to that of conventional wind turbines: use the kinetic energy of air currents to generate electricity. This new invention, however, achieves this through an altogether different mechanism. Instead of the rotation of propellers, the Vortex uses “vorticity,” the aerodynamic effect that creates a pattern of spinning vortices when wind breaks against a solid structure. When wind is strong enough, vorticity causes an oscillating motion in the structures it encounters. Engineers and architects have been battling this effect for ages, working to design buildings and other structures that resist these wind whirlpools, which caused the collapse of the Tacoma Narrows Bridge in 1940.
Studies show that the Vortex captures thirty percent less wind power than the conventional design. However, twice as many Vortexes as propeller turbines can fit into the same space, which means a net 40% greater ratio of energy production to land area. Incidentally, using land more efficiently is perhaps the most important way to protect the environment. The Vortex has no bolts, gears, or mechanical moving parts, making it 80 percent cheaper to maintain than conventional turbines. It’s also about 40 percent cheaper to install, with manufacturing costs at about 53 percent less. The Vortex is silent and, without spinning blades to fly into, safer for birds.
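The land-use arithmetic is worth spelling out. A quick sanity check, using only the article’s own figures of 30 percent less capture and double the density:

```python
# Each Vortex captures 30% less energy than a propeller turbine,
# but two Vortexes fit in the footprint of one propeller turbine.
per_unit = 1.0 - 0.30      # relative output of one Vortex
density = 2.0              # Vortexes per propeller-turbine footprint

energy_per_area = per_unit * density   # 0.7 * 2 = 1.4
print(f"{(energy_per_area - 1.0) * 100:.0f}% more energy per unit of land")
```

Two turbines at 70 percent output each yield 1.4 times the energy per unit of land, which matches the article’s 40 percent figure.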
The technology is still in development. The company has started a crowdfunding campaign with a goal of 5,000 backers and $50,000, and has raised a million dollars of government funding and private capital in Spain. They are now looking to the United States for more funding. The Vortex Mini, which stands 41 feet tall and can capture forty percent of the power of the wind when conditions are perfect (blowing at about 26 mph), is scheduled for launch for residential use in developing countries in 2016. The 490 ft commercial Vortex Grand, with a generating capacity of 1 MW (enough to power 400 homes), is scheduled to hit the market in 2018.
At Earth Sharing, we know it is important to encourage similar efforts to generate clean power. Systemically, this could be achieved through high taxes on oil, coal, land, other natural resources and the pollution produced in consuming them. Simply making harmful activities more costly through taxes while eliminating taxes on what is needed to produce clean energy (labor, research, sales of hardware, etc.) would foster an entrepreneurial environment more conducive to innovation and, in so doing, align corporate financial interests with protecting the environment.
The Internet is, like, the coolest thing ever. My kids, aged 17 and 14, can’t conceive of life without it. Back in the day, it used to be called “The Information Superhighway” — but it’s more than that now. It’s become almost a sort of worldwide collective mind, connecting us in ways that evolve faster than they can be interpreted. Back in 1990, I organized a free public seminar, an introduction to the Internet. It was held in a room that seated 50 people, and about 150 showed up. People stayed to stand in the hallway, almost entirely out of earshot of the speakers, trying to glean whatever they could. We all want to be connected. Perhaps we all need to be connected.

How It All Started

The Internet started out as, arguably, the single most important by-product of US military spending: the ARPAnet, whose original mission was to provide an invulnerable command-and-control network. The basic idea was to break messages up into packets, each carrying instructions on how to reassemble them at their destination. These packets would be sent out into the network, using whatever pathway was open. Thus began a network that could still function even if big chunks of it (say, the Washington, DC and New York metro areas) were vaporized in a nuclear war. Such a network would carry digital messages — and it began to dawn on us that any old thing — be it music, books, photos, cartoons of the Prophet, video games — can be poured into an electronic tube in the form of ones and zeros, and decoded at the other end. The most neato thing of all, the thing that gave the Internet its nerd-heroic revolutionary ethos, is that it was participatory. Essentially, every user of the Internet would have equal access to every other user — and to a significant extent this remains true, even in these days of massive mass media. If you have a cell phone and a Net connection, you can report the breaking news.
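The packet idea described above is easy to sketch. Here is a toy illustration in Python; it is not any real protocol, and the function names are invented:

```python
import random

def packetize(message: str, size: int):
    """Break a message into (offset, payload) packets of at most `size` chars."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Sort packets by offset and rejoin the payloads at the destination."""
    return "".join(payload for _, payload in sorted(packets))

packets = packetize("Mr. Watson, come here.", size=4)
random.shuffle(packets)   # the network may deliver packets in any order
assert reassemble(packets) == "Mr. Watson, come here."
```

Each packet carries just enough bookkeeping (here, its offset in the message) for the receiver to put the pieces back together, no matter what path each packet took through the network.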
And, if you’re creative, savvy and lucky, it’s possible, with a very low initial investment, to get your Web content up in front of millions of viewers. This has been a boon to advocates and activists of all kinds — and a few notably successful entrepreneurs.

We’re All Content Providers

The Internet companies that have made it biggest have been those who have found the best ways to leverage their users’ input. Google sells advertisements whose effectiveness is maximized automatically by association with the things people choose to search for. Ebay monetizes the crap in everyone’s basement by letting people present it, for free, to those who want to buy it. And Facebook! I often look at Facebook, over morning coffee, and wonder what the heck it’s good for — but it’s amazingly good at what it does. Facebook takes the genius of Google and Ebay a step further: not only does it expertly remind you about the stuff you’ve thought about, looked at or purchased — it does so in the context of the world’s favorite time-wasting hangout. I would not be surprised if a study were to show that Facebook users exist in some sort of hyper-relaxed hypnotic state: Like… yes, and share… All of these incredibly successful Internet firms rely on their users to be content providers. Yet, notwithstanding the amazing variety of cool stuff you can do with the World Wide Web, in physical terms it is just a way of transferring digital files from one computer to another. You can dump coded 1’s and 0’s into many kinds of pipe — and the pipe you want is the one that can reach as many users as possible. Initially, this was the telephone system, with its universal service, as mandated in the US by the Communications Act of 1934. Among many other provisions, this law designated telephone companies as Common Carriers.
This meant that they had no responsibility or liability for the information their lines carried, and that they could neither refuse nor discriminate against any caller because of anything said over the phone. As you would expect, Internet Service Providers (ISPs) initially had every incentive to act as common carriers. It was the textbook example of what economists call a “network externality” — the more ideas, innovations, philosophy and porn its users provided, the more people would want to use the Internet. This didn’t tend to overload the information-carrying capacity (the bandwidth) of the phone lines, because in the beginning, the Net transmitted information in the form of text. People accessed the Net using dial-up modems (the ones that made the weird skritchy noises when they connected); the fastest ones pulled in 56K bits per second. Right now I am using a DSL Internet connection, whose speed is on the low end of what is currently called “broadband.” My wife is downstairs watching a streaming video, and my laptop just recorded a download speed of 3.9 M bits per second — in other words, about 69 times faster than the old dialup days. Back then, we thought the Internet was way cool and full of potential, but it wasn’t a pop-culture thing. It had a learning curve, and a lingo of its own, and this gave rise to a culture of proud geekery. Nerdiness slowly became hip. We also thought that the day of streaming video on demand was about as far in the future as Star Trek’s live-streaming of human beings.

Moore’s Law Marches On

Internet Culture, however, was on a collision course with the Net’s emergence as a pop phenomenon. Little by little, it got easier to use. There was no stopping it: text-based interfaces gave way to graphical browsers (which were given away free). Online commerce boomed, following the lead of Jeff Bezos, who shipped Amazon.com’s first book from his garage in 1995.
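The “69 times faster” comparison earlier in this section is simple division, using the 56 kbit/s dial-up and 3.9 Mbit/s DSL figures from the text:

```python
dialup_bps = 56_000        # fastest dial-up modems, in bits per second
dsl_bps = 3_900_000        # the author's measured DSL download speed

print(f"{dsl_bps / dialup_bps:.1f}x faster")   # 69.6x faster
```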
Over the last fifteen years the Net has changed the way just about everyone does business. And, the list of feasible online wonders keeps expanding, to the tune of this crazy little thing called Moore’s Law. Intel pioneer Gordon Moore articulated the principle that sheer data-processing power tends to double every 18-24 months. This has held true for over three decades. While the laws of quantum mechanics prohibit this process from going on forever, predictions of when the Moore’s Law Curve would flatten out have repeatedly been pushed into the future. “In 1976,” writes Jonathan Strickland, “the Cray-1 was state-of-the-art: it could process 160 million floating-point operations per second (flops) and had 8 megabytes (MB) of memory.” The laptop on which I’m typing these words has an Intel i7 processor that can process 113 billion flops, and has exactly 1,000 times the memory capacity of the ’76 Cray. Things have gotten way faster. It may never be possible to store entire human beings in computer memory (Star Trek’s transporter is the ultimate, I guess, in Cloud Computing) — but I can now watch Star Trek on my laptop anytime I want, even at the relatively pokey download speeds available in rural Maine. The Internet has entered the era of streaming video — and that is what has made the issue of “Net Neutrality” so huge. Streaming video uses a tremendous amount of bandwidth. Netflix and YouTube alone account for more than 47% of the overall downstream bandwidth use in the US today. Net Neutrality is the principle that ISPs should be “common carriers.” The so-called “last mile” providers, who own the wires that bring the data to your home, enjoy a monopoly. According to Net Neutrality advocates, they have no business discriminating against any of the data coming through those wires. People get very emotional about this (I think the wonderful Vi Hart offers the most listenable explanation, but John Oliver’s excellent rant is a must-see, too). 
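Moore’s Law, as stated above, compounds exponentially, which is why the numbers grow so startlingly. A minimal sketch, assuming a 20-month doubling period as a middle value of the 18-24 months the text cites (the function name and the specific period are illustrative choices, not anything from the source):

```python
def moore_multiplier(years, doubling_months=20.0):
    """Projected growth factor after `years`, doubling every `doubling_months`."""
    return 2 ** (12 * years / doubling_months)

# Three decades of doubling every 20 months compounds to 2**18.
print(f"after 30 years: ~{moore_multiplier(30):,.0f}x")   # ~262,144x
```

An 18-month period would compound even faster, a 24-month period more slowly; either way, the curve dwarfs any linear trend within a decade or two.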
The Internet’s character as a wide open frontier, with equal access for everybody, is what made it such a fertile ground for innovation and creativity. If we allow ISPs to pick and choose the data they transmit to us, we’re on a slippery slope. Big money will pay for big pipes. The Internet gave normal folks a seat at the Grownup Media Table; now Big Cable wants to take it all away. The case for Net Neutrality seems perfectly obvious — and that is how advocates present it: as a simple standoff between We the People and the forces of Corporate Privilege.

Cui Bono?

Network Neutrality started becoming widely debated after certain bandwidth-hogging services became popular. (Before that, it wasn’t a front-page issue, because Net Neutrality wasn’t widely perceived as threatened.) First it was peer-to-peer file-sharing by services such as BitTorrent (including lots of illegal copies of copyrighted TV shows and films). A 2007 lawsuit against Comcast, the nation’s largest cable company, forced it to stop blocking BitTorrent. Recently, controversy ensued after Comcast slowed down Netflix service to its subscribers. The dispute was settled this past February when Netflix agreed to pay Comcast for faster, more reliable service. This agreement, of course, violated the principle of Net Neutrality. A few technical observations will help us to understand the issues here. Back in the days of dialup modems, many local companies competed to provide Internet access; they all had equal access to the phone lines. As demand for broadband grew, however, Internet service started to depend on privately-owned wires, of either the phone company or the cable-TV company. Because most customers have only one set of these wires available, ISPs effectively have a monopoly. The Net Neutrality debate centers around the behavior of these ISPs, which provide the vital “last mile” service to individual homes. The ISPs deliver content; they don’t provide it.
Content comes to individual users from the worldwide Internet, via the ISPs. The abandonment of Net Neutrality, we are told, will allow the establishment of a “fast lane” for providers with deep pockets. However, ISPs aren’t able to deliver content any faster than it comes to them through the worldwide Internet. ISPs cannot actually speed up data; they can only slow it down — and they contend that bandwidth-heavy services clog up their currently available capacity, slowing down service for everyone. In the early days, all users of the Internet shared the infrastructure through which Net data coursed: the Internet backbone. Today, however, there exists a “fast lane” through the Internet that has nothing to do with the last-mile providers. Large content providers such as Netflix or YouTube use content delivery network (CDN) technology, which sets up cached versions of their content on servers close to high-demand areas. This greatly speeds up the delivery of the video content to the ISP — and, it greatly increases the volume of data the ISP must handle. Some ISPs have blocked content from some CDNs; others have negotiated payment agreements. Now, if one company, by utilizing a paid CDN service, is able to get faster speeds, is that not establishing a “fast lane” and violating neutrality? Well… it’s certainly establishing a fast lane, anyway — and that is how today’s Internet works. If every packet of data were required to be treated just the same — in other words, if no proprietary way-clearing equipment were allowed — most users would get poorer service than they do now.

How Will We Get Our TV?

The key factor in all this is that only recently have on-demand movies and TV series on the Internet become commercially viable. Before that, we consumed TV shows in broadcast form — all at the same time — either via a broadcast antenna or a cable subscription, and we consumed feature films either in movie theatres or by renting the physical media.
One might ask why it’s so hard to get videos on the Internet, when we’ve been getting hundreds of TV channels through coaxial cable for decades. The difference is in the way the signal is provided. A broadcast TV show is provided via a certain frequency through a cable. It is only available at the time of broadcast. One signal can be sent to the node in, say, each apartment building, where it can be split among 1-200 subscribers. However, the consumer of a streaming video on the Internet can start the show anytime, pause it and resume it later, and simultaneously have access to the full range of sites on the Web. An Internet TV show takes up a bunch of bandwidth, which must be dedicated at that specific time to each individual user who clicks on it. That is the case for all Internet content, of course — but websites and still images take up so much less bandwidth that millions of them can bounce back and forth without degrading anyone’s service. The key to ensuring fair and innovative Internet service is competition. Under current conditions, cable or telecom companies have a monopoly on last-mile Internet service. However, there are a number of interesting developments that can, potentially, invigorate competition among Internet providers. Indeed, many commentators argue that mandating Net Neutrality rules now would stifle various forms of technological innovation, and weaken Internet service across the board. Until very recently, cable companies have been mainly in business to deliver broadcast-model cable TV via established cable networks. As demand for that service falls, they will have more incentive to devote bandwidth to Internet services. In a way, the Net Neutrality debate comes down to a conflict between two types of Big Player — the ISP, such as Comcast, and the large-scale content provider, such as Netflix — over who is going to pay for increased capacity. 
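The broadcast-versus-streaming contrast described above (one shared signal versus a dedicated flow per viewer) can be put in rough numbers. Every figure below is invented purely to show the mechanism:

```python
STREAM_MBPS = 5    # assumed bitrate of one standard-definition video stream
viewers = 200      # assumed subscribers behind one apartment-building node

broadcast_mbps = STREAM_MBPS           # one signal, split at the node
unicast_mbps = STREAM_MBPS * viewers   # one dedicated flow per viewer

print(f"broadcast: {broadcast_mbps} Mbps total; streaming: {unicast_mbps} Mbps")
```

With 200 viewers, on-demand streaming needs 200 times the bandwidth that broadcast does, which is why websites and still images never strained the pipes the way video does.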
Each wants to preserve the viability of its own business model — but in the end, the market is going to decide who wins.

Possible Sources of Competition

Folks in big cities have their zippy cable modems — but, DSL service through regular old twisted-copper telephone wires is still the most prevalent form of broadband service. New technology is under development that promises to achieve Gigabit speeds over regular phone lines (i.e., some 20x faster than my Star Trek stream). It would require step-up boxes within a quarter mile or less of the home, but current DSL systems also require local boxes, only a bit less frequent — and if the market is there, there’s a good chance the hardware will be provided. The next generation of cable technology also promises considerable improvement in download speeds: the race is on. It’s worth noting that any system that reliably streams high-quality video will have no trouble handling the less bandwidth-intensive needs of all of us lowly content providers who offer mere journalism, art, poetry, advocacy, education — content, that is, in the form of text and images. In today’s market, the cost of storing and transmitting such things has been cut, effectively, to zero. This is not to say that fabulous, as-yet-unheard-of new applications might not require considerable bandwidth. Who knows when the next Google or Facebook will show up? But when it does, it will emerge on the open Internet, just as all those other sites did — and, in today’s market, when content becomes popular enough to need extra delivery capacity, content providers can afford to buy it. Many people, of course, have ideas to share or programs to promulgate, things that are very important to them, yet have failed, thus far, to “go viral.” Is the next phase of the Internet going to pass these good people by? It’s conceivable — but it seems to me that this ship has already sailed. The Internet has been a very, very big place, for some years now.
Yet, it’s worth noting that neither ISP monopolies nor bandwidth limitations have kept anyone from viewing the video of NYC police officers choking Eric Garner. The Internet’s democratizing potential is still strong.

How About Municipal Broadband?

Finally, there might be one more way to ensure that there is healthy competition in the ISP market. Some — including, lately, President Obama — have advocated municipal investment in broadband service. This would be one way to keep the big-cable ISPs on their toes. Big Cable recognizes this, because its lobbyists have been working overtime to get states to pass laws to restrict or prohibit the practice; such laws are on the books in twenty states. Tennessee, for instance, prohibits cities from establishing municipal broadband in an “area where a privately-held cable television operator is providing cable service.” Apparently Chattanooga got in under the wire, though, because the city (pop. 171,000) has provided fiber-optic cable directly to every home in it. It accomplished this feat along with an upgrade to its municipally-owned power grid, and it was funded by a combination of Federal stimulus funds and municipal bonds. Chattanoogans can get full Gigabit service for $350 per month, but most opt for the affordable 30 Mbps service — six times faster than the national average. Chattanooga’s fiber system carries TV and telephone signals as well. Not only is it expected to start showing a profit this year, it also makes possible a slew of other money-saving innovations, such as a smart electrical grid, traffic lights that respond in real time to changes in traffic patterns, and vastly improved responses to outages. It’s the wave of the future, and Chattanoogans are quite happy to be surfing it.
Skeptics of the “Net Neutrality” position argue that Internet service is qualitatively different from public utilities such as highways, or electrical service (and the deregulation of wholesale electric power over the past few decades has yielded strong efficiencies). The key difference, they argue, is that Moore’s Law continues to be in effect; unfettered technological innovation will continue to yield unpredictable benefits, and should not be hindered by regulation. Everyone, however (everyone, anyway, who isn’t paid by Time Warner/Comcast) agrees that lack of competition in “last mile” Internet service hinders progress. Where will this competition come from? Well, it could come from a number of sources. Successful implementation of Gigabit DSL service, for example, would provide a strong competitor to the cable companies. Or, fiber-to-the-home could blow cable out of the water. This could be done by local governments, as in Chattanooga (and in Wilson, a town of 50,000 in Eastern North Carolina), or by private companies, as Google has been doing in Kansas City, Missouri. But, if such an infrastructure improvement would be cost-effective or even outright profitable for a city that undertakes it, it’s hard to see why a city would have to wait for the largesse of a Google. Public investment in local broadband is simply good municipal policy. If it is, and to the extent that it is, the Henry George Theorem tells us that it will fully pay for itself in higher land values. Don’t you think that cheap, reliable high-speed Internet service will move Chattanooga, Tennessee up on the list of desirable places to start a business? There’s no doubt that people prefer places with high-quality, reliable infrastructure. 
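The Henry George Theorem argument above can be illustrated with invented numbers; every figure below is hypothetical, chosen only to show the mechanism:

```python
project_cost = 50_000_000       # assumed cost of a municipal fiber build-out
land_value_uplift = 80_000_000  # assumed rise in aggregate land values it causes
capture_rate = 0.70             # assumed share of the uplift collected via LVT

revenue = land_value_uplift * capture_rate
assert revenue > project_cost   # on these numbers, the project pays for itself
print(f"captured ${revenue:,.0f} against a cost of ${project_cost:,.0f}")
```

If the infrastructure genuinely raises land values by more than it costs, a land value tax can recover the investment from the very values the investment created.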
There’s also no doubt (though this is a fact that is less widely understood) that the very best way to pay for local infrastructure is by taxes on land value — after all, it is precisely those public investments that have created that land value in the first place. Undoubtedly, Internet service is a “public utility” issue — which is why the Net Neutrality debate has been so fraught and passionate. But the answer isn’t to try to restore the Internet to a bygone era of “neutrality” that merely rations existing capacity. The answer is to let a million technological flowers bloom — and when they do, remember who rightfully owns the ground they’re growing in. So, join hands, everyone — all together now: What do we want? Municipal broadband! How do we pay for it? The land value tax! I can’t hear you! Come on — say it again now, much louder: