Image from DALL-E 2, of course.

What’s broken in commercializing science?

Mark Johnson
12 min read · Nov 27, 2023

"Scientific progress is one essential key to our security as a nation, to our better health, to more jobs, to a higher standard of living, and to our cultural progress." -Vannevar Bush, 1945

Science is the foundation of progress, the mother of abundance, and the father of possibility. Like magic, science allows us to manipulate the world in ways we didn't think possible, whether it's talking to someone on the other side of the globe, soaring into the sky like birds, or taking a pill that cures a disease.

When I was in college in the late 90s, I expected that by the time I hit 40, we'd have a cure for cancer, unlimited clean energy, universal high-speed internet, and a lunar base. Sadly, none of that occurred. Despite major achievements in areas like computation, the internet, and mobile over the past few decades, we shouldn't become complacent about the march of progress.

Thus, I sought to understand why we don't see more science commercialized. Sadly, the research on how science is commercialized is rather sparse. The Institute for Progress is doing some great research in metascience, headed up by Heidi Williams. J. Storrs Hall wrote a fantastic book, Where Is My Flying Car? Eric Gilliam has a whole Substack dedicated to the history of innovation. But overall, questions like "how well is technology being spun out of national labs?" often go unanswered.

My main thesis going into my research was that we aren't commercializing enough science out of research institutions like universities and national labs. All evidence I've seen has corroborated that thesis, though I've also uncovered many other factors: Are we funding the right research? Are we coordinating properly across institutions for moonshots? Is government funding crowding out other sources of capital? Are VCs the wrong type of capital to fund science? Why aren't entrepreneurs seeking out technology from labs to turn into products? Is society ready for new technology?

What’s gone wrong? I’ve spent the better part of a year researching these questions and aim to share my findings in this post in hopes that I’ll find kindred spirits.

The Conditions for Progress

If we zoom out, I posit there are a number of societal factors that affect scientific progress:

  1. R&D — we need to invest in foundational science
  2. Commercialization — we need to translate science into products
  3. Culture — people need to be excited about technological change
  4. Regulation — we need a permissionless regulatory environment for emerging technology

Culture and Regulation are deeply interlinked. With a permissionless regulatory environment like the explicit laissez-faire policy of the Clinton administration in the early days of the Internet, we created enormous value for humanity. In the following decades, we democratized information, got mobile phones into the hands of more people than have clean drinking water, and created artificial intelligences. Compare that to the Nuclear Regulatory Commission (NRC), which effectively halted progress on nuclear power. Instead of entering the 21st century with a network of advanced, zero-carbon nuclear reactors, America still belches out carbon from coal plants, a technology that was remarkable in the late 19th century but has lingered long past its ideal lifespan.

Though permissionless innovation is important, the effect of culture cannot be discounted. The NRC was timid about nuclear power exactly because the broader population was terrified (rightly) of nuclear war and fearful (unfortunately) of nuclear energy. Movies like The Day After (1983) and The China Syndrome (1979) were cultural lighthouses, guiding us away from the positive aspects of nuclear energy. The China Syndrome premiered just a few weeks before the Three Mile Island nuclear accident, helping to fuel ticket sales and stoke the flames of fear. The dawn of the AI age is looking much more like nuclear than the internet, with society wringing its hands in fear of AI overlords instead of imagining the incredible possibilities of unlimited intelligence.

There is likely work to be done in understanding the relationship between culture and innovation. Culture is the tail that wags the dog of government: if people are generally positive towards technology, then government will follow suit and play its proper role of maintaining rule of law and amending regulations to support new products. Marc Andreessen's Techno-Optimist Manifesto spoke to people like me who see the positive upside of technology, but it failed to recruit new people to the cause.

Cultural acceptance of technological change is an investigation for another day and for another person, as I'm hardly someone who understands cultural trends.

R&D and Commercialization are also a feedback loop. If we spend huge amounts of money on research that doesn't translate into products, then we will further our intellectual understanding of the world but won't get the fabulous benefits of that understanding. On the flip side, science that is connected to real-world problems will direct our brightest minds toward work that can radically improve society, and industry will work quickly to translate the science into utility.

So what's going on with that feedback loop? My intuition is that too much technology sits on the shelves of labs instead of benefiting society. America has the largest R&D budget of any nation, with 28% of the global total. Are we seeing society progress as fast as we might have expected?

How is R&D commercialized?

My experience is in building products from the raw materials of technology. Almost every startup I've been involved with has spun out of a research institution (Stanford, Xerox PARC, Los Alamos National Laboratory, UIUC), and I've seen the challenges of spinning something out, finding aligned capital that understands technology as well as business risk, and building a product out of raw scientific research.

Much of what follows is based on my own experience, but it has also been corroborated by those I've spoken with in academia, in tech transfer organizations, and in deep tech investment.

The regulatory environment is a burden, particularly in highly regulated markets like energy and healthcare. Were it not for the COVID-19 pandemic, we might have let the mRNA technology underlying vaccines sit in a quagmire of regulation. It shouldn't have taken a global crisis to unlock the value of science.

Even if we were able to radically improve regulation, we’d still have a massive issue in jailbreaking technologies from research institutions.

The government is deeply embedded in R&D, and this might have become a problem. The chart below shows the relative spend on basic research from the four key institutions of society, with government and education spending the bulk of the dollars. Not only has the absolute amount of government spending increased considerably, but the government's relative share now dominates. (As an aside, getting accurate numbers on science funding is extremely difficult: what counts as "basic" vs. "applied" research, for example?)

When government plays its proper role in science funding and experimentation, it can move the world forward. World War II was won by America’s ability to mobilize our production capacity. The US produced more tanks than all of the Axis powers combined and twice the production of artillery and machine guns. However, it was also won by technology: radar, sonar, missile trajectories, and, of course, The Manhattan Project. Seeing the success of science, the government took a much larger role in science funding. The Apollo Program was a response to the threat of the USSR dominating space and ushered in the space era. Dollars deployed from ARPA went into many different technologies, the most notable being the Internet (originally ARPANet). Organizations like Stanford Research Institute (SRI), which received huge amounts of government funding, created technology necessary for the PC, like the mouse and early graphical user interfaces.

The Mother of All Demos, led by Doug Engelbart at SRI in 1968, demonstrated much of our modern computing infrastructure, including networking and the mouse.

Why was the government so successful during this period in deploying R&D dollars efficiently to great effect and integrating that research into society? One potential explanation is that the vision was very clear in all of these projects. In WW2, there was a clear and present danger of the Axis powers dominating the world with a totalitarian Nazi regime. After winning the war, we needed to deal with the threat of communism. The practical focus of needing to ensure that communication was maintained even in the face of a nuclear strike is what gave us the Internet. The reality of needing to reduce the weight on the Apollo spacecraft is what drove the desire to use new-fangled integrated circuits as computers, providing an early capital infusion into the chip market.

What is the current challenge the world is facing? China is certainly a threat, but there is no consensus on exactly what that threat is or how to deal with it. Climate change technology has received trillions of dollars of funding, and all we've gotten is more efficient solar and wind, hardly technologies that unleash an abundance of clean energy and (arguably) a poor return on capital given alternatives like nuclear. The Covid pandemic could have been a rallying moment for society to radically improve medical technology. Instead, mixed signals from the NIH have divided our country further, making Covid opinions political instead of scientific. Lacking a clear vision will hamper our deployment of R&D capital.

However, that lack of vision still doesn't explain why, given billions of research dollars every year, we don't see more science entering society. The government realized this gap and passed the Bayh-Dole Act of 1980, creating a structure to allow technology to be spun out of labs. The Act allowed institutions to profit from their inventions, even though the grant money came from the government. It was well-intentioned and based on capitalist principles.

Unfortunately, that act was also the progenitor of our current “tech transfer” infrastructure.

Technology Transfer Offices (TTOs) exist at most research institutions with the intention of licensing IP developed at the institution to the outside world. My personal experience working with TTOs has been horrendous: they are typically staffed by lawyers who see their role as protecting IP rather than shepherding IP to its highest-and-best use. There are some bright spots in tech transfer, but the vast majority of TTOs should be shut down and rethought completely.

A key problem with research institutions is that the protected IP doesn't have a clear and obvious use case. There are exceptions, like spinning out drugs. Those are easy: once efficacy is proven, there are large drug companies with plug-and-play infrastructure to manufacture, distribute, and market the new drugs (not to mention deal with the regulatory burden of the FDA). Most technologies don't have such clear-cut paths. A personal example was Descartes Labs, where we spun out mathematical optimization algorithms for machine learning over complex signal data. Choosing satellite data was driven by the markets: since so many satellites were going up, our technology was well suited to analyze the enormous amounts of data coming off the hardware. That connection between the underlying technology and the market couldn't have been found by the TTO. Our team needed to go into the wild to find product-market fit.

Further, much of the technology at research institutions isn't ready for venture capital. VCs typically like to invest in businesses with clear markets and clear products that fit into established models, like software-as-a-service. VCs famously say that they don't fund "science projects," which is the majority of the IP produced by a research institution. At Bell Labs, for every William Shockley (transistor) or Claude Shannon (information theory), there were nine other researchers responsible for translating the technology into something that could be licensed externally or deployed within the AT&T network.

VC capital is averse to deep tech, since the LPs for VC funds demand certain kinds of returns. So, even though VC is technically risk capital, most VCs act like PE firms, looking to reduce the risk of their investments through financial analysis. Since SaaS metrics and software financials are well understood, most VC capital goes to software companies. Atoms are forgotten when bits are easier to invest in. I posit that we need a new capital structure for commercializing science, probably patient capital from sources like university endowments, foundations, and permanent funds.

Though there are many problems with research institutions, business also doesn't seem to be playing its proper role. The golden era of corporate research labs was in the middle of the 20th century: AT&T's Bell Labs, IBM Research, Xerox's PARC, Sarnoff, etc. Bell Labs alone is responsible for some of the most important inventions of the 20th century: the laser, information theory, the transistor, undersea communications cables, satellite communications, and cellular technology, to name a few. Not only were the inventions practical, but basic research needed to be done to enable these new technologies: Bell Labs researchers received nine Nobel Prizes. What can we say about the R&D budgets of modern corporate giants like Apple, Google, and Microsoft? Certainly DeepMind at Google has written seminal papers in AI, a big win. But the most prominent and well-funded of research labs, Microsoft Research (MSR), has produced reams of papers but few notable innovations beyond incremental improvements to Microsoft's software products.

I don't have a strong thesis yet on why corporate research seems to have stagnated after the astronomical gains of the mid-20th century. Perhaps there is a vision problem here, too. At Bell Labs, the goal of the organization was to improve "the network," and the AT&T end customers were always in mind. Even theoretical advances were important to this vision, like when Claude Shannon created the mathematics of information theory. Not only was this a huge leap forward in quantifying information (e.g., signal vs. noise), it was immediately practical in devising ways for AT&T to compress and unpack information sent over its network. There was no such thing as theory divorced from practicality at Bell Labs.

Another possibility is that corporations focus their R&D budgets on near-term, practical technologies, since government R&D funding typically focuses on longer-term bets in science. A consequence of this is a missing feedback loop between practical applications and ivory-tower science. Scientists don't have to deal with the messy world of engineering their science into something practical, and certainly don't have to worry about whether consumers or society need what they're building. In the 20th century, some of our brightest minds in physics gave us practical advances in materials and information transmission. Now, they work on abstract concepts like string theory and the Standard Model. When we split the atom, there was immediate utility. When we discovered the Higgs boson, we patted ourselves on the back because the mathematics worked out, but there was no clear utility.

Solutions?

Abundance comes from translating science into practice through engineering. Despite the US (and the broader world) spending huge sums of cash on science, we don’t see enough breakthroughs, nor do we see the breakthroughs translating into solutions at a fast enough clip. Where should we look for a solution?

I hope that my musings above have convinced you that there is something broken in the system itself. A first step might be researching the history of scientific research as an institution over the past 150 years and building a model of how it works today: Who are the players? Where does the capital come from? What's the process for moving science from labs into society?

With that current state of the world well described, we could then create a vision for a future state where scientific advances regularly advance society. This will require new research organizations, new capital structures, and a renewed role for business in scientific research.

My personal passion project over the past few years has been noodling on an idea for a science accelerator called Morton Labs, to bridge the gap between research institutions and capital markets. If universities produce IP that isn't yet fundable, we need to solve the research engineering challenge (how to derisk the technology) and the market engineering challenge (how to create sustainable businesses with products that address a market need). Such an organization would require relationships with multiple sources of IP and a pool of extremely patient and risk-tolerant capital.

However, there are lots of other ways to think about solving the problem. Organizations like the Arc Institute, funded by Patrick Collison (Stripe) and Vitalik Buterin (Ethereum), are attempting to give bold scientists six-year grants to come up with novel solutions in biology and medicine. Ben Reinhardt's Speculative Technologies attempts to replicate the successes of the DARPA model — particularly its ability to coordinate resources across multiple institutions — to fund long-term speculative research. And others, like Eric Gilliam, are dreaming up new kinds of deep tech VCs.

This post wasn’t meant to have all the answers. In fact, it’s really meant to be a beacon for others who see the problem similarly. Whether you’re a university president, a program manager at NIH or NSF, a lab researcher, a disgruntled TTO employee, or a scientist hungry to change the world… I want to talk to you.

Society has built up strong infrastructure around science over the past century: we have the labs, the scientists, and the money. If we can build a vision for how science can positively affect society, then by the time we hit 2050, the world will be radically different and radically better.

