How Software Makes Hardware

It's never been easier to get started in software: creating new applications, products, companies, and even industries is more accessible than it has ever been. For the new programmer, information abounds on how to get started; even more is available to the veteran practitioner. Software engineering has experienced a leap in productivity, largely attributable to a series of modern, developer-centric programming languages and tools. A thriving open-source community provides a seemingly endless supply of production-tested, high-quality libraries and frameworks. Nearly all of our favorite software products use these free tools to one extent or another, so much so that entire websites are dedicated to tracking who uses what. Low-cost, easily accessible cloud-computing platforms can then deploy code worldwide, in some cases within minutes. We learned a bit about this phenomenon in 2011, when Marc Andreessen introduced the idea that software was eating the world:

In 2000, when my partner Ben Horowitz was CEO of the first cloud computing company, Loudcloud, the cost of a customer running a basic Internet application was approximately $150,000 a month. Running that same application today in Amazon’s cloud costs about $1,500 a month.

These costs have continued to plummet, perhaps by another order of magnitude since. This combination of factors has enabled start-ups to launch with little more than a dorm room full of friends. And there is plenty of economic incentive to start them: from time to time, companies with scarcely a dozen employees are acquired for more than $1 billion.

Hardware has not only failed to keep up; it has eroded. To the extent that entry into the chip industry is discussed at all, it tends to be in terms of how incredibly expensive it is. Sadly, these gargantuan financial commitments begin not with building a chip itself, but with merely gaining access to the requisite software tools. New entrants have correspondingly died off; where software startups abound, silicon startups are nearly impossible to find. Vertical integration has become the new entry point into hardware: only sufficiently large and successful software companies can now enter the silicon space.

How and why have the two diverged? The primary answer seems obvious: making hardware requires making physical things, and making software does not. That fundamental difference can explain a lot, but it trips over an essential fact: most hardware design is not physical at all. Nearly all chip-design effort is spent on software models of future silicon. These models are ultimately programs, written in a handful of domain-specific programming languages, and most design time is spent writing, running, and verifying their code. Hardware code traffics in the same quantities as software: bits and files flying back and forth, lines of code, and the mental models by which humans manipulate them. Counter-intuitively, chip design is just a specialized form of software design.
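
To make that concrete, here is a loose sketch in plain Python (emphatically not a real hardware language; the names and structure are our own invention) of the kind of model a chip designer writes all day: a clocked two-bit counter, stepped one cycle at a time, which is exactly the shape of work an RTL simulator automates.

```python
class Counter2Bit:
    """A software model of a 2-bit synchronous counter."""

    def __init__(self):
        self.q = 0  # the register's stored state

    def clock(self):
        """Advance one clock edge: compute the next state, update the register."""
        self.q = (self.q + 1) & 0b11  # wrap at 4, as two flip-flops would


sim = Counter2Bit()
for cycle in range(6):
    print(f"cycle {cycle}: q = {sim.q:02b}")
    sim.clock()
```

Real designs swap Python for a language like Verilog or VHDL, and two bits for billions of transistors, but the day-to-day activity of writing and simulating code is the same.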

These programs don't just travel between engineers: many "chip" companies sell them as a primary business. Probably the best-known example is ARM. While we might colloquially say things like "the iPhone uses an ARM processor" or "ARM has shipped over 100 billion chips worldwide," both belie the reality that ARM does not sell chips at all. It designs a family of CPU architectures, which it licenses to other chip companies. (Nearly all of which are also "fabless", i.e. they don't manufacture chips either.) ARM does not sell them a chip; it sells them architectural plans to use in making their own. The chip industry refers to these plans as silicon IP. It takes a few forms: piles of documentation and instructions, for one, but most centrally, a set of model-software. This code does a number of things: it emulates the CPU architecture in simulation, describes it for manufacturing, and performs all the other functions required to virtually design a larger system around the CPU.
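
What does "emulates the CPU architecture in simulation" mean in practice? Below is a heavily simplified sketch, again in Python, of an instruction-set emulator for a hypothetical two-instruction machine. The ISA, the register count, and every name here are invented for illustration; ARM's actual deliverables are far richer, production-grade models.

```python
def run(program, max_steps=100):
    """Fetch-decode-execute loop over a list of (op, dst, src) tuples."""
    regs = [0] * 4  # four general-purpose registers (a made-up machine)
    pc = 0          # program counter
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, dst, src = program[pc]  # fetch and decode
        if op == "MOVI":            # load an immediate value into a register
            regs[dst] = src
        elif op == "ADD":           # add one register into another
            regs[dst] += regs[src]
        pc += 1                     # advance to the next instruction
    return regs


# r0 = 2; r1 = 3; r0 = r0 + r1  ->  registers become [5, 3, 0, 0]
print(run([("MOVI", 0, 2), ("MOVI", 1, 3), ("ADD", 0, 1)]))
```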

Silicon IP also holds a very different economic position from its software brethren. Like everything else in hardware, it tends to be incredibly expensive. And while ARM's architectures (and those of its primary competitor of the past decade, Intel's x86) are proprietary and closed-source, the new kid on the CPU-architecture block is RISC-V, which is entirely open-source, as are several of its notable implementations. The repositories holding these implementations look a lot like what silicon-IP companies design and sell.
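
Because the RISC-V specification is public, anyone can write tooling directly against it. As a small taste, here is a Python sketch (our own toy, not drawn from any of those repositories) that decodes one real RV32I instruction, ADDI, from its published bit layout:

```python
def sign_extend(value: int, bits: int) -> int:
    """Interpret the low `bits` bits of `value` as a signed integer."""
    sign = 1 << (bits - 1)
    return (value & (sign - 1)) - (value & sign)


def decode_addi(word: int):
    """Decode a 32-bit RV32I I-type ADDI instruction word."""
    opcode = word & 0x7F           # bits [6:0]
    funct3 = (word >> 12) & 0x7    # bits [14:12]
    if opcode != 0b0010011 or funct3 != 0b000:
        raise ValueError("not an ADDI instruction")
    rd = (word >> 7) & 0x1F        # destination register, bits [11:7]
    rs1 = (word >> 15) & 0x1F      # source register, bits [19:15]
    imm = sign_extend(word >> 20, 12)  # immediate, bits [31:20]
    return rd, rs1, imm


# 0x00500093 is the encoding of `addi x1, x0, 5`.
print(decode_addi(0x00500093))  # prints (1, 0, 5)
```

A full open-source core extends this idea across the whole ISA, written in a hardware description language rather than Python.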

This process by which hardware systems are created will be the subject of Software Makes Hardware. By hardware we of course mean electronic hardware: chips, boards, and the systems which use them. (While "how software makes electronics" may have been a more accurate title, we couldn't resist the hard/soft-ware duality.) Software Makes Hardware won't focus on the fundamentals of logic, semiconductor physics, or elaborate fabrication technology; these are all well covered elsewhere. Our focus is on how the engineering happens: the languages, tools, and mental models used by engineers, and a bit on the business models which fund them. We'll dig into how this space is changing, and how it's likely to look in the future.

While we hope to introduce this space to a general (albeit highly intelligent!) audience, our primary audience will be software engineers and anyone generally interested in software engineering. We fully expect the next generation of hardware to come from software, and we mean this in several respects: the engineering, the economic models, and the companies and people themselves.

In our first article, we'll begin by introducing the programming languages of hardware.
