Some classic (non-aggregation) monopolization!
Everyone loves to talk about tech monopolies. Their acquisition sprees and obvious market power in a world with no distribution costs are better discussed in the DOJ recommendations or at the venerable Ben Thompson’s Stratechery. Instead, I want to talk about some good ole fashioned monopolizing: vertical integration down the technology stack into hardware. I want to discuss why now, why it matters, and how each of the large platforms is positioned.
The phrase “Owe the bank 500 dollars, that’s your problem. Owe the bank 500 million, that’s the bank’s problem” comes to mind for some of the tech monopolies right now. There is a shifting relationship between the largest software companies in the world and their suppliers. As the leading software companies have consumed an ever-larger portion of the compute pie, pushing forward the natural limits of hardware has become the problem of the tech companies, not just of the semiconductor companies that service them. Software ate the world so completely that the large tech companies now have to deal with the actual hardware that underlies their stack, especially as some suppliers like Intel have fallen behind.
No other companies have ever held such a concentrated share of absolute compute and sold it as a service. Even IBM at its zenith sold PCs and mainframes (and they still ran a tightly integrated stack!), not disaggregated units of compute like today’s infrastructure-as-a-service providers. As Moore’s law has broken down and AI compute demand has skyrocketed, this has become a problem at the companies themselves, and they are aware of it. This great video about open-source EDA and tooling problems (if you’re a nerd, you’ll enjoy it) starts with some interesting caveats:
However, if you look at Google’s products, our demand for compute power continues to grow substantially, frequently at exponential rates. This used to be a free ride with Moore’s law, which gave us increasing compute power to keep up with this increasing demand for computation. But that’s come to an end now, which isn’t great, so we have a lot of projects at Google trying to solve this, and I’m working on one of them. I don’t work on the most successful project we’ve had here, which is the TPU. This is an ML accelerator that has drastically increased our ability to do ML compute, and it’s interesting in that it shows that domain-specific hardware can potentially keep up with this growing demand for compute. The problem is that it takes a lot of effort to create these hardware accelerators, and while some groups at Google, like the ML people, are big enough to have dedicated teams working on dedicated hardware like the TPU, we are looking at a future where every team at Google is probably going to have to look at hardware-accelerating their workloads, especially if their demand continues to rise.
By the way, this transcript was made with https://hierogly.ph/, built by @_nd_go and me.
So clearly this is top of mind at many of the tech companies around the world. I had [ … ]