Ask any iOS engineer: there is no API for extending the built-in mail app on the iPhone. If you wanted to build something like Rapportive, most people would tell you it's impossible. Yet we figured it out.
Tesla is really a battery and technology company that happens to use batteries, computers, and software to control a drive train. As such, it is not subject to the same physical and economic constraints as a traditional car maker.
So I think luck, usually in the form of timing, is the main differentiator for growing huge. However, there are several points to make here. Luck alone is certainly not enough, and most of the companies that reached massive scale had:
1. A strong product that solved a huge market problem.
2. The ability to listen to the market and maneuver accordingly.
3. A great ability to execute - building fast enough and at a sufficient level of quality.
4. A technological solution that is far from trivial, especially at scale.
The list above is what I thought before Eric's question. But Mr. Schmidt added a fifth point:
5. An inherent property of the service that pushes it to scale.
Long-term goals do not always align with short-term metrics.
There’s a stage in a product cycle where you know it’s going to ship. Where you can see the end. It’s right there, sitting at the corner of Emerson St. and University Ave. Or maybe sipping coffee at Fraiche.
Oh, hello, it waves — there. In front of you. The End. (Or, An End.) And seeing this puts you in a special space: when you think about all the work it took the team to get there, to bring that end so close, you are overwhelmed with a flood of emotions.
Some of the specific challenges of the denormalization process were:
1. Dozens of legacy data formats that evolved over years. Peter Ondruška, a Facebook summer intern, defined a custom language to concisely express our data format conversion rules and wrote a compiler to turn this into runnable PHP. Three “data archeologists” wrote the conversion rules.
2. Non-recent activity data had been moved to slow network storage. We hacked a read-only build of MySQL and deployed hundreds of servers to exert maximum IO pressure and copy this data out in weeks instead of months.
3. Massive join queries that did tons of random IO. We consolidated join tables into a tier of flash-only databases. Traditionally PHP can perform database queries on only one server at a time, so we wrote a parallelizing query proxy that allowed us to query the entire join tier in parallel.
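To make item 1 concrete, here is a minimal sketch of the idea behind compiling declarative conversion rules into runnable code. Everything here (the rule table, the legacy field names, and the `compile_rules` helper) is hypothetical; the actual tool compiled a custom language into PHP.

```python
# Hypothetical rule table: each legacy field maps to (new_name, converter).
# A real rule language would be richer, but the core idea is the same:
# rules are data, and a compiler turns them into one conversion function.
RULES = {
    "ts":  ("timestamp", int),
    "uid": ("actor_id", int),
    "txt": ("message", str.strip),
}

def compile_rules(rules):
    """Turn the declarative rule table into a single conversion function."""
    def convert(legacy_row):
        return {new: fn(legacy_row[old])
                for old, (new, fn) in rules.items()
                if old in legacy_row}
    return convert

convert = compile_rules(RULES)
convert({"ts": "1318200000", "uid": "7", "txt": " hello "})
# → {'timestamp': 1318200000, 'actor_id': 7, 'message': 'hello'}
```

Keeping the rules as data meant the three "data archeologists" could express conversions concisely without touching the compiler itself.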
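The parallelizing proxy in item 3 can be sketched as a simple fan-out: issue the same query to every shard concurrently, then merge the row lists. The shard names and the `query_shard` stub below are hypothetical stand-ins for real network calls to the flash-tier databases.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical shard names for the flash-only join tier.
SHARDS = ["join01", "join02", "join03", "join04"]

def query_shard(shard, sql):
    # Stand-in for a network round trip to one database server;
    # here it just returns a single row tagged with the shard name.
    return [(shard, sql)]

def parallel_query(sql, shards=SHARDS):
    """Fan the query out to all shards at once and merge the results."""
    with ThreadPoolExecutor(max_workers=len(shards)) as pool:
        per_shard = pool.map(lambda s: query_shard(s, sql), shards)
    return [row for rows in per_shard for row in rows]

rows = parallel_query("SELECT * FROM friend_edges WHERE user_id = 42")
# One merged list with a row from each shard, fetched concurrently.
```

The win over PHP's one-server-at-a-time model is that total latency becomes roughly the slowest single shard rather than the sum of all of them.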