The Billion Dollar Code and the Rapid Rise of Mapping Technology
The Billion Dollar Code is a quick and fascinating tour of the evolution of satellite image processing, among other things. In the first episode of the series, the main characters discuss the difficulty of processing imagery and displaying it seamlessly so that a person can virtually fly to any location on earth. As an academic researcher in the early 1990s, I can attest to the hurdles they faced.
When the series’ protagonists were working on Terravision software, I was using Landsat images to build a map of forests in Texas. In 1992, a single Landsat image was 500 megabytes, while hard disk drives were typically in the 40-to-100-megabyte range. I had to section each Landsat image into manageable pieces before processing them into images like those displayed in Terravision. Generating the full global imagery that Terravision required seemed impossible at the time. Kudos to Axel Schmidt for such a far-reaching vision.
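The sectioning step can be sketched in modern terms. This is purely illustrative: the original 1990s workflow used specialized image-processing software, not Python, and the tile size here is an arbitrary choice.

```python
import numpy as np

def tile_image(image, tile_size):
    """Split a 2D raster into tiles at most tile_size pixels per side."""
    rows, cols = image.shape
    tiles = []
    for r in range(0, rows, tile_size):
        for c in range(0, cols, tile_size):
            # Edge tiles may be smaller than tile_size
            tiles.append(image[r:r + tile_size, c:c + tile_size])
    return tiles

# A toy "scene" roughly the size of one Landsat band (~7000 x 7000 pixels)
scene = np.zeros((7000, 7000), dtype=np.uint8)
tiles = tile_image(scene, 1024)
print(len(tiles))  # 7 x 7 = 49 tiles, each small enough for a 1990s disk
```

The same divide-and-conquer idea, applied at planetary scale and combined with on-demand streaming of only the tiles in view, is what made seamless flythroughs feasible on ordinary hardware.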
My involvement with Google Earth started at In-Q-Tel, the Central Intelligence Agency’s venture capital firm. In-Q-Tel was charged with finding and fostering commercial technology that would meet the mission of the CIA. The Agency would provide “the problem set,” i.e., a list of intelligence gathering and analytic problems searching for a commercial off-the-shelf solution. Why a commercial solution? For decades, the Department of Defense (DoD) and the Intelligence Community (IC) relied on contractors to write and build special-purpose software and hardware. In the early 2000s, with the dot-com boom fueling the growth of startups, the Agency wanted to tap into the innovation and rapid development of technology occurring in Silicon Valley and other tech centers in the United States.
One of the challenges presented by the problem set was the provisioning and rendering of imagery. At the time, one solution was running fiber optic cables to specialized workstations that could provide the speed and bandwidth to perform the global flythroughs with imagery envisioned by Terravision. The Agency wanted a solution that could run on commodity hardware, the same personal computers that anyone could purchase from their local home electronics store.
As a senior program manager specializing in geospatial tech, one of my responsibilities was to review and vet promising geo-technology. I first met Keyhole, the company acquired by Google that became Google Earth, when they demoed EarthViewer at the In-Q-Tel office. I recall being impressed that it ran on a personal computer. Unlike Terravision, which required specialized hardware, Keyhole could run on a Windows PC and was the kind of solution that addressed the problem set. In-Q-Tel invested in Keyhole in 2003.
The last episode of the series shows the Terravision founders losing the lawsuit against Google. However, the question remains, “Did Google use Terravision’s algorithm in Google Earth?” One of the responses comes from Avi Bar-Zeev, a Keyhole founder. Bar-Zeev answers the question in the title of his blog post: “Was Google Earth Stolen? (no).” It’s not uncommon for unconnected inventors working on the same problem to arrive at similar ideas, processes, and technology. For example, Charles Darwin is credited with the theory of evolution by natural selection, but his contemporary, Alfred Russel Wallace, independently proposed the same mechanism.