Google Crisis Response: a small team tackling big problems

This is the latest post in our series profiling entrepreneurial Googlers working on products across the company and around the world. Speed in execution is important for any Google product team, but as we learned after the recent earthquakes in both Japan and New Zealand, it’s even more critical in crisis response. This post is an inside look at the efforts of our year-old Crisis Response team, and what they’re doing to make preparedness tools available to anyone at the click of a button. -Ed.

The Google Crisis Response Team came together in 2010 after a few engineers and I realized that we needed a scalable way to make disaster-related information immediately available and useful in a crisis. Until a little over a year ago, we responded to crises with scattered 20 percent time projects, but after the Haiti earthquake in January 2010 we saw the opportunity to create a full-time team that would make critical information more accessible during disaster situations.

For us to help during a crisis, it’s vital to get things done quickly, and we’ve been able to do that as a small team within Google. Working from a standard one of our engineers had already developed, we built and launched Person Finder within 72 hours of the Haitian earthquake; after the New Zealand earthquake in February, it was live within three hours. Unfortunately, there has been an unusually high number of disasters over the last year, pushing us to learn and get even faster.
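
Person Finder’s speed owes a lot to that underlying standard: a shared record format lets different organizations exchange missing-person data without custom integration work. As a rough illustration only (loosely modeled on the People Finder Interchange Format, which defines XML namespaces and many more fields than shown here), a minimal record might be assembled like this:

```python
# Illustrative sketch of a PFIF-style person record. Field names are
# simplified; the real spec defines namespaces and many more fields.
import xml.etree.ElementTree as ET

def make_person_record(record_id, full_name, home_city):
    person = ET.Element("person")
    for tag, value in [("person_record_id", record_id),
                       ("full_name", full_name),
                       ("home_city", home_city)]:
        ET.SubElement(person, tag).text = value
    return ET.tostring(person, encoding="unicode")

record = make_person_record("example.org/person.1", "Taro Yamada", "Sendai")
print(record)
```

Because any site emitting records in the shared format can be aggregated, a repository like Person Finder can pool entries from many sources into one search.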

Within minutes of hearing about the 9.0 magnitude earthquake off the coast of Japan in March, Googlers around the world—from engineers to webmasters to product managers—started organizing a Google Crisis Response resource page with disaster-related information such as maps and satellite imagery, Person Finder, news updates and citizen videos on YouTube. In Japan, Person Finder went live within an hour of the earthquake. More than 600,000 contact entries have been made since then—more than all other disasters combined—and there have been several reports of people finding their loved ones safe. I was inspired by my colleagues’ ability to launch tools about an hour after the earthquake struck. The Tokyo office, in particular, has helped drive the rapid response, providing real-time information to teams across the globe even as aftershocks rocked the city and buildings were still swaying.


But we’re eager to find other ways of helping. Beyond these efforts focused on specific situations, we’ve worked hard this past year to organize the information that is most helpful during a crisis and make it possible for people to use that data in near real-time. Speed is everything in these situations; in our view, if people have to ask for information, it’s already too late.

So in addition to building products, we collaborate with many incredible organizations to make technology useful for responding to a crisis. For example, Random Hacks of Kindness is a collaboration between technology companies and government organizations that encourages teams around the world to create software solutions to problems that arise during a crisis. Recent “RHoKstars” have created all sorts of useful tools—from HeightCatcher, which helps identify malnourishment of children in relief camps by accurately assessing height and weight through a mobile device, to new features for Person Finder, such as email notifications, automatic translation and phonetic name matching—which have all been extremely useful in Japan. These projects present a real opportunity to improve lives by employing crowd-sourcing technology and real-time data during a crisis.
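
Phonetic name matching, for instance, works by reducing names to sound-based keys so that different spellings of the same name can still be compared. The sketch below is a toy Soundex-style key purely for illustration; Person Finder’s actual matcher is more sophisticated (it also has to handle Japanese readings, for example):

```python
# A toy phonetic key in the spirit of Soundex. This is NOT the actual
# Person Finder matcher, just an illustration of the idea: names that
# sound alike map to the same short key.
CODES = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
         **dict.fromkeys("dt", "3"), "l": "4",
         **dict.fromkeys("mn", "5"), "r": "6"}

def phonetic_key(name):
    name = name.lower()
    key = name[0].upper()          # keep the first letter verbatim
    prev = CODES.get(name[0], "")
    for ch in name[1:]:
        code = CODES.get(ch, "")
        if code and code != prev:  # skip vowels and repeated codes
            key += code
        prev = code
    return (key + "000")[:4]       # pad or truncate to four characters

phonetic_key("Robert")   # 'R163'
phonetic_key("Rupert")   # 'R163', so the two names match phonetically
```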

The sheer number of major natural disasters in 2010 and early 2011 demonstrates just how important it is for those involved in relief efforts to have real-time access to information no matter where they are. The Google Crisis Response team has worked over the past year to develop open source initiatives that encourage collaboration with larger crisis response efforts, including relief organizations, NGOs and individual volunteers. And although we’re a small team and still relatively new to the crisis response ecosystem, we hope the resources and support we receive from Google and our community partners around the world will make a difference in preparedness efforts.

20 percent time spent coding in the clouds

This is the latest post in our series profiling entrepreneurial Googlers working on products across the company and around the world—even 35,000 feet above the ground. Read how one engineering director tried Google App Engine for the first time to build an Android app—now used by nearly half a million people—during a 12-hour plane ride to Japan. -Ed.

A 12-hour plane flight may seem daunting to some, but I look at it as uninterrupted time to do what I love—code new products. My bi-monthly trips from London to Tokyo and California are how I spend my 20 percent time—what I consider my “license to innovate.” It was on a flight to Tokyo that I first built what became Chrome to Phone, an Android app and Chrome extension that allows you to instantly send content—like a webpage, map or YouTube video—from your Chrome browser to your Android device.

As an engineering director, I spend the bulk of my time managing software engineers and various projects. As a result, there’s not a lot of time to just sit at my desk and code, and my technical skills risk getting rusty. So on one of my frequent cross-continent trips, I decided to take the opportunity—and time—to brush up on my engineering skills by exploring device-to-device interaction, an area that has a lot of potential in our increasingly connected world. I’d never written a Chrome extension or used App Engine, a platform that allows developers to build web applications on the same scalable systems that power Google’s own applications and services. But rather than sleeping or reading a book, I spent my flight figuring it out. And somewhere over Belgium on my way to Japan, I had a working prototype of Chrome to Phone.
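
Conceptually, Chrome to Phone is a relay: the extension sends the current page to a server, and the phone picks it up from there. Setting aside the App Engine and Android push-messaging specifics, the core idea can be sketched with a toy in-memory queue (all names here are hypothetical, not the real Chrome to Phone API):

```python
# Minimal sketch of the "relay" idea behind Chrome to Phone: the browser
# extension POSTs a link to a server, and the phone later fetches whatever
# is queued for it. The real service runs on App Engine and notifies the
# device via Android push messaging; this toy version just keeps an
# in-memory queue per device.
from collections import defaultdict, deque

class LinkRelay:
    def __init__(self):
        self.queues = defaultdict(deque)

    def send(self, device_id, url, title=""):
        """Called by the browser extension: queue a link for a device."""
        self.queues[device_id].append({"url": url, "title": title})

    def fetch(self, device_id):
        """Called by the phone: drain and return any pending links."""
        q = self.queues[device_id]
        links = list(q)
        q.clear()
        return links

relay = LinkRelay()
relay.send("my-phone", "https://maps.google.com/?q=Tokyo", "Tokyo map")
relay.fetch("my-phone")  # one pending link: the Tokyo map
```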

A few days later, on my trip back to London, I emailed my prototype to Andy Rubin and Linus Upson, who lead the Android and Chrome engineering teams. Before my plane even landed, they’d both given the product their blessing. With a little help from a developer in Mountain View and a user interface designer back in London, we tidied things up and ultimately launched the open source code for Chrome to Phone at Google I/O just two months later.



As an engineering director, I don’t always have the time to get deeply involved in every aspect of a product launch. Chrome to Phone gave me a unique opportunity to be actively involved at the grassroots of product development at Google—from concept to launch—working directly with the legal, internationalization and consumer operations teams. With few restrictions on how I spent my time, I was able to build a prototype and launch it quickly, adding more features based on user feedback. Today, more than 475,000 people use the extension, and that number is still growing.

When you’re leaving your house to go out, you take your phone, keys and wallet. I don’t think it will be long before you just take your phone—it will contain everything that you need—and that’s our motivation to explore device-to-device interaction. In order to get there, we have engineers here in the U.K. and around the world examining the mobile space, both in their full-time roles and as 20 percent projects. There isn’t only one solution, so by encouraging engineers to work on new projects, we hope that ideas will come from all over the world—whether from a Google office or even 35,000 feet above one.

Speech technology at Google: teaching machines to talk and listen

This is the latest post in our series profiling entrepreneurial Googlers working on products across the company and around the world. Here, you’ll get a behind-the-scenes look at how one Googler built an entire R&D team around voice technology that has gone on to power products like YouTube transcriptions and Voice Search. -Ed.

When I first interviewed at Google during the summer of 2004, mobile was just making its way onto the company’s radar. My passion was speech technology, the field in which I’d already worked for 20 years. After 10 years of speech research at SRI, followed by 10 years helping build Nuance Communications, the company I co-founded in 1994, I was ready for a new challenge. I felt that mobile was an area ripe for innovation, with a need for speech technology, and destined to be a key platform for delivery of services.

During my interview, I shared my desire to pursue the mobile space and mentioned that if Google didn’t have any big plans for mobile, then I probably wouldn’t be a good fit for the company. Well, I got the job, and I started soon after, without a team or even a defined role. In classic Google fashion, I was encouraged to explore the company, learn about what various teams were working on and figure out what was needed.

After a few months, I presented an idea to senior management: build a telephone-based spoken interface to local search. Although there was a diversity of opinion at the meeting about which applications made the most sense for Google, everyone agreed that I should start building a team focused on speech technology. With help from a couple of Google colleagues who also had speech backgrounds, I began recruiting, and within a few months people were busily building our own speech recognition system.

Six years later, I’m excited by how far we’ve come and, in turn, how our long-term goals have expanded. When I started, I had to sell other teams on the value of speech technology to Google’s mission. Now, I’m constantly approached by other teams with ideas and needs for speech; the biggest challenge is scaling our effort to meet the opportunities. We’ve advanced from GOOG-411, our first speech-driven service, to Voice Search, Voice Input, Voice Actions, a Voice API for Android developers, automatic captioning of YouTube videos, automatic transcription of voicemail for Google Voice and speech-to-speech translation, among others. In the past year alone, we’ve ported our technology to more than 20 languages.



Speech technology requires an enormous amount of data to feed our statistical models and lots of computing power to train our systems—and Google is the ideal place to pursue such technical approaches. With large amounts of data, computing power and an infrastructure focused on supporting large-scale services, we’re encouraged to launch quickly and iterate based on real-time feedback.

I’ve been exploring speech technology for nearly three decades, yet I see huge potential for further innovation. We envision a comprehensive interface for voice and text communication that defies all barriers of modality and language and makes information truly universally accessible. And it’s here at Google that I think we have the best chance to make this future a reality.

Update 9:39 PM: Changed title of post to clarify that speech technology is not only used on mobile phones but also for transcription tasks like YouTube captioning and voicemail transcription. -Ed.

Dialed up: the rapid launch and growth of Click-to-Call

This post is the first in our series profiling entrepreneurial Googlers working on products across the company and around the world. Here, you’ll get an in-depth look at how one of our most successful mobile advertising features was launched by one and a half engineers in a matter of months. -Ed.

I’ll always remember my first cell phone—a big, black brick that was really only good for making calls. While technology has certainly advanced since then, I still appreciate the speed of connecting to people and businesses instantly over the phone, something I found harder and harder to do when, for example, I searched for a restaurant’s phone number to make a reservation.

So in June of 2009, a few engineers and I pooled our 20% time and worked to develop a prototype of what would eventually become Click-to-Call for smartphones, an ad unit that makes it easier for people to connect to a business by phone, rather than through a website.


Building the feature was the easy part: essentially, we developed an ad extension that allows advertisers to include a phone number or location in their campaigns. Launching it to advertisers, however, posed the biggest challenge.

With new products like Click-to-Call, we often choose to launch in beta and incrementally roll out the feature to a small subset of users, usually beginning with a 1% test and increasing from there. With Click-to-Call, we’d developed a mobile feature we wanted to launch as soon as possible, but since mobile advertising was much smaller at that point—with only about one-sixth as many search queries as we get today—we calculated that it would take nearly three years to roll out to 10% and around 10 years to actually launch. At that rate, the feature would likely become antiquated long before it ever officially launched.
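
The arithmetic behind that calculation is simple: if each rollout step needs a fixed amount of feedback data, the time spent at each step scales inversely with traffic. A back-of-the-envelope sketch with purely illustrative numbers (not Google’s real figures):

```python
# Back-of-the-envelope sketch (all numbers illustrative): when a feature
# only sees a fraction of a small traffic stream, and each rollout step
# needs a fixed amount of feedback data, the time per step balloons.
def days_to_ramp(daily_events, steps, events_needed_per_step):
    """Total days to walk through the given rollout fractions."""
    total = 0.0
    for fraction in steps:
        total += events_needed_per_step / (daily_events * fraction)
    return total

steps = [0.01, 0.02, 0.05, 0.10]  # hypothetical ramp schedule
small = days_to_ramp(daily_events=1_000, steps=steps, events_needed_per_step=500)
large = days_to_ramp(daily_events=6_000, steps=steps, events_needed_per_step=500)
small / large  # 6.0: one-sixth the traffic means six times the wait
```

Because the relationship is linear, a traffic stream one-sixth the size stretches every step of the ramp sixfold, which is why the standard schedule projected out to years.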

So I decided not to follow the usual process and took a risk, choosing to launch to 50% of Google’s mobile advertisers within the first week. In my view, there was simply no other way to collect enough feedback in a short period of time to iterate quickly. Thousands of advertisers—an unprecedented number for a brand-new feature—were on board to try it out, and with a few engineers and some pretty massive spreadsheets, we started to see real results. Within a month we had the magical ingredient—momentum—and from there we were collecting enough feedback to be on track to bring the feature to all advertisers in a matter of months.


This is one of the reasons I work at Google. Google gives me freedom to experiment, ownership of my ideas, and amazing resources and support. We built Click-to-Call in June 2009, began testing it in July, and had it up and running for all advertisers in January 2010. One year later, Click-to-Call ads on both search and the Google Display Network are generating millions of calls every month on mobile phones and driving strong performance for advertisers.

If you’re interested in exploring some of the most significant trends in mobile, you can watch our Think Mobile livestream this Thursday, February 10 at 1:05pm EST, where we’ll discuss why it’s “not too late for businesses to still be early” in this space.