An accessibility survey for blind users

These days, we rely on the Internet to keep us informed and in touch, yet our experience of the web is filtered through the tools we use to access it. The devices and technologies we choose, and our decisions about when we upgrade those tools, can affect how we interact with the web and with whom we are able to communicate.



In July, I attended the annual conference held by the American Council of the Blind (ACB). I was struck by something I heard from people there: their experience of the web was very different from mine, not because they were blind, but because the technology and web tools available to them were unlike the ones available to me as a sighted person. While the Internet provides many benefits to modern society, it has also created a unique set of challenges for blind and low-vision users who rely on assistive technologies. We’re committed to making Google’s products more accessible, and we believe the best way to understand the accessibility needs of our users is to listen to them.



This week, we’re announcing a survey that will help us better understand computer usage and assistive technology patterns in the blind community. Over the past three months, we’ve worked closely with the ACB to develop a survey that would give us a greater understanding of how people choose and learn about the assistive technologies they use. This survey will help us design products and tools that interact more effectively with assistive technologies currently available to the blind community, as well as improve our ability to educate users about new features in our own assistive technologies, such as ChromeVox and TalkBack.



The survey will be available through mid-September on the ACB's website and by phone. We encourage anyone with a visual impairment who relies on assistive technologies to participate; your input will help us offer products that can better suit your needs. For details, visit www.acb.org/googlesurvey.



Supporting accessibility at CSUN

This week we’ll be at the 26th annual CSUN International Technology & Persons with Disabilities Conference to talk with users and accessibility experts about how to make our products more accessible to people with disabilities. We’ll also give a talk on the current state of accessibility for our products.

We’ve been working in this space for a while, launching features such as captions on YouTube and applications such as WalkyTalky and Intersection Explorer on Android (so people can use Google Maps eyes-free), and building easy-to-navigate, accessible Google search pages that work smoothly with adaptive technologies.

We have more to do. At CSUN 2011, we’re looking forward to more insights about how to make Android, Chrome and Google Apps work better for people who rely on assistive technologies like screen readers. If you’re attending and are interested in participating in our focus groups there, please fill out our survey by 9pm PST today, Tuesday, March 15.

To see an overview of the accessibility features of our products today, visit google.com/accessibility. We're launching an updated version of this site later today to make it easier for visitors to find information on using our products, and for developers and publishers to learn how to develop accessible products on our platforms. While you’re there, please give us feedback on what we can do better to make our products more accessible.

Honoring the 20th Anniversary of the Americans with Disabilities Act

[Cross-posted on the Google Public Policy Blog]

Bending, walking, breathing, hearing, seeing and sleeping are simple things that are often taken for granted, as are thinking, learning, and communicating.

Twenty years ago today, the Americans with Disabilities Act (ADA) was signed into law. This milestone legislation prohibits discrimination against people with disabilities. It’s hard to imagine a world in which the right to participate in activities commonly enjoyed by most of the population is denied or inadequately accommodated, but that was the case before the ADA.

The efforts of the advocates who came to Washington two decades ago to rally for their civil rights have transformed so much of the modern world around us. As someone who’s worn hearing aids since I was 13, for example, I very much appreciate that most television programs and DVDs or Blu-ray discs are captioned. On my way home, I might pass through a door that I know is wide enough for a wheelchair -- because the ADA set the building codes that require it. I see service animals on the DC Metro, accessible checkout aisles at my grocery store, ramps on sidewalks, and designated parking in movie theater lots: all there because of the important provisions included in the ADA.

Whereas the ADA set legal standards for ensuring equal rights for Americans with disabilities, Google is keenly aware that technology can help all users better enjoy the world around them. From opening millions of titles of printed content to persons with visual impairments through Google Book Search, to providing ready and easy-to-use captions on YouTube, to including a built-in screenreader and text-to-speech engine in Android, to introducing new extensions on Chrome to make online text easier to read, we’re serious about honoring our mission to make the world’s information universally accessible and useful. You can keep up with our progress at google.com/accessibility.

Congratulations to all those who work to make the ADA a living, breathing reality. For all the years I’ve been working on policy in Washington, it’s still rare to see a law that has had as positive and fundamental an influence on our lives as this Act. There is still work to be done to meet the goals of the ADA, and we are committed to doing our part.

Automatic captions in YouTube

Since we first announced captions in Google Video and YouTube, we've introduced multiple caption tracks, improved search functionality and even automatic translation. Each of these features has had great personal significance to me, not only because I helped to design them, but also because I'm deaf. Today, I'm in Washington, D.C. to announce what I consider the most important and exciting milestone yet: machine-generated automatic captions.

Since the original launch of captions in our products, we’ve been happy to see growth in the number of captioned videos on our services, which now number in the hundreds of thousands. This suggests that more and more people are becoming aware of how useful captions can be. As we’ve explained in the past, captions not only help the deaf and hearing impaired, but with machine translation, they also enable people around the world to access video content in any of 51 languages. Captions can also improve search and even enable users to jump to the exact parts of the videos they're looking for.

However, like everything YouTube does, captions face a tremendous challenge of scale. Every minute, 20 hours of video are uploaded to YouTube. How can we expect every video owner to spend the time and effort necessary to add captions to their videos? Even with all of the captioning support already available on YouTube, the majority of user-generated video content online is still inaccessible to people like me.

To help address this challenge, we've combined Google's automatic speech recognition (ASR) technology with the YouTube caption system to offer automatic captions, or auto-caps for short. Auto-caps use the same voice recognition algorithms used in Google Voice to automatically generate captions for video. The captions will not always be perfect (check out the video below for an amusing example), but even when they're off, they can still be helpful, and the technology will continue to improve with time.

In addition to automatic captions, we’re also launching automatic caption timing, or auto-timing, to make it significantly easier to create captions manually. With auto-timing, you no longer need to have special expertise to create your own captions in YouTube. All you need to do is create a simple text file with all the words in the video and we’ll use Google’s ASR technology to figure out when the words are spoken and create captions for your video. This should significantly lower the barriers for video owners who want to add captions, but who don’t have the time or resources to create professional caption tracks.
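To make the idea concrete, here's a minimal sketch of what auto-timing conceptually produces. It is purely illustrative, not YouTube's actual pipeline: given words paired with hypothetical start times from a speech recognizer, it groups the words into short cues and prints a timed caption track in the common SubRip (.srt) format.

```java
import java.util.Arrays;

// Illustrative sketch only -- not YouTube's actual implementation.
// Given recognizer output as words paired with start times, group the
// words into short cues and emit a timed caption track in SubRip format.
public class AutoTimingSketch {

    // Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm
    static String srtTime(double seconds) {
        long ms = Math.round(seconds * 1000);
        return String.format("%02d:%02d:%02d,%03d",
                ms / 3_600_000, (ms / 60_000) % 60, (ms / 1000) % 60, ms % 1000);
    }

    public static void main(String[] args) {
        // Hypothetical ASR alignment: each word and its start time in seconds.
        String[] words  = {"Welcome", "to", "the", "demo", "captions", "make", "video", "searchable"};
        double[] starts = {0.0, 0.5, 0.7, 0.9, 2.2, 2.7, 3.0, 3.4};

        int wordsPerCue = 4;
        int cueNumber = 1;
        for (int i = 0; i < words.length; i += wordsPerCue) {
            int end = Math.min(i + wordsPerCue, words.length);
            // A cue ends where the next one begins; pad the final cue by a second.
            double cueEnd = (end < words.length) ? starts[end] : starts[words.length - 1] + 1.0;
            System.out.printf("%d%n%s --> %s%n%s%n%n",
                    cueNumber++, srtTime(starts[i]), srtTime(cueEnd),
                    String.join(" ", Arrays.copyOfRange(words, i, end)));
        }
    }
}
```

The key point is that the video owner supplies only the plain words; the timing comes from the recognizer.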

To learn more about how to use auto-caps and auto-timing, check out this short video and our help center article:



You should see both features available in English by the end of the week. For our initial launch, auto-caps are only visible on a handful of partner channels (list below*). Because auto-caps are not perfect, we want to make sure we get feedback from both viewers and video owners before we roll them out more broadly. Auto-timing, on the other hand, is rolling out globally for all English-language videos on YouTube. We hope to expand these features for other channels and languages in the future. Please send us your feedback to help make that happen.

Today I'm more hopeful than ever that we'll achieve our long-term goal of making videos universally accessible. Even with its flaws, I see the addition of automatic captioning as a huge step forward.

* Partners for the initial launch of auto-caps: UC Berkeley, Stanford, MIT, Yale, UCLA, Duke, UCTV, Columbia, PBS, National Geographic, Demand Media, UNSW and most Google & YouTube channels.

Update on 11/24: We've posted a full-length video of our announcement event in Washington, D.C. on YouTube, with English captions created using our new auto-timing feature.



More accessibility features in Android 1.6

From time to time, our own T.V. Raman shares his tips on how to use Google from his perspective as a technologist who cannot see — tips that sighted people, among others, may also find useful.

Android 1.6, a.k.a. Donut, the platform's most recent release, introduces accessibility features designed to make Android apps more widely usable by blind and low-vision users. In brief, Android 1.6 includes a built-in screenreader and text-to-speech (TTS) engine, which together make it possible to use most Android applications, as well as all of Android's default UI, without looking at the screen.

Android-powered devices with Android 1.6 and future software versions will include the following accessibility enhancements:
  • Text-to-Speech (TTS) is now bundled with the Android platform. The platform comes with voices for English (U.S. and U.K.), French, Italian, Spanish and German.
  • A standardized Text-to-Speech API is part of the Android SDK, enabling developers to create high-quality talking applications (see the sketch after this list).
  • Starting with Android 1.6, the Android platform includes a set of easy-to-use accessibility APIs that make it possible to create accessibility aids such as screenreaders for the blind.
  • Application authors can keep their applications usable by blind and visually impaired users by making every part of the user interface reachable via the trackball and by giving all image controls associated textual metadata.
  • Starting with Android 1.6, the Android platform comes with applications that provide spoken, auditory (non-speech sounds) and haptic (vibration) feedback. Named TalkBack, SoundBack and KickBack, these applications are available via the Settings > Accessibility menu.
  • In addition, project Eyes-Free (which includes accessibility tools such as TalkBack) provides several UI enhancements for using touch-screen input. Many of these innovations are available via Android Market and are already being heavily used. We believe these eyes-free tools will serve our users with special needs as well.
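As a taste of the Text-to-Speech API mentioned above, here is a minimal sketch of a talking Activity. Treat it as illustrative rather than production code; a real application would also check language availability and handle initialization failure.

```java
import android.app.Activity;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

// A minimal sketch of a talking application using the Android TTS API.
public class TalkingActivity extends Activity implements TextToSpeech.OnInitListener {
    private TextToSpeech tts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Engine startup is asynchronous; onInit is called when it's ready.
        tts = new TextToSpeech(this, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US);
            // QUEUE_FLUSH interrupts anything currently being spoken.
            tts.speak("Hello from Android", TextToSpeech.QUEUE_FLUSH, null);
        }
    }

    @Override
    protected void onDestroy() {
        tts.shutdown();  // release the engine's resources
        super.onDestroy();
    }
}
```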
You can turn on the accessibility features by going to Settings > Accessibility and checking the "Accessibility" box. While the web browser and browser-based applications do not yet "talk" using these enhancements, we're working on them for upcoming releases. Check out this Google Open Source Blog post for more details, and stay tuned to the eyes-free channel on YouTube for step-by-step demonstrations of configuring and using accessibility support on Android.

A new home for accessibility at Google

Information access is at the core of Google’s mission, which is why we work to make the world's content available to people with disabilities, such as blindness, visual impairment, color deficiency, deafness, hearing loss and limited dexterity. Building accessible products isn't only the right thing to do, it also opens up Google services to very significant populations of people. According to the United Nations, 650 million people live with a disability, which makes them the world's largest minority.

We regularly develop and release accessibility features and improvements. Sometimes these are snazzy new applications, like the new talking RSS reader for Android devices. Other times the changes aren't flashy but are still important, such as our recent incremental improvements to WAI-ARIA support in Google Chrome (adding support for ARIA roles and labels). We also work on more foundational research to improve customization and access for our users, such as AxsJAX, an open source framework for injecting usability enhancements into Web 2.0 applications.

We've written frequently about accessibility on our various blogs and help forums, but this information has never been easily accessible (pun intended) in one central place. This week we've launched a handy new website for Accessibility at Google to pull all our existing resources together: www.google.com/accessibility. Here you can follow the latest accessibility updates from our blogs, find resources from our help center, participate in a discussion group, or send us your feedback and feature requests. Around here, we often say, "launch early and iterate" — meaning, get something out the door, get feedback, and then improve it. In that tradition, our accessibility website is pretty simple, and we expect this site to be the first of many iterations. We're excited about the possibilities.

The thing we're most excited about is getting your feedback about Google products and services so we can make them better for the future. Take a look and let us know what you think.

Posted by Jonas Klink, Accessibility Product Manager

ARIA For Google Reader: In praise of timely information access



From time to time, our own T.V. Raman shares his tips on how to use Google from his perspective as a technologist who cannot see -- tips that sighted people, among others, may also find useful.

The advent of RSS and Atom feeds, and the creation of tools like Google Reader for efficiently consuming content feeds, have vastly increased the amount of information we access every day. From the perspective of someone who cannot see, content feeds are one of the major innovations of the century. They give me direct access to the actual content without first having to dig through a lot of boilerplate visual layout, as happens on most websites. In addition, all of this content is now available from a single page with a consistent interface.

Until now, I've enjoyed the benefits of Google Reader using a custom client. Today, we're happy to tell you that the "mainstream" Google Reader now works with off-the-shelf screenreaders, as well as Fire Vox, the self-voicing extension to Firefox. This brings the benefits of content feeds and feed readers to the vast majority of visually impaired users.

Google Reader has always had complete keyboard support. With the accessibility enhancements we've added, all user actions now produce the relevant spoken feedback via the user's adaptive technology of choice. This feedback is generated using Accessible Rich Internet Applications (WAI-ARIA), an evolving standard for enhancing the accessibility of Web 2.0 applications. WAI-ARIA is currently supported by Firefox, with support forthcoming in other browsers. This is one of the primary advantages of building on open standards.

We originally prototyped these features in Google Reader using the AxsJAX framework. After extensive testing, we've now integrated them into the mainstream product. See the related post on the Google Reader Blog for additional technical details.

Looking forward to a better-informed future for all!

Google Translation + Gmail help people communicate



From time to time, our own T.V. Raman shares his tips on how to use Google from his perspective as a technologist who cannot see -- tips that sighted people, among others, may also find useful. - Ed.


Language barriers can be a primary source of accessibility problems on the web, and automatic translation, though not perfect, provides a useful solution.

We recently made our machine translation technology accessible from within Gmail and Google Talk, giving mail and IM users instant access to translation at the point where they might most need it, e.g., when communicating with friends and colleagues around the world. If you find yourself wanting to translate a few words or a short phrase, you can IM an appropriate chat bot to obtain an immediate translation. For example, the Google translation bot for going from English to Chinese is available as en2zh@bot.talk.google.com. In general, translation bots are named using two-letter codes for the source and target languages.
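The naming convention is regular enough that deriving a bot's address from a language pair is a one-liner. The sketch below is purely illustrative (the class and method names are mine), built only on the convention described above:

```java
// Illustrative helper based on the bot-naming convention described above:
// two-letter source and target codes joined by "2", at bot.talk.google.com.
public class TranslationBots {
    static String botAddress(String source, String target) {
        return source + "2" + target + "@bot.talk.google.com";
    }

    public static void main(String[] args) {
        System.out.println(botAddress("en", "zh"));  // en2zh@bot.talk.google.com
        System.out.println(botAddress("de", "en"));  // de2en@bot.talk.google.com
    }
}
```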

Surfacing machine translation in this manner is a great example of how Web 2.0 mashups bring together useful services to create solutions that are bigger than the sum of their building blocks. I've blogged here in the past about the potential presented by web mashups for users with special needs. Using our work on AxsJAX to inject accessibility enhancements into Web applications, my officemate Charles Chen and I recently augmented Google Talk to produce appropriate spoken feedback when used with adaptive technologies such as self-voicing browsers.

The combination of machine translation, instant messaging and AxsJAX-enabled spoken output produces an interesting result that is obvious after the fact: when I use Google IM to message a translation bot, I now hear the result in the target language. This makes for a very interesting chat buddy -- one who can act as my personal interpreter!

And let's not forget the little "Translate this page" link within Google search results. Next time you find that some of the documents in your search results are non-English, try clicking that link. You'll be able to specify the source and target languages to obtain an automatically generated translation. A nice thing about the translated page is that when you follow any links from it, the newly retrieved document is automatically translated as well. So if you find an article in German that matches your query and you're an English speaker, you can translate from de|en (that's German to English, using the two-letter language codes), and as you read the translated English version, the links you follow will likewise be rendered in English.

Public transit made easy


From time to time, our own T.V. Raman shares his tips on how to use Google from his perspective as a technologist who cannot see -- tips that sighted people, among others, may also find useful. - Ed.

A little over a year ago, I blogged about our simple textual directions as an alternative to the popular graphical Google Maps interface. Those directions help me orient myself and learn my way around. But in the interest of safety -- my own and others'! -- I choose not to drive, and rely heavily on public transportation.

Now that Maps has textual directions in place, it's easy to build on that interface and introduce innovations that are immediately useful to someone like me. Google Transit is a great example -- it helps me locate public transportation options, and does so in the text format that I need. In addition, it offers several nice features to help me plan my trip:

  • I can specify the desired departure or arrival time.
  • It will show more than one trip choice, allowing some flexibility with respect to when I'd like to start.
  • It estimates the amount of walking required to get to a transit stop/station.
  • It identifies the length of waiting at each transit point.
  • It estimates the comparable cost of transportation options, where available.

But these aren't the only benefits. Behind the scenes is the Google Transit Feed Specification (GTFS), an open data format that public transit agencies use to publish their schedule and route data. Several agencies already provide these public feeds. Though commuters never see GTFS directly, it opens up a wealth of possibilities for accessibility and alternative access, such as custom user interfaces and specialized route-guidance applications optimized for people with special needs.
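To show how approachable the format is, here is a small, illustrative Java sketch that lists stops from a feed's stops.txt file (stop_name, stop_lat and stop_lon are standard GTFS columns). A specialized application -- say, a talking stop announcer -- could start from data like this. For simplicity it assumes rows without embedded commas, which real feeds don't guarantee.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: list stop names and coordinates from a GTFS feed's
// stops.txt, which is a plain CSV file.
public class GtfsStops {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("stops.txt"))) {
            List<String> header = Arrays.asList(in.readLine().split(","));
            int name = header.indexOf("stop_name");  // standard GTFS columns
            int lat = header.indexOf("stop_lat");
            int lon = header.indexOf("stop_lon");
            String row;
            while ((row = in.readLine()) != null) {
                String[] fields = row.split(",");
                System.out.printf("%s (%s, %s)%n", fields[name], fields[lat], fields[lon]);
            }
        }
    }
}
```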

Though we added this alternative view to enhance the accessibility of Google Maps for blind and low-vision users, we hope everyone finds it a useful addition to their commute arsenal. So next time you use the Maps graphical interface, give its cousin, the simple textual directions, a try -- there might be times when you find yourself using it even if you can see.

And here's to ever more open data feeds from the various public transport agencies!

New Toolbar adds accessible features



Last week, version 5 of Google Toolbar for Internet Explorer launched as a public beta. This version introduces a number of exciting features, such as Toolbar settings that follow you to any computer you log into with your Google Account, improved suggestions for broken links, and important changes that make Toolbar more accessible for assistive technology users.

This release adds support for Windows Accessibility APIs (used by screen readers, etc.) and enables keyboard navigation and access. From inside a browser with Toolbar installed, the global shortcut Alt+G places your cursor in the Google Toolbar search box. If you're using a screen reader, you'll hear "Google Toolbar Search". Pressing the Tab key brings keyboard focus to the button placed immediately after the search box, and right and left arrow keys move focus between buttons. More information on keyboard access is documented in the Toolbar Help Center (query 'accessibility').

Version 5 comes as part of our ongoing efforts to enhance accessibility in our client-side and web applications, a matter of great importance to us. Personally, I see the work that went into the Toolbar as an important step forward: the product reaches a very large number of users and gives everyone quick access to a multitude of useful features through a unified UI. Adding keyboard navigation and other features that improve ease of access benefits everyone.

We look forward to making further improvements to accessibility (including the installation process) in future releases. You can download the new Google Toolbar at http://toolbar.google.com/T5.