set-up instructions here http://googlemobile.blogspot.com/2011/01/cloud-printing-on-go.html.
http://www.google.com/howgoogleworks | Cloud computing is the phrase for web-based software you can use anywhere you have an Internet connection. We explain …
Google unveiled its completely redesigned Google Maps product on the web at I/O 2013, and at a panel dedicated to the new Maps experience, Maps User Experience Design Lead Jonah Jones and Engineering Director for Maps on the web Yatin Chawathe took us through what went into creating it. The engineering effort behind the considerable change is prodigious.
Specifically, Jones and Chawathe took us much deeper into two of the main concepts driving the redesign of Maps: “Building A Map For Every Place” and “Explore The World.” The former has to do with customizing maps every time a user clicks on a new location, in real time and with more contextually relevant information; the latter involves bringing beautiful imagery into Maps, including direct Earth integration and 3D virtual photo tours.
In making a Maps product that is extremely adaptive both to a user’s personal input and to specific locales, Google had to rethink its approach to maps, and it looked to the way we casually share directions as a marker of a good system for surfacing relevant information. When you draw a map on a napkin, you automatically filter the information down to what matters most, and you do it with your specific audience in mind. The result is a simplified map that includes maybe a few major routes, as well as smaller roads, with a prioritization that doesn’t necessarily reflect how important a road is to the general population.
“A map drawn for you is great because it highlights aspects and things personal to you,” Jones explained, adding that there’s also nostalgic value in something like a hand-drawn map. Google wanted to be able to replicate both of these qualities, so it took an engineering approach to automate a process that’s normally human-powered.
Google didn’t want to exactly replicate the hand-drawn map, however, since that leaves out a lot of information you still want present in a modern, digital, interactive map. But it did want to subtly highlight and downplay certain map elements, bringing to the fore aspects that are useful and fading back others that aren’t as important. To do that, it took a big-data analytics approach.
First, for a specific location, the new Maps algorithm analyzes the entire set of people looking for directions in that area and highlights the routes that come up most often. Then, from that subset, it focuses in even further and weighs more vs. less important routes, based again on aggregated user data. It can see which roads are more popular, and then pop those out vs. the less important ones. Finally, the least important ones are cut away, and you’re left with something resembling the hand-drawn map.
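That popularity-weighting step can be sketched in a few lines. This is a hypothetical illustration, not Google’s actual pipeline: it assumes aggregated direction requests arrive as lists of road-segment names and simply keeps the most-requested fraction of segments.

```python
from collections import Counter

def prioritize_routes(direction_requests, keep_fraction=0.3):
    """Rank road segments by how often they appear in aggregated
    direction requests, keeping only the most popular fraction.

    direction_requests: list of routes, each a list of segment names.
    Returns the set of segments to highlight on the simplified map.
    """
    usage = Counter()
    for route in direction_requests:
        usage.update(route)
    # Keep the top fraction of segments by popularity; the rest get
    # faded back (not removed) when the map is rendered.
    ranked = [seg for seg, _ in usage.most_common()]
    cutoff = max(1, int(len(ranked) * keep_fraction))
    return set(ranked[:cutoff])

requests = [
    ["Main St", "Oak Ave", "1st St"],
    ["Main St", "Oak Ave", "2nd St"],
    ["Main St", "Elm Rd"],
]
highlighted = prioritize_routes(requests)  # "Main St" dominates the requests
```

In the toy data above, "Main St" appears in every request, so it is the segment a napkin-style map would emphasize.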
Once those are flagged, however, you could still be missing on-the-ground info about very small routes important to a specific place. Those are then targeted via a hyper-local re-labeling algorithm that addresses just the immediate surroundings, adding labels to key routes and taking them away from other locations to decrease clutter and subtly change the focus.
That then informs the UI rendering of the Map itself, which still retains the street markers for all surrounding routes. Lines along routes important to getting there are made bold and lines on less important streets are thinned out, but not removed in case some users still require that information. It’s about drawing attention and changing perspective, not eliminating something altogether.
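The rendering rule described here — bold the important routes, thin out the rest without ever removing them — might look something like the following sketch. The score range and pixel widths are assumptions for illustration, not Google’s actual values:

```python
def stroke_width(importance, max_width=4.0):
    """Map a route's importance score in [0, 1] to a line width.

    Important routes get bold strokes; minor streets are thinned
    out but never dropped below a visible minimum, so the
    information stays available to users who need it.
    """
    min_width = 0.5  # keep minor roads visible, just de-emphasized
    return round(min_width + importance * (max_width - min_width), 2)

# A highly important route gets the boldest stroke; an
# unimportant one stays faintly visible rather than vanishing.
bold = stroke_width(1.0)
faint = stroke_width(0.0)
```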
All of the above takes advantage of the immense processing power in Google’s data centers to do the whole thing in real time, every single second, for every single one of Maps’ millions of users. Yet the impact on a user’s computing requirements is minimal; Google sends even less data than it did with the previous version of Maps, keeping bandwidth requirements low.
Google’s other big addition to the new Maps experience has to do with bringing beautiful imagery to the web, in the form of both Google Earth 3D flyovers and the new virtual tours that provide an up-close-and-personal view of some prime spots. Those virtual tours also represent a massive engineering effort, one which Chawathe explained in broad strokes on stage.
The virtual tours are a crowdsourced effort, which users may not even realize they’re actively contributing to. The images are drawn from pictures uploaded to Google+, Panoramio and other sources within the Google photo sharing ecosystem.
To get from that group of photos to an actual 3D tour requires a lot more than just aggregating photos, however. Google says it can map not only where every photo in its database was taken, but can also tie each individual pixel in every image to a very specific location using its algorithm, making it much easier to stitch sets together. Once that process is complete, it’s left with a point cloud that can flesh out a region, but that’s a brute force approach, and some art is required to make it look good.
That involves filtering the photos: picking ones that show the landmark in context with its surroundings, ones that show the landmark clearly from visually pleasing angles, shots that capture architectural detail, interesting picturesque scenes in various lighting conditions and more. It picks these photos based on visual-recognition tech and on their popularity and ratings on Google properties; an image that gets a lot of +1s on Google+ will be ranked above one that has none, for example.
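A toy version of that ranking could combine a visual-quality score (standing in for the recognition tech) with capped social signals such as +1 counts. The weights and the cap here are invented for illustration:

```python
def score_photo(photo, w_visual=0.6, w_social=0.4):
    """Blend a visual-quality score in [0, 1] with a normalized
    social signal (+1 counts), so popularity boosts a photo's rank
    without letting one viral image dominate on popularity alone."""
    social = min(photo["plus_ones"], 100) / 100.0  # cap the social signal
    return w_visual * photo["visual_quality"] + w_social * social

photos = [
    {"id": "a", "visual_quality": 0.9, "plus_ones": 5},
    {"id": "b", "visual_quality": 0.7, "plus_ones": 80},
]
best = max(photos, key=score_photo)  # "b": many +1s outweigh slightly lower quality
```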
Once it has a set of top-quality pictures, it determines the order in which they should appear that makes the most sense. Even then it wouldn’t be smooth as a finished product, however, since there would be gaps, and the transitions between angles would involve a lot of bizarre warping and image artifacts that would taint the overall experience. So finally, Google’s algorithm goes back to the larger set of images and picks ones that fit nicely in the gaps. These don’t need to be the best quality, since they’re just filling out the animation.
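The gap-filling pass could be sketched like this, under the simplifying assumption that each photo carries a single camera-angle value; the real system works with full 3D poses and the point cloud described above:

```python
def fill_gaps(tour, pool, max_angle_jump=20.0):
    """Insert filler photos from the larger pool wherever consecutive
    tour photos are too far apart in camera angle, smoothing the
    transitions. Photos are dicts with an 'angle' in degrees."""
    result = [tour[0]]
    for nxt in tour[1:]:
        # Keep inserting in-between shots until the jump is small enough.
        while abs(nxt["angle"] - result[-1]["angle"]) > max_angle_jump:
            lo = min(result[-1]["angle"], nxt["angle"])
            hi = max(result[-1]["angle"], nxt["angle"])
            candidates = [p for p in pool if lo < p["angle"] < hi]
            if not candidates:
                break  # no filler available; accept the hard cut
            # Fillers needn't be top quality: just pick the shot
            # closest to the midpoint of the gap.
            mid = (result[-1]["angle"] + nxt["angle"]) / 2
            result.append(min(candidates, key=lambda p: abs(p["angle"] - mid)))
        result.append(nxt)
    return result

tour = [{"angle": 0, "quality": 0.9}, {"angle": 60, "quality": 0.8}]
pool = [{"angle": 30, "quality": 0.4}, {"angle": 90, "quality": 0.5}]
smooth = fill_gaps(tour, pool)  # a mid-angle filler bridges the 60-degree jump
```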
Jones said that what they’ve built is impressive, but it still pales in comparison to what a human artist could achieve by manually stitching together their own photo tour. He hopes to bring Google’s automated process up to the point where it’s impressive regardless of the source, and comparable with what humans are capable of on their own.
In response to a question from the audience, Chawathe also said that Google could in the future look for a way to make its 3D guided tour feature a consumer tool. It sounds like it’s not something Google is currently developing, but putting that power in the hands of Google+ users for instance might make it more of a draw for photography enthusiasts. Google already showed that it’s making efforts in that direction with the new auto-enhance and auto-awesome features it introduced for G+ at I/O.
These efforts show how Google is making use of its immense computer processing power to deliver experiences via Maps that reflect a continually changing world. It sounds like this is just the beginning for both of the projects, too, and as with every major change, we’ll probably see more refinement of these approaches as users come on board and provide more feedback.
In a letter to chief executive Larry Page, lawmakers demand answers on how the tech giant plans to protect citizens’ private data.
Amidst the dozens of sessions, weary from hours of keynote speeches, we managed to sit down in the middle of Google I/O to chat a little about what we’ve seen and what we…
It’s clear that Google had other things it could have talked about on the first day of the I/O conference. Like Google Glass.
Instead, the attendees heard more about how Google has developed new ways to turn data into services. The highlights were not some fancy hardware but the magic of Google’s APIs and algorithms, the bread and butter of what Google does.
I spent part of the afternoon talking with Rackspace’s Robert Scoble and long-time media pro Jake Ludington about the event, which had little of the raw excitement of years past, when executives talked breathlessly about Google+ or parachuted onto the top of Moscone to show off Google Glass.
I first met Scoble and Ludington in 2004. Scoble worked at Microsoft, and Ludington was a big part of Gnomedex, one of the geekiest conferences of the day. Blogs were arguably the most advanced social networks, and mobile phones were still like bricks.
My conversation with Scoble focused on the semantics, the context of the algorithms and the more nitty-gritty aspects of a keynote really meant for developers.
Robert Scoble at Google I/O
Ludington looked for the points in the keynote when the audience seemed most engaged.
Jake Ludington at Google I/O
Both Scoble and Ludington are geeks in their own way. It is the way that data can be one thing and then another that draws them to Google I/O. It’s not too much different today. In 2004 it was about using RSS feeds to read blogs. Today, Google Glass is like a reader, pulling in data to a lens that transmits it for the human mind to read. Again, it’s a new way to turn data into services.
Scoble and Ludington show that the spectacle of something like a sky diver may be fun but it’s the wonder of innovation that keeps us coming back.
Today during the Google I/O keynote, Google unveiled the new Maps for desktop. Google Maps is getting a complete overhaul, along with clever new features and a number of new capabilities. It’s the…
Google and Donald Trump have invested big money in crowdfunding projects, RocketHub and A&E are teaming up, and new CNBC reality show ‘Crowd Rules’ premieres tonight.
Just before Google I/O, Microsoft is making a big pitch for developers with a high-profile announcement about a new team that will focus on building outside interest in app development on the Azure platform.
The group, which will have a base in San Francisco, is part of the Developer and Platform Evangelism (DPE) group led by Technical Fellow John Shewchuk. As Mary Jo Foley wrote, the new developer team is part of Microsoft’s effort to be a platform provider more so than a software purveyor.
Here’s what Shewchuk wrote recently about the effort:
We’re building out the team by adding top-notch developers and evangelists from across the industry. Two recent examples: James Whittaker – a known industry disruptor and incredible speaker joins us from Bing where he has been leading the development team making Bing knowledge available programmatically – many people may know him from his viral blog post on why he left Google for Microsoft. And Patrick Chanezon just joined us from VMware where he was driving their cloud and tools developer relations – he has a ton of expertise in the open source space which will be increasingly important given our new Azure IaaS support for Linux.
Of particular note is the hiring of Chanezon, who recently left VMware to join Microsoft as its director of enterprise evangelism. In a blog post, Chanezon puts an emphasis on Microsoft’s Azure platform and its readiness. Interestingly, he says that Azure “is more open than people think.” I take that to mean that he and the developer team have some work to do in growing awareness of the Azure infrastructure.
Chanezon leaves a job at VMware where he managed developer relations for Spring and Cloud Foundry. Spring and Cloud Foundry were recently spun out into a separate company called Pivotal, which is positioning itself as a platform for data analytics and app development. Chanezon managed the Cloud Platform Advocacy Team at Google before leaving for VMware.
It’s apparent that Microsoft has built a world-class development platform, but getting people to use it has posed challenges. This is due in part to Microsoft’s past insistence that developers use Microsoft technology at every level of the stack. That attitude has shifted, as symbolized in today’s news and a series of Azure-related announcements over the past several months. Microsoft has launched new mobile features for iOS and Android development. In March it offered support for PhoneGap, Dropbox and Hadoop. Arguably the most strategic move came last month with the news of general availability of Active Directory on the Azure platform.
Still, Microsoft has lagged in attracting developer talent to the Azure platform. What it needs is not just good evangelists but a deeper ecosystem that will only come if it can build credibility in the market.
Here is something you probably didn’t see coming: Outlook.com just enabled chat interoperability with Google Talk. This new feature, which is rolling out worldwide over the next few days, allows Outlook.com users to chat with their friends on Google, just like they can already do with their Facebook friends. Given the somewhat strained relationship between Microsoft and Google, this move comes as a bit of a surprise, but it looks like Microsoft doesn’t expect any issues with this rollout.
The new chat feature will be available across a number of Outlook.com-related products, including your inbox, calendar, address book and SkyDrive, so you can chat with your friends on Google while working on a document, for example.
As Microsoft’s senior product manager for Outlook.com Dharmesh Mehta told me yesterday, Microsoft heard from its users that chat interoperability was “one of the things that was holding people back from switching from Gmail to Outlook.com.” Many of those users who did switch, he added, said that this was a feature “they missed after the switch.”
To enable Google chat in Outlook.com, users simply have to connect their accounts using Google’s standard OAuth system to give Microsoft access to their accounts. After that, they can start new chats by hovering over a Gmail user’s contact card or right from the standard chat pane.
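That account linking follows the standard OAuth 2.0 authorization-code pattern: the user is sent to Google’s consent page, and Google redirects back with a code the service can exchange for access tokens. A minimal sketch of building the consent URL — the client id and scope shown here are hypothetical, not Microsoft’s actual values:

```python
from urllib.parse import urlencode

def build_authorize_url(client_id, redirect_uri, scope):
    """Build the user-consent URL for an OAuth 2.0
    authorization-code flow against Google's published
    authorization endpoint."""
    params = {
        "response_type": "code",   # ask for an authorization code
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
    }
    return "https://accounts.google.com/o/oauth2/auth?" + urlencode(params)

url = build_authorize_url(
    client_id="outlook-example-app",           # hypothetical client id
    redirect_uri="https://example.com/cb",     # hypothetical callback
    scope="https://www.googleapis.com/auth/chat.example",  # illustrative scope
)
```

Once the user approves, the code returned to the redirect URI is exchanged server-side for tokens, which is what lets Outlook.com talk to Google on the user’s behalf.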
One thing that doesn’t currently work, though, is starting group chats that include both Gmail and Facebook users. Mehta left open the possibility that Microsoft would enable this in the future, but for now, the team hasn’t built the pieces that would allow Microsoft to pass messages between the networks.
Google is widely expected to launch updates to its own text, audio and video chat features at I/O later this week. It’s unlikely, however, that these will have any influence on the new features Microsoft announced today.