I got my first chance to demo a Google Glass yesterday!
The results… well, somewhat of a letdown really.
Perhaps my expectations were a bit too high. There’s so much hype about Google Glass.
All the videos about it are so colorful and exciting. There are vivid, powerful images of people jumping out of planes, riding horses, climbing mountains.
But when I put on a Google Glass for the first time… it was really a rather bland experience.
Cool, yes, but not mind blowing cool.
Here’s what it’s like:
Donning a Google Glass, one of the first things you notice is that it’s comfortable and very lightweight. That in itself is pretty impressive really. Who imagined ten years ago that you would wear a computer on your head like sunglasses?
There is a clear, rectangular panel in the upper right of your vision that distorts your view in that direction a bit. The Googler training me on Glass said that’s something you get used to fairly quickly and stop thinking about after a while.
The part of the Glass that goes along your right temple is a touchpad. A touch to the touchpad and a clock comes into view on the upper right side of your vision. When you say “OK, Glass,” a simple text menu comes up showing what you can ask Glass to do.
I did find that I had to shift my eyes to the upper right a bit more than was comfortable to see the display well. I was probably trying too hard. My trainer said it becomes second nature with time, and I can see how it would. She said the display is designed to appear as though the graphics are about 8 feet away. They seemed much closer to me.
I went through some basic commands and Glass did a pretty good job of following them. “Take a picture” for example. It was funny though because the first images I took were not of what I was trying to point Glass at. It took just a few shots for me to realize I didn’t need to “aim” it, I just had to look directly at what I wanted to shoot (not look “through” the display) and the image lined up correctly.
Some of the commands I tried on Glass got lost in translation. However, I’ll give it a pass on that as we were in a noisy environment with lots of other people talking around us.
It is interesting to note, though, that Glass cannot be trained to your voice. My trainer said it doesn’t account for dialects or different voice patterns the way a more advanced voice recognition program can be ‘trained’ to do. You have to speak commands the way Glass expects them; it doesn’t adapt to the user.
Other than following some simple commands (taking a picture or video, simulating sending a text, since limits on the demo unit kept us from actually sending one, and some very basic web surfing), there wasn’t a whole lot I could do on this first try. It was interesting, but not a “WOW” experience.
However, I did hear some interesting stories from regular users who have grown accustomed to Glass.
One received Glass soon after being diagnosed with cancer. He was able to document his experience in a very real and sincere way, without taking out a camera or phone, holding it up, and making people aware of it. He was just wearing something that looked like glasses and could record the things he saw.
Another user talked about her experience with Google Glass and Google Maps, and how you can have directions provided to you as you walk along, pointing the way at every turn.
But what was really intriguing was what these users see as the future of Glass. Basically Glass will be a way to communicate and access information in a hands free manner. It opens up a world of possibilities, some I’m sure we haven’t even imagined yet.
Doctors will be able to perform surgery and consult with another doctor half a world away, showing that doctor exactly what they are seeing, in real time, while never taking their hands or eyes off the patient. Yes – we have video conferencing now, but this will provide the actual view the surgeon is seeing, and it gives the surgeon a way to see a monitor while keeping their eyes on the patient at the same time.
Or… sticking with the medical industry, nurses and techs could be freed from pushing around computers on trolleys to record patient care. Medical records could be transmitted to Glass as a medical professional walks into a patient’s room. Perhaps vital stats could be relayed to Glass and displayed in real time, always available with a quick glance from the user.
Alzheimer’s sufferers (well, anyone for that matter) could be prompted with reminders, or have Glass document when meds were taken.
Turning to construction… a worker could show others exactly what he is doing on a project in real time, keeping both hands on the work while sending and receiving information.
So while it’s not a really exciting product to demo one time, at least in my experience, I think there is huge potential here. All the people I spoke to who were regular users of Glass LOVED IT and operated it very easily. I think it probably takes some time to learn the system and get accustomed to it before the “WOW” happens.
I think Google Glass would benefit from a set demo program – something that could be run for first-time users to introduce them to the system. It needs to have some “wow” factors, some fireworks, some color, while challenging new users with some simple head movements and speech prompts to show just what the system can do in a fun, game-like experience. Just my two cents.
OK, so when will Glass be ready for a consumer release? And what will the price be?
No joy there, guys. Believe me, I badgered several Googlers about that. All lips were sealed.
Have you tried out Google Glass?
What did you think of it the first time you used it? If you had the opportunity to continue using it, how was that experience?
Share your comments below!