Business Tech: Developing for Google Glass, Part 1
At the International Spectrum Conference last April, I was given the opportunity to deliver the lunchtime speech on Augmented Reality. When I got to the slide on Google Glass, the room went silent. Slowly, I reached into my pocket. Everyone was curious about Glass. They wanted to see. I pulled out my sleep mask and put it on, blindfolding myself as if for execution, and apologized that I had not made the cut in the #ifihadglass contest.
We had a good group at the conference. They laughed and forgave me. I quipped that it was the best $1,500 I never spent, since I got a good response without having to invest in Glass. Then, a few months later, I got the news: I made it into the second round of contest winners. Google would grant me the privilege of spending my $1,500 and getting Glass.
To paraphrase Tolkien: one does not simply walk in to buy Glass. You have to attend an orientation. There are only three places in the world (all in the U.S., due to legal issues, as I understand it) where you can get oriented. Lucky for me, one was in Manhattan. It was a very (forgive me, Google) Apple experience. The loft space was set up with a big open central area. The whole place was designed with the understanding that we happy few would be taking pictures while there and sharing them as our first baby steps with Glass. What was missing? Live developer training. Why? Because there are tons of resources on the Internet.
Initial Development Plan
I wanted to start with something breathtakingly practical. I won my spot by saying that I wanted to develop inventory applications for faster warehouse management. So, that's where I started. My process was simple:
- Hook up Glass to a database with Inventory data.
- Use the scanner capabilities to read product and box labels.
- Create an X-Ray effect, where looking at the labels pulled up information about the inventory.
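The lookup behind that X-Ray effect can be sketched in a few lines: a decoded label keys into inventory data, and the result becomes the overlay text. Everything here is illustrative on my part — the label format, field names, and `INVENTORY` records are placeholders, not the real schema.

```python
# Hypothetical back-end lookup for the X-Ray effect: a decoded box label
# maps to inventory data that would be overlaid in the display.
# Label format and field names are placeholders, not a real schema.

INVENTORY = {
    "BOX-00417": {"item": "M5 hex bolts", "qty": 1200, "bin": "A-14"},
    "BOX-00952": {"item": "Rubber gaskets", "qty": 340, "bin": "C-02"},
}

def xray_lookup(label):
    """Return a short overlay string for a scanned label, or a miss message."""
    rec = INVENTORY.get(label)
    if rec is None:
        return f"{label}: not found"
    return f"{rec['item']}: qty {rec['qty']}, bin {rec['bin']}"
```

The point of keeping the lookup this dumb is that all the hard work (decoding the label from a picture) happens elsewhere, which is exactly where my plan started to bend.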
What Would It Take
My first challenge was to build or borrow an inventory database. We all know that was the easy part. Then I started looking at the Glass API. The API details are publicly available on the Internet, so you can see them for yourselves. I started realizing that using Glass as a scanner was far less efficient than simply sending the picture and letting the back-end services parse out the label.
This told me a lot about the inner logic behind Glass. While I could do what I wanted, Glass is optimized to the out-of-the-box experience. When you get it, it has built-in voice commands for "Take a Picture" and "Take a Video." I'm not saying Glass is too low horsepower to do what I want. Far from it. What I am saying is that, like MultiValue, and every other technology, there are assumptions baked into the design. Swimming upstream can be very productive: The web was designed for stateless, anonymous requests and people are subverting that every day.
Wanting to have an optimal experience for the user, I started rethinking my design. I wouldn't be going directly to MultiValue with a decoded code now. I would need something in the middle to convert the picture to a number.
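That middle piece might look something like this sketch: accept the raw image bytes, hand them to a barcode decoder, and return the product number. The `decode_barcode` function is a stand-in for a real decoding library (ZBar, ZXing, or similar) — an assumption of mine, since only the shape of the flow matters here.

```python
# Sketch of the middle tier: picture in, product number out.
# decode_barcode is a placeholder for a real barcode library; here it
# recognizes a fake byte signature so the flow can be exercised end to end.

def decode_barcode(image_bytes):
    """Placeholder decoder. A real service would call a barcode library."""
    if image_bytes.startswith(b"FAKE-LABEL:"):
        return image_bytes[len(b"FAKE-LABEL:"):].decode("ascii")
    return None

def picture_to_number(image_bytes):
    """The middle-tier contract: accept a picture, return a decoded code
    (or an error) that the MultiValue back end can key on."""
    code = decode_barcode(image_bytes)
    if code is None:
        return {"ok": False, "error": "no barcode found"}
    return {"ok": True, "code": code}
```

The contract is the important part: Glass (or any camera-bearing device) only ever sends a picture and gets back a code, keeping the device side thin.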
Then I hit a brick wall. I could see how to send the picture. I could see how to return results. But the critical bridge was missing. I didn't see a way to return the results while staying in the app. The return pops you out.
Live Developer Training
The Project Glass team started offering informal, AMA-style developer training: Office Hours. It happened just as I hit my wall. I met three amazing developers, two in person and one by remote video hookup. And they set me straight: I couldn't find a way to do it because the answer is on the roadmap. Glass, as it exists today, is a series of services. The glue is coming. Everyone who is developing wants the same glue that I do.
So, here I am, at a dead end. Except, this is Part 1 of the article, so you know there's a solution. While the Glass API is positioned as the preferred way in, there is a full (well, nearly full) Android OS inside Glass. To get my app out faster, I need to rethink it as an Android app. The bonus is that an Android app can be released sooner, because it will work on the tons of devices already available for sale, the devices a huge number of people already own. And it will also work on Glass.
Even better, since I have the Glass API and a sense of the roadmap (as it applies to my plans), I can organize my code to make a future Glass-specific version easy to implement.
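One way to organize for that future — this is my own sketch, not anything from the Glass documentation — is to keep the inventory logic behind a device-neutral interface, so a phone camera today and a Glass camera tomorrow plug into the same core. All class and function names here are illustrative.

```python
# Device-neutral organization: the inventory core never knows whether the
# picture came from a phone camera or from Glass. Names are illustrative.

from abc import ABC, abstractmethod

class CaptureSource(ABC):
    @abstractmethod
    def capture(self):
        """Return raw image bytes from the device camera."""

class PhoneCamera(CaptureSource):
    def capture(self):
        return b"phone-image-bytes"   # stand-in for the Android camera API

class GlassCamera(CaptureSource):
    def capture(self):
        return b"glass-image-bytes"   # slot for a future Glass-specific path

def scan_inventory(source):
    """Shared core: the same code path regardless of device."""
    image = source.capture()
    return f"sent {len(image)} bytes to the label-decoding service"
```

Swapping in the Glass-specific capture path later then means writing one new class, not reworking the application.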
In Part 2, I'll talk about the OpenQM database I'm using for this project, the Android side of the project, and how the different Android worlds — phone, tablet, and Glass — handle my approach.