Saturday, April 20, 2013

On the coming of "the internet of things"

A year ago I would have talked about "THE INTERNET of THINGS." Now I'm more content to talk about "the internet of things." Capitalization can be important; just ask e.e. cummings. Seriously, though, whether you call it the IoT (Internet of Things) or Ubiquitous Computing or whatever, the ubiquitous future just isn't coming into focus.

I've been mulling this over for a couple of years, but it was a recent tweet from Chris Anderson that got me thinking about it in earnest.
@chr1sa is a well-known tech journalist and gadget aficionado. As Editor-in-Chief at Wired, he's been in a position to see just about every new gadget that comes down the pike. So when he starts tweeting things like this, you have to start wondering about the IoT market. Is it vaporware? Are we just still way early? Are IoT projects suffering from poor marketing? Maybe IoT is already successful and we just haven't noticed.

Before we look into the crystal ball, let's look back and find out more about ubiquitous computing...

A Brief Bit of History

Many of us first heard of ubiquitous computing by way of Mark Weiser. Mark was a brilliant guy who, in the late 80's, was thinking very deep thoughts about the relationship between people and computing machinery. Keep in mind, this is well before the web or the mobile revolution or Google or Facebook. In the late 80's computers were generally thought of as being "those beige boxes on your desk." If you were part of the computing elite, you might have a 19 kilo-baud modem to gab with people on your favorite BBS.

At a time when your average computer visionary was rambling on about how the Internet would change the world, Mark was talking about a radical new way to interact with computing machinery. In his 1991 article for Scientific American called "The Computer for the 21st Century," he leads with the statement "the most profound technologies are those that disappear."

By the mid-90's a few core concepts in "Ubiquitous" or "Calm" computing had emerged:

  • The purpose of a computer is to help you do something else.
  • The best computer is a quiet, invisible servant.
  • The more you can do by intuition the smarter you are; the computer should extend your unconscious.
  • Technology should create calm.

Despite the fact that these ideas were being spawned by some of the same smart people who brought us Ethernet and Graphical User Interfaces, they were largely ignored in the dot-com era. The industry was too busy consolidating their gains from selling Main Street incremental improvements over BBSes.

During the technology marketing intermezzo of the "dot bomb," we started hearing rumblings of a new "Internet of Things." It's easy to think that the Internet of Things is just the previous decade's Ubiquitous Computing with a slightly different marketing pitch; and if you only look at the technology, it probably is. But if you look at the business objectives, they're nearly polar opposites. (More on this later...)

The mid-2000's was a relatively difficult time for technology vendors. The Dot Com bubble soaked up a tremendous amount of venture capital; after the bubble burst, that money was lost and investors started asking tech firms serious questions like "How are you going to get money out of this? And if you say ad sales, I'm going to slap you."

Ubiquitous Computing, with its promises of changing the way people thought about computing, was a high-risk business proposition. If you did everything right, you could become "the Microsoft of Ubiquitous Computing." But it was very difficult to forecast what the market would look like in the ubiquitous future. Our experience was with selling devices you could hold and software you could see running. "Ubiquitous" promised us a world where we wouldn't notice the computer. Talking about a product that is virtually indistinguishable from the background makes marketing people very, very nervous.

It's no surprise people started talking about "the Internet of Things." You could actually wrap your brainstem around it. The IoT is about putting an IPv6 address on every small device so it can stream information to your desktop (or maybe a server app somewhere in the cloud.) The medical and logistics industries were to be revolutionized by IoT technology, or so goes the common narrative. After that, the technology will become cheap enough that we'll start putting sensors in lightbulbs, carpets, planters, car tires, and every other thing we can see. We'll be awash in environmental data; all we need do is peek into the ether and pluck out the bits we're interested in.

IoT technology has worked somewhat well in the logistics arena; bar codes on inexpensive packages, RFIDs in consumer products and Wi-Fi enabled smart-buckets on factory floors have improved supply chain automation (and presumably enhanced manufacturing efficiency along with it.) Medical sensors have gotten smaller, cheaper and more disposable in the last couple of decades. But we're a long way from the Star Trek future where the ship's computer constantly monitors your health and tells you when to take emergency meds or call a doctor.

The Unenviable Now

So we're now in a place where Chris Anderson (of all people) puzzles about the usefulness of Twine and Electric Imp. What's up with that?

Context. That's what's up with that. And narrative. And a seamless experience subsumed into our unconscious. Twine and Electric Imp are technology solutions for people who have already figured out some of their environmental computing problems. Unless you know you need temperature, pressure and humidity sensors, or need to connect your digital bathroom scale to your iPhone, they come with no use context. There is no default story you can tell the consumer that puts them in the narrative.

I hate to say it, but some of these products are solutions in search of a problem.

But it may be okay that you, or I, or Chris Anderson can't find a use for Twine or Electric Imp, as long as there is someone out there who can. Eric von Hippel's texts on innovation talk about "lead users" who identify solutions for specific problems early in a technology life-cycle. If you can't find the mass-market demand for a product, it might just be that you're not in a situation where a particular technology solves your problem.

So it may not be that there is no demand for IoT tchotchkes; it may just be that there's no well-defined mass-market demand.

Lower Case "internet of things"

There's a joke in the Artificial Intelligence community that "Artificial Intelligence is 10 years off... and has been for the last 50 years!"

Very few people now believe we'll see early ideas of Artificial Intelligence come to fruition in our lifetimes. That is to say, we probably won't have to worry about AI's like HAL-9000 going on astronaut killing sprees anytime soon 'cause we're unlikely to see an AI that can generally simulate all aspects of human cognition.

But even though we don't have intelligent robots doing all the work we want to avoid, the study of artificial intelligence has led to some wonderful technologies like natural language processing, neural networking for image stabilization and even self-driving cars.

These spin-off technologies are often called "lower case artificial intelligence" to distinguish them from the holy grail, upper-case Artificial Intelligence people simulators like HAL-9000 or William Gibson's Wintermute.

So even if we haven't seen the benefits of "The Internet of Things," maybe products like Nest, Hone and various heart rate monitors are the lower-case "internet of things."

Wednesday, April 17, 2013

Features of a Good Cloud Operating System

I'm a fan of "the Web." I drank the Web 2.0 Kool-Aid and believe in the Internet's ability to dis-intermediate, disrupt and enable.

I like "the cloud" even though I know most ASPs are probably storing my password in the clear or hashed with MD5.

I live in fear of the day Google decides to switch off Google Docs or Google Music. I worry that I sometimes depend on services that fail a year after launch.

I'm a "lead user" living on the bleeding edge of technology. Stuff *mostly* works, but I wish I could stop worrying and love the web.

I've been waiting for other people to build software that makes me less nervous, but I'm starting to get tired of waiting.

I want to build a "Web Desktop" that lets me keep my data where I want to keep it and display data the way I think it should be displayed. Here are a few ideas I'm kicking around for software I'm about to build.

Idea 1 : I decide where it's hosted

I know this isn't an answer for everyone, but I don't mind too much if I have to put up a server. Heck, I might just load the software onto a Raspberry Pi and serve it from my house.

If someone hosts the software I write and charges me a couple of bucks a month for it, I'm down with that. As long as they're trustworthy and clueful.

Idea 2 : I want to see a dashboard instead of a screen saver

When I wake up in the morning, I would love to see a Dashboard on my Roku screen. Or as my laptop's screen saver. Or whenever I hit F1.

What I put on my dashboard will be:

  1. Today's Calendar
  2. Weather info for the next two or three days (but certainly for the next twenty-four hours.)
  3. List of the top five "To Do's" (both personal and work.)
  4. Upcoming deadlines or important dates.
  5. The "Has North Korea Nuked Austin Yet" indicator.
  6. Traffic information about my commute.

The web app I'm going to build will be something like a "dashboard construction set," so you'll be able to put the stuff you like on your dashboard. I don't really track my Klout score and couldn't care less how many friend requests I have outstanding on Facebook. But that might be important to you, so I'll figure out a way to do that. And I'll build a simple widgety client thing so other people can build dashboard widgets (a rough sketch of what I mean follows below). I probably wouldn't have to do this if JavaScript widgets didn't suck on Linux (and let's not even mention how they disappeared from Windows.)
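
To make the widget idea concrete, here's a minimal sketch of the kind of contract I'm imagining between the dashboard and a widget. Every name in it (registerWidget, refreshEveryMs, the calendar feed URL) is something I just made up; it's a shape, not a spec.

```javascript
// A hypothetical widget contract for the dashboard construction set.
// None of these names are final; this is just the shape I have in mind.
var dashboard = {
  widgets: {},

  // Widget authors register a factory that knows how to render into a
  // DOM element and how often it wants to refresh itself.
  registerWidget: function (name, factory) {
    this.widgets[name] = factory;
  },

  // The dashboard owner decides which widgets appear and in what order.
  render: function (layout, container) {
    layout.forEach(function (slot) {
      var factory = this.widgets[slot.widget];
      if (!factory) { return; } // unknown widget: skip quietly
      var el = document.createElement('div');
      el.className = 'widget ' + slot.widget;
      container.appendChild(el);
      var widget = factory(el, slot.options || {});
      widget.refresh();
      if (widget.refreshEveryMs) {
        setInterval(function () { widget.refresh(); }, widget.refreshEveryMs);
      }
    }, this);
  }
};

// Example widget: today's calendar, pulled from a JSON feed I host myself.
// options.feedUrl is wherever I decide to keep my calendar data.
dashboard.registerWidget('calendar', function (el, options) {
  return {
    refreshEveryMs: 15 * 60 * 1000,
    refresh: function () {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', options.feedUrl);
      xhr.onload = function () {
        var events = JSON.parse(xhr.responseText);
        el.textContent = events.map(function (e) {
          return e.time + '  ' + e.title;
        }).join('\n');
      };
      xhr.send();
    }
  };
});

// Usage: my dashboard, one calendar widget, rendered into the page body.
dashboard.render(
  [{ widget: 'calendar', options: { feedUrl: '/data/calendar.json' } }],
  document.body
);
```

The same registration hook is what would let someone else ship a Klout widget or a Facebook friend-request counter without my having to care about either.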

It would be super-awesome if I could make it with different dashboard layouts for 1920x1080, 980x1228, 600x800 and 320x533.

Idea 3 : It's about the context, stupid!

Why is this such a difficult concept for software vendors to grok? I am more concerned with "context" than with apps or documents. "Context" is not about tools, but about tasks. "Context" has less to do with workflow artifacts and more to do with "Who am I doing something for?" It's also about understanding what I'm willing to be distracted with.

What does this mean in practical terms?

First off, I need multiple virtual desktops (or webtops, since I'm implementing this in a browser.) When I'm in the work virtual webtop, I don't want to see Facebook updates. I don't really want to see personal email alerts. I'm happy to take calls from my family, co-workers and my child's school, but that's about it. In my personal virtual webtop, I only want to see work email alerts for "important" emails.

And most importantly, I don't want to see either a grid of applications OR a grid of documents; I want to see application AND document icons.

And when I open a document in the work webtop, it should stay in the work webtop, even if I switch to the personal webtop.

To me, context is about:
  1. Things I do
  2. Things I do it to
  3. Who I do it for
  4. What I'm willing to be distracted by
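
To make that a little more concrete, here's a rough sketch of how a webtop context might decide which notifications get through. The context names, sources and sender groups are all invented for illustration; the point is that the filtering rules belong to the context, not to the individual apps.

```javascript
// Sketch of context-aware notification filtering. Everything here
// (context names, sources, sender groups) is made up to show the idea.
var contexts = {
  work: {
    allowNotification: function (n) {
      // Work webtop: no Facebook, no personal mail alerts, but always
      // let family, co-workers and the school through.
      if (n.source === 'facebook' || n.source === 'personal-email') {
        return false;
      }
      return n.source === 'work-email' ||
             ['family', 'coworker', 'school'].indexOf(n.senderGroup) >= 0;
    }
  },
  personal: {
    allowNotification: function (n) {
      // Personal webtop: only "important" work email gets through.
      if (n.source === 'work-email') { return n.priority === 'important'; }
      return true;
    }
  }
};

var currentContext = contexts.work;

function onNotification(n) {
  if (currentContext.allowNotification(n)) {
    console.log('notify: ' + n.summary); // stand-in for a real toast/alert
  }
}

onNotification({ source: 'facebook', summary: 'You have 7 pokes' });        // dropped
onNotification({ source: 'phone', senderGroup: 'family', summary: 'Mom' }); // shown
```
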
Idea 4 : No. Really. It should work everywhere.

It's unlikely I'm going to write a twenty-page document on my mobile phone. But I would like to be able to open a document, search for the word or name I just realized I misspelled, fix it and save the document back to the cloud.

If it works on the desktop, it should work on the mobile phone. If I'm stupid enough to try to edit documents on a mobile phone, I have larger problems than some product manager worrying that it will be a sub-par experience.

Idea 5 : Well-Defined APIs would be nice

In the glorious future, when we have our Internet of Things, we're going to get data from all sorts of little devices. It would be nice if they spat out data in well-defined, possibly self-describing chunks. Things they speak to should have well-defined APIs. I want my car's gas tank meter to send data to my dashboard app so it can wake me up 15 minutes early 'cause it knows I'm headed to San Francisco tomorrow and I don't have enough gas for a round trip (ergo, I have to hit the bio-diesel station on the way into the city.)
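
Here's the flavor of "self-describing chunk" I have in mind. The schema URL, field names and the fuel-level example are all assumptions on my part, not any existing standard:

```javascript
// A self-describing reading from a hypothetical fuel-level sensor. The
// payload carries enough metadata (schema, device, unit) that a dashboard
// widget can make sense of it without prior arrangement.
var reading = {
  schema: 'http://example.com/schemas/fuel-level/v1',
  device: { id: 'car-42', kind: 'fuel-level-sensor' },
  timestamp: '2013-04-17T06:30:00-07:00',
  unit: 'percent',
  value: 23
};

// The dashboard only needs to recognize the schema to act on the data.
function handleReading(r) {
  if (r.schema.indexOf('/fuel-level/') !== -1 && r.value < 25) {
    console.log('Low fuel: wake me 15 minutes early for the trip to SF.');
  }
}

handleReading(reading);
```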

Idea 6 : Apps should "play nice" with the dashboard.

To the degree it's possible, I would like widgets and UI components for apps to be bundled up and delivered to my user agent (web browser) as discrete chunks. I want to be able to set a theme (like large text, high contrast, etc.) and have that theme be honored by widgets and app UIs. Yes, I understand this means I'm going to have to have a reasonably robust and potentially non-CSS styling language. I hope I can just get away with LESS stylesheets run through handlebars templating.
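
Roughly what I mean, with invented names: the dashboard hands every widget a single theme object, and the widget promises to apply it rather than bringing its own styling along.

```javascript
// Sketch of theme propagation. In a real system these values would feed
// LESS variables through a templating pass; inline styles keep the idea
// short here.
var theme = { fontSize: 'large', contrast: 'high' };

function applyTheme(el, theme) {
  el.style.fontSize = theme.fontSize === 'large' ? '1.5em' : '1em';
  if (theme.contrast === 'high') {
    el.style.background = '#000';
    el.style.color = '#fff';
  }
}

// The dashboard would call this on every widget's root element at render
// time, so a single user preference is honored everywhere.
```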

Idea 7 : Data and code come from anywhere

Yes. I know. It's an XSS attack waiting to happen. But I'm confident I can code safely, especially with the Content Security Policy API coming down the pike. Maybe in the future we'll have a reputation service instead of an app store. Before you install an app, you can automagically check with your constellation of reputation providers. That way the devout, morally straight Mormon down the street can point to the LDS reputation service to prevent his kids from downloading the Suicide Girls calendar app, while I can add the Wicked Grounds recommendation service to my list of app reputation providers.
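
Something like the sketch below, where the reputation providers, their endpoints and the response format are entirely hypothetical:

```javascript
// Sketch: before "installing" an app, ask each of the user's chosen
// reputation providers about it. The provider URLs and the { ok: true }
// response shape are invented for illustration.
var reputationProviders = [
  'https://reputation.example.org/check',
  'https://another-provider.example.com/check'
];

function checkReputation(appUrl, done) {
  var remaining = reputationProviders.length;
  var approved = true;
  reputationProviders.forEach(function (provider) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', provider + '?app=' + encodeURIComponent(appUrl));
    xhr.onload = function () {
      var verdict = JSON.parse(xhr.responseText); // e.g. { ok: true }
      if (!verdict.ok) { approved = false; }
      if (--remaining === 0) { done(approved); }
    };
    xhr.send();
  });
}

checkReputation('https://apps.example.com/some-widget', function (ok) {
  console.log(ok ? 'Install away.' : 'One of your providers objects.');
});
```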

Idea 8 : Maybe we could have accessibility features that don't suck

JavaScript-heavy applications have a reputation for not playing well with screen readers. Maybe we could include a text-to-speech renderer in the system and convince app developers to expose data to the renderer. In the glorious future, we're bound to have web apps working on mobile phones in cars, so a voice API might not just be for the vision impaired, but also for drivers who want to work hands-free.
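
Browsers are starting to grow a speech synthesis API; where it exists, letting a widget hand its summary text to a renderer could be as simple as the sketch below (a proper fallback would still be needed for browsers that lack it):

```javascript
// Sketch: any widget can hand its summary text to a speech renderer.
// Uses the browser speechSynthesis API where available; otherwise it
// quietly does nothing (a real system would need a server-side renderer).
function speakSummary(text) {
  if (window.speechSynthesis && window.SpeechSynthesisUtterance) {
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
  }
}

speakSummary('Three meetings today. Rain expected after 4 pm.');
```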

Other features like high-contrast and large-print themes should also help the over-forty crowd.

In Conclusion...

So... I haven't started on any of this yet, so feel free to chime in. I'm mostly spitballing ideas, but I'm going to start coding tonight. I'll come back in a couple days with a pointer to a GitHub repo and a few notes. Cheers!


On Cloud User Experiences -- Maybe the Network Really Is the Computer?

John Gage is famous in Silicon Valley for coining the phrase "The Network is the Computer" way back in the 80's. Sun Microsystems took the phrase as its product management mantra a couple years later. This was a bit of a leap for most people in Sili-Valley, leading to a number of jokes at Sun's expense (my favorite was a t-shirt that read "No wait.. the network is the network, the computer is the computer. Sorry for the confusion.") Gage may eventually be proved right (though Sun might have been decades ahead of the market on this one.)

Software as a Service (SaaS) has been a viable mechanism for delivering solutions to customers for at least a decade. The number of Application Service Providers (ASPs) is growing at least as fast as the number of Independent Software Vendors (ISVs) focused on desktop systems, and if we are to believe the trade media, corporate IT managers are clamoring for more Cloud-Based solutions. ISV growth is focused on mobile platforms (almost exclusively Apple iOS and Google Android.)

But maybe there's a future for the mobile web. Maybe the idea of buying and explicitly downloading an app will go the way of the floppy disk. There are several very good economic reasons for corporate users to move to the mobile web; if ASPs can deliver half-way decent experiences via the mobile web, they have a good shot at taking a chunk of the mobile and desktop app market away from existing ISVs.

And who knows... maybe the next desktop computer you buy will run a web-oriented operating system like Google's Chrome OS or Mozilla's Firefox OS.

So what is a Cloud-Based Operating System anyway?

Ask a hundred product managers what a "cloud based" operating system is, and you'll probably get one hundred different answers. There are a lot of people who use the term to describe customized Linux-based OS distributions optimized to run a browser (and little else.) Google's Chrome OS is the exemplar of this class of distros, but it's certainly not the only one. Other people will tell you the term "Cloud OS" means a particular web application running in a browser that lets you perform typical tasks like editing documents, sorting pictures and even sending documents off to be printed.

But I think I agree with John Gage on this one; "the" computer is no longer the one on your desktop (or in your hand.) If you think of "the computer" as that thing that performs data manipulation tasks on your behalf, it's now spread out across your desktop, your phone, your tablet, your home router and any number of servers across the network. So if "the computer" is now spread across so many different systems, then "the operating system" is too.

Nomenclature fails us in this regard; in the mind of a typical user, terms like "computer" and "operating system" have evolved beyond their technical definitions. "Computer" is synonymous with "Desktop Personal Computer" or "Laptop Computer" these days. You rarely hear people refer to their smartphone or game console as "a computer." And what do most people know about operating systems? Not that they're privileged code bases which manage system resources, but that "an OS is the thing that draws windows on the screen and determines which apps you can run."

From a technical perspective, it seems wrong to call a collection of application software running on a constellation of computers an "Operating System," but from the user's point of view, that might be the most salient feature of future systems. In the future, users may stop looking for the Windows or MacOS X logos on shrink-wrap boxes and start looking for the "Salesforce Compatible Data Source" logo on various web pages.

In the future, when people talk about "the operating system," I believe they'll be talking about network APIs that let systems from different organizations safely share a user's data for the purpose of doing something useful for that user.

No, Seriously, What do I install on my PC?

In the future, I think there will be less "installing" going on. I believe you'll browse to an application's web page, click the "I Agree" button on the EULA and User Interface components will magically appear on your devices. Where we now have App Stores, I think we'll have independent app reputation databases in the future. As part of the process of "installing" a future Cloud-Based OS app, you can configure your user interfaces to check one or several of these reputation databases to ensure the app isn't malware (or that it doesn't use bad language, or that it doesn't show naked people, or ...)

People have long since stopped installing operating systems on hardware (with the exception of power users and Linux fanatics.) PC-based video games are now increasingly distributed online (see Valve's Steam or search Amazon for "software download.") Even with bandwidth, security and server costs, online distribution is much cheaper than putting DVD-ROMs in physical boxes. The only reason to distribute your software through a physical retail chain is to appeal to those people who still prefer to buy "things" in computer stores.

In the future, we may see the thin client market evolve into a "slightly thicker client" market. Thin-client class hardware might be fitted with an embedded Linux operating system which boots into a browser. The browser's home page would be set to Google Apps, Facebook, Salesforce or a "Web Desktop" like Glide or G.ho.st. Maybe Microsoft or Oracle will leverage their relationships with enterprise customers to build an "Enterprise" Web Desktop.

Or maybe Community ISPs can become relevant again by selling cheap thick clients that point at Web Desktops served by their systems. Ditto for wireless operators.

What do you install on your PC of the future? Nothing. It's all over the net.

Where does my data go? Where does it rest?

So hopefully you've been asking yourself, "Hey!? Who's got my data!?" Imagine me waving my hands as my eyes glaze over and I solemnly intone, "It's in the Cloud! Beyond the confines of physical reality!"

I would hope that at this point you're thinking about how many seemingly decent ASPs can't manage to hash your password properly. There are true risks to putting all of your data out in the cloud. Luckily for web application operators, it's easier to quantify beneficial cost savings than detrimental security risks.

There is no simple answer to "how much risk can I tolerate?" Enterprises will (no doubt) insist that sensitive corporate information be stored on their own servers or those owned by trusted third parties. Individual consumers, who may be using the web to share data with friends and family, may be more risk tolerant.

Any viable web of the future must be flexible enough to support the high-assurance requirements for enterprise customers and the cost-sensitive nature of the consumer market.

Value Added Interfaces

If you go to any typical online application today, the way you get data into the application is by direct user input or uploading files from your PC. And yes, I do mean PC. Even though it's possible to upload photographs and videos from tablets and smartphones, uploading other data files is a less than compelling user experience.

In the future, I believe we will see "Value Added Interfaces." These will be RESTful web APIs that applications use to share data on a user's behalf. For example, imagine you had a mailing list in a spreadsheet on Google Drive. Ideally, you should be able to give a Google Drive URL to a network-based printing house who will do a mail merge for you. The printing house's software queries the Google Drive URL for a CSV formatted address book and "does the right thing."
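
A sketch of the printing house's side of that exchange, written as a small Node-style script. The URL, the CSV layout and the template syntax are assumptions; a real service would also need an authorization handshake before it could read anything sensitive.

```javascript
// Sketch of a "Value Added Interface" consumer: fetch a CSV address book
// from a URL the customer supplies and run a trivial mail merge.
var https = require('https');

function mailMerge(csvUrl, template, done) {
  https.get(csvUrl, function (res) {
    res.setEncoding('utf8');
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      // Naive CSV parsing; fine for a sketch, not for production.
      var rows = body.trim().split('\n').map(function (line) {
        return line.split(',');
      });
      var header = rows.shift(); // e.g. ["name", "street", "city"]
      var letters = rows.map(function (row) {
        var record = {};
        header.forEach(function (field, i) { record[field] = row[i]; });
        return template.replace(/\{(\w+)\}/g, function (match, key) {
          return record[key] || '';
        });
      });
      done(letters);
    });
  });
}

mailMerge('https://example.com/shared/address-book.csv',
          'Dear {name}, your order is on its way to {street}, {city}.',
          function (letters) { console.log(letters.join('\n')); });
```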

I believe it will be possible to effectively secure these interfaces; even sensitive data will be available to authorized consumers. In the future, payroll companies like ADP may provide an interface directly to Intuit so your personal tax information is fed straight into the Intuit app. Or Exxon will feed details of fuel purchases directly into your corporate accounting system.

If we want to enable an "all my internet of things data is in the cloud" future, we'll need to move past our reliance on file uploads. They are a legacy of the PC era.

What will I see when I log in?

I have to admit, I don't like the term "Web Desktop." It forces a PC-era model onto the cloud-based mobile data future. I don't like the idea of a grid of applications 'cause many times it's useful to think of documents or tasks or modes of thought. Twitter is talking about expanding the types of "cards" they use to summarize information in tweets. Maybe a stack of Twitter-esque cards would be a suitable interface? Maybe a "dashboard" summarizing information and tasks of interest to you?

I don't know what you'll see on your screen when you log in in 2018. Whatever it is though, it will likely be rendered using JavaScript and HTML5.

Monday, April 8, 2013

W00T! Computer Game Camp!

I love being an adult, but there are times I'm jealous of my child. This week was one of them.

The Offspring was enrolled in the Santa Cruz Maker Factory's week-long Minecraft Camp and the one-day Intro to Game Design Course. People who know my child and me know of our family's love of all things Minecraft, so it's no surprise the Minecraft Camp was a hit. But it was the Intro to Game Design (for 9-12 year olds) course that surprised me. It was the best enrichment activity for my small one I've seen in a while.

Okay... a little bit of background. I'm not the world's best games programmer. I wrote a couple simple games in the early 80's for the Apple Lisa and Atari 520ST. I also worked at Linden Lab for a while, trying to make Second Life a better place for everyone. But I don't really consider myself a games programmer. Part of the reason is that game developers are some of the most overworked computer programmers I've ever seen. So it was with a mildly heavy heart I heard The Offspring select game development as a career at age 8. Yes, it's very cool my child wants to do something technical with an artistic bent; sad that it's a career that involves less sleep than a mother wishes for her child.

But seeing the joy that (not only) my child expressed during the Game Dev Course sorta changed my mind on this one. My kid positively lit up during the class.

Yesterday's class was taught by Joe Allington, who is himself finishing up a degree program in game design at UCSC. Joe was a great instructor, bringing a palpable love of the subject matter and excitement to the course. He used YoYo's GameMaker Studio as a platform to step the kids through game programming basics: (what we used to call) player graphics, basic game logic and simple animation. By the end of the course, the kids were adding their own animations and sounds, adding new game elements and effectively building completely new game levels. (Note: there's a free version of GameMaker Studio for Wintel and Mac at YoYo Games' web site - we downloaded it and are using it to continue hacking platformer levels.)

I give this course a thumbs up. If you have a child between 9 and 12 who likes games, check it out. (Also, I was happy to see the class was not all-male. Don't shortchange your daughters' futures by thinking they won't like or be able to handle game development. Girls can do GameDev too!)