Wednesday, November 17, 2010

Boycott KLIF's Advertisers?

So I just finished listening to some of Chris Krok's broadcasts for KLIF radio in North Texas.
After a bit of outcry, he later issued an "apology" in which he defended his right to have an opinion on what is said in Ft. Worth Council meetings. His apology did not extend to the incendiary remarks he made about LGBT persons or to his antagonistic attitude toward the LGBT community. (And personally, I think his comments were pretty disrespectful of Mr. Burns, his district and of Ft. Worth.)

It's pretty clear he's using queer-baiting as a professional ploy. He moved from Chicago to Madison to Minneapolis (the latter two being relatively liberal) to Atlanta to D/FW in search of a market where he could sell his agenda of discord. The guy is clearly pandering to angry right-wing listeners, trying to boost his ratings by being "shocking." His tired schtick is grating and relatively unpleasant, but I'm comforted to know his audience amounts to only a part-time AM radio listenership. Seriously, there are probably internet podcasts with larger audiences.

Still, his whole queer-baiting act is unworthy of KLIF radio, and the guy will get dropped once the old guard that still goes for that kind of vitriol dies off. LGBT folk come from all walks of life and have significant economic clout. Sure, maybe we have more organized political muscle in San Francisco and New York than we do in Dallas, but the bottom line is this joker is making coin by disrespecting me, my family and my friends. LGBT individuals have made significant gains in mainstream acceptance in the last couple of decades, but let's not kid ourselves: there are still people who have nothing better to do than get worked up over who's sleeping together.

I think this kind of message is on the way out, but we can accelerate its departure by contacting KLIF's advertisers and letting them know we do not appreciate their support of programming that demeans us personally or advocates a political agenda that disenfranchises us. So... anyone who wants to start taking shifts listening to KLIF and noting the names of their advertisers is welcome to join me.

If you happen to find yourself listening to KLIF, tweet the names of the advertisers with the hashtag #klifads. Include contact info for the advertiser if you have it. Every couple of days, come back to Twitter, search for the hashtag and call the advertisers to remind them they do not want their product associated with Mr. Krok's message.

Also, you may want to contact Mr. Krok's supervisor at KLIF directly and let him know how disappointed you are with Mr. Krok's comments on-air. His name is Jeff Catlin. He can be reached at +1 214 526 2400 and his email address is jeff.catlin@cumulus.com. Please remember to be respectful of Mr. Krok and Mr. Catlin. They may have opinions we disagree with, and they may be making money peddling a message of hate, but we have the moral high ground here. We'll lose if they can drag us down to their level.

On The Dangers of Depending on a Single Provider

Yesterday I encountered a bit of a shock when I attempted to log into Second Life. Rather than seeing the familiar setting of my in-world beach house, I was greeted with a rude error message. "Login Failed. Second Life cannot be accessed by this computer. Contact support@secondlife.com."

"Ugh," I thought, "another error in an error-filled day." I had recently modified my settings.xml file to accomodate the weird screen size of my laptop and was convinced I had somehow managed to hork my settings to the point the viewer couldn't figure out which way was up. But after issuing the command `rm -rf ~/.secondlife` to perform a second life settings labotomy failed to clear up the problem, I started to get worried. Thinking my client install had somehow gotten horked, I re-downloaded and re-installed the linux client. Still no love.

At this point I sort of panicked and thought I had been permabanned. Fortunately, friends were around to remind me to try logging in from other machines. I could log in from the ancient iBook G4 I still have lying around the office running an old copy of viewer 1. But after downloading viewer 1 to my main machine and trying to log in, still no love.

For whatever reason, my ID0 or MAC had been banned.

Something was seriously horked here, so I did what you're supposed to do in situations like this: I emailed support@secondlife.com like it says to do in the error dialog. In a few moments I got a response back saying that even though the client told me to send them an email, I'm really supposed to use a web form to file a ticket. After filing a ticket and getting confirmation from Linden's automated support tracking system, all I could do was wait.

While waiting, I found some Second Life Forum discussions about exactly the same problem. Apparently back in the old days this would happen from time to time, and about four hours after filing the ticket, someone would get to it and lift the ban. So I waited.

I'm a bit lucky in that I have a few support resources that most Second Life residents do not. As a former Linden, I have several personal friends who still work at the lab who were able to confirm that there's nothing on my account "rap sheet" that would indicate my account had been banned (we knew this already). The weird bit is there was no record of my IP address or my MAC addresses being banned either.

Yay! Random weirdness.

So I wait and wait and wait. This morning it's still not resolved, so I send an email to a friend at the lab jokingly suggesting this is Linden's way to prevent me from attending virtual world standardization meetings in-world. Let me just stop here and say that while it's fun to let paranoia sweep you away and imagine a conspiracy to prevent you from doing your work in-world, it's probably not accurate. Remember the saying: "do not attribute to malice what could easily be attributed to incompetence."

Rather than there being a conspiracy to keep me from logging in, here's what probably happened... When you log in to Second Life, your client sends a couple of numbers that uniquely identify your PC: the MAC address and the ID0. These values are used to identify the PCs of griefers, scammers and generally bad people. This is one of the reasons I was surprised to get the ban hammer. What the heck had I done to warrant banning?

MAC addresses and ID0s can easily be spoofed; "bad guys" do it all the time. And probably what happened is somewhere out there a random griefer picked a random ID0 and got caught. The random ID0 they picked happened to match mine, and when they were "banned," Linden added the spoofed ID0 to the black-list. Once the "bad guy" in this story couldn't log in with my ID0, they probably picked a new one at random and went back to griefing. I, on the other hand, was left unable to get in-world because it's a violation of Linden's acceptable use policy to spoof these values. You're supposed to resolve the issue through the support center.
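
As a sketch of why this failure mode punishes the wrong person, imagine the ban check as a simple lookup keyed on the identifiers the client reports. The names and logic below are hypothetical; Linden's actual implementation is not public:

```
// Hypothetical sketch of an identifier-based ban check; Linden's real
// server-side logic is not public. It just shows why banning a spoofed
// ID0 locks out that ID0's legitimate owner.
const bannedId0s = new Set();

// A griefer caught while spoofing someone else's ID0 gets *that*
// ID0 added to the blacklist...
function banClient(id0) {
  bannedId0s.add(id0);
}

// ...so the legitimate owner of the ID0, who never spoofed anything,
// is the one who fails this check on their next login.
function canLogIn(id0) {
  return !bannedId0s.has(id0);
}

banClient("f00d1234beef5678"); // griefer banned while spoofing this ID0
console.log(canLogIn("f00d1234beef5678")); // false: the real owner is locked out
```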

So you see what's going on here, right? Linden is using a technical mechanism to enforce a decent policy. But the only people punished by its enforcement are legitimate users who don't spoof the Second Life servers.

After working my personal connections inside the lab, I was able to get this issue resolved. But I can't help but wonder what people who are NOT former Lindens do.

So let me wrap this up with a thought: doing business in Second Life is unnecessarily risky. While it's a wonderful platform for people to express themselves and a great, social, immersive experience, the policies that govern this virtual world are a bit wonky. And they're likely to stay wonky for the near term.

As messed up as Second Life is, it still has some great things going for it: lots of participants and lots of bling you can buy, to name a few. But at the end of the day, if you run a business in Second Life, you are dependent on an external for-profit organization for your bread and butter. In the 2D-web world, services are neatly interchangeable. If you don't like Google's privacy policy, you could get an account at Yahoo! or Hotmail. Or at your local ISP. Or if you knew what you were doing, you could build your own email server. Ditto for other basic web services.

But you simply can't do this with Second Life because Linden owns the service definition. Not to mention, they've got a great head-start against potential competitors like InWorldz and ReactionGrid. Both are great services offering experiences that are more or less the same as Second Life, though with a significantly reduced community. Owners of virtual businesses stick with Second Life in spite of its seemingly random behavior because "that's where the money is."

When we chartered VWRAP, our objective was to create a truly open virtual world ecosystem. The protocols we worked on were intended to be implemented by a wide array of participants, allowing Second Life to grow beyond its "walled garden" beginnings. By opening the virtual world up to potential competitors, Linden risked a short-term reduction in revenues from land sales, but could have participated in a "3d web" that was more attractive to businesses (and educators and ...)

VWRAP is now effectively dead. The HyperGrid lobby may try to recharter VWRAP to focus on HyperGrid, but it's an implementation rather than a protocol. The IETF is sort of allergic to these types of efforts: taking a protocol implemented by a single product and "blessing" it as "the" standard. And with due deference to the activities of ReactionGrid and InWorldz, the larger business community is probably not interested in risking an investment in a protocol with a single implementation. (Thankfully, there are enough bleeding edge users out there that RG and IW can probably survive.)

But at the end of the day, when you depend on someone else for technology, you depend on their business remaining stable enough to support your business until you've recovered your investment. This is why Linden makes a lot of people nervous. Sure, they have to adapt to changing business realities, but it's irritating when they behave seemingly randomly. It's even more irritating when you realize that it's their economy you're playing in.

And this is why dealing with OpenSim grids is irritating. You wonder if they're going to be around long enough for you to recover your investment in their economy.

Clearly it's not so irritating that I don't involve myself in their use and development. But I think I'm likely to be cranky and irritable 'til... well... until Second Life and Reaction Grid are dim memories (like AOL and CompuServe) and their creators move on to more profitable open virtual ventures.

[author's note: Kyle, Robin and Chris, who are employees of Reaction Grid have protested in the comments below. I think there's some validity to their comments. I was unclear in that last paragraph about what I wish goes away. For the record, I TOTALLY love what Kyle &co are doing at Reaction Grid (the company). I have nothing but best wishes for Reaction Grid (the company). What I wish goes away is the concept that you can only make money in virtual worlds if you're a "grid operator." That is, I hope we land in a future where "the grid" as a concept goes away and is replaced by a cluster of services that implement a virtual world. I hope Reaction Grid (the company) or Reaction Grid (the community) or Reaction Grid (the collection of virtual locations) all have a prosperous future. I do, however, hope that Reaction Grid (the artificially walled garden of content and identity) evaporates in the future. Given Reaction Grid's (the company's) support of HyperGrid and OpenSim, I believe they share that goal. This meaning was pretty obvious in my head when I wrote that last paragraph, but clearly it was unclear. I apologize to Kyle, Chris and Robin for the confusion.]

Monday, November 1, 2010

fun with sl8.us

so i've been playing with the sl8.us URL shortener-redirector for a bit and have figured out a few fun applications. (for peeps that don't know about it, the sl8.us redirector shortens second life URIs like secondlife://Meyers/171/13/48, converting them into http URLs like http://sl8.us/mHcs48A9P that redirect you into Second Life.)
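
conceptually, the redirector is just a table mapping short codes to SL URIs plus an HTTP redirect. here's a toy sketch of the idea; this is not sl8.us's actual code, and the internals are guesses:

```
// toy sketch of a shortener-redirector in node.js; not sl8.us's real code.
const http = require("http");

// maps a short code to the second life URI it stands for
const codes = { mHcs48A9P: "secondlife://Meyers/171/13/48" };

http.createServer((req, res) => {
  const uri = codes[req.url.slice(1)]; // e.g. GET /mHcs48A9P
  if (uri) {
    // the 302 hands the client off to the viewer via the secondlife: scheme
    res.writeHead(302, { Location: uri });
  } else {
    res.writeHead(404);
  }
  res.end();
}).listen(8080);
```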

sl8.us was originally intended to shorten SL URIs so they could be easily included in tweets. so the first obvious application is to tweet your location. i've been using it to tweet about interesting in-world locations. you can find shortened URLs on twitter by searching for "sl8.us" to see locations other people find interesting.

you can shorten a URI by visiting the social avatar site or by using the in-world HUD that shortens (and optionally tweets) your location. (you can get info on its installation and use at the HUD intro page at sl8.us.)

the in-world HUD lets you "mark" a location for url shortening or look to see who else has been making marks in your current location.

but something more convenient i've found is the ability to bookmark in-world locations in my web browser and online tools. some social bookmarking sites barf on secondlife:// URIs, so having an http:// URL that does the same thing is rather convenient. i've added a list of locations i visit frequently in-world to my browser bookmark bar. it's nothing major, but it makes SL a little more "fast." i'm kind of surprised linden hasn't tried to roll out something like this.

Friday, September 10, 2010

announcing project brookdale

this is a quick post to announce "project brookdale," an open source implementation of the Virtual World Region Agent Protocol (VWRAP) suite. VWRAP is a collection of specifications produced by the IETF's VWRAP Working Group. these specifications define interoperability for a "second life-like" virtual world. you can find more information about VWRAP on my VWRAP Page or the blog post "what's the virtual world region agent protocol?"

project brookdale will produce PHP, JavaScript and C/C++ middleware that manages protocol interactions. it is not, in and of itself, a complete virtual world implementation. it is instead designed to be used to simplify the development of VWRAP compatible tools and services. this division is not completely unlike the division between libomv and OpenSim.

here's a quick list of things you'll find in brookdale:

Dynamic Structured Data (DSD) and HTTP Transport Bindings

"DSD" is an evolution of the previous LLSD and LLIDL abstract type system and interface description language. the main difference between DSD and LLSD are minor changes in the XML serialization and a "layered" approach making it easier to use VWRAP messages with non-HTTP transports. more information about the motivations behind DSD can be found in a "abstract resource definitions vs. LLIDL". the internet draft draft-hamrick-vwrap-data-00 describes DSD in more detail.

one feature desired by implementers of previous OGP and early VWRAP work was clear guidance on how to handle content negotiation and caching of VWRAP messages carried over HTTP(S). draft-hamrick-vwrap-foundation-00 builds on the previous work and describes the use and interpretation of HTTP headers and status codes for content negotiation and caching.

project brookdale provides PHP and JavaScript classes and C functions implementing the interface semantics of DSD. it allows web and application developers to easily manipulate VWRAP requests, responses and events.
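
to give a flavor of what "manipulating" these messages involves, here's a minimal, self-contained serializer for a DSD/LLSD-style map. the element names follow the familiar LLSD XML serialization; treat this as illustrative rather than as brookdale's actual API or the normative DSD encoding:

```
// minimal sketch: serialize a DSD/LLSD-style map to XML.
// illustrative only; see the brookdale sources and the DSD draft
// for the real classes and the normative serialization.
function serializeValue(v) {
  if (typeof v === "number") return `<real>${v}</real>`;
  if (typeof v === "boolean") return `<boolean>${v}</boolean>`;
  if (Array.isArray(v)) return `<array>${v.map(serializeValue).join("")}</array>`;
  return `<string>${v}</string>`;
}

function serializeMap(obj) {
  const items = Object.entries(obj)
    .map(([k, v]) => `<key>${k}</key>${serializeValue(v)}`)
    .join("");
  return `<llsd><map>${items}</map></llsd>`;
}

console.log(serializeMap({ region: "Meyers", position: [171, 13, 48] }));
// <llsd><map><key>region</key><string>Meyers</string><key>position</key>
// <array><real>171</real><real>13</real><real>48</real></array></map></llsd>
```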

a capability management facility and capability broker

web capabilities are an integral part of VWRAP. they allow distributed, trusted systems to grant fine-grained access to sensitive resources without a global identity management system. an informal introduction to capabilities in VWRAP can be found in the earlier blog post, "VWRAP essentials : capabilities".

capabilities serve as aliases for RESTful VWRAP resources. composed of a cryptographically unguessable URL, they combine authorization to access a resource with that resource's address. the "capability broker" is the system component that maps a URL managed by a system to the object or database row it represents.
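
a minimal capability broker can be sketched in a few lines; this one leans on node's crypto module for the unguessable part. the URL and storage here are hypothetical, and a real broker would also handle expiry and revocation:

```
// minimal capability broker sketch; hypothetical, not brookdale's code.
const crypto = require("crypto");
const caps = new Map();

// grant: mint a cryptographically unguessable URL for a resource
function grantCapability(resource) {
  const token = crypto.randomBytes(32).toString("hex");
  caps.set(token, resource);
  return `https://caps.example.com/cap/${token}`;
}

// resolve: map an incoming capability URL back to the resource it
// aliases; possession of the URL *is* the authorization
function resolveCapability(url) {
  return caps.get(url.split("/").pop()); // undefined: no such capability
}

const capUrl = grantCapability({ table: "agents", row: 42 });
console.log(resolveCapability(capUrl)); // { table: 'agents', row: 42 }
```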

a simple VWRAP event queue over long poll

the VWRAP event queue is a simple abstraction for unsolicited server to client communication. VWRAP currently reifies the event queue as a "long poll" over HTTP. it is hoped that future work will specify the use of WebSockets as a carrier for VWRAP events.
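
on the server side, a long-poll event queue mostly amounts to parking the client's request until there's something to deliver. here's a toy sketch of that idea (not the brookdale implementation, and the event name is just an example):

```
// toy server-side event queue over long poll; not brookdale's code.
class EventQueue {
  constructor() {
    this.pending = []; // events waiting for the next poll
    this.waiter = null; // the parked long-poll request, if any
  }
  post(event) {
    this.pending.push(event);
    if (this.waiter) {
      // a poll is parked: deliver immediately, the client will re-poll
      this.waiter(this.pending.splice(0));
      this.waiter = null;
    }
  }
  poll() {
    // resolve at once if events are queued, otherwise park the request
    return new Promise((resolve) => {
      if (this.pending.length) resolve(this.pending.splice(0));
      else this.waiter = resolve;
    });
  }
}

const q = new EventQueue();
q.poll().then((events) => console.log("delivered:", events));
q.post({ message: "TeleportFinish" }); // delivered: [ { message: 'TeleportFinish' } ]
```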

simple connection manager with trust model support

trust between components in a VWRAP system is established by asserting identity using an X.509 client certificate. the VWRAP specifications do not require systems to trust any particular entity or certification authority, but they do require that entities accept connections that use client-side certs. in other words, a VWRAP system is free to ignore X.509 credentials from a client, but it must not disallow clients from sending them.

one ramification of the VWRAP trust model is that it requires a "trusted" service to potentially present a destination-dependent certificate to a remote peer. an authentication service, for instance, may identify itself to an asset service by presenting it with a certificate issued by the asset service (or a trusted third party certification authority). the brookdale connection manager maintains a mapping between destination URLs, client certificates and their related private keys.
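
the heart of such a connection manager is just a mapping from destination to credential; a sketch with hypothetical file paths (a real implementation would guard the private keys far more carefully):

```
// sketch of a destination -> client credential map; names are hypothetical.
const credentials = new Map([
  ["https://assets.example.com/", {
    cert: "/etc/vwrap/certs/issued-by-asset-service.pem",
    key: "/etc/vwrap/private/asset-service-client.key",
  }],
]);

// when opening a connection, pick the credential whose destination
// prefix matches the URL; no match means connect without a client cert
function credentialsFor(url) {
  for (const [prefix, cred] of credentials) {
    if (url.startsWith(prefix)) return cred;
  }
  return null;
}

console.log(credentialsFor("https://assets.example.com/asset/123"));
```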

VWRAP Authentication (including OAuth support)

VWRAP defines several "native" authentication techniques as well as the use of OAuth in protocol transactions. the brookdale authentication components manage the process of user authentication and the seed capability lifecycle.

Client Application Launch Message processing

the VWRAP client application launch message (CALM) is an optional message sent to a web browser with specific details of which servers a client application (like the second life™ viewer) should contact to complete the login and rez process. intended to be used in conjunction with web authentication and authorization schemes like OpenID or OAuth, brookdale contains PHP, JavaScript and C functions to generate and process CALM messages.
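
as a rough illustration, a CALM payload might carry fields along these lines. the field names here are invented for the example; the actual message schema lives in the VWRAP drafts:

```
// invented illustration of a client application launch message (CALM);
// see the VWRAP drafts for the actual schema and field names.
const calm = {
  agent_seed_capability: "https://agents.example.com/cap/0f9c2a71e4b85d36",
  region_host: "sim1234.example.com",
  region_seed_capability: "https://sim1234.example.com/cap/77aa01b2c3d4e5f6",
};

// a login web page could hand a message like this to the viewer after
// OpenID/OAuth authentication so it knows which servers to contact
console.log(JSON.stringify(calm, null, 2));
```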

we're starting by publishing a few PHP and JavaScript files at the project brookdale page. these files are much more "middleware-ish" than they are "application-ish," but more will be coming in the next week.

the release of the code corresponds with the latest VWRAP abstract type system proposal. more code will be released at the brookdale site as the newer VWRAP drafts are published this month.

Thursday, August 26, 2010

net video startups, you don't get me

one of the great things about being me is i have a bunch of friends who work for startups in sili valley and i sometimes get invites to beta their wares. i work as a software developer instead of a marketing guru, so peeps frequently invite me to look at things late in the development process, usually to show off their technical mastery.

and i have to admit, i'm usually impressed. there's a lot of cool tech going on in the valley at the moment (even though we're in the middle of a double dip economic downturn.)

but i'm now a single co-parent. my extra time and cash goes into my family life and a college fund for the offspring. so if you want me to subscribe to your service (or buy your latest tablet) you have to show me some pretty clear value.

i had a netflix subscription until i realized i could easily replace my netflix use with hulu and redbox. and my hulu use is pretty minimal. i tried to watch lost and heroes, but just couldn't get into them. i LOVE eureka and my son loves the clone wars. between my son and me, we consume about 2.5 hours of traditional television content per week. the rest of our packaged video entertainment time is maybe 3 hours of movies per week. and when i remember to do it, i watch jon stewart's daily show.

the rest of our inside entertainment time is spent playing video games (maybe an hour per day in the summer, but about two hours per week during the school year if all our homework gets done on time.) i spend a couple hours per week in second life or one of the independent OpenSimulator virtual worlds.

and i watch a metric boatload of youtube videos. i can't get enough dancing cats. i consume my media on my computer; but i would LOVE to watch videos on my TV. right now the only thing i use it for is to play DVDs and vintage nintendo games.

from talking with my peers, it seems like the main difference between me and other people is that the ratio of TV programs to youtube videos is a little bit skewed.

so i'm always surprised when companies like apple, sezmi, boxee and hulu try to shove "premium content" down my throat. i would rather wedge sporks in my eyeballs than watch most sitcoms. i won't pay to watch friends reruns or current episodes of two and a half men.

i would actually pay to watch google tech talks and nova on my TV (instead of my computer.) but i don't think i would pay much.

one thing that fascinates me, though: watching movies online with friends. i used to work for the people that made second life, so it shouldn't be a surprise i have some social contacts "in world." one thing i really enjoy doing is getting a group of people together and watching a movie inside the virtual world. sadly, there's little content legally available.

the other evening 200 of my closest friends and i watched a broadcast from within second life on treet.tv. the program was obviously supported by advertisements, and i hope the treet.tv folks were able to charge enough in advertising to cover their streaming costs. the particular show, which dealt with weirdness in the emerald viewer community, had a very focused audience. if i were running ad sales at treet.tv, i would have charged a premium for the ads during that hour (due to the larger than average audience) and tried to find ads targeted towards content developers, super-users and open source developers.

i guess what i'm thinking is there may be a "long tail" play here in internet video. youtube has part of this niche, but what i would LOVE to see is a roku or nintendo video channel for "interesting science and technology programs." i have a nice television set i would like to use. but i don't want the hassle of a complete windows media center PC. i sure as heck don't want to browse the web on my TV. and i'm totally down with the idea of configuring my device on a web page i access via a desktop or laptop computer.

but the main thing i'm interested in is "watching TV with my online social network." i would love to have a "geek tv channel" that i program, that i invite my friends to view with me. i would populate the channel with live video broadcasts from the web: treet.tv, kink on tap, google tech talk reruns. i want to watch the video on my TV, but have a group text or voice chat session on my laptop.

i'm not so conceited as to think this is "the future of television," but i have to think it's closer than someone trying to sell me friends reruns for $20 per month.

Wednesday, August 25, 2010

VWRAP essentials : the event queue

this week on VWRAP essentials, i wanted to talk about the "VWRAP event queue." in the world of second life(tm) viewer development, the term "event queue" has a very specific meaning. it is a set of data structures, functions and UDP message types that communicate events from servers to clients. that's not what we're talking about here.

VWRAP is intended to be an application layer protocol that can be carried over multiple transports. implementers MUST support transporting VWRAP messages over HTTP(S), but there's been a lot of interest in optionally moving some traffic over XMPP and RTP. so defining anything in terms of HTTP is sort of a problem for VWRAP.

instead, we define an abstraction and then define a bunch of ways the abstraction can be reified by network protocols or real software. the VWRAP event queue is an application layer abstraction for a facility that delivers arbitrary, unsolicited messages to a network peer. in other words, the event queue delivers messages from servers to clients without a specific request from the client.

this is a general problem with interactive web applications as well, and a couple solutions have surfaced in the web community. embedding flash or java content in an HTML page is one solution. plugins for these technologies are readily available for popular web browsers. but VWRAP doesn't use HTML, and it certainly doesn't use a web browser.

another popular technique is "the long poll." interactive web apps implement the long poll by using the JavaScript XMLHttpRequest object to query a specific URL on the server. if the server has nothing to say to the client (i.e. - there are no unsolicited server to client messages) the server just waits, leaving the connection open. as soon as it has something to say, it delivers the message to the client. the client processes the event and queries the poll URL again, waiting for the next message. this technique is simple, but can be wasteful of resources in some programming languages.
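
in javascript, the client side of the long poll boils down to a loop like this sketch (using the newer fetch API rather than raw XMLHttpRequest for brevity; the event queue URL is made up):

```
// sketch of the client side of a long poll; the URL is hypothetical.
async function pollEvents(url, onEvent) {
  while (true) {
    const res = await fetch(url); // server holds this open until it has events
    if (!res.ok) continue; // real code would back off on errors
    const events = await res.json();
    events.forEach(onEvent); // handle events, then immediately poll again
  }
}

pollEvents("https://sim.example.com/cap/event-queue", (ev) =>
  console.log("event:", ev));
```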

more importantly, it's a hack. in a RESTful world, where accessing resources over HTTP is supposed to be idempotent, the idea of getting something different each time you access a resource seems somehow unclean.

but there's an emerging standard for how servers should push streams of unsolicited data down an HTTP connection. it's called WebSockets. it's not without warts, and will require a little retooling of some of the web's infrastructure. it didn't include every feature that everyone wanted, but there's general agreement that it's "good enough" and is much better than using the long poll.

VWRAP could have defined the event queue in terms of WebSockets, but at the time it wasn't clear WebSockets would become "the" standard. it may also require some amount of tinkering with the infrastructure to implement, so it's not entirely clear WebSockets will work everywhere all at once. it's also possible that some organizations may be behind aggressive firewalls that will block WebSockets.

for all of these reasons, the VWRAP designers decided to create an "abstract" event queue that could be reified by sending messages over long poll or WebSockets.
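
reified over WebSockets, the same abstract queue needs no re-request loop at all; a sketch against a hypothetical endpoint:

```
// the same abstract event queue reified over WebSockets; URL is made up.
const ws = new WebSocket("wss://sim.example.com/cap/event-queue");
ws.onmessage = (msg) => {
  const event = JSON.parse(msg.data); // one unsolicited server event
  console.log("event:", event);
};
// no polling: the server just pushes events down the open connection.
```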

the current foundations draft only defines the event queue over an HTTP long poll. future versions will describe its use over WebSockets.

Tuesday, August 24, 2010

does the linden third party viewer policy sidestep the issue?

the second life community has spent the last week following a reasonably important controversy now referred to as "emeraldgate."

if you're a member of this community, it's hard not to have heard about it. for the benefit of those people who don't follow second life closely, here's a brief backgrounder.

so linden lab has this virtual world called second life(tm). the state of the virtual world is held on servers operated by linden. these servers remember things like where people and things are, who owns what, and what direction things are moving, etc. second life users access the virtual world using a "viewer application." the viewer communicates with linden's servers and renders information from them in a nice 3d scene on the user's personal computer.

to spur innovation in virtual worlds, linden open sourced its viewer software a couple years ago. several teams started adding new features and fixing bugs linden was slow to address. one of the single most popular "third party viewers" was a project called "emerald."

we recently discovered that the emerald viewer has been doing some "bad things." for a couple months people have noticed some weird encrypted data being sent from emerald installations. turns out it's information about the client's PC. granted, the emerald viewer isn't sifting through your hard drive trying to find credit card numbers, but the information it leaks (user name and emerald executable location) could help skilled bad guys compromise emerald users' systems.

very recently people discovered a distributed denial of service (DDoS) attack being launched by the emerald viewer. blargh! thousands (if not tens of thousands) of users were unwittingly being co-opted into an attack on a rival of one of the emerald developers.

needless to say, a lot of people are beginning to question emerald's ability to manage their developers and produce quality software. twitter and facebook are filled with status updates from users saying they're ditching emerald for linden's official viewer or another third party alternative.

the most recent entity to weigh in on the issue is linden themselves. philip rosedale, linden's CEO, published a quick blog post on the issue: Malicious Viewers and Our Third Party Policy. linden is removing the emerald viewer from a directory of third party viewers linden maintains. the "third party viewer directory" is a list of viewer applications (most of which are based on linden's source code) which purport to be essentially well behaved.

emerald's removal and rosedale's blog post were not surprising; the emerald viewer did a couple of bad things its developers should have known were bad. the lab's actions are meant to distance the second life service from a few bad developers.

the good news is that some of the old emerald team is reforming and will be trying to build a project where "bad things" like what came to light last week can't happen. we'll see if they can convince their user community and linden of their ability to follow through. the jury's still out on this issue; but it's early in the project cycle so it's anyone's guess how this all resolves itself.

but there is one aspect of this crisis that bugs me: why do we need a third party viewer directory in the first place?

to understand why there's a third party viewer policy and a third party viewer directory, you have to understand a little about the second life virtual world. second life is frequently described as "the 3d web," but there are some notable differences between the web and second life.

first off, second life is not "open" in the broadest sense of the term. the lab has done some wonderful work open sourcing the second life viewer and supporting the ecosystem of third party viewer developers. but the limit of their openness is the release of the viewer's source. this creates the unsatisfactory situation where the protocol used to communicate the state of the virtual world is owned by a single entity capable of making unilateral changes.

in the web browser development world, core standards like HTTP, WebSockets and even JavaScript are defined by industry standards coalitions. linden did support the VWRAP effort to develop open standards, but withdrew support for the standard and laid off the staff responsible for its implementation in the lab.

but maybe one of the most important differences between second life and "the web" is the idea of content. on the 2d web, content is embedded in a single place. it is rare for content to follow a user around from site to site. yet this is the moral equivalent of what's going on in second life when you move your avatar from one location to the next. when you see other web users, it's usually as an image icon right in front of some text. second life users know that their avatars are much richer and more varied. second life users are represented in world as collections of shapes, skeletons, meshes and textures.

and this brings up the next major difference between the virtual world and the web: content protection seems MUCH more important in second life. don't get me wrong, i'm not trying to discount concerns of content thievery on the web. but the web's business model is that content "lives" on a web page and isn't supposed to move. in the virtual world, content creators sell content to individuals with the intent they'll move from place to place.

and it's this expectation of content control that lies at the heart of the third party viewer policy (and directory.) were second life like the web, content creators would sell content to people and be done with it. but the primary technique for monetizing content on the web is to sell advertising next to it (or sell memberships to content that remove invasive ads.) the web seems to reward content that persists in one location long enough to be indexed by google or microsoft.

tracking down DMCA violations is pretty straightforward when you can refer to the google cache and the internet archive.

but not so for the virtual world. in second life we rarely extract value by advertising. sure, linden is happy to take a cut when you search, find and buy something from xstreetsl, but the full content is not available on that site for bad guys to purloin.

content creators in second life make their living from selling their goods directly. there's a marketplace here for goods because, quite honestly, the direct cost to users is pretty low. for about the cost of a discount cola from my grocery store, i can purchase a very fashionable outfit for my avatar. for the cost of a latte, i can purchase a complete meeting center to hold virtual meetings with friends or co-workers.

revenue on individual sales is low, but the distribution and copying costs are effectively zero for content creators. the primary costs for second life vendors are non-recurring production costs and the cost to maintain a store front. but with xstreetsl offering people a web experience to discover and purchase goods, the "real" costs of doing business in second life boil down to paying yourself for the time you put into building something.

and this is why "less than moral" actors in the virtual world fall to temptation. it's laughably easy to copy someone's work, repackage it as your own, and sell a few on xstreetsl before anyone notices what you're doing. why bother going to the trouble and expense of actually making content when you can just steal it?

in the web world, this content would likely not be of any use to you until it's been optimized and indexed by google's search engine. if you were a purveyor of purloined content on the web, the same tool that provides you the ability to monetize your stolen content is the tool that lets content creators detect your theft.

but search in second life is "sub-optimal" and advertising has been effectively quashed in the interest of user experience. it turns out that people don't want to wander around a virtual world filled with billboards.

the second life economy is dependent on scarcity. there MUST be some scarcity in the content community's creative output in order for the virtual goods market to work. but these are digital goods we're talking about, and it turns out that if you're reasonably handy with a C++ compiler you can quite easily make illicit copies of restricted content.

left unchecked, high margin content would be hoovered out wholesale and sold at discount prices by IP thieves. at the end of the day, there is very little linden lab can do about this from a technology perspective.

if it can be rendered on your screen, it can be saved on your hard drive and later re-uploaded. this is the main reason you'll frequently hear people say "put all the value of your content in your scripts." LSL scripts are the only bits of content that are not downloaded to the client. bad guys can't easily copy them with hacked client software.

it turns out that yes, bad people are making a living off stealing other people's content. and there's little that can be done to completely eliminate it. the linden third party viewer policy is an attempt to slow down the dissemination of tools that make content theft easy.

it's a great idea, and i think linden is demonstrating the best possible motives here. but we have to be realistic about what the policy can and can't do.

it is extremely difficult to craft technological prohibitions that will keep all the bad guys out. client IP addresses are rarely stable for long periods of time and the "bad guys" have already figured out how to hack the client software to present fake MAC addresses and viewer strings to the second life servers.

but what _is_ a little easier is cracking down on the distribution of software with illicit intent. linden's third party viewer policy tells the community what third party software can and can't do and still be considered "virtuous." the third party viewer directory gives users a list of viewers made by people who have promised to honor that policy.

and what's at the core of emeraldgate is not that the stock emerald viewer is being used to steal content, but that it was doing things with encrypted messages that made it difficult to figure out whether it was stealing content, as well as coercing users' PCs to behave in a "bad" way.

so given the current state of the world, and the fact that it would likely be economic suicide for linden to abandon its content creation community, the third party viewer policy makes a lot of sense.

there is still an open question about "walled gardens" like second life. one can certainly imagine a service where content flows easily in and out of the virtual world. where content doesn't live on linden servers, but lives on public (or semi-public) web servers. the value of the content is not in its raw bits, but in the way it's marketed, aggregated and distributed.

maybe in future virtual worlds value will derive from creator reputation and recommendation in social networks. maybe the future will see a world of abundance where value and monetization potential is extremely ephemeral.

but we're not there yet, and that's why the third party viewer policy is a necessary evil.

tips on using social media

for the record, i am NOT a social media consultant. i'm not in the business of telling people how to use social media tools to drive revenue. but last sunday i pretended to be a social media pundit in my blog post "ubiquity vs. applicability." in that post i espoused a theory that the larger (or more ubiquitous) your social network, the less utility (or applicability) it had.

or... stated another way... if you have 17,000 people following you on buzz!, then you're not going to be able to engage people the same way you do if you only have 30 (or even 300.) it's just not humanly possible for your fingers to fly that fast or for your brain to process that many independent threads of conversation.

but in that last post, i didn't really offer any suggestions for people with large social media followings. this post is my attempt to move from punditry to consultantishness by making some prescriptive recommendations. i'm picking leo laporte as a use case since everyone seems to know who he is and it just seems like he needs some good advice about using social media at this point. so until he gets some good advice, he can simply follow mine.

first, an admonishment: don't bitch about social media being an echo chamber when you're using social media as an echo chamber. leo, please, you have a bazillion followers in your social media network. you're not using buzz to share the latest youtube cat video with 10 friends or see who wants to meet up at the pub for a drink after work. you're using it to promote your tech related communication empire.

i'm going to go out on a limb here and say that if you use social media for commercial purposes, you'll get a slightly different engagement level than if you use it for... well... social purposes. saying that social media is a crock because it doesn't fit your particular use case makes it look like you don't know how to use it.

a better way to handle this situation would probably have been to say, "wow, i was using the tool sub-optimally" and then go into a well reasoned discussion of ways you could use it better.

the bottom line here is that blaming the tool rarely makes you look good; and you pretty much have to be bill gates or the pope to be able to pull it off properly.

the next suggestion? monitor your network, figure out why you're not getting feedback from your peeps. okay, so i made the suggestion above that social media is all about stuff like meeting up in the pub for a game of darts or bragging about your offspring. that "social" use of social media is a big use case, but not the only one. sure, you can do professional things on buzz or twitter and maybe even facebook and myspace.

if you are taking the time to tweet about something commercial or professional, you're going to want to know you're getting value for your investment in time. you'll want to know who is and who isn't replying to your tweets (and why.)

once you know why you're not getting responses from your network, you may want to be a friggin social director; engage people who seem introverted.

for instance... when i tweet something vaguely interesting, there are a couple of people i know will retweet it or reply to it. every now and again, i go through the list of people i'm following and mention them explicitly, like "hey @soandso, what do you think of this thing i just blogged?"

this works as well for net.celebrities as it does for us regular folks. the value of social media is in getting perspective from friends of friends. most of my twitter friends turn out to be virtual worlds people with a smattering of old friends and roommates. by engaging smart people in my network, i can kind of drive a cross pollination of ideas between people i would like to see conversing. see how this works? by being the one that picks who to engage, i get to shape the conversation; plus, i have a front row seat to the exchange of ideas.

i would really love to see a service that tells me how long it's been since someone mentioned my twitter handle in a conversation and how long it's been since i mentioned theirs. i could use such a tool to tell me who i need to pester in a tweet.

oh. one more thing. i almost forgot to mention, do not treat buzz, twitter or facebook like email. there's a social contract embedded in the email system; if i send it, you'll eventually read it (or you'll eventually ignore it.) but the point is you can defer the decision to ignore it or pay attention to it. social media tools like twitter and facebook are a little more ephemeral. you have to internalize this.

when you tweet or buzz or post a facebook update, your communication window with them is the length of time it takes for your update to scroll off the screen. sure, some people use advanced client applications that layer an email-like interface on top of a social microblogging service, but for the most part they're the exception to the rule.

facebook and buzz are not as bad as twitter and status.net on this score, but you still have to take timing into account. if you're targeting an audience for a particular status update, maximize your chances for success by timing important status messages. if you want "business types" to read an article linked to in a tweet, time the tweet for business hours. if you're targeting people in a geographic area, don't tweet it when they're asleep.

i've been playing around with futuretweets.com as a "twitter timeshifting" tool. futuretweets isn't the most feature rich site in the world, but it does one thing and it does it well, and that's worth something.

so far i've talked about things that work for people with small networks as well as large. but people who want to have a large social network are going to eventually run into bandwidth problems. at some point you're going to start getting more updates per second than you can process.

what do you do then? consider registering "topic based" personas.

other people will likely be following you for different reasons. some will be interested in your "social" self while others are interested in your professional life. consider creating two distinct personas: one for business, one for personal. net.celebrities may want to consider even more.

in an ideal world, there would be a service that lets you subscribe to individuals' hashtags. sadly, that doesn't exist at the moment.

anyway, these are just a few ideas. there is no "one true way" to use twitter or buzz or facebook. but you may have specific requirements for your social media. i'm hoping these suggestions can serve as a starting point for your own suggestions.

good luck and happy tweeting!

Monday, August 23, 2010

what we should learn from the emerald debacle

so the last week has seen a storm brewing in the second life community. at the heart of the storm is the popular emerald viewer from modular systems. drama erupted earlier this month when emerald developer LordGregGreg announced his departure from the project. it's not unusual for developers to leave open source projects, even "old skool" devs like LGG. this type of departure frequently goes unnoticed by the general public.

but what stoked the drama fires in this case was the reason LordGregGreg said he was leaving:

"I did not realize at the time that emkdu was added, that it could be used to add in code I was not able to see... Although replacing or deleting emkdu would resolve this issue, I also have to consider that this was hidden in the code for months without anyone knowing." --LGG


the "emkdu" code module referenced in this quote is a closed source component, and over the past several months there's been concern it's functions have been compromised. the issue is complicated and layered and has been used by some to refute the open source software development model.

but the core of the issue seems to be that the emerald project is too big for a single, trusted resource (like LordGregGreg) to effectively evaluate the trustworthiness of each check-in.

adding to emerald's woes was the allegation that the third party viewer's HTML login page was maliciously mounting a Distributed Denial of Service (DDoS) attack on a rival software developer. modular systems' response appeared to some to be "weak" and to not fully address the issue. wagner james au has a good write-up of the controversy at his new world notes blog.

potentially malicious code in open source projects? rogue devs DDoSing alleged bad guys? what's going on? i've read the cathedral and the bazaar several times and it seems to be implying FLOSS should be preventing these types of problems. after all, we're depending on open source methodologies to deliver on the libertarian promise of the technological meritocracy. the marketplace of ideas is supposed to encourage popular features and bug fixes while discounting the trivial and inconsequential.

free from the distortion of economic incentive, concepts are judged by their merits and implemented in software using the aggregated spare minutes of thousands of developers. but instead of being guided by adam smith's invisible hand, we seem to be avoiding the invisible foot affixed firmly in the metaphorical mouth.

lessons we can learn from the emerald debacle

1. with enough eyeballs, all bugs are certainly shallow. but this only works when you're careful about what you call a "bug." noted software security researcher john steven has a quote, "computers should do what you tell them to do, and only what you tell them to do." the implication here is that as an industry we expend a lot of effort on quality assurance; making sure that our software does what we think it should do. where we fall down as an industry is in software security; ensuring that our software doesn't do what it's not supposed to do.

open source projects are not the only ones guilty of this. plenty of proprietary projects have inadvertently introduced vulnerabilities into their code. it's not easy to prove your software doesn't do something; it's proving a negative.

but the idea that the openness of a project will somehow reduce or eliminate security risks is magical thinking. so, lesson one is, "even if your software project is open, you still have to worry about software security."

2. order may spring from chaos, but there is no guarantee that it will. many software developers i work with have developed a misplaced faith in "emergent behaviors." in many instances, we see seemingly chaotic projects or processes "come together" while tightly controlled processes fail to accomplish their objectives. with respect to software development, or any large complicated human endeavor, i believe this stems from incomplete visibility.

no one participant has visibility into every event affecting the project. because we only see a fraction of the inputs and outputs, even "rational" processes seem random or chaotic. when these seemingly random events yield a successful outcome, it's easy to assume some "higher order" has emerged from the chaos.

and it probably has. but it's rarely the same higher order that you think emerged.

so lesson 2 is, "a development process that worked last time may not work this time."

3. perhaps the worst lesson from software projects, open or not, is that the efficacy of democracy to organize human activity is not universal. in other words, letting everyone's opinion carry equal weight in a technology project may be a bad idea. don't get me wrong, it's a great idea for local governments and i'm not saying that project leaders should act ruthlessly, ignoring the interests of project participants.

the problem with democracy on software projects is special interest may lead developers to introduce enhancements and bug fixes that are to the detriment of other developers or the general community.

for example, i run second life on a not-exactly high end system. i've got a reasonably beefy system, but i never run in "ultra" mode. it's just too much of a stress on my overly middle-of-the-road GPU. so why not just remove all that "graphics mode" clutter from preferences? it would certainly simplify a program with a reputation for being "not exactly easy to use."

i picked this ridiculous example for a reason. i don't think anyone would ever suggest removing features that degrade the experience of people who have gone to the trouble of buying high-end graphics hardware. but many projects have processes which could allow this to happen.

it's not exactly what happened in the emerald case, but it certainly seems emerald is lacking a single developer or architect whose role is to understand how all the pieces fit together and who can proactively dissuade developers from doing things that would ultimately be harmful to other devs' efforts.

so lesson 3? "democracy is considered harmful in software projects."

4. this next lesson might be the most depressing to software engineers. people frequently contribute to open source projects for the purpose of expressing a creative urge. many developers in the FLOSS community spend their days looking at proprietary code owned by a corporate entity. they come to open source projects to exercise their creative muscle in ways they find difficult at work.

open source projects, focused on the solution space rather than the problem space, allow a developer to solve problems without worrying about interference from marketing or sales. (okay, this isn't always the case, but i would argue it is most of the time.)

so this last lesson can be a bitter pill for some people: you can't escape process. sure, you can eliminate the sales department with their business motivations from the process; you can reduce the process to shouting your intent to check in a module into a random IRC channel.

there are plenty of light-weight processes you can use. but you can't completely eliminate "process" and expect success. you must coordinate with other people. identifying a collection of best common practices and elevating them to the status of "process" will free you from having to think about how to communicate with your peers.

yes, process can constrain you. but the idea is it should constrain you in a way that is not offensive and in a way that will produce value.

lesson 4: appropriate process is a good thing; even for open source projects.

finally, maybe the most important lesson. everyone seems to flub this one at some point in their lives (myself included.) lesson 5 is "if you mess up, come clean early and fix your mistake(s)." the emerald team ignored this lesson when they tried to play down the DDoS attack on a rival's page. sure, maybe it really was a light-hearted attempt at geek humor. but enough people didn't think it was funny.

and it's not just because it's the "right thing to do." even if you do make a completely innocent mistake, when a bunch of people jump down your throat, acknowledging the incident and apologizing for not seeing how it would affect other people will help disarm them.

these are just a few simple ideas; it's easy to be a "monday morning quarterback" for software projects. i am not trying to imply that the emerald dev team didn't try hard to maintain the security of their software. but it seems they were spending too little time on processes that were a little too weak.

there's no prescriptive advice in this post, so take it with a grain of salt. i won't pretend i know enough to tell you how to run your project without even knowing about it. but i like thinking about software, and these are a few things i've learned from hard experience.

your mileage may vary.

Sunday, August 22, 2010

ubiquity vs. applicability

there's this idea i've been developing for the last decade about "ubiquitous" services on the internet, and i finally realized it might be worth posting about. if you've talked to me about PKI (public key infrastructure) in the last decade, the last thing i always say about it is a warning: the sum of ubiquity and trustworthiness is a constant.

that is, you could probably build a PKI system that is ubiquitous, including a CA that anyone could use as a relying party and anyone could request a certificate from. but in doing so you would need to weaken the certification practice policies to such a degree that it would be laughably easy for a "bad guy" to acquire a fraudulent certificate.

in this instance, ubiquity was in conflict with trust and security. if you're depending on a certificate to positively identify the end entity who presents it, and there's little trust in the verity of that identity, then you really haven't bought anything.

this is one of the contributing factors, IMHO, to the downfall of PKI. PKI vendors went out of their way to find new and "innovative" ways to use their products. like any product company, they were trying to maximize their revenue by increasing sales.

in retrospect, i think there are few who would argue that PKI wasn't oversold. fortunately we live in kind of a post-PKI-hype world. overhyping PKI is an easy way to be shown the door in most enterprises.

we live in a world now where social media is the current, ubiquitous craze: facebook, twitter, linkedIn, buzz! it's the latest fascination amongst internet users, and for good reason: it's fun.

don't get me wrong, i love my twitter.

but... i think a number of us are repeating the PKI mistake of expecting ubiquitous applicability. go read leo laporte's recent blog post about buzz: "buzz kill." i'll wait.

okay, you're back? great! have a cookie!

leo seems to be saying that social media is a crock because the way he used it was not effective. from the sound of it, he got in the habit of "shouting into the echo chamber," but never listening for cogent replies.

and that's okay. leo laporte is one of these guys that everyone likes to listen to. he's got the ubiquity knob dialed over waaaay high and his social media following is massive. his online social graph is so huge, there's no way he has the time to read, much less process, the buzz stream from that community.

and i think that's why the effect was unsatisfactory. social media tools like twitter and buzz CAN be used to supplement broadcast-only media like blogs and podcasts and so forth. but once your community grows past a certain point, you really hit the wall of diminishing returns.

to the degree that social media tools are supposed to do anything other than make lots of money for their creators, they're about facilitating a back and forth conversation.

and this is what leo wasn't doing. and in my humble opinion, that was sort of like expecting to frame a house while holding your hammer by the head. tools work best when we use them for what they're good at. by having tens of thousands of buzz followers, i assert that leo was misusing his social media tools.

or rather, he was using them sub-optimally. the sum of applicability and ubiquity is a constant, and leo's buzz feed was oh so ubiquitous. it's no surprise it had little utility.

Wednesday, August 18, 2010

life after linden

second life™ developer linden lab is not going out of business any time soon.

but with the closing of other virtual worlds like metaplace, there.com and vivaty, it's understandable people may be a bit skeptical about virtual worlds in general. some of the lab's recent actions point towards a previously unseen sensitivity to cash flow: over the last six months they've shed staff, eliminated services and promoted Bob Komin (their former CFO) to the position of COO.

some people think these are signs of impending doom for second life. these actions could also mean that linden lab is "growing up" and trying to make themselves look like a valid acquisition target (or even an IPO candidate.) linden is known for having an "offbeat" internal culture that sometimes places creativity over accountability, so the move from offbeat startup to standard mid-sized company isn't going to be easy.

linden isn't a publicly traded company, so we only get bits and pieces of their numbers. but by all accounts, there are still a few large educational and corporate organizations pumping cash into second life's virtual economy. along with the cloud of individuals and smaller organizations, there's still life (and commerce) on the grid.

the mainland isn't going to sink below the ocean tomorrow. it sort of makes me wonder though; what would happen to the second life ecosystem if it did?

it's sometimes fun to consider worst case scenarios; thinking about them can help you consider your behavior and risk management strategies rationally. so just as a thought experiment, what would happen if we woke up one morning to discover that the plug had been pulled on linden lab and second life was closing operations? or more specifically, what happens to the money that had been going into the second life ecosystem?

linden reported a virtual economy of over $500 million in 2009. this is the cash value of all those user-to-user prim hair sales and land rental fees. where does this economic activity go after second life closes its doors?

and what about economic activity in second life associated with non-linden-dollar transactions? say, second inventory or rivers run red or any one of a number of audio hosting services? where would those dollars go?

individuals would likely move on to services that support their use cases, and their buying power would follow them. people who use second life as a chat room may go to IMVU. users dependent on LSL scripted objects may go to an OpenSim world like ReactionGrid, OSGrid or InWorldz. blue mars, kaneva and entropia universe offer more streamlined experiences for beginners, but require more investment from content developers.

themed user communities like luskwood or caledon might have the cohesion and resources to start their own OpenSim grids. two years ago we asked if you could run a grid using the OpenSim code base; i think it's pretty clear these days it's technically possible. now the question is probably one of economics. do these communities have sufficient resources to maintain vibrant virtual experiences? are there enough content creators in the luskwood community to satisfy the needs of the furry community? are the non-steampunk experiences in second life so compelling to caledon residents that they would not follow the community out of second life? what's keeping people in second life?

it is not hard to find criticisms of linden lab's product offerings, policies or support. but at the end of the day, enough people still find the value of second life compelling. linden cannot rest too long on their laurels, though. other virtual world technologies provide roughly the same feature set and are adding enhancements; the lab now needs to add features with a smaller staff. if they can't, they'll eventually lose paying customers. second life may not be perfect, but it's good enough to keep people paying tier; and it will stay that way until it changes.

Monday, August 2, 2010

personas in social media applications

so now i'm sitting here thinking about features i would add to social networking sites like twitter or facebook if i had the chance. earlier i talked about what i would do if i rewrote twitter. today i'm thinking about identity in a social network. in thinking about identity i'm reminded of william shakespeare's paraphrase of the rock band RUSH's song lyrics:

"all the world's a stage, and all the men and women merely players; they have their exists and their entrances; and one (wo)man in (her)his time plays many parts..." -- w. shakespeare, as you like it, 1599

when i read this (or listen to RUSH on my phone) i ponder the truth of this statement. most mornings i'm a parent, getting my child out of bed, fed and off to school. some days i'm an office worker. other days i'm an artist. still other days i'm a couch potato. on very rare days i'm a divorcée looking for a fun night out.

but strangely, when i go on twitter or facebook, i'm ALWAYS the same person. and i think that's odd.

it may make me sound like i have multiple personality disorder, but it turns out i like to focus on different things at different times. during the day i like to focus on work. in the evening i can think about hobbies, family and friends: some days i'm in the mood to talk about computer programming, other days i'm in the mood to talk about knitting. or movies. or ...

so i'm thinking, why can't i create separate "personas" for my twitter account? if i could, i would probably create a "family persona" for all of the disney / g-rated things i want my parents to know about. i might also create a "professional persona" that talked less about knitting and more about internet protocols. and when the offspring has been put to bed, i could pull out the "dating persona" and talk about LGBT dating issues, "intimate" robotics and erotica. i don't think anyone would be surprised that i have these different aspects; and i also don't think anyone would blame me for wanting to keep them separate.

in the past, the way to do this was to set up completely distinct accounts. but i'm lazy and i don't like to remember passwords. so what if we (software developers) started making systems that allowed for multiple personas per user? that way i would have one password per service and not have to worry about remembering so many different credentials. if i had this ability, i would probably have a couple different personas: family meadhbh, work meadhbh, professional meadhbh, social meadhbh and r-rated meadhbh.

it's entirely possible that there could be a "i'm doing something i don't want my boss/partner/mother to know about" meadhbh, so i would want it to be my option whether i publicly linked my r-rated meadhbh persona with any of the others.

to be sure, there are risks associated with this idea. what if you're logged in as your "sunday school teacher" persona and you make a sexual innuendo because you thought you were logged in under your "r-rated college student" persona?

but if your personas are tied to an account with a scarce resource (like SMS messages, mobile minutes or in-game script) it might make more sense. rather than having to shuffle virtual money between your Second Life accounts, you could have your game cash associated with one user account, and then available to all your personas.

i also wonder if, in the future, after we've all shared our most embarrassing moments on flickr and facebook, prospective employers or in-laws will start to cut people a little slack. maybe we'll adopt the convention that someone's personal persona is their private business, even if someone could link your job application with pictures of you dancing with a drunk upperclassman twenty years ago.

so i think the next identity management system i work on will support two classes of accounts: user accounts that own resources and personas that are distinct public identities.

the system can be described with this graph:

[diagram: the relationship between user accounts, personas, resources and key-value pairs]
  1. user accounts contain personas.
  2. user accounts own or control resources.
  3. personas may be publicly linked to user accounts or other personas.
  4. resources may be allocated to personas.
  5. user accounts and personas contain key-value pairs.
  6. resources are probably represented by key-value pairs.
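
and here's a minimal sketch of that graph as code, in python. the class and field names are mine and purely hypothetical; the numbered comments refer to the rules above.

  class Persona:
      """a distinct public identity."""
      def __init__(self, name):
          self.name = name
          self.attributes = {}      # key-value pairs (rule 5)
          self.public_links = []    # optional links to accounts or other personas (rule 3)
          self.resources = []       # resources allocated to this persona (rule 4)

  class UserAccount:
      """owns resources; contains personas."""
      def __init__(self, login):
          self.login = login
          self.attributes = {}      # key-value pairs (rule 5)
          self.personas = {}        # user accounts contain personas (rule 1)
          self.resources = {}       # user accounts own resources (rule 2)

      def add_persona(self, name):
          self.personas[name] = Persona(name)
          return self.personas[name]

      def allocate(self, resource, persona_name):
          # the account keeps ownership; the persona gets use of it (rule 4)
          self.personas[persona_name].resources.append(resource)

  # one login, several public faces, one shared pool of game cash
  account = UserAccount("meadhbh")
  for p in ("family", "work", "professional", "social", "r-rated"):
      account.add_persona(p)
  account.resources["game_cash"] = 500
  account.allocate("game_cash", "social")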

when i rewrite twitter

so i've been twittering for a year and some, most recently as @OhMeadhbh. and here are a few observations and ideas i wanted to write down before i forgot. as always, feel free to tell me i don't understand the media.

let me start by saying that while i never claimed to be a "social media genius," i'm also not an idiot at it. i started with social networking almost a decade ago with adrian scott's ryze.com. i ignored classmates.com, friendster, tribe and myspace. got hooked on early versions of linked-in and facebook. and i completely dissed twitter when i first saw it. i'm still trying to figure out if status.net and identi.ca are useful.

so here are some observations:
  1. twitter.com is a single point of failure. don't get me wrong. i love twitter. i gabbed with some of the twitter guys on the plane riding down to an IETF meeting, and i love them too. biz stone came and talked to us at a corporate function and he was absolutely lovely. but using a single corporate entity to hold both your social connections and your status updates? tsk. tsk.

    and i didn't even mention the fail whale.

    ditto for facebook, tribe, friendster, linkedIn, etc.

  2. the ecosystem surrounding a social networking site is probably more valuable than that site. or maybe a better way of saying this is, social networks with extensible ecosystems are more valuable than those without. facebook has a developer program; twitter has an API; LinkedIn uses OpenSocial. ryze.com has no API. tribe has no API. friendster added widgets, but maybe it was just too little too late.

    why is this a big deal? the simple answer is: if you're running a social networking site and you provide 3rd parties with a way to access information about your clients' social networks, they'll make applications and tools that use your site. someone else will assume the risk of marketing to niche segments. if you choose your core market carefully, you won't compete with these third parties. assuming they're bringing people into your network that you would not have targeted anyway, it's a win-win.

  3. i don't care about your foursquare spam. i've tried to use foursquare. really. i did. i just don't get it. but that's okay, i know that other people do. i would LOVE to be able to turn off all foursquare updates in my tweet stream. ditto for echo bazaar.

and here are some ideas:
  1. manage your own status updates and friends lists. there's no reason you couldn't manage your own friends list and status updates. maybe the thing to do is run a directory service that points people to FOAF records. FOAF (or Friend of a Friend) is a format for carrying information about your social network.

    if you were ambitious, you could define a DNS SRV record pointing to a server that would respond to queries about your social network. (actually, someone already has; look at webfinger. there's a rough sketch of this lookup after this list.)

    all we need now is a couple of web applications; one to give out selected information to people who ask, and another to collate that info and present it to you. bonus points if you target it for LAMP systems.

  2. what if the "social networking" site was run as a non-profit and people focused on selling products into the ecosystem? let's just say that you had a .org whose role in life was just to operate as a registrar of FOAF servers (and maybe a small hosting community of its own.) if it operated as a non-profit, its operations and behavior would likely be radically different from companies like facebook. if you weren't trying to grow your revenues by selling people's info to advertisers, you could have a network where marketers (working for sellers) could co-operate with concierge services (what doc searls calls "4th parties" or "vendor relationship management" and eve maler calls "user managed access.")

  3. how 'bout "twitter with channels?" i would love it if foursquare spam could get shunted off to a "foursquare channel." then if you weren't interested in your friends' foursquare spam, you could just tell your client: "do not show me updates on the foursquare channel."

    or better yet, people who were interested in foursquare could have the option of having foursquare updates consumed directly by their foursquare applications. we would never have to see it again.

    ditto for blip.fm and last.fm and london underground.
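
as promised in item 1, here's a rough python sketch of the directory lookup. it assumes the host answers webfinger queries (as webfinger was eventually standardized) and advertises a FOAF document under a "describedby" link; the account name and the relation are made up for the example.

  import requests

  def find_foaf(acct):
      # acct looks like "meadhbh@example.org"
      host = acct.split("@", 1)[1]
      r = requests.get(
          f"https://{host}/.well-known/webfinger",
          params={"resource": f"acct:{acct}"},
          timeout=10,
      )
      r.raise_for_status()
      for link in r.json().get("links", []):
          # assume the FOAF document is advertised as a "describedby" link
          if link.get("rel") == "describedby":
              return link.get("href")
      return None

  print(find_foaf("meadhbh@example.org"))
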
so these are just a few ideas. i'm hoping to get people talking. maybe it would be fun to try to spin a business plan around some of these ideas and pitch it to some angels. i would love to know what peeps would add to their own social networking site.

Tuesday, July 27, 2010

pay for your lag?

there's a discussion over at gwyneth llewelyn's blog between her and rob blechner about the upcoming linden town hall meeting. one of the topics mentioned is lag, and what can be done about it. gwen is happy to hear philip rededicate the company to fighting lag, but rob makes the point that in a user generated content world, there's not a lot you can do about it.

this got me thinking... why not build a world where there's a financial cost to creating laggy prims?

determining what's laggy these days isn't an exact science, but i think we know enough to make a few reasonable assumptions. saying "unoptimized textures and large collections of prims lead to more lag" is a good start.

maybe users in the virtual world could be given a "lag budget" measured in ARC-hours. (ARC is "Avatar Rendering Cost.") if you go over your lag budget, you have to pay for more. so if you were given 6000 ARC-hours per month, you could wear your 1500 ARC outfit for only four hours before you had to buy more lag. but you would get 40 hours of use out of your 150 ARC ruth-esque outfit.
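
the arithmetic, as a python sketch. the 6000 ARC-hour allowance and the two outfits are the numbers from the paragraph above; everything else is hypothetical.

  MONTHLY_BUDGET = 6000  # ARC-hours, the allowance from the paragraph above

  def hours_of_wear(outfit_arc, budget=MONTHLY_BUDGET):
      # how long can you wear an outfit before the budget runs out?
      return budget / outfit_arc

  print(hours_of_wear(1500))  # 4.0 hours in the blingy outfit
  print(hours_of_wear(150))   # 40.0 hours in the ruth-esque outfit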

maybe a benefit of premium accounts could be you could sell your unused lag on the "open lag exchange."

just a thought.

what does it mean for a virtual world to be open?

for the last several years, a bunch of us in Second Life™ have been bemoaning the fact that it's more or less a walled garden. yes, there are plenty of ways to get data in and out of the virtual world (even more with viewer 2's "media on a prim" feature.) but there's still a LOT more we could do to open this place up.

in the vwrap working group, we've been focusing on technical details, only occasionally surfacing for more "broad ranging" conversations. i think everyone in the working group supports the concept of an "open" virtual world, but i don't know that we've sat down to have a detailed conversation about what that means.

so this is my two cents on this topic. i would love to hear other people's ideas.

what you get with Second Life™ at the moment

Second Life™ currently allows for some flow between the virtual world and the 2d web. for instance, you can apply a web page to an object's exterior. viewer 1 effectively limited you to one URL per parcel, using the parcel media feature. viewer 2, with all its faults, introduced the concept of putting a live web page on a prim face. cool stuff.

LSL scripters can also create objects that query data sources on the web using the LSL HTTP Client feature. more recently, objects in world gained the ability to respond to HTTP requests with the LSL HTTP Server feature. these features allow in-world developers to interact with the larger 2d web; objects in world can respond to things happening in embodied reality.

LSL developers have done some great work creating in world objects that act as bridges between data and communication channels in world and out. (i'm thinking specifically of the various devices that bridge SL Group Chat with IRC and XMPP.)
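
for flavor, here's a python sketch of the out-of-world half of such a bridge: a bare-bones IRC client that prints channel traffic for some other process to relay in-world. the server, channel and nick are made up, and a real bridge would do proper registration before joining.

  import socket

  HOST, PORT, NICK, CHANNEL = "irc.example.org", 6667, "slbridge", "#slgroup"

  sock = socket.create_connection((HOST, PORT))
  sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :sl chat bridge\r\n".encode())
  sock.sendall(f"JOIN {CHANNEL}\r\n".encode())  # simplified; wait for the welcome in real life

  buf = b""
  while True:
      buf += sock.recv(4096)
      while b"\r\n" in buf:
          line, buf = buf.split(b"\r\n", 1)
          text = line.decode(errors="replace")
          if text.startswith("PING"):
              # answer keepalives or the server will drop us
              sock.sendall(text.replace("PING", "PONG", 1).encode() + b"\r\n")
          elif " PRIVMSG " in text:
              # channel chatter; hand it to whatever relays messages in-world
              print(text)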

why this isn't enough

but my assertion is that while these features are great, they're just not enough.

these features are relatively limited in terms of the MIME types you can use and the length of HTTP requests and responses in and out of the virtual world. to do any "real" development, you need to establish a web proxy outside SL.

this isn't a deal killer per se, but it does introduce some issues. in order to consume relatively complicated data (an RSS or ATOM feed, for instance) you need both LSL skills and the ability to code server-side PHP, Python, Perl, Java, Ruby or what-not. again, not a deal killer, but it limits the set of people who can develop for your virtual world.
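
to make that concrete, here's a python sketch of such a proxy: it fetches an RSS feed and boils it down to a few plain-text titles that an LSL llHTTPRequest() call can comfortably consume. the endpoint and feed URL are hypothetical.

  from http.server import BaseHTTPRequestHandler, HTTPServer
  import urllib.request
  import xml.etree.ElementTree as ET

  FEED_URL = "http://example.org/feed.rss"  # hypothetical feed

  class FeedProxy(BaseHTTPRequestHandler):
      def do_GET(self):
          with urllib.request.urlopen(FEED_URL) as f:
              tree = ET.parse(f)
          # RSS 2.0: channel/item/title; keep the five newest titles
          titles = [i.findtext("title", "") for i in tree.iter("item")][:5]
          body = "\n".join(titles).encode("utf-8")
          self.send_response(200)
          self.send_header("Content-Type", "text/plain")  # LSL-friendly MIME type
          self.send_header("Content-Length", str(len(body)))
          self.end_headers()
          self.wfile.write(body)

  if __name__ == "__main__":
      HTTPServer(("", 8080), FeedProxy).serve_forever()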

there are also "plumbing issues" like authentication and federated identity. and getting objects in and out of the world is a minor annoyance (i'm thinking of second inventory and related functionality for OpenSim.) moving things in and out of Second Life™ is not impossible, but it's hard to automate, and requires the use of interfaces the lab may change without advance notice.

don't get me wrong here; i understand there's a rationale for why we live with these limits. i don't subscribe to the opinion that the lab (or some of its employees) is evil just because the lab's business model might not be aligned with my personal opinions of how the world should work.

but i might as well talk about what i would like to see and why.

so what would an "open" virtual world look like? my take on an "open" world is one where just about any bit of data needed to participate or render the world could be hosted on an arbitrary server somewhere.

identity, groups and authentication

let's start with identity, groups and authentication. right now in SL, if you want to create an account, you go to secondlife.com, fill in some data, click a button and voilà! you have an account on linden's servers. what could possibly be wrong with that?

well... a lot, that's what.

the 1 million active users metric you hear from the lab is pretty good for a small virtual world, but it's dwarfed by WoW's 11.5 million (paying) users, twitter's 75 million and facebook's 500 million. what if we opened up the virtual world to other identity providers? what if we let people automagically provision an account using identities they already have? they could use their profile information from facebook, gmail, linkedin, twitter and even the wikipedia. this would remove the "friction" of forcing people to create a new account in order to use the service.

so instead of forcing users to provision a new account with its own password, we could use OAuth, SAML or something similar to carry identity and authorization information. done correctly, this would allow people to retain their "branding" from other services. it would be very nice to know, for instance, that the Meadhbh Oh you meet in-world is the same as @OhMeadhbh on twitter.

twitter, facebook and gmail all have access to the "long form" name you used when you set up your account. why not use this information to put your account name over your head instead of the "fantasy" name we currently use?
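
here's a hedged python sketch of the idea: prove control of a twitter account with OAuth and reuse its verified "long form" name as the avatar's name tag. the keys are placeholders, and the name-tag assignment at the end stands in for a hypothetical virtual world call.

  from requests_oauthlib import OAuth1Session

  twitter = OAuth1Session(
      client_key="CONSUMER_KEY",
      client_secret="CONSUMER_SECRET",
      resource_owner_key="ACCESS_TOKEN",
      resource_owner_secret="ACCESS_TOKEN_SECRET",
  )
  me = twitter.get(
      "https://api.twitter.com/1.1/account/verify_credentials.json"
  ).json()

  # stand-in for whatever call would label the avatar in-world
  avatar_name_tag = f'{me["name"]} (@{me["screen_name"]})'
  print(avatar_name_tag)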

group affiliation and friends lists could also be automagically "imported." or rather than having the virtual world make a copy of your social graph, it would just use your favorite, existing social networking site directly to populate your friends list each time you log in.

in the future, i think the "open" virtual world will provide a semi-public avatar profile that uniquely identifies you to virtual world systems and users. an identity provider could either host that profile itself (think of how twitter gives each user a page under http://twitter.com/) or give you the option of putting your avatar profile URL in your social media site profile next to your email and relationship status.

the virtual world cannot depend on a single identity provider. we must make it easy for users to provision accounts and bring their personal branding and social network with them.

text and voice chat

and what about text and voice chat? right now in SL, you have one provider for group and person-to-person chat: your text chat is routed through linden's servers and your voice chat through vivox. it would be nice if we, the users, could pick our favorite voice and text chat providers.

why? not to put too fine a point on it, group text chat in SL sucks rocks. and people might just want to use their existing skype account for voice chat; maybe they have a skype out account and want to add a POTS user to a conference call. or maybe they want to talk about super secret things and want to use a corporate VoIP system.

or maybe they want to just use IRC or XMPP to chat so that the discussion has the option of moving out of the virtual world and into the less immersive, but more common world of "plain ol' internet chat." imagine an experience where you could be tied into your social network by way of a simple text chat client; then when things "got interesting" you could drop in world to see what people were talking about.

and if you were handy with authentication mechanisms, you could probably use the same identity on the chat channel that you did in world. in fact, i would sort of demand it.

what about assets and land?

digital assets in Second Life™ are now inextricably tied to Linden's asset servers. when their servers go down, you lose access to your goods. things are certainly more stable these days than in the old days when sim crashes and grid-wide downtime were normal occurrences. but it's still very annoying to be forced to deal with someone else's network problems.

wouldn't it be nice if you could have your own asset server on your desktop machine? maybe even running your own simulator so you would have a non-social, desktop development option.

an "open" virtual world would give you that option. your assets are stored where you want them stored. if you want to build your 3d objects with AutoCAD or 3DMax and then drag them into the virtual world browser, that should be your option.

the "open" world goes beyond where we are today. it's more than just a few HTTP messages coming in and out of the virtual world, but interoperable, open standards underpinning every aspect of the experience.

Monday, July 19, 2010

VWRAP essentials : capabilities

so i recently got an email from a VWRAP document reviewer asking if web capabilities could be considered "security by obscurity." the simple answer is, "no. as long as you treat a web capability as a one time use password for a resource, you should be fine."

but i realized then that there are a number of people who don't grok caps. if you're unfamiliar with the terms "capabilities" or "caps" or "webcaps," this blog post may be for you. (note: i'm cribbing a lot of the text of this post from an earlier message i wrote on the OGPX mailing list.)

what are capabilities anyway?

in general, "capabilities" are authorization tokens designed so that possession of the token implies the authority to perform a specific action.

consider a file handle in POSIX-like systems: when you open a file, you get back an integer representing the open file. you only get this file handle back if you have access rights to the file. when you spawn a new child process, the child inherits your open file handles and it too can access those files via the file handle, even though the child process lives in a completely different process space. later versions of *nix even allow you to move open file handles between unrelated processes. so the takeaway here is: it's an opaque bit of data (i.e. - a token) and if you have it, it means you have the authority to use it. and, you can pass it around if need be.
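
here's the analogy demonstrated in python (unix-only because of os.fork()): the child never opens anything itself; possession of the inherited descriptor is the authority.

  import os

  r, w = os.pipe()              # the parent is authorized to use these two fds
  if os.fork() == 0:
      os.close(w)
      print(os.read(r, 64))     # the child reads via the inherited descriptor
      os._exit(0)
  else:
      os.close(r)
      os.write(w, b"the fd itself carries the authority")
      os.close(w)
      os.wait()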

capabilities on the web extend the concept. in addition to the token implying authorization to access some resource, it usually also provides the address to access the resource. in other words, a web capability is a URL possessed by a client that the client may use to create, read, update or delete a resource.

web capabilities in VWRAP take the form of a "well known" portion of a URL (something like "http://service.example.org/s/") plus a large, unguessable, randomly generated string (like "CE7A3EA0-2948-405D-A699-AA9F6293BDFE".) putting them together, you get a valid URL a client can use to access a resource via HTTP. in this example, that URL would be "http://service.example.org/s/CE7A3EA0-2948-405D-A699-AA9F6293BDFE".
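
minting one of these in python is a one-liner around uuid4; the base URL is the example from above.

  import uuid

  BASE = "http://service.example.org/s/"

  def mint_capability():
      # uuid4 gives 122 random bits; a real deployment might want even more
      return BASE + str(uuid.uuid4()).upper()

  print(mint_capability())
  # e.g. http://service.example.org/s/CE7A3EA0-2948-405D-A699-AA9F6293BDFE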

why the heck would i ever want to do that!?

no doubt about it, this is not a "standard pattern" for web services. normally, if you have a resource, you publish it at a well known URL, and if it's sensitive you require the client to log in prior to being granted access.

for example, you might have a RESTful resource at "http://service.example.org/s/group/AWGroupies" representing the group "AWGroupies". you define an API that says if you want to post a message to the group, you use HTTP POST with data (XML, JSON or whatever) implying the semantic "post this message to this group". for the sake of discussion, let's say the message looks like:
  {
    from: "Meadhbh Oh",
    message: "I'm giving 50 L$ to anyone who IMs me in the next 5 minutes!"
  }
authentication is in order here, but this is a well known problem: i simply use HTTP digest auth over HTTPS (or something similar) and we're done. this is a perfectly acceptable solution.
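
from the client's side, that perfectly acceptable solution looks something like this python sketch; the credentials are placeholders and the payload is the example above.

  import requests
  from requests.auth import HTTPDigestAuth

  resp = requests.post(
      "https://service.example.org/s/group/AWGroupies",
      json={
          "from": "Meadhbh Oh",
          "message": "I'm giving 50 L$ to anyone who IMs me in the next 5 minutes!",
      },
      auth=HTTPDigestAuth("meadhbh", "password"),  # digest auth on every request
      timeout=10,
  )
  resp.raise_for_status()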

but there are a couple of issues with this solution.

most notably, every service that presents an interface to a sensitive resource MUST understand authentication. so not only does "http://service.example.org/s/group/AWGroupies" need to understand authentication, so do "http://service.example.org/s/user/Oh/Meadhbh" and "http://service.example.org/s/parcel/Levenhall/Infinity%20is%20full%20of%20stars" and so on.

it's not a problem really, until you start adding new authentication techniques. one day your boss comes to you and says, "hey! we're using SecurIDs for everything now!" ugh. but it's still not that painful. you've probably abstracted out authentication, so you have a map of service URLs to authentication techniques and common libraries that actually authenticate requests throughout your system.

this works until the day your boss comes in and says... "hey! we just bought our competitor cthulhos.com! we're going to integrate their FooWidget service into our service offering! isn't it great!" and then you get that sinking feeling, 'cause you know this means you've got to figure out a way to get their service to talk to your identity provider so their service can authenticate your customers. people who have gone through this know that, depending on the technology, this can turn out to be a bag full of pain.

the standard way of doing this is something like:
  1. a service request comes into http://service.example.org/s/foo/Meadhbh
  2. http://service.example.org/s/foo/ redirects with a 302 to http://foo.cthulhos.com/Meadhbh
  3. http://foo.cthulhos.com/Meadhbh responds with a 401 and a WWW-Authenticate: header, telling the client to resubmit the request with credentials.
  4. the client resubmits to http://foo.cthulhos.com/Meadhbh with the proper Authorization: header. but remember, these are example.org's customers, so
  5. http://foo.cthulhos.com/Meadhbh sends a message to a private interface on example.org, asking it to authenticate the user credentials.
  6. assuming the client is using valid credentials, example.org responds to cthulhos.com with the digital equivalent of a thumbs up, and finally...
  7. http://foo.cthulhos.com/Meadhbh responds to the request.
and this works pretty well up until the point that the new CIO comes in and says, "infocard! we're moving everything to infocard!" there's nothing wrong with infocard, of course, but in this situation you've got to implement it at both example.org and cthulhos.com. and when we start adding to the mix the fact that the biz dev people keep trying to buy new companies and you get a new CIO every 18 months who wants a new user authentication technology, things can get out of hand.

and i didn't even talk about the fact that each time you change the authentication scheme, thick client developers have to go through the code, looking for every place a request is made.

web capabilities are not a magic panacea, but they can help out in this situation. rather than having each request authenticated, the user's identity is authenticated once at a central location (like example.org.) it coordinates with its partner services (cthulhos.com) to provide a unique, unguessable URL (the capability) known only to that specific client and trusted systems (example.org and cthulhos.com.)

so the flow would be something like...
  1. a client logs in at http://service.example.org/s/authme and asks for a capability to use a particular service
  2. http://service.example.org/s/authme verifies the user's credentials and verifies the user can access that service
  3. http://service.example.org/s/authme sends a request to a private interface on cthulhos.com asking for the capability.
  4. cthulhos.com generates the unguessable capability http://foo.cthulhos.com/EE409B12-6E9B-4F5B-90BF-161AE5DE410C and returns it to http://service.example.org/s/authme
  5. http://service.example.org/s/authme returns the capability http://foo.cthulhos.com/EE409B12-6E9B-4F5B-90BF-161AE5DE410C to the client
  6. the client uses the capability http://foo.cthulhos.com/EE409B12-6E9B-4F5B-90BF-161AE5DE410C to access the sensitive resource.
both approaches require establishing a trusted interface between example.org and cthulhos.com, but in the case of the capability example, only service.example.org has to know about the specific details of user authentication. thick client developers may also notice that they access the capability as if it were a public resource; that is, they don't need to authenticate each request.
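
here's a minimal python sketch of steps 3 and 4 of that flow: the service mints an unguessable token and remembers which user and operation it stands for. the in-memory dict stands in for whatever shared store the granting and serving hosts would really use.

  import uuid

  CAP_BASE = "http://foo.cthulhos.com/"
  cap_table = {}  # token -> (user, operation)

  def grant_capability(user, operation):
      token = str(uuid.uuid4()).upper()
      cap_table[token] = (user, operation)
      return CAP_BASE + token

  def resolve_capability(token):
      # possession of the token IS the authorization; no per-request login
      return cap_table.get(token)  # None means the server answers 404

  cap = grant_capability("Meadhbh Oh", "foo/read")
  print(cap)
  print(resolve_capability(cap.rsplit("/", 1)[1]))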

another benefit to capabilities is that they are pre-authorized. if you have a resource that is accessed frequently (like maybe "get the next 10 inventory items" or "check to see if there are any messages on the event queue for me") you don't have to do the username -> permissions look up each time the server receives a request. for environments where the server makes a network request for each permissions check, this can lead to reduced request latency.

capabilities are not magic panaceas. there's still some work involved in implementing them, and they start making a lot more sense when you have a cluster of machines offering service to a client and deployers want identity and identity-to-permissions mapping functions to live elsewhere in the network than the machine offering the service. (i.e. - "the cloud" or "the grid".)

but how do i provision a capability?

there are several ways to provision capabilities, but the approach we take in VWRAP is to use the "seed capability."

like many other distributed protocols involving sensitive resources, VWRAP interactions begin with user authentication. this is not strictly true; i'm ignoring the case where two machines want to communicate outside the context of a user request, but let me hand wave that use case away for the moment while we talk about using seed caps.

the process begins with user authentication. the VWRAP authentication specification describes this process; the client sends an avatar name and a password to an authentication server. assuming the authentication request can be validated, the server returns a "seed cap." the client then sends a list of capability names to the seed cap and awaits the response.

what the host behind the seed cap is doing while the client waits for a reply is verifying the requested capability exists and the user is permitted to perform the operation implied by the capability. (and it does this for each capability requested.)

so, for example, let's say you are a client that only wants to update a user profile and send/receive group messages. the protocol interaction might look something like this...

a. authentication : client -> server at https://example.org/login
  {
    agent_name: "Meadhbh Oh",
    authenticator: {
      type: "hash",
      algorithm: "md5",
      secret: "i1J8B0rOmekRn8ydeup6Dg=="
    }
  }
b. auth response : server -> client
  {
    condition: "success",
    agent_seed_capability: "https://example.org/s/CF577955-3E0D-4299-8D13-F28345D843F3"
  }
c. asking the seed for the other caps : client -> server at https://example.org/s/CF577955-3E0D-4299-8D13-F28345D843F3
  {
    capabilities : [
      "profile/update",
      "groups/search"
    ]
  }
d. the response with the URIs for the caps : server -> client
  {
    capabilities : {
      profile/update : "http://service.example.org/user/35A59C5D-315C-4D50-B78D-A38D41D2C90A",
      groups/search : "http://cthulhos.com/8579CE1F-9C05-43E8-8677-A645859DCD64"
    }
  }
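
and the whole a-through-d exchange, sketched as a python client. field names follow the examples above; i'm assuming both the login endpoint and the seed cap accept POSTed JSON.

  import requests

  login = requests.post(
      "https://example.org/login",
      json={
          "agent_name": "Meadhbh Oh",
          "authenticator": {
              "type": "hash",
              "algorithm": "md5",
              "secret": "i1J8B0rOmekRn8ydeup6Dg==",
          },
      },
      timeout=10,
  ).json()
  assert login["condition"] == "success"

  caps = requests.post(
      login["agent_seed_capability"],
      json={"capabilities": ["profile/update", "groups/search"]},
      timeout=10,
  ).json()["capabilities"]

  # the returned URLs are then used like ordinary, unauthenticated resources
  requests.post(caps["profile/update"], json={"motto": "caps are cool"}, timeout=10)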

expiring capabilities

readers may notice a potential "TOCTOU vulnerability." TOCTOU stands for "time of check, time of use," and refers to a common security problem: what happens if the permissions on an object change between the time the code managing the resource checks the permission and the time it performs an operation on the resource?

this is a common problem with many systems, including POSIX file descriptors. (seriously.. if you change the permissions on a file to disallow writing AFTER the file has been opened, subsequent writes on the file descriptor will not fail in many POSIX systems.)

VWRAP addresses this problem by expiring capabilities when they get old. so if you request a capability, then wait a LONG time before you access it, you may find you get a 404 response. the VWRAP specifications do not require all caps to expire, but they do require servers to signal expiration by removing the cap (thus the 404 response) and require clients to understand what to do when a capability has expired. in most cases, the appropriate response is to re-request the capability from the seed cap. if the seed cap has expired, clients should re-authenticate.

capabilities may also "expire after first use." also called "single shot capabilities," they are used to communicate sensitive or highly volatile information to the client.

current practice is to include an Expires: header in the response from the server so the client will know when the resource expires.
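
a python sketch of the server side of expiration: expired caps simply vanish from the table, so clients see a 404, and a "single shot" cap deletes itself on first use. the lifetimes are made up.

  import time
  import uuid

  cap_table = {}  # token -> (resource, expires_at)

  def mint(resource, ttl_seconds=300):
      token = str(uuid.uuid4()).upper()
      cap_table[token] = (resource, time.time() + ttl_seconds)
      return token

  def lookup(token):
      entry = cap_table.get(token)
      if entry is None:
          return None                  # unknown or already removed: answer 404
      resource, expires_at = entry
      if time.time() >= expires_at:
          del cap_table[token]         # expired caps are removed...
          return None                  # ...so the client sees a 404 here too
      return resource

  def lookup_single_shot(token):
      resource = lookup(token)
      if resource is not None:
          del cap_table[token]         # "expire after first use"
      return resource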

introspection on capabilities

finally, RESTful resources represented by a capability are described by an abstract interface written in an interface description language (like LLIDL, the language described in the VWRAP Abstract Type System draft). several people have requested introspection, so clients may request the LLIDL description of a capability and more accurately reason about the semantics of its use.

the proposed solution to this problem for VWRAP messages carried over HTTP is to use the OPTIONS method when accessing the capability (instead of GET, POST, PUT or DELETE.) upon receipt of the OPTIONS request, the server should respond with the LLIDL describing the resource.
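
from the client's perspective, introspection would be as simple as this sketch (the cap URL is the example token from earlier in the post):

  import requests

  cap = "http://foo.cthulhos.com/EE409B12-6E9B-4F5B-90BF-161AE5DE410C"
  resp = requests.options(cap, timeout=10)
  print(resp.text)  # the LLIDL interface description, if the server implements it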

conclusion

capabilities are cool, especially in clouds or grids.

references

a pair of google tech talks provide a pretty good technical introduction to the concept of capabilities.

VWRAP's description of capabilities is in the working group's protocol drafts.

VWRAP uses capabilities to access RESTful resources; roy fielding's paper on REST describes the architectural style.