The times they are a-changing

by Matt 11. June 2008 17:24

It’s been a little while since I last posted. Well, not counting the two posts earlier today. And, admittedly, since Smallest Child was born, I’ve struggled to hit double-figures in a single month, and oh-my-god, I’m writing a meta-blogging post.

I always promised myself I wouldn’t blog about blogging, but you know what, I’ve been doing this for (very surprisingly) over 2 years now. So I think I’m entitled to the occasional self-indulgence. Feel free to wander off now – this post is going to be very light on techie stuff.

(My first post was 25 May 2006, not that you’d know it from the blog software I’m using – for some reason, the Posts Archive list starts in June 2006. This blog runs on Single User Blog, an Open Source project started by Mitch Denny that I had a lot of fun hacking on and learning ASP.NET 2.0 with. Of course, I never did have enough spare time to implement/fix everything I wanted to (such as conditional GETs on the RSS feed – ouch), and you know that when the creator of the platform abandons it, it’s time to move on. I’d love to write a new platform, WCF and LINQ based, but I don’t have the time, I wouldn’t finish it, and I think Minima has already done it, so I’ll probably be off to BlogEngine.net sometime soon. I just want to migrate without breaking anything.)

Anyway, there are more interesting things to write about. Namely, a change of jobs. I’ve had exactly 3 jobs in the last 13 years. One year at a database solutions provider that went bankrupt. 5 years as a C/C++ cowboy developer at a company that creates a finite element analysis suite (and whose website is beautifully, wonderfully, still running on the same dev box, with the same university domain name). And 7 years at a rather large online bank.

And I’ve left. I’d reached the point where there wasn’t much else I felt I could get out of being there – I’d ticked all the boxes I’d wanted to tick. I was a Principal Developer, but had moved away from writing code. I’ve had a brilliant time there – it really was a great place to work (despite what we used to say. We weren’t happy unless we were complaining). I’ve learnt an awful lot, and I’ve shipped a lot of software (there are a number of really important pieces of architecture that I’ve had my hands on that I’m really rather proud of).

Plus, my wife and I work for the same company, many miles from home. Putting 2 kids in the car for 2 hours a day is a bit rough. Plus First Born is starting pre-school soon, and the child care just doesn’t work out when we’re both working.

So here I am. In London, living away from home contracting. My wife’s also just handed her notice in, and it’s all change Chez Moi.

I’m currently working on a 6 month contract down in London for a rather well known media company, building the back end processing system for pushing TV programmes out onto the Internet. Cool. My contract is for me to be an Application Architect, but I’m kind of on secondment to this project right now, so I guess I’m a dev of some sort. But it’s really good. I’m 3 days in, and have already been pairing, writing test-driven code, and checking stuff in; I’ve been productive, which I wasn’t expecting so soon.

“Team Agile” as my ex-boss puts it. He’s not bitter.

But enough navel-gazing.

I’ve got me a new laptop, and quite clearly, my eyes are bigger than my belly. It’s a Dell Inspiron 1720. Yep. 17 inches of goodness. Er, screen. And that’s a whopping 1920 x 1200 resolution. I’m all awash with screen real estate right now; it’s lovely. And the 4 gig helps, too. Check it out:

[Screenshot: Windows Experience Index scores]

This isn’t a laptop, it’s a desktop with a built-in screen! It’s even got a numpad!

And it’s just as big and heavy.

Yeah, I’ve sacrificed some convenience for raw power. Oh well. I’ll get over it.


Windows Live Writer CTP

by Matt 11. June 2008 15:46

There’s a new test version of Windows Live Writer out, and amongst the changes is one to warm the cockles of my heart – they correctly register the Windows Search filter for .wpost files. Which means that post is now irrelevant.

Thanks guys.


Testing

by Matt 11. June 2008 14:50

Nothing to see, move along.

Really.

(And this post is staying up, too. Think repaving a new laptop and copying over all your old Windows Live Writer posts, only to find that when you edit one and try to publish, it creates a new file and, rather disappointingly, even creates a new post instead of updating the old one. Time for another not-so-quick hack. Now, how do you do Structured Storage again?)


Social networks and data portability

by Matt 20. May 2008 16:27

Speaking of data portability, and how Live Mesh plays with this, some of the big boys have recently made announcements - MySpace, Facebook and Google.

MySpace seems to be playing catch-up, announcing a REST API that is initially restricted to a few partner sites. Perhaps the least impressive announcement, but the interesting part is the use of the OAuth standard for authentication.

Facebook then (a little too) hurriedly announced Facebook Connect, which seems to be a way for you to associate your profile with a third party site. That site then seems to get privileged access to the Facebook data.

Google announced Friend Connect, which is a complete sleight of hand trick. It's all about creating a social network at your site, but by hosting Google gadgets. One gadget is the master membership gadget, and all the others are OpenSocial gadgets that do "social" stuff. When you visit the site, you sign in to the membership gadget (via a Google Account, Facebook, OpenID or AIM account). Then, the membership gadget lists all of your friends from these accounts that are also members of the new site. The other gadgets provide the social element, such as comments.

So how do the big boys handle Data Portability? Well, poorly.

Surprisingly, MySpace is the most open. Facebook's offering definitely seems to be a step in the right direction, but appears to be limited to certain 3rd parties, and is entirely proprietary. But Google. Oh boy. They're not even trying. All of the data is stored in Google's silos. They aggregate data from other networks, but don't let any of it out - a proper roach motel. Their social offering is all based on gadgets, and the hosting site doesn't see any of the social data.

Charitably, you can view Google's play as not being about data portability, but about enabling sites to easily add a social element - playing to the Long Tail of sites wanting a social element without having to build up the number of members required to make a successful social site.

Of course, the fun only starts there. Facebook have banned Google's Friend Connect from accessing the Facebook API, because they've violated the terms and conditions of the service (more from TechCrunch and a detailed view from Google). It would be very easy to be snarky here and ask how committed Facebook is to Data Portability...

(And Dare makes a welcome return to blogging with a great post about this.)

But it does raise a very interesting question that Live Mesh (which might be able to sidestep these portability issues) doesn't address - ownership of data. DataPortability.org's Chris Saad has an interesting view on this in his blog post "Forget Facebook".

My address book is my own. When you email me, or when you communicate with me, you are revealing something about yourself. You define a social contract with me that means that I can use your information to contact you whenever and however I like - I could even re-purpose my address book for all manner of other things.

If, however, you violate that trust, either directly or indirectly, you break the social contract and I will tend to not deal with you again. We cannot perfectly engineer these sorts of contracts into systems - we can try, but in the end social behavior will be the last mile in enforcing user rights.

And I think this nails it. Unless you want one way communication, you have to share information. You need to trust who you're sharing that information with, just like we do in the Real World with telephone numbers and addresses. Any technological barrier we put in place here is just Rights Management, and we all know how well that's worked out for DRM.


Live Mesh

by Matt 17. May 2008 18:15

Live Mesh has been out for a little while now, and while I'm still waiting for my invite, I have been digging through the available blogs, documentation and videos.

Now. This is going to be a long post, because Mesh is kinda deep. You've been warned. Go and get a coffee.

Put simply, Live Mesh is a synchronisation platform. We've seen plenty of those before, even from Microsoft themselves (FolderShare, SyncToy), and the current user experience of syncing files and folders doesn't really distinguish itself from the other offerings. (DropBox is a beta application that is almost indistinguishable - with a good Flash video intro.) It might not be terribly remarkable, but it works, and it's definitely a useful tool as it stands. The platform is the best bit.

Let's try and describe what you get in as few words as possible.

A Mesh is made up of multiple devices. A device is really any kind of computing device. The Windows PC is currently the only one supported, but Mac and (Windows) mobile support is coming soon. You can create special "Live Folders" on your devices, and the contents of these are replicated to any or all of your devices. Any changes you make to any files in any folders on any device are replicated to all devices. So far so good.

There is a special device called Live Desktop. This is more than just another device, and is provided by Microsoft. Firstly, it's a device living in the cloud, and provides you with 5 gig of cloud based storage. Secondly, it's accessible via the browser (using a simulated desktop UI, complete with Explorer windows). Thirdly, it's really the coordinating service and notifies all the other devices when changes are made, so that they start updating their copies (future versions will apparently support a more peer-to-peer approach for this kind of thing), and it is instrumental in setting up a browser based (as in, ActiveX) remote desktop into your devices.

So we've got a platform that allows me to have my files locally, on any device I own. It also gives me access to those files remotely, via the cloud storage or via remote desktop. It's the Software + Services model, but larger. Instead of giving me access to my data from wherever I need it, it puts my data wherever I am. A subtle distinction, but incredibly significant when you start to consider things outside of the mesh I've described so far.

The synchronisation platform Microsoft have built is where things start to get fun. It's all built on feeds. You know, RSS and Atom. Everything that is a list is a feed - list of devices? Feed. List of folders to sync? Feed. List of files and folders in each folder? Feed. Each file's metadata is stored as the item entry of a feed, and the file itself is referenced as an enclosure. And then they layer FeedSync on top of the feed. FeedSync is Microsoft's extension to feeds to provide versioning, history and conflict detection (but not conflict resolution. I don't know how Mesh handles conflicts).
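To make that a bit more concrete, here's a rough sketch, in C# with LINQ to XML, of what a FeedSync-decorated Atom entry for a synchronised file might look like. The sx: namespace and the sync/history elements come from the published FeedSync spec; everything else (ids, link targets) is entirely made up, since we can't yet see what Mesh actually sends over the wire.

```csharp
using System;
using System.Xml.Linq;

class FeedSyncSketch
{
    static void Main()
    {
        XNamespace atom = "http://www.w3.org/2005/Atom";
        XNamespace sx = "http://feedsync.org/2007/feedsync"; // FeedSync (formerly SSE) namespace

        // An Atom entry for one synchronised file: the metadata lives in the
        // entry, and the file itself is referenced as an enclosure.
        var entry = new XElement(atom + "entry",
            new XElement(atom + "title", "holiday-photo.jpg"),
            new XElement(atom + "id", "urn:uuid:0a7903db-0001"),   // made-up id
            new XElement(atom + "link",
                new XAttribute("rel", "enclosure"),
                new XAttribute("type", "image/jpeg"),
                new XAttribute("href", "https://mesh.example.invalid/holiday-photo.jpg")),
            // The FeedSync part: a version counter plus an update history,
            // which is how endpoints detect (but don't resolve) conflicts.
            new XElement(sx + "sync",
                new XAttribute("id", "0a7903db-0001"),              // made-up sync id
                new XAttribute("updates", "3"),                     // bumped on every change
                new XAttribute("deleted", "false"),
                new XElement(sx + "history",
                    new XAttribute("sequence", "3"),
                    new XAttribute("when", DateTime.UtcNow.ToString("o")),
                    new XAttribute("by", "laptop"))));

        Console.WriteLine(entry);
    }
}
```

The updates counter and the history records are what let two copies of a feed work out who changed what, and spot when both sides have changed the same item.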

This is probably the masterstroke of the platform. They haven't just built a platform for synchronising files and folders, they've built a platform for synchronising feeds. And feeds can hold any kind of structured data. Contacts, bookmarks, comments, status updates, calendars, bank transaction data, you name it. And they've used existing, open data formats. The data is available from the cloud as Atom, RSS or JSON, via a REST interface, using the Atom Publishing Protocol. All the current industry darling buzzwords - everything you need to make building mash ups easy.

And (the SDK isn't yet available but I think this is how it's going to work) you can easily imagine a web site that talks to the Mesh cloud interface and gets (secure) access to your Mesh data. And your rich, desktop application can make the same requests of the cloud. And because it's all synced, your rich, desktop application could simply use the current device's local version of the data (using the same REST API, of course), enabling offline access. Software + Services and mash ups from the same interface.
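The SDK isn't available, so every URL in this sketch is invented, but it shows the shape of the idea: one bit of client code, two base addresses - the cloud when you're online, the device's local copy when you're not.

```csharp
using System;
using System.IO;
using System.Net;

class MeshRestSketch
{
    // Both base addresses are hypothetical - the real Live Mesh
    // endpoints won't be known until the SDK ships.
    const string CloudBase = "https://cloud.example.invalid/mesh";
    const string LocalBase = "http://localhost:2048/mesh";   // made-up local device endpoint

    static string GetFeed(string baseUrl, string path)
    {
        var request = (HttpWebRequest)WebRequest.Create(baseUrl + path);
        request.Accept = "application/atom+xml";   // or application/json
        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            return reader.ReadToEnd();
    }

    static void Main()
    {
        // Online: hit the cloud. Offline: the same call against the local store.
        // string feed = GetFeed(LocalBase, "/folders");
        string feed = GetFeed(CloudBase, "/folders");   // "/folders" is a guessed resource
        Console.WriteLine(feed);
    }
}
```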

With this in mind, it's easy to see how you would share content amongst friends - simply start synchronising a feed between the two of you. And this is exactly what happens with the current implementation. There's even a feed of activities performed against the data being shared, to which users can add comments.

So, let's run with this, and see what falls out.

Subscribe to Twitter. Subscribe to Facebook. Blogs. Del.icio.us. All of this data is now aggregated, just like FriendFeed.

Take a photo with your phone, which just happens to be a device in the mesh. It automatically gets included into the mesh and flows to all the devices that are sharing that data. Want to publish that photo to Flickr? Create Flickr as a device and it will automatically get published. Someone leaves a comment on Flickr, and since you've subscribed to the Flickr feed, that comment gets synchronised to all devices as metadata associated with the photo.

Generalise that a little. Imagine all of these social networks as devices. All of a sudden your problems with the Centralised Me disappear. Your data still lives in the data silo of each social network, but each social network is an integral part of your mesh. You can share the items on your social network, or you can share them from your mesh. Data Portability is less of a problem, because your data doesn't need to be portable; your mesh is a superset of all of these silos.

Want more than the 5 gig of storage Microsoft gives you? Create a device that's backed by Amazon's S3. It's all just feeds and https. In fact, Microsoft are already planning to enable enterprises to replace Microsoft's cloud storage and store data internally.

Subscribe to a feed of bank transactions, using OAuth. Subscribe to all of your banks' feeds and you've got enough data to build a client side aggregator. If the web sites of all the banks can make use of the data in the Mesh (with appropriate security), then every bank has the ability to include aggregator functionality in their site, and they now have an incentive for providing the feed in the first place.
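As a sketch of how little client code such an aggregator might need (with entirely made-up feed URLs, and glossing over the OAuth signing of the requests), .NET 3.5's new syndication classes do most of the heavy lifting:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.ServiceModel.Syndication;   // .NET 3.5: reference System.ServiceModel.Web
using System.Xml;

class BankAggregatorSketch
{
    static void Main()
    {
        // Hypothetical transaction feeds - a real version would make
        // OAuth-signed requests rather than plain GETs.
        var feedUrls = new[]
        {
            "https://bank-one.example.invalid/transactions.atom",
            "https://bank-two.example.invalid/transactions.atom",
        };

        var allItems = new List<SyndicationItem>();
        foreach (var url in feedUrls)
        {
            using (var reader = XmlReader.Create(url))
                allItems.AddRange(SyndicationFeed.Load(reader).Items);
        }

        // One merged statement, newest first.
        foreach (var item in allItems.OrderByDescending(i => i.PublishDate))
            Console.WriteLine("{0:d}  {1}", item.PublishDate, item.Title.Text);
    }
}
```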

Of course, this is just speculative, but it's easy to see that there is a huge potential to this model. It all depends on how Microsoft handles it. There are several warning signs. Joel pinpoints them quite well in his post "Architecture astronauts take over". Microsoft are really hyping the future of the platform while the current application is not as exciting. (Dare Obasanjo offers a good reply to that post.) And it's still very Microsoft centric. Authentication happens with Windows Live ID, they maintain the index of which devices are in your mesh, and the Live Desktop plays that coordinating role in notifications. People didn't trust Hailstorm, or Passport; will they trust Live Mesh? Will Microsoft allow splitting up of those central services? Logging in via OpenID? Federating the cloud storage? Allowing people to create their own meshes which can interact with Live Mesh services? We'll know more in the Autumn, when Microsoft hold their Professional Developers Conference.

So that's Live Mesh. Boil it down, and it's a deceptively simple premise - it synchronises feeds. The power (and the potential for failure) is the promise that everything is consumable as a feed. Will that happen?


Those .net dudes have been busy!

by Matt 13. May 2008 14:55

Lordy. Check out ScottGu's blog post on the changes in the beta of .net 3.5 SP1! This is way more than a service pack.

There's tons of new stuff for the server, ASP.NET, ADO.NET and WCF, and Visual Studio has had quite a bit of love, but since I'm a closet fan of WPF, it's nice to see Dr Sneath's rundown of the substantial changes on the client side. I'm looking forward to the startup improvements...

And I like the .net Framework Client Profile - they're starting to split the framework up into a smaller, desktop-focused version. A download the size of Adobe Reader is a nice way to put it into perspective. And a nice use of Windows Update to drizzle the full framework down in the background. Is this the start of work for getting .net support into Server Core?

Oh, and Greg Schecter covers some of the more interesting uses of the new Effects API.

Surely this can't be a service pack? Shouldn't this be .net 3.6?


Decentralising Twitter - Centralising Me

by Matt 8. May 2008 18:38

Case in point. Scott Hanselman is looking at an open, distributed implementation of Twitter, and in so doing gives me a great excuse for another example of the illusion of the Centralised Me.

Play along at home.

Twitter is centralised, and so downtime has a massive impact. The data is centralised in their data silos and so it is part of the "Decentralised Me".

If we view Twitter as micro-blogging (another great buzzword), why should I use a 3rd party service when I already have my own blog? I can decentralise Twitter by centralising it on my blog. I've fixed the (service level) downtime problem, and now the data lives in the "Centralised Me".
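And publishing is trivial, too. Here's a sketch, with made-up posts and URLs, of serving your status updates as a bog-standard RSS feed using .NET 3.5's syndication classes:

```csharp
using System;
using System.Collections.Generic;
using System.ServiceModel.Syndication;   // .NET 3.5: reference System.ServiceModel.Web
using System.Xml;

class MicroBlogSketch
{
    static void Main()
    {
        // Made-up micro-posts; a real version would pull these from the blog's database.
        var items = new List<SyndicationItem>
        {
            new SyndicationItem("Eating lunch", null,
                new Uri("http://blog.example.invalid/status/1")),
            new SyndicationItem("Back to work", null,
                new Uri("http://blog.example.invalid/status/2")),
        };

        var feed = new SyndicationFeed("Matt's status", "Micro-blogging from my own blog",
            new Uri("http://blog.example.invalid/status"), items);

        // Write it out as RSS 2.0; swap in Atom10FeedFormatter for Atom.
        using (var writer = XmlWriter.Create(Console.Out))
            new Rss20FeedFormatter(feed).WriteTo(writer);
    }
}
```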

And you could subscribe to my Tweets (terrible buzzword), and I could publish a list of all the people I was following. But by decentralising the service, I now don't know who's following me (assuming a standard RSS feed subscription - which we of course will decentralise by moving to the centralised FeedBurner service).

Twitter's centralised service provided a whole heap of very important plumbing. Not least of which was usernames. With a centralised list of usernames, I am able to address messages to people. And with a centralised message store, people can be notified when they are addressed by people they aren't following. A decentralised service cannot offer this without additional, dare I say it, centralised infrastructure (think global usernames and semantic search).

Yes, I'm clearly having far too much fun with this.

I just wanted to reinforce what I said last time - centralised and decentralised are simply points of view.

Which means the Centralised Me is either going to be an illusion, like FriendFeed, where your data is aggregated - read: copied - from multiple data silos into one new data silo, or it's going to be something much more interesting. How about a data silo that's a superset of all the data silos you've contributed to?


The Centralised Me

by Matt 30. April 2008 07:15

Yes, yes. All the cool kids are talking about Live Mesh. I'll get to that; this is related.

There was a very interesting post on TechCrunch a couple of weeks ago entitled "FriendFeed, The Centralized Me and Data Portability". It's really struck a chord with me.

It's introduced the concept (buzzword) of the "Centralised Me", which is lovely marketing, but might very well be a bit of a red herring.

The thinking goes like this: back in the day, all you had on the internet was your home page, and all your random thoughts, photos of your cats and interesting links went up there (usually edited by hand, in raw html. Hardcore). Nowadays, there's Flickr for your photos, YouTube for videos, Facebook for your friends, Twitter for your inane babble and so on. In other words, we've gone from having a very centralised view of "me", to a very decentralised view - "me" is spread across many sites.

The first problem I see with this is that it's not quite true. We might not have had Facebook, but we did have, for example, Usenet and mailing lists, both highly decentralised. Getting a single view of all of my interactions would have been a daunting task. And what about blog comments? Again, very decentralised.

And let's just think about that for a second.

Centralised and decentralised are just points of view.

All of my comments to blogs are very decentralised, but to the blog owners, those very same comments are completely centralised - attached to the blog posts they're in response to.

Facebook messages are centralised to all my friends on Facebook, as my Flickr photos are centralised to my Flickr friends. They're just decentralised to someone looking for "all" of my stuff.

Even if you look at the poster child of decentralised authentication that is OpenID, it's only decentralised as far as the protected web site is concerned. As far as I'm concerned, I always log in at the same point - my OpenID server. Centralised. All Yahoo IDs are now OpenIDs. Centralised.

So where does that leave us with social networks?

FriendFeed is an aggregator of your other social networks. You join up, and start broadcasting an aggregated view of every other social network you're a member of. You subscribe to other FriendFeed members, and you've now got a single port of call for all the updates you're interested in. It aims to be the Centralised Me.

But to quote the TechCrunch article, this just means it's another "data silo".

And this brings us to the Data Portability Project. Ideally, it should help to protect us against data silos. In theory, as long as each silo implements the correct Microformats and authentication (OpenID, OAuth), we should be able to access and copy/move our data out of a silo.

What I haven't seen is where we move the data to. Another silo?

The most interesting thing I have seen in this space is Google's Social Graph. It indexes the Microformat information found in web pages, and automatically builds up a social graph from this (see also Microformats' social network portability). Extrapolate a little here, and you can easily see how this technique could get a single view of my entire social network. It wouldn't matter where I put what, Google would find it for me. Google would find me. Not centralised, but aggregated. However, it's not as easy as that. Google can only access public information. It would be great for Twitter, and irrelevant for Facebook. We could give it credentials, but then it becomes another data silo.
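The lookup API is live today, and a one-liner gets you the JSON graph for any URL. The parameter names below are as I remember them from Google's documentation (edo/edi ask for outbound/inbound edges, fme follows rel="me" links), so double-check them before relying on this:

```csharp
using System;
using System.Net;

class SocialGraphSketch
{
    static void Main()
    {
        // Hypothetical starting point - use your own URL here.
        string me = "http://me.example.invalid/";

        // Endpoint and parameters as I remember them from Google's docs.
        string url = "http://socialgraph.apis.google.com/lookup?q="
                   + Uri.EscapeDataString(me)
                   + "&edo=1&edi=1&fme=1&pretty=1";

        using (var client = new WebClient())
            Console.WriteLine(client.DownloadString(url));   // raw JSON graph
    }
}
```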

There is no Centralised Me. There's just convenient and inconvenient.

Or rather, centralised and decentralised don't apply here. It's not a hub and spokes model. It's a (gulp) mesh.


Web App + Offline = Crappy Client App

by Matt 22. April 2008 09:02

I'm with Furrygoat:

Q: What do you get when you cross a browser application with the ability to go offline?

A: A client application without any of the goodness that the platform (be it Windows or OS X) has to offer.

Really? Do people really want this?

Don’t get me wrong, I get the convenience of having access to your data from whatever machine you’re on, but wouldn’t a better model be to store the data in the cloud and provide a good abstraction on top of it, so that it could be accessed from either a really well done rich client or a web application?

Case in point: I find it interesting that most of the Twitter feeds that I read are created by client applications accessing the Twitter API.

Perhaps there’s been so much blah blah blah about web 2.0, social networks, etc., or folks have just gotten so lazy, that they’ve forgotten how to write client applications. It’s sad really.

Another case in point: how many bloggers rave over Windows Live Writer?

I think the future is going to be all about Services + Software. Use the full resources of the desktop when it's available to you, but the data is there in the cloud when you're not. The main benefit of browser based applications is their availability on *any* desktop. The main drawback is the offline question. But these aren't opposing ends of the same spectrum, and don't have to be fixed in the same application.

And when you find people looking to share cached JavaScript libraries between sites because the frameworks are too large, you know you're in trouble. This is just a client-side install.

Update: I didn't think I'd see such agreement from such A-listers as Yahoo!'s Jeremy Zawodny or (the sadly missed) Dare Obasanjo. That pendulum keeps on swinging...


The pendulum swings. Again.

by Matt 14. April 2008 06:47

We've had the boom times of Fat Clients. That AJAX acronym got us all excited over Thin Clients. And now we're back to renting time on mainframes...

As they say: so it goes.
