How to use XmlResolver. Or, reading an xhtml file in .net

by Matt 28. June 2007 06:24

So, reading xhtml. Dead easy, right? After all, it's just xml. Whack it into an XmlReader and Bob's your uncle.

Unless your xhtml uses an entity such as &pound; (£).

Xml has 5 defined entities, &lt; (<), &gt; (>), &amp; (&), &apos; (') and &quot; ("). All self-respecting xml parsers will handle these. But xhtml, and its poorer cousin, html, define a whole raft more. Try and put such an xhtml file through an xml parser, and there will be problems.

What you need to do is tell the xml parser about these extra entities. Which means getting the xml parser to also read in a bunch of dtd's. Again - dead easy, right?

Well, when you know the correct voodoo, yes, it's kinda easy.

This post provides sample code on how to do this in .net. The idea is that you need to tell the parser that it's reading an xhtml file, and then provide the xhtml dtd for it when it asks. The code here does just this, and also shows how to keep that dtd and associated files in your application's resources. Unfortunately, it's all a bit confusing as to how and why it actually works, so I thought I'd try and demystify it a bit. For the moment, we're going to ignore the idea of pulling content from resources, and just explain what's happening in the normal case.

This file is xhtml

Firstly, let's tell the parser that we're reading an xhtml file. This just means giving it a DocType, such as:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">

This should really be specified in the file itself, but if you're just parsing a fragment, you need to tell the XmlReader explicitly. This is accomplished by populating the DocTypeName and PublicId fields of XmlParserContext, as the post demonstrates.
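Putting that into code looks something like this (a minimal sketch - the fragment and reader settings are illustrative, but the XmlParserContext constructor really does take the doctype name, public id and system id):

    using System;
    using System.IO;
    using System.Xml;

    class Program
    {
        static void Main()
        {
            XmlReaderSettings settings = new XmlReaderSettings();
            settings.ProhibitDtd = false;                 // we want the doctype processed
            settings.ConformanceLevel = ConformanceLevel.Fragment;
            settings.XmlResolver = new XmlUrlResolver();  // or your own resolver - see below

            // Tell the parser this fragment is xhtml by supplying the doctype by hand
            XmlParserContext context = new XmlParserContext(
                null, null,
                "html",                                               // DocTypeName
                "-//W3C//DTD XHTML 1.0 Strict//EN",                   // PublicId
                "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd",  // SystemId
                null, "", "", XmlSpace.None);

            string fragment = "<p>That'll be 100 &pound;, please</p>";
            using (XmlReader reader = XmlReader.Create(new StringReader(fragment), settings, context))
            {
                while (reader.Read())
                    Console.WriteLine("{0}: {1}", reader.NodeType, reader.Value);
            }
        }
    }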

Define xhtml...

The next step is to get the dtd into the parser. Which is where we start looking at XmlResolver, and where things get a little confusing. The interaction and relationship between the XmlReader and the XmlResolver isn't very well documented, but it boils down to this - any time the XmlReader has to get content from a URI, it defers to the XmlResolver.

The best way to explain this is by example.

Say you're reading an xhtml file, via a call to XmlReader.Create, passing in a filename - at least, that's the common usage. The filename is actually a URL, and could easily be an http URL. The first thing XmlReader does is pass this URL into XmlResolver.ResolveUri. This gives us a hook to modify or replace the URL of the file, if we want to (e.g. instead of loading it over http, get it from a cache on the local file system). Essentially, we just return a new URI that is the actual location of the file.

Once the URL to the file has been resolved, it's passed into XmlResolver.GetEntity, which will open and return a stream to the file. Since it's a stream, the file could be anywhere - on the file system, over http or in a resource. Now the XmlReader has the file to parse, and the resolved URL is considered to be the base URI of the file.

Incidentally, if we don't use a URL to load the file into the XmlReader, we can still pass in a URI via the BaseURI field of XmlParserContext. The resolved version of this URI is then the base URI of the file.

Note that the base URI is the URI to the file itself - not to the parent "directory" of the file. This is actually the same as the base attribute in html, even though I was expecting it to be the directory.

If the XmlReader needs to bring in any more content from within the file (a nice example would be xlink or xinclude, except I don't think they are supported), it will pass the URI identifying the content to the XmlResolver, and then pass the resolved URI back to GetEntity to actually get the content.

If there is a DocType associated with the parser (via the file, or via the context), that will need to be resolved, so the public id is passed into ResolveUri. In the case of (strict) xhtml, this will be "-//W3C//DTD XHTML 1.0 Strict//EN". The XmlResolver needs to know about this DocType and return a URI that GetEntity will be able to open a stream on. Let's assume we've subclassed XmlUrlResolver so its ResolveUri knows that the xhtml DocType maps to the correct http URL. We simply return "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd" and let the standard implementation of GetEntity download it for us.
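In code, that subclass is tiny. A sketch (assuming, per the flow just described, that the public id turns up as the relative URI argument):

    using System;
    using System.Xml;

    // Maps the xhtml doctype onto the real dtd location, and leaves
    // XmlUrlResolver to download it (and anything it references) over http
    class XhtmlResolver : XmlUrlResolver
    {
        public override Uri ResolveUri(Uri baseUri, string relativeUri)
        {
            if (relativeUri == "-//W3C//DTD XHTML 1.0 Strict//EN")
                return new Uri("http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd");

            // Anything else (including "xhtml-lat1.ent" against the dtd's URI)
            // resolves like a normal URL
            return base.ResolveUri(baseUri, relativeUri);
        }
    }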

Now the xhtml1-strict.dtd references other files to pull in the actual entity definitions. The XmlReader follows the same procedure - it calls ResolveUri and then GetEntity. The important part to remember here is that these references might be relative. In other words, the dtd might not have fully qualified http URLs. The base URI passed to ResolveUri is the resolved URI of the DocType - "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd". Resolving "xhtml-lat1.ent" against that URL means we should return "http://www.w3.org/TR/xhtml1/DTD/xhtml-lat1.ent", which is the correct location for the entity file. The reader will just download that file and continue.
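A couple of lines in a console app show this resolution at work - note how the last segment of the base URI gets replaced, because the base URI is the file, not the directory:

    Uri baseUri = new Uri("http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd");
    Uri resolved = new Uri(baseUri, "xhtml-lat1.ent");

    // Prints http://www.w3.org/TR/xhtml1/DTD/xhtml-lat1.ent
    Console.WriteLine(resolved);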

Of course, it's entirely possible that the reference is fully qualified, in which case, we could just return it directly.

Letts revision guide version

To recap:

  1. Resolve the file URL and download it.
  2. Resolve the public id of the DocType to a fully qualified URI, using the resolved URI of the file as the "current directory" if required, and download the dtd.
  3. Parse the dtd and resolve any external references against the resolved URI of the dtd (if appropriate) and download them.
  4. Parse the references and the file and resolve and download any other external references.

So, it's actually quite straightforward, especially in the use case of file:// and http:// URIs.

Pulling content from resources

Now, back to the sample code. The code is absolutely fine as it stands, but I think I'd implement it differently. It currently has a list of known URIs, made up of "urn:" plus the DocType or the dtd/entity filename. In the ResolveUri method, it takes the given relative URI, appends it to "urn:" and then compares it against the list of known URIs. The GetEntity method just compares the given absolute URI and returns the relevant resource stream.

This feels a bit fragile. I'd implement it more like the file or http URI handlers. I'd have one known resource - the xhtml dtd, keyed on the DocType URI. ResolveUri would match the relative URI against this DocType URI and return a resource:// URI containing an assembly identifier plus the namespaced resource name, such as "resource://sticklebackplastic.xhtml/sticklebackplastic.xhtml.resources.xhtml1-strict.dtd". The GetEntity method can then parse this to get the relevant resource stream.

I think this way is better because when the dtd requires an external reference, it will call ResolveUri with "xhtml-lat1.ent" as the relative URI and "resource://sticklebackplastic.xhtml/sticklebackplastic.xhtml.resources.xhtml1-strict.dtd" as the base URI. Simply combining the URIs gives me "resource://sticklebackplastic.xhtml/sticklebackplastic.xhtml.resources.xhtml-lat1.ent". Doing this requires that all the external references are stored in the same resource namespace, just as with the file and http cases (and the sample code). But it also means the resolver only needs to know about the mapping between the DocType and the dtd, and doesn't have to worry about creating resource URIs for all the stored files.
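Something like this, then (a sketch - the resource:// scheme and the resource names are my own invention, and GetManifestResourceStream assumes the dtd and entity files are embedded in the executing assembly):

    using System;
    using System.Reflection;
    using System.Xml;

    class ResourceResolver : XmlUrlResolver
    {
        const string DocType = "-//W3C//DTD XHTML 1.0 Strict//EN";
        const string DtdUri =
            "resource://sticklebackplastic.xhtml/sticklebackplastic.xhtml.resources.xhtml1-strict.dtd";

        public override Uri ResolveUri(Uri baseUri, string relativeUri)
        {
            // The only mapping we know about - the doctype to the embedded dtd
            if (relativeUri == DocType)
                return new Uri(DtdUri);

            // Relative references inside the dtd (e.g. "xhtml-lat1.ent") combine
            // against the dtd's resource URI, just like the file and http cases
            return base.ResolveUri(baseUri, relativeUri);
        }

        public override object GetEntity(Uri absoluteUri, string role, Type ofObjectToReturn)
        {
            if (absoluteUri.Scheme == "resource")
            {
                // The host identifies the assembly (assumed to be this one here);
                // the path is the namespaced resource name
                string resourceName = absoluteUri.AbsolutePath.TrimStart('/');
                return Assembly.GetExecutingAssembly().GetManifestResourceStream(resourceName);
            }

            return base.GetEntity(absoluteUri, role, ofObjectToReturn);
        }
    }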

Now I've just got to implement it.

Real life use of the SQS service

by Matt 27. June 2007 17:33

I think I might be stuck in too much of a traditional enterprise mind set. I was wondering who would use Amazon's message queue service, arguing that it's a B2B enterprise style offering, yet surely an enterprise would host its own queues?

Turns out I need to be thinking a bit more web 2.0.

Here's a post from the Gravatar blog that gives some insight into their infrastructure (along with some technical difficulties they've been having recently).

(In case you haven't come across Gravatar before, it's a site that allows you to upload an image (avatar) that you can assign to your email address. Blog sites can then link to the image in their comments. Nice, but I don't know how they pay their leccy bills.)

This is a nice decoupling of the image serving servers, the safely backed up images and the image upload process. It may not be B2B, but it's an enterprise scale system and it's "publicly" hosted.

Who needs data centres?

REST. Getting closer to the lightbulb moment.

by Matt 14. June 2007 07:13

Does REST need a service description language? Great question. And one I would have initially said a most emphatic "yes" to. Now I'd probably say "no". Or more accurately - "you're asking the wrong question".

I think some of the fundamental differences between WS-* and REST are finally beginning to sink in. I've been viewing them as pretty much the same thing, just with different formats and tool support and stuff. But conceptually, they're actually pretty orthogonal.

WS-* is all about messaging. You send and receive messages to and from an endpoint. That endpoint is simply a processor - in and of itself, it means nothing. It's just an address. The interesting stuff is in the message. This tells the service what to do and with what data. The reply message contains the data relevant to the operation - what happened, the returned data, whatever.

REST is all about interacting with resources. You POST, PUT, GET or DELETE resources. The url is all-important - this identifies the resource. As the above post mentions:

The consequence of [this] is that there isn't much to describe; there aren't any methods or signatures thereof to document, since access to resources is uniform and governed by the verbs defined in RFC 2616 (in the case of HTTP, anyway)

You don't need a service description language, because there isn't a service. You're accessing and operating on resources. The service is pretty much implicit in the HTTP verb.

What you do need is a specification for the media type. And preferably in a machine readable format that can be used to generate code for interacting with the resource's data.

This is the question you should be asking.

The Atom Publishing Protocol is one such media type, and there are libraries (not necessarily tools) for working with that. Interestingly, following the small flurry of (*cough*knee-jerk*cough*) reactions to Dare Obasanjo's posts on Google's (out of date) implementation of the APP spec, (and without reading the spec at all!) it looks like APP is able to act as a kind of REST envelope. By which I mean when you POST a new customer to a resource (such as a collection of users), you can get an Atom doc back detailing the url of the newly added resource (customer). My knee-jerk reaction - is APP REST's SOAP?

I'm being kinda oblique with this - I know I need to study the APP before I can really make remarks like this, and I think this example was how to use the Atom Publishing Protocol to publish non-Atom (as in RSS-like media type) entries. But if the cap fits...

(One point made that I really like is how the http protocol provides the optimistic concurrency for the updates via the humble etag. Very clever of the http people, and this really helped with defining the REST ideals for me. But it does mean state is maintained at the transport level, rather than at the resource level, something which SOAP has worked around by trying to be transport neutral. Another difference for you to weigh up.)
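To make that concrete, here's roughly what the etag dance looks like from .net. The URL and payload are made up; the GET, the If-Match PUT and the 412 response are the point:

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    class EtagDemo
    {
        static void Main()
        {
            Uri resource = new Uri("http://example.com/customers/42");

            // GET the resource and remember its etag
            string etag;
            HttpWebRequest get = (HttpWebRequest)WebRequest.Create(resource);
            using (HttpWebResponse response = (HttpWebResponse)get.GetResponse())
            {
                etag = response.Headers[HttpResponseHeader.ETag];
            }

            // PUT an update, but only if no-one has changed it since our GET
            HttpWebRequest put = (HttpWebRequest)WebRequest.Create(resource);
            put.Method = "PUT";
            put.ContentType = "application/xml";
            put.Headers[HttpRequestHeader.IfMatch] = etag;

            byte[] body = Encoding.UTF8.GetBytes("<customer>...</customer>");
            put.ContentLength = body.Length;
            using (Stream stream = put.GetRequestStream())
                stream.Write(body, 0, body.Length);

            try
            {
                put.GetResponse().Close();
            }
            catch (WebException ex)
            {
                HttpWebResponse failed = ex.Response as HttpWebResponse;
                if (failed != null && failed.StatusCode == HttpStatusCode.PreconditionFailed)
                {
                    // Someone else updated the resource first - GET it again,
                    // merge and retry
                }
            }
        }
    }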

I don't buy that a client shouldn't know about the URI space of the server. For individual resources, yes. But for collections, or resources such as locks or printers, then surely the client needs to know about these (or how else does it add a new customer? Print?). So a description/discovery language of some sort is needed here too - at the least, a means of knowing what media type + verb a particular resource endpoint accepts. Perhaps this is where WADL comes in? (Although I have to agree with Don Box's analysis of WADL vs WSDL.)

And I don't know how security and authentication are solved, either. Whatever happens, this will demand a spec at the least.

So, with all of this, is REST any simpler than WS-*? Or is it just an orthogonal concept, but just as complex?

Fixing JavaScript

by Matt 13. June 2007 15:48

Looks like Microsoft are actually working on a fix for the circular reference memory leak in their JavaScript implementation. About time, perhaps, but I'd love to know the techy details. How do they get their JavaScript garbage collector to know what references (direct or indirect) an unknown COM object holds?

Is there going to be a separate interface that lists references? This would have to be implemented on the HTML element objects, and so would fail with ActiveX COM objects.

Perhaps the GC lists all properties of a (non-JavaScript) expando object and examines them for a known interface to indicate a JavaScript object. This would allow it to know when a circular reference was made, but it wouldn't identify if the original non-JavaScript object was referenced outside of the script engine.

It's an interesting problem.

But there goes my theory about them using the new managed JavaScript from Silverlight...

COM interface types. A quick glossary

by Matt 13. June 2007 06:24

Right. Let's brush away some cobwebs, and write this down so I don't have to google all over the web whenever I need a quick COM 101 refresher:

  • A custom or vtable interface is a COM interface that derives from IUnknown. It only supports early binding through the compiler's vtable.
  • A dispinterface is a purely IDispatch based interface. The methods defined in the IDL file are only callable via IDispatch::Invoke, and not via a vtable. These are usually used as event interfaces (i.e. you implement the interface on your object and pass it to another object that you want to receive events from). Knowing this explains why it's not a cardinal COM sin that Microsoft have been expanding DWebBrowserEvents2 for each release of Internet Explorer.
  • A dual interface is a COM interface that derives from both IUnknown and IDispatch - it's both a vtable interface and a dispinterface.
  • Expando objects implement IDispatchEx and allow you to add methods and properties at runtime. This is how JavaScript, VBScript and Internet Explorer allow you to expand script and HTML objects.

A type library can be used to store interface information. It can contain vtable layouts and the DispId's required to call IDispatch based interfaces.

You can implement IDispatch by hand, with a huge switch statement, if you want to, but remember that you'll need to crack the parameters out of arrays and stuff them back in again for the return value. 

Alternatively, you can let someone else do the heavy lifting for you and use an implementation of IDispatch which is based on a type library. (And seeing as one of the methods of IDispatch is to get a pointer to an ITypeInfo interface representing a type library - you might as well.) This works by calling LoadTypeLib to get an instance of ITypeLib, then calling ITypeLib::GetTypeInfoOfGuid. The interesting IDispatch methods (including Invoke) can then be deferred to the resulting ITypeInfo. This is what ATL's IDispatchImpl does. It's interesting to note that the Invoke method will call vtable based methods on your interface. This is pretty good voodoo.

As far as I can make out, you could also use CreateStdDispatch to create an IDispatch interface for you, which essentially does the same thing as IDispatchImpl. Or, you can implement bits of IDispatch yourself and defer to methods such as DispInvoke and DispGetIDsOfNames.

If you want to support dispinterfaces, you can use ATL's IDispEventImpl. This is an implementation of IDispatch that doesn't require an implementation of each member on the dispinterface. It does this by using a map that routes DispId's to functions - ideal for an interface that doesn't actually have a vtable. You could even use IDispEventSimpleImpl if you didn't want to use a typelib.

Sheesh. It's nice that .net moves well away from all of this malarky, but it's still something that you need to know from time to time...

Customer Debug Probes == Managed Debug Assistants

by Matt 8. June 2007 10:52

In the hopes that I can save someone wasting quite as much time as I just have on this - Customer Debug Probes have changed name and become Managed Debug Assistants. (Google doesn't appear to have joined the dots on these two, so let's hope this helps).

They're exactly the same concept, but have had a bit of an overhaul, including a new name and naturally enough, a completely different way of enabling them. This MSDN page details how to enable and configure them (individually, and in an application.mda.config file).
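For reference, the config file contents look something like this (the two assistants named here are just examples - the MSDN page has the full list):

    <mdaConfig>
      <assistants>
        <!-- fires when a P/Invoke call unbalances the stack
             (bad signature or calling convention) -->
        <pInvokeStackImbalance />
        <!-- fires when an invalid VARIANT is passed across the interop boundary -->
        <invalidVariant />
      </assistants>
    </mdaConfig>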

These are debugger messages, so you have to be in a debugger to view them. One gotcha is that the application.mda.config file doesn't work when in Visual Studio. Instead, you need to go to the Debug menu -> Exceptions window, where you can enable and disable the assistants. In use, they interrupt your running code as though an exception had been thrown.

This has been a public service announcement on behalf of those struggling with COM interop.

Vista preview handlers

by Matt 5. June 2007 17:07

Here's a right little collection of links for you. I should really put them on my list of Programs What I Run.

Anyway. Vista's got this nice preview pane that allows you to see, well, a preview of a selected file. Handy. Naturally enough, not all of your favourite file types are going to be supported out of the box, so here's a list of shell extensions to add that special magic.

Right. I feel a bullet list coming on.

But first things first. There's an MSDN article that shows how to build these extensions in managed code (which is allowed, because they run out of process. Don't run .net shell extensions in-process.) It adds a whole heap of previewers, but since it comes from a developer-centric article, it's not exactly user friendly in terms of either installing or documentation. So, download the file, double click to extract it, go into the Installer directory and read the README.txt.

Now, you should have a previewer for:

  • .bin and .dat files, treated as binary files
  • .csv files displayed in a DataGridView
  • .isf files, which appear to be ink files from tablet pc's
  • .msi installer files displaying a list of all the files that might be installed
  • .resources files (used to hold strings and images during .net compilation)
  • .resx files (an xml based file that becomes a .resources file)
  • .snk and .keys files, again used by .net compilation to provide a strong name to an assembly.
  • .zip and .gadget. A rather lovely idea - displays a tree view of all the files in a zip based file. Gadget files are also zip files, so are also handled. You could take this further and register it as the previewer for .xpi (Firefox extensions), .jar files, and even .docx (although you might want to keep that one for the proper Word preview handler)
  • .pdf, via the Adobe Acrobat ActiveX control
  • .xaml!
  • Finally, there's an Internet Explorer preview handler. This is rather nifty and just displays the file in an IE control. It's registered for xml files (via the "xmlfile" registry key, not .xml. This means it gets anything that identifies itself as a .xml file, such as .rels). It's also registered for .config and an unknown .psq file type (which this link identifies as a "Product Studio Query File" and this link identifies "Product Studio" as a bug tracker within Microsoft). It also handles .xps files, because IE is the default xps viewer.

Now, a bunch of those preview handlers could handle other file types. The IE handler could display, oh, I don't know, html files? The MSDN article does have a sidebar on how to register other file types with existing extensions, but if that's too much of a drag, you can try this association editor.

And that's all from the same guy. What can anyone else add?

Oh, and these all work in Office 2007 on Vista, too. But not XP. Fortunately, someone has done all the dirty work of back-porting support for Office 2007 on XP. This package also includes most of the preview handlers already mentioned, but with a few more file types registered, such as .html, .htm, etc.

Tags: Vista | Shell Extensions | Preview Handlers

Programmatic access to UAC. Kinda.

by Matt 5. June 2007 05:28

I kinda like this web site. It offers a C++ source file with various functions to help in install time situations; functions like IsVista() and IsWow64(). And it also gives some functions to deal with User Account Control. GetElevationType(), IsElevated() and RunElevated() are going to be very useful.

The killer function, though, is RunNonElevated(). This is a doozy.

Picture this - I run my installer. It needs to write to C:\Program Files, so gets elevated. Trying to be as nice as I can to my user, I offer them a chance to run my program at the end of the installer, as is common. Unfortunately, since the installer is now running elevated, my newly installed program will run elevated, which is not what I want.

So this function could be very useful. The downside is that Microsoft haven't actually provided any means of running a program non-elevated, so this function has to hack around it by injecting itself into the shell (which it knows is not elevated) and getting explorer to spawn the process.

Is it just me that sees the irony in having to inject code into one process to run another process securely?

Tags: Vista

Using C# 3.0 from .NET 2.0

by Matt 4. June 2007 18:32

I'm looking forward to Orcas. It's a (mostly) additive release, similar to 3.0. The CLR itself doesn't really change (there is a service pack level update, but you're essentially still running v2). There are updates to the compilers, the IDE and new libraries that add to .net 2 and .net 3.0 (such as System.Core.dll).

One of the nice things Orcas allows is "multitargeting", where you can target different versions of the framework (2.0, 3.0 or 3.5). Of course, if you're targeting a down-level version, you'll lose support for that framework version's assemblies (e.g. no WPF apps in a 2.0 targeted application). What might not be obvious is that the compiler still supports the 3.5 features, including the nice LINQ "from ... where ... select" syntax (although there's a caveat around that one).
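For example, all of these C# 3.0 features are pure compiler sugar that produces plain v2 IL, so they're fine in a 2.0-targeted project. (The LINQ query syntax is the caveat - it needs the Where/Select extension methods from System.Core, which is a 3.5 assembly.)

    using System;
    using System.Collections.Generic;

    class Customer
    {
        public string Name { get; set; }    // auto-implemented property
    }

    class Program
    {
        static void Main()
        {
            // var, object initializers and collection initializers all
            // compile down to ordinary v2 IL
            var customer = new Customer { Name = "Matt" };
            var numbers = new List<int> { 1, 2, 3 };

            // lambdas become anonymous delegates under the covers
            numbers.ForEach(n => Console.WriteLine("{0}: {1}", customer.Name, n * 2));
        }
    }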

Thanks to Daniel Moth for all this info!

Tags: .net tools

Amazon's SQS service

by Matt 4. June 2007 05:57

Wow. I've only just realised how impressive Amazon's web services are. I've always liked their Simple Storage Service (S3), but well, that's just data storage - it's not terribly exciting (although I keep meaning to look into the opportunities for backup-to-the-cloud).

Today I stumbled across their hosted message queue service. This is all a bit more, well, enterprise-y. It's a B2B type solution, and normally, if you needed a message queue in your enterprise architecture, you'd host it yourself. Now you don't have to. Instead, you can pay Amazon to do it - per message, per amount of data in and per amount of data out.

And on top of this, Amazon have released a WCF transport for it, making it nice and easy to use from your favourite Microsoft messaging library. (Although looking at it, it's a community sample, albeit one provided by the Amazon team - the download is hosted on Microsoft's .net 3 sandbox site.)

Microsoft have recently dipped their toes in this area, too, with the release of a CTP version of their BizTalk Services (of course, Amazon's queue is already released). Microsoft's offering is perhaps broader in scope (as always), including identity federation (via the marvellous CardSpace) and a service relay, which is, at a squint, the closest match to Amazon's queue, although it's most definitely not a queue. It looks like Workflow hosting will come soon, too, which will be one to watch.

These are interesting moves, pulling key enterprise architecture out to hosted providers. I wonder where this will go, and who will use it? And I wonder if the people who do use these types of queues will still have queues at their end of the wire?
