New DOM bindings

In my previous post I walked through some of the history behind the DOM bindings in the Mozilla code. This post is a follow-up, in which I’ll walk through the “new” bindings that we already have in part, and that we’re working on completing over the coming months.

One of the big performance problem areas with the old-style bindings was accessing array-like DOM objects, i.e. accessing properties of live lists of DOM objects such as NodeList (element.childNodes[n]), HTMLCollection (form.elements[s]), etc. Not only was that slow, it also didn’t always work right, especially when dealing with lists from which objects were removed. The reason was that the way those bindings were set up (and had to be set up, given how the binding mechanism worked) meant that any property that had ever been touched from JS would forever be present on the JS object for that list. Its value would update if the underlying live list changed (it might even become null), but if an entry in the list was no longer present, the property for that entry still existed on the JS object. This is something that’s hard (i.e. impossible) to do right with DOM bindings that are based on JSClass hooks, which is what all our bindings so far have been.

In mid 2011 we started thinking hard about how to move forward on this particular front, and on faster bindings overall. We chose to tackle the list-like objects first for two main reasons: the existing ones were both slow and broken, and there were relatively few of them, so the change would be relatively isolated. The answer to the problems we were having with list-like objects was to use the then fairly new support for JS proxies. Proxies are basically a new way to give JS objects custom behavior, one that’s in particular much better suited for live list-like objects such as NodeList. In May of 2011 we started to plan this work when Peter Van der Beken, Blake Kaplan, and Andreas Gal got together in Paris, France (with Boris Zbarsky on many hours of video conferencing) to design new DOM bindings for our list-like DOM objects. The outcome of this work was a code generator, one that read a configuration file and the relevant IDL files, and generated C++ specializations of the ListBase template class, i.e. proxy implementations for the list-like objects. That work landed in August of 2011. It gave us better performance than we’d ever had for list-like objects, and it also fixed the broken behavior of the old bindings. And while this work did give us performance improvements, there’s still room to improve in the new system as well, in particular in how the JS engine JIT interacts with JS proxies. So more to come there.
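To make the live-list point concrete, here’s a minimal sketch in plain JS (purely illustrative, not the actual Gecko code; the real ListBase proxies are generated C++) of how a proxy can back a NodeList-like object so that every access consults the live backing store, instead of materializing properties that outlive their entries:

```javascript
// A proxy-backed "live list": nothing is ever cached on the object,
// every get/has is answered from the backing array at that moment.
function makeLiveList(backing) {
  return new Proxy({}, {
    get(target, key) {
      if (typeof key !== "string") return undefined;
      if (key === "length") return backing.length;
      const i = Number(key);
      return Number.isInteger(i) && i >= 0 && i < backing.length
        ? backing[i]
        : undefined;
    },
    has(target, key) {
      if (typeof key !== "string") return false;
      if (key === "length") return true;
      const i = Number(key);
      return Number.isInteger(i) && i >= 0 && i < backing.length;
    },
  });
}

const nodes = ["a", "b", "c"];
const list = makeLiveList(nodes);
console.log(list[2], 2 in list); // c true
nodes.pop();                     // mutate the underlying "DOM"
console.log(list[2], 2 in list); // undefined false, no stale entry
```

With JSClass-hook bindings, the `2 in list` check after the removal would still have been true, because the property created by the first access could never be taken away again.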

The last remaining large piece here is to write new DOM bindings for our regular DOM objects (i.e. objects that are not list-like). That work (internally known as “Paris Bindings”) will be split up into several pieces as well, the first of which started early this year. In January of 2012 a group of people (Boris Zbarsky, Blake Kaplan, Mounir Lamouri, Olli Pettay, Kyle Huey, Bobby Holley, Ben Turner, and myself, plus Peter Van der Beken joining us on the last day that week) got together, again in Paris, France, to discuss the next piece of this work. The outcome of that set of meetings was a plan to tackle the bindings for a single object (plus a few dependent ones) as the first step. The object we chose was XMLHttpRequest, the reason being that it’s a fairly self-contained object, it exists both on the main thread and in DOM workers, and yet it’s a non-trivial object that poses some challenges (the fact that it’s an EventTarget being one of them). We also had a plan for how to do that, part of which even existed as preliminary code.

We then set out to hack this up, with Kyle Huey working on a WebIDL parser, Bobby Holley working on the code generator (which uses the WebIDL parser), and Boris Zbarsky working on argument and “this” unwrapping, return value wrapping, prototype setup, and a bunch of other low-level stuff. Ben Turner worked on getting DOM workers ready for the new bindings (a significant amount of work, given that the then-current worker bindings were all hand-written JSClass-based bindings). The rest of us helped out with various tasks as they came up, including writing some WebIDL files. In the following weeks these new bindings started to take shape: first we had the basic utilities working, then prototype setup, followed by basic code generation for methods/getters/setters. Things were starting to look pretty functional, even if there were still missing pieces to the puzzle. It was Peter Van der Beken who put the final pieces in place and got a functioning build with new bindings for XMLHttpRequest. Then we of course found a bunch of details that were not right once we were able to push this stuff through the try server etc., so a good bit of cleanup and missed pieces needed to be sorted out before finally landing this work at the end of March 2012. The landing went very smoothly, with one silly missing null check that Kyle Huey fixed being the most serious fallout. Now that this change has been in the tree for a bit more than a week, some other issues have been found, but all things considered, things are looking very good.

Writing the code generator for these new bindings brought up a set of interesting questions, one of which was what the signature of the method that the generated code calls should look like. We’ll be calling directly into a concrete class here (avoiding virtual calls when possible), so there’s no need for any XPCOM goop, but at the same time we often do need to propagate errors from the DOM method. That means we need an error code (i.e. an nsresult return value or out parameter), yet there are plenty of cases where we simply don’t need one, and requiring that the out parameter, or return value, always be present adds overhead we simply don’t want. What we finally settled on was an nsresult out parameter, passed by reference, in the cases where we generate calls to fallible methods (which at this point can be controlled in the configuration file). That’s something we’ll likely move from the configuration file into per-method/property annotations in the WebIDL itself as we do some pending cleanup now that the basics are working.

There’s also the issue of the name of the method that gets called. Since we’re calling methods on a concrete class, and that concrete class is likely to also inherit a bunch of XPIDL DOM interfaces (for now at least), we’ll likely run into cases where the signature of a method we invoke from the generated code conflicts with an existing method. There’s no obvious answer to this, but the fact that we return the error code in an out parameter (when there is one) helps, as the signature in that case is likely different between the XPIDL method and the method we call from the generated code. But if we’re dealing with an infallible method, like say xhr.abort(), then there will be a conflict, because the only difference between what our generated code calls and what we currently have from the implementation of our IDL interface is the return type, and C++ doesn’t allow overloading on return type alone. We decided to deal with this problem by renaming the binary name in the old XPIDL files, since we already had support for doing that (i.e. the binaryname attribute in IDL).

Another piece of this puzzle was making XrayWrapper work right with these new bindings. The existing XrayWrapper hooked into XPConnect and used its machinery to call the “original” unaltered method/getter/setter, but XPConnect doesn’t know anything about our new binding objects (or won’t, once we get further along here). The answer was to write another flavor of our XrayWrapper, called XrayDOM, that specializes XrayWrapper so that it does the right thing for the new binding objects.
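The idea behind Xray wrappers can be sketched in a few lines of plain JS (purely illustrative; the real XrayDOM lives in C++ inside the wrapper machinery): the wrapper ignores whatever content script has done to the object and always reaches the original, engine-level behavior.

```javascript
// A pristine prototype holding the "original" getter, standing in for
// the engine-level behavior the Xray wants to reach.
const pristine = {
  get status() { return 200; },
};

// A content-visible object whose page script has shadowed the getter
// with an own expando property.
const page = Object.create(pristine);
Object.defineProperty(page, "status", {
  value: "hacked", writable: true, enumerable: true, configurable: true,
});

// The "Xray": resolve properties on the pristine prototype only, and
// run the original getter against the underlying object.
function makeXray(target, proto) {
  return new Proxy(target, {
    get(t, key) {
      const desc = Object.getOwnPropertyDescriptor(proto, key);
      if (desc && desc.get) return desc.get.call(t);
      return undefined; // expandos and overrides are invisible
    },
  });
}

const xray = makeXray(page, pristine);
console.log(page.status); // "hacked": what content sees
console.log(xray.status); // 200: what chrome sees through the Xray
```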

The existence of QueryInterface() on the old binding objects caused us problems too, particularly in our tests. The solution for now was to add a method named QueryInterface() on the new bindings as well, but calling it is a no-op. This of course led to callers of QueryInterface() expecting properties of the interface they were QueryInterface()’ing to to exist on the returned object (i.e. the binding object itself). Fortunately the number of interfaces and properties we had to deal with was very low, and they’re only relevant to chrome code and thus only appear on chrome XMLHttpRequest objects. That was enough to pass all our tests, and hasn’t yet shown any problems in the wild either. This is us paying the price of ever exposing QueryInterface() on DOM objects. Going forward this is something we need to keep an eye on, as it’s likely to crop up with more objects, including some DOM elements. Our long-term goal is to remove this completely, at least on non-chrome objects.
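A hedged sketch of that stopgap (the names here are hypothetical, not the actual generated code): QueryInterface on a new-binding object validates nothing and hands back the very same object, so old callers keep working as long as the properties they want already live on the binding object itself.

```javascript
// Illustrative binding object; names are hypothetical.
const xhrBinding = {
  readyState: 0,
  open(method, url) { this._method = method; this._url = url; },
  // The no-op QI: ignore the requested interface and return the
  // object itself. Callers then find the properties they expect only
  // if those properties already exist on the binding object.
  QueryInterface(iid) { return this; },
};

const viaQI = xhrBinding.QueryInterface("nsIXMLHttpRequest");
console.log(viaQI === xhrBinding); // true: same object, no new view
viaQI.open("GET", "/data");
console.log(xhrBinding._url); // "/data"
```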

So what do these new bindings look like, you ask? Well, we have a set of common utilities that are shared, and per DOM type we have roughly one JSClass, two lists of methods and properties (one for chrome, one for untrusted code, because there are things we expose to chrome that we don’t expose to untrusted code), and a list of constants (I’d love to link to examples here, but they’re all generated at build time so I can’t easily do that). And double that (minus the chrome-only stuff) for each DOM type that’s available in DOM workers. Plus, of course, the implementation of the methods, getters, and setters. As you can tell if you start reading some of the actual code (which ends up living in $objdir/dom/bindings/*.{cpp,h}), there’s very little XPCOM stuff left. We don’t use XPIDL files, we use WebIDL, meaning that it’ll certainly be possible to write a DOM class for which there’s no XPIDL file in this setup. The bindings can simply call straight into methods on the concrete class, which is named in the configuration file. Inheriting from nsWrapperCache is a requirement for using the new bindings, as that’s how we map from a C++ object to its JS object. Having a link from the DOM object to its parent (and ultimately its window or global object) is also a requirement in this new setup, so that we can tell which JS compartment to create the binding object in.
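The “two lists of methods and properties” setup can be sketched like this (hypothetical names; the real lists are generated C++ arrays handed to the JS engine at prototype-setup time):

```javascript
// Properties every consumer gets.
const contentProps = {
  open() {}, send() {}, abort() {},
};
// Extra properties exposed only to trusted (chrome) code.
const chromeOnlyProps = {
  channel: null, // hypothetical chrome-only property
};

// Assemble the interface prototype for a given caller type, the way
// the generated prototype-setup code picks which list(s) to define.
function createInterfacePrototype(forChrome) {
  const proto = Object.assign({}, contentProps);
  if (forChrome) Object.assign(proto, chromeOnlyProps);
  return proto;
}

console.log("channel" in createInterfacePrototype(true));  // true
console.log("channel" in createInterfacePrototype(false)); // false
```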

We’re also still ironing out exactly what the configuration file and the WebIDL files will look like long term, but what we have for now is a good start, enough to learn how to move forward. We’re also going to be consolidating some of the code generators we’ve grown over time, and eventually we’ll be able to remove support for quick stubs, slim wrappers, and probably a bunch of other optimizations that have been made in the XPConnect code. We also went a bit overboard with the use of C++ namespaces in the generated code, so we’re cleaning that up now as well to be a bit more developer (and debugger) friendly.

The next steps here are to do new bindings for a few more objects; that’ll let us iron out more details in our WebIDL parser and code generator by throwing more WebIDL at the system. The objects we’ve chosen are the canvas objects (because they’re very performance sensitive) and CSS2Properties (because it poses several challenges of its own, given how it’s implemented, and it has a lot of properties whose presence is controlled by preferences). Boris Zbarsky is also looking into the WebGL bindings, because they too are very performance sensitive. I would expect that we’ll land the above over the coming weeks or couple of months.
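For CSS2Properties, “presence controlled by preferences” means a property either exists on the object or it doesn’t, decided at setup time rather than per access. A tiny hedged sketch (the pref name and properties here are made up):

```javascript
// Hypothetical pref store; the real thing is Gecko's pref service.
const prefs = { "layout.css.hypothetical-feature.enabled": false };

// Define style properties on an object, skipping any property whose
// controlling pref is disabled, so it is truly absent rather than
// merely returning undefined.
function defineCssProperties(obj) {
  // Always-on property.
  Object.defineProperty(obj, "color", {
    get() { return this._color || ""; },
    enumerable: true,
  });
  // Pref-gated property: simply never defined when the pref is off.
  if (prefs["layout.css.hypothetical-feature.enabled"]) {
    Object.defineProperty(obj, "hypotheticalFeature", {
      get() { return ""; },
      enumerable: true,
    });
  }
}

const style = {};
defineCssProperties(style);
console.log("color" in style);               // true
console.log("hypotheticalFeature" in style); // false: pref is off
```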

After that the largest remaining piece is to write new bindings for our actual DOM nodes and the other miscellaneous classes. There’s a lot of those, so that’ll be a big chunk of work as well, but hopefully by the time we get there the infrastructure for all this will be pretty solid and we can focus on the bindings rather than the supporting code.

This entry was posted in mozilla. Bookmark the permalink.

17 Responses to New DOM bindings

  1. azakai says:

    Very interesting!

    Any benchmarks about the improvement from these new bindings?

    • jstenback says:

      bz does have some numbers from a microbenchmark he wrote, but I don’t have them in writing so I won’t quote him here as I’d probably get them wrong. But the new bindings are already faster, and that’s without paying too much attention to performance yet, as XMLHttpRequest is not very interesting from a performance point of view. It’ll be much more interesting to look at actual performance once we have new bindings for canvas or WebGL. So more on that as we learn more!

    • Boris says:

      I have some limited numbers for the overhead of getters. In brief, the overhead of getters with the new bindings is about 30% less than the fastest path we had with the custom quickstubs, and a good 5-6x less than the typical non-custom quickstub path. It’s about 50x less than xpconnect dispatch. For setters it’s hard to get numbers because the JS engine doesn’t IC setters yet, so the overhead is dominated by the JS engine overhead. For method calls with few arguments, the overhead numbers look similar to getters (albeit with a bit more overhead on the JS side because the shape guard doesn’t cover callee identity in this case).

      This is all in microbenchmarks, and ignoring the actual cost of the methods being called. Though in the case of getters the actual thing being called is often very fast (e.g. DOM tree traversal is just doing inlined member lookups on the C++ side).

      For methods with more arguments, I’ll try to get some numbers once the WebGL bindings are up and running. I know that numeric arguments should be much faster with the new bindings, but I’ll have to do some profiling of other cases.

      JS wrapper creation is definitely much faster in the new bindings than even the slimwrapper case, as far as I can tell. Easily 2-3x faster.

  2. AndersH says:

    If I understand correctly, the old (current) XPConnect is built on top of XPCOM. Is the new one built on js-ctypes?
    Is XPConnect going away entirely?
    It seems to me that XPCOM offers (1) dynamic linking and dynamic casting (via QueryInterface), (2) memory management (via AddRef and Release), and (3) error handling (via the return type convention). But since the frozen interfaces aren’t frozen, (1) is out. With better (and incremental and compacting) garbage collectors maybe (2) is less important, or how do you manage memory in the new bindings? Could this mean that XPCOM is going away (other than in a backwards compatibility layer), or is that just silly?

    • Boris says:

      The new bindings are not based on js-ctypes; they’re based on JSNative-backed functions, basically.

      The long-term plan is for XPConnect to go away entirely for web-exposed objects. It’ll stay for non-web-exposed stuff.

      The web does not use or need dynamic casting, we’re still using refcounting for memory management, and the error handling in XPCOM is not really that great for the web, but we’re sticking with something similar for the moment.

      We _are_ planning to remove XPCOM reflections of some DOM objects (e.g. the WebGL context) in the near future. Removing them for others (e.g. DOM nodes) would be nice, but is a very long-term project. Again, XPCOM for non-web things is not going anywhere.

  3. Very well written. Shall we see more gains once IonMonkey takes shape?

    • jstenback says:

      Absolutely! The JS engine team is working on both making the engine itself faster, and also making it faster at calling into native (read: C++) hooks etc. Some of that will come with IonMonkey, some will come independently, but there are definitely more improvements on the way.

  4. Pingback: DPS911 Summary « diogogmt

  5. Pingback: Aurora 14 is out! What’s new in it? ✩ Mozilla Hacks – the Web developer blog

  6. Brandon Benvie says:

    XMLHttpRequest is also the IDL interface I implemented for dom.js. (Though the implementation relies on support from Node.js, since I don’t know of any other low-level support for networking in JS.) https://github.com/Benvie/dom.js/blob/master/src/impl/XMLHttpRequest.js

    It’s interesting how a lot of the constraints you work with aren’t necessarily from IDL itself, but rather from having to expose it to multiple consumers at once, between XPCOM stuff and JS itself. Implementing it in JS for JS consumers is relatively straightforward!

    • Boris says:

      It’s straightforward if you don’t actually implement what the WebIDL spec says, which the implementation you link to doesn’t, in all sorts of ways. Most notably, as far as I can tell it doesn’t actually enforce that its methods are called on the right sorts of objects, its readonly attributes are not readonly, and it’s not implementing some parts of the interface at all. I’ll assume the underlying Node impl that it’s using for send() actually handles all the various things send() can take correctly, and that Node’s implementation of the user/password stuff is compatible with what the XHR spec actually says (which I rather doubt, a priori).

      Nothing wrong with cutting out the edge cases that make things difficult to implement if they’re not relevant to your problem domain, but unfortunately we have to deal with those edge cases in a browser.

      • Most of those issues are dealt with at the interface layer (generated from IDL) that dom.js uses. It enforces types and does translations using things like OptionalStringOrNull, OptionalBoolean, ToUint32, etc. So anything that makes it to the impl has already been vetted for types. There are some shortcuts taken, and the implementation isn’t complete, but it’s much closer to complete than it appears just from that code.

  7. I forgot that stuff is generated and not included in the repo. The XHR interfaces look like this: https://gist.github.com/2552989

    • Boris says:

      Ah, interesting. See, the blog post here is in fact talking about the equivalent glue layer for C++. The actual C++ impl is pretty simple (again, if you exclude the hard parts like firing the relevant events and sync XHR).

  8. A lot of the public chatter coming out of Mozilla these days relates to Firefox OS. Are these bindings still under development?

    • jstenback says:

      Yes, significant parts of these new bindings are still under development, and the foundation of these bindings is already part of Firefox OS v1.0 as well. What’s left is largely the long tail of converting more objects to use the new bindings (actively worked on), and some further performance improvements that are now possible due to how we implemented the new bindings (also very actively worked on, some landed in nightly builds just days ago).

  9. Pingback: About DOM binding | 鶴
