Ultraviolet: In Need of Common Identity

One of the biggest problems with DRM in the past has been that each DRM system had its own ecosystem, its own set of devices that you could play the media on.  Your purchases were tied to the vendor you bought from.  If that vendor goes under, or decides to switch businesses, that’s it – you’re on your own.  Sure, this happens with physical media too (optical discs, HD DVD etc.), but it’s far less volatile than the start-up-heavy world of the internet.

The audio world has given up on DRM completely: outside of streaming and all-you-can-eat models, if you own the media then it has to be DRM free.  The world of movies has hung onto the DRM model – mainly because the files are larger and most people want to rent a movie rather than own it forever.  That said, everybody has their favorites that they want to call upon at any time.  The studios have got together to create a consortium that allows people to share their virtual library of movie ownership across different media vendors.  Companies like Flixster and Vudu let you purchase movies and add those purchases to your digital locker in the sky.  They also let you redeem special codes that come with certain DVDs or Blu-rays so that you can watch those movies online too.

That’s the theory anyway, and it’s early days.  The reality is that Ultraviolet makes you juggle two or three accounts to watch your movies online.  The problem is that each vendor uses its own username and password system, and Ultraviolet itself uses yet another scheme.  Redeeming a code involves creating accounts with the studio and linking them back to your Ultraviolet account before you can watch anything.  Simple enough, but is this really necessary?

Take Dolphin Tale: inside the disc case there is a little UV slip with details about redeeming the movie’s streaming rights.  This involves going to a special URL (in this case they use Flixster), creating a Flixster account with a unique username and password, creating an Ultraviolet account with another username and password, then linking the two together.  After all this you enter the UV code and it tells you that the movie has been added to your library.  This is a Blu-ray movie, so you’d expect the HD version in your library, right?  Wrong.  The UV rights only cover streaming the SD version of the movie.
 
Take another movie: Cowboys and Aliens.  This is a Universal movie, but the process is similar.  This time the special URL takes you to the Universal web site, where you are asked to create a Universal account.  This seems to be in beta, but even so the account name restrictions are silly.  You create an account name (not an email address) and the password is restricted to 6–8 characters!  So now you have to remember a username, an email address and an artificially shortened password.  Again, you have to link your accounts together as before.
 
So, how do you watch the movies?  On the Xbox 360 you can download the Vudu video app (Vudu is owned by Walmart).  This lets you link your UV account, and all your movies show up instantly in the ‘My Library’ section.
 
So, for two movies, that’s four new accounts and a fair bit of juggling and linking.  Sure, this won’t be the case for every new movie, and it should be fairly painless if the movie is from the same studio, but it may be enough to put off a lot of people.  On the plus side, the movies synchronized very quickly – but again, only with SD viewing rights.
 
The answer to all this, of course, is a single identity system.  Pick your identity provider – Facebook, Google, Microsoft Live, Twitter – any of these could authenticate the user once for all the accounts.  Devices are getting smarter and starting to integrate support for identity; for example, granting your UV account access to your Windows Live credentials would allow you to play on an Xbox that is already logged in with a gamertag associated with the same account.
 
What’s frustrating is that some of the account set-up processes involved Facebook integration.  They weren’t using Facebook as an authentication system; instead they were asking permission to post rental activity on your wall.  In one case there was an option to ‘login with Facebook’, but the sign-up process went on to ask for a new, separate password.  This is missing the point.
 
This may be a trust issue.  After all, some of the accounts (like Vudu) provided the ability to rent and buy new movies.  If this is the case, then surely the solution is to use an external authentication system to provide read-only access to your digital locker and provide a more secure account with rental and purchase options.
 
Ultraviolet shows a lot of potential.  But it also shows the need for better adoption of identity systems.
 

Pan and Zoom in WPF

DeepZoom is a cool feature of Silverlight 2.0 that lets the user zoom and pan around an image while optimizing the bandwidth and how much of the image is downloaded.  The UI metaphor is potentially quite powerful – even outside of image viewing.  Take WPF: with its scalable vector content, panning and zooming around ad-hoc rendered content could have several uses, even without the dynamic image loading.

This quick post details how to achieve this in WPF with a simple ContentControl.  It borrows some functionality from Jaime Rodriguez’s excellent DeepZoom Primer.

The entire pan and zoom functionality can be achieved with a single transform group.  I use a class derived from ContentControl with two transforms: a ScaleTransform for zooming and a TranslateTransform for panning.  This transform group then pans and zooms the content of the content control (much like a ScrollViewer).  The initialization is done in code:

// Grab the content (the first visual child) and attach a transform group
// containing the zoom (scale) and pan (translate) transforms.
this.source = VisualTreeHelper.GetChild(this, 0) as FrameworkElement;
this.translateTransform = new TranslateTransform();
this.zoomTransform = new ScaleTransform();
this.transformGroup = new TransformGroup();
this.transformGroup.Children.Add(this.zoomTransform);
this.transformGroup.Children.Add(this.translateTransform);
this.source.RenderTransform = this.transformGroup;

The DoZoom function modifies these transforms based on the parameters sent.  This is similar to Jaime’s DoZoom function:

/// <summary>Zoom into or out of the content.</summary>
/// <param name="deltaZoom">Factor to multiply the zoom level by.</param>
/// <param name="mousePosition">Logical mouse position relative to the
/// original content.</param>
/// <param name="physicalPosition">Actual mouse position on the screen
/// (relative to the parent window).</param>
public void DoZoom(double deltaZoom, Point mousePosition, Point physicalPosition)
{
    double currentZoom = this.zoomTransform.ScaleX;
    currentZoom *= deltaZoom;
    this.translateTransform.BeginAnimation(TranslateTransform.XProperty,
        CreateZoomAnimation(-1 * (mousePosition.X * currentZoom - physicalPosition.X)));
    this.translateTransform.BeginAnimation(TranslateTransform.YProperty,
        CreateZoomAnimation(-1 * (mousePosition.Y * currentZoom - physicalPosition.Y)));
    this.zoomTransform.BeginAnimation(ScaleTransform.ScaleXProperty,
        CreateZoomAnimation(currentZoom));
    this.zoomTransform.BeginAnimation(ScaleTransform.ScaleYProperty,
        CreateZoomAnimation(currentZoom));
}
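
The CreateZoomAnimation helper isn’t shown above; a minimal version might look like this (a sketch only – the duration and easing values are my assumptions, not the original code):

// Hypothetical helper: builds a short animation towards the supplied target value.
private DoubleAnimation CreateZoomAnimation(double toValue)
{
    DoubleAnimation animation = new DoubleAnimation(toValue, new Duration(TimeSpan.FromMilliseconds(300)));
    animation.AccelerationRatio = 0.1;   // assumed easing values
    animation.DecelerationRatio = 0.7;
    return animation;
}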

The rest of the code simply hooks up the mouse events to the DoZoom function.  This is all self-contained within the ContentControl, so using the pan and zoom functionality is simply a matter of adding the control and filling in the content.
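
As a rough illustration of that hook-up (a sketch only – the zoom step and the use of the transform group’s inverse are my assumptions, not the original source):

// Sketch: zoom on the mouse wheel, centred on the cursor position.
protected override void OnMouseWheel(MouseWheelEventArgs e)
{
    base.OnMouseWheel(e);

    // Zoom in for a positive wheel delta, out for a negative one.
    double deltaZoom = e.Delta > 0 ? 1.2 : 1.0 / 1.2;

    // Physical position relative to this control; the logical position is the same
    // point mapped back through the current pan/zoom transforms.
    Point physicalPosition = e.GetPosition(this);
    Point mousePosition = this.transformGroup.Inverse.Transform(physicalPosition);

    this.DoZoom(deltaZoom, mousePosition, physicalPosition);
    e.Handled = true;
}

Panning on mouse drag follows the same pattern: capture the mouse, track the drag delta and apply it to the TranslateTransform directly.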

It’s still missing some functionality, like the navigation overlay.  It would also be nice to add events for the zoom level so you could adjust the detail of the rendered content accordingly.

Source code can be found here

Updated this post and source code link here: http://blogs.windowsclient.net/joeyw/archive/2009/06/02/pan-and-zoom-updated.aspx


On Silverlight/HTML5–maybe WPF is the winner

A lot has been written over the weekend about Microsoft’s client technology ‘shift of focus’.

There was huge Twitter feedback at the PDC when the focus was IE9 and HTML5.  Then came Mary Jo Foley’s interview with Bob Muglia: http://www.zdnet.com/blog/microsoft/microsoft-our-strategy-with-silverlight-has-shifted/7834?tag=mantle_skin;content

Which caused a lot of feedback: http://silverlighthack.com/post/2010/10/29/PDC-2010-Top-5-Reasons-Why-Microsoft-Completely-Screwed-up-their-web-strategy-with-HTML-5.aspx

and a lot of maybe exaggerated press reaction: http://techcrunch.com/2010/10/30/rip-silverlight-on-the-web/

So, what should Microsoft do next with its client platform? What should be the path for the next few years?  These are some of the facts of the situation:

  • HTML5 will be supported well across the next versions of the major browsers.  Thanks to much more investment in testing and conformance, standards support should be better than before – but not perfect.
  • Old browsers will be around for a while.  Enterprises take a while to migrate.  But these tend to be on Windows-based PCs.
  • WebApps and desktop apps are converging, albeit slowly.
  • Proprietary app-store ‘Apps’, typically on devices, are more powerful than websites.
  • It is very costly for IT shops and customers to support several orthogonal platforms.
  • The Mac is getting an App Store, where any dependency on runtimes (even Java and Flash) is not allowed.  The App Store may restrict non-managed app installs.  Windows will likely follow suit.
  • The capability of .NET + Silverlight exceeds that of HTML5/CSS3/JavaScript

So it’s clear that, long term, the capability of the browser as a platform is increasing.  But the browser will remain limited in its access to platform-specific features, and slow to adopt new hardware capabilities.

The question is how to build for ‘reach’ (HTML) while still being able to extend to native platform features.  For example, what if Windows 8 is released with Kinect motion sensing?  How would my HTML5 application be extended to use motion gestures?

Also, HTML’s runtime capability may be improving, but the development productivity is not.  Years of investment have gone into the .NET framework as a very mature client development platform.  This investment cannot be ignored with a switch to HTML5.  Would developers really be willing to say goodbye to MVVM, RIA Services, LINQ, C# async and so on?

So, what to do?

  • Combine Silverlight and WPF into one product.  If Silverlight has a phone-specific version then it isn’t really cross-platform, and the brand shouldn’t be restricted to that – it has essentially become WPF Lite.  One brand for one platform: call the new platform Silverlight and fold in the WPF/.NET 4 client profile features.
  • Tier Silverlight using different capabilities.  One profile for WP7, another for Mac/Windows and yet another for Windows only (replacing WPF).
  • Add a new target platform – HTML5/CSS3/JavaScript.  Use an approach similar to Google’s GWT, translating core parts of Silverlight to their HTML equivalents.  This could start with the basics (Script# for C#, styles to CSS, layout panels to CSS3).  Any conversion is better than nothing.  This effort could even be open sourced (work with the Mono guys).

With this the strategy is clear.  For reach, Silverlight is just a framework used to target HTML5 in the browser – no plugin required.  Cost-sensitive IT shops could target all modern devices from a single Silverlight core.  They could even use MonoTouch to take that code native on iOS.

For desktop scenarios the app store is key.  For Windows that should mean full .NET Silverlight, given that the app store will likely require some runtime.  For the Mac App Store, any Silverlight-based apps would need to be statically linked, as a dynamic runtime dependency is not allowed.

For the phone and other Windows devices, the Silverlight phone tier is the strategy.  Will HTML5 take over as the client platform of choice?  Maybe for some applications – but it shouldn’t matter.  This is all about multi-targeting.

So, in effect this raises the profile of WPF at the cost of Silverlight.  Other than repositioning, the only piece missing is something to translate existing XAML/.NET assets into something usable in an HTML5 world.  Adobe has started investing in this and Google has a lot invested in GWT already.  It makes sense to do the same and unify the client stack under a single brand.


WPF ItemsControl Virtualization and Fast Moving Data Sets

As some have commented, GUI object virtualization is an important part of WPF.  It allows WPF to only create a small subset of UI element objects when binding to a much larger set of data objects.  The subset usually relates to the elements visible on the screen.  These UI objects can also be reused as the user scrolls through a large data set, reducing pressure on the garbage collector.  This post covers a WPF sample demonstrating the behavior of the ItemsControl and looks at the possibility of using UI virtualization to optimize model/view updates for a fast moving data set.
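
For reference, container recycling is opt-in on the stock controls; a ListBox can be asked to reuse its generated item containers rather than recreate them.  A small sketch (assuming .NET 3.5 SP1, where the VirtualizationMode attached property was introduced):

// UI virtualization is on by default for ListBox; Recycling additionally reuses
// the generated containers as the user scrolls instead of recreating them.
ListBox listBox = new ListBox();
VirtualizingStackPanel.SetIsVirtualizing(listBox, true);
VirtualizingStackPanel.SetVirtualizationMode(listBox, VirtualizationMode.Recycling);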

In the finance industry data objects can update very quickly.  It’s common to have a data set of around 3,000 objects receiving over 7,000 updates per second.  The challenge is to update the data model efficiently without impairing the user experience.  Sometimes it’s possible to throttle updates to the client process through an intermediary server, but in some cases these updates need to be real-time.

One technique used when binding to a fast moving data model is to avoid sending update events if the data is not visible – i.e. there are no visible UI elements hooked up to the data.  A possible way of detecting whether a data object is being viewed is whether the INotifyPropertyChanged PropertyChanged event has been wired up or not.  If an ItemsControl is bound to a collection of data objects supporting this interface, the virtualization of the UI objects will cause this event to be hooked and unhooked depending on whether the data is being displayed.  In theory, a fast moving data set could simply use the status of this event to decide whether it needs to notify the presentation tier that data has changed.  Realistically, this technique needs to be combined with other optimizations such as conflating and queuing the data updates.  But having an effective ‘I am being observed’ flag would be a really useful way of optimizing the protocol between the model and the view.
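
A minimal sketch of that ‘I am being observed’ idea (the class and property names here are illustrative, not the sample’s exact code):

// Data object that only raises change notifications when something is listening.
public class TickingValue : INotifyPropertyChanged
{
    private PropertyChangedEventHandler propertyChanged;
    private double price;

    // Explicit accessors let us see when a view hooks or unhooks the event.
    public event PropertyChangedEventHandler PropertyChanged
    {
        add { this.propertyChanged += value; }
        remove { this.propertyChanged -= value; }
    }

    // True while at least one listener (typically a generated UI container) is attached.
    public bool IsObserved
    {
        get { return this.propertyChanged != null; }
    }

    public double Price
    {
        get { return this.price; }
        set
        {
            this.price = value;

            // Unobserved objects can be updated silently.
            PropertyChangedEventHandler handler = this.propertyChanged;
            if (handler != null)
            {
                handler(this, new PropertyChangedEventArgs("Price"));
            }
        }
    }
}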

To demonstrate the behavior and test the usefulness of this event I’ve posted a small sample application.  It shows a ListBox bound to 1000 items (of type MyObject).  The MyObject class contains a static property – a separate collection that represents the hooked status of the INotifyPropertyChanged.PropertyChanged event.  MyObject intercepts the hooking and unhooking of this event and updates the HookStatus collection accordingly.

The ListBox is bound to the MyObject collection so that we can test the behavior of the ItemsControl and how it hooks and unhooks the MyObject class as the virtual UI objects are created and reused.  A custom ItemsControl is used to show the separate hooked status collection.  This ItemsControl is bound to the HookedStatusCollection and simply shows a red pixel if the MyObject is hooked up, and an empty pixel if it’s not.  The 1000 element collection is represented by the 1000 pixels across the top of the window.  The project is linked below (VS2008, .NET 3.5 binary included).

http://cid-fcb8a93dfc444f40.skydrive.live.com/embedrowdetail.aspx/Public/ItemsControlNotification/TestBinding.zip

As can be seen by running the application, the ItemsControl is very generous when it comes to creating UI objects.  It’s very easy to scroll through the ListBox and hook up nearly all the observed objects.  This means that the state of this event is not a good indicator of the visibility of the data.  I need to investigate a little further to see how much code is executed if an invisible UI object receives an update event from its source object.  The overhead may be minimal, but avoiding invoking the event at all would be better.

Ideally this behavior should be configurable.  In some cases it makes sense to be more conservative with UI objects, especially when the cost of keeping these objects is more than the memory that they use.  If there was a way of setting guideline parameters for the maximum number of UI objects that an ItemsControl should maintain then we could optimize some lists for showing real-time data updates. 


Animating Layout in an ItemsControl

Last year Karsten Januszewski posted a blog entry about ‘layout to layout animation’ called Phenomenological Layout.  In his code he took a background Panel and replicated the positions of elements onto a Canvas.  On this canvas he was able to apply animations and reuse whatever Panel the developer decides is best for their layout.

I wanted to use a similar idea but focus on an ItemsControl as the basis of the animation.  I also wanted to try to hook up the LayoutUpdated event to efficiently set up and animate each element.  The idea here is that I have a databound ItemsControl that animates as the layout changes, using whatever layout scheme I want.  Even a re-sort of the elements in the ItemsControl should animate the elements into the correct order.

The problem with the layout engine that the Panel classes provide is that it’s very difficult to hook up events to find out what the Panel is doing.  The behavior isn’t easily observable, in that there’s no simple event providing the child element, the old location/size and the new location/size.  If the base Panel class had this event then layout to layout animation would be so easy – it could even be done through XAML and Expression Blend.

The solution I opted for adds this functionality by way of a utility class that is easy to call from a custom ItemsControl.  The LayoutUpdated event is hooked up using a surrogate object, and the derived ItemsControl holds a reference to a container of these surrogates.  When items are added to the ItemsControl (seen below using the PrepareContainerForItemOverride virtual function), the ItemsControl calls out to the container and hooks up the layout event so the surrogate can react when its element is resized or repositioned.

protected override void PrepareContainerForItemOverride(DependencyObject element, object item)
{
    base.PrepareContainerForItemOverride(element, item);
    this.container.Add(new Surrogate(this, element, item));
}

The surrogate object uses the LayoutUpdated event of the new element and wires up its own event handler.  The big problem with this event is that it doesn’t come with any indication of what is being updated, by whom, or to where, so I use the surrogate to keep hold of this information.  The wire-up process in the surrogate’s constructor looks like this:

public Surrogate(ItemsControl itemsControl, DependencyObject element, object item)
{
    :: ::

    this.element = element as FrameworkElement;
    this.element.LayoutUpdated += new EventHandler(this.target_LayoutUpdated);
    if (this.element.RenderTransform == null || !(this.element.RenderTransform is TranslateTransform))
    {
        this.element.RenderTransform = new TranslateTransform(); // set up the render transform
    }

    :: :: // etc.
}

Now that everything is wired up, it’s simply a matter of setting the animation in motion when a layout event fires:

public void target_LayoutUpdated(object sender, EventArgs e)
{
    ::

    // get the new position of the element
    Point newPosition = this.Element.TranslatePoint(new Point(0, 0), this.Container.animatedItemsControl as ItemsControl);
    TranslateTransform translateTransform = this.Element.RenderTransform as TranslateTransform;

    // offset this position by the current render transform offset
    newPosition = new Point(newPosition.X - translateTransform.X, newPosition.Y - translateTransform.Y);

    // check for rounding errors
    Point deltaToOldPosition = new Point(this.Destination.X - newPosition.X, this.Destination.Y - newPosition.Y);
    if (Math.Abs(deltaToOldPosition.X) < 1 && Math.Abs(deltaToOldPosition.Y) < 1)
    {
        return; // nothing to do
    }

    // we now have the new and old positions - so animate the render transform from where it was to zero,
    // because the element has already been repositioned
    translateTransform.BeginAnimation(TranslateTransform.XProperty,
        new DoubleAnimation(deltaToOldPosition.X, 0.0, new Duration(animationTime), FillBehavior.HoldEnd));
    translateTransform.BeginAnimation(TranslateTransform.YProperty,
        new DoubleAnimation(deltaToOldPosition.Y, 0.0, new Duration(animationTime), FillBehavior.HoldEnd));

    :: :: // etc.
}

This isn’t the exact code, but the key points are here.  The same adjustment can also be used for the size transformation – the process is the same, feeding the delta from the old size to the new size into the render transform.  The elements themselves can be recycled by the ItemsControl, so it’s important to unwire the association between the object and the element using the ClearElement override (see the sketch below).
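
In stock WPF the natural place for that unwire step is the ClearContainerForItemOverride virtual; a rough sketch (the Find and Detach helpers on the surrogate container are hypothetical, not the sample’s exact code):

// Sketch: unhook the surrogate when the ItemsControl clears or recycles a container.
protected override void ClearContainerForItemOverride(DependencyObject element, object item)
{
    base.ClearContainerForItemOverride(element, item);

    // Find the surrogate created in PrepareContainerForItemOverride and detach
    // its LayoutUpdated handler so the element can be recycled safely.
    Surrogate surrogate = this.container.Find(element);   // hypothetical lookup helper
    if (surrogate != null)
    {
        surrogate.Detach();                                // unhooks element.LayoutUpdated
        this.container.Remove(surrogate);
    }
}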

This all gives the necessary functionality to animate elements off the back of the mysterious LayoutUpdated event by giving it some context.  As a byproduct we also have the current location of any child element given the bound object that is being templated in the ItemsControl.  With this we can now build a generic ItemsControl that throws RoutedEvents for layout changes that can be used with Triggers in animation and even animate between ItemsControls by looking up the location of a bound object.


Zune Firmware Update – rating feature reduced

Zune Insider reported yesterday that the new Zune firmware removes the regular 5-star rating system and replaces it with a ‘good’ or ‘bad’ rating.  This means we’ve gone from five levels to just two.  The reason for the change is to make rating simpler and add more parity across users’ ratings – I assume so that ratings can be aggregated and reported through some social website.
 
This argument doesn’t really stand, as the same parity issues still apply.  What counts as good and bad for each user?  Some users may download only their favorite tunes and rate just 10% as good.  Other users may download lots of tracks and mark most of them as good.  When aggregating ratings there will be nothing to distinguish an excellent track from a merely decent one.
 
The real problem, though, is the downgraded functionality.  Like a lot of users, I use ratings to set up automatic playlists based on genre and artist.  I also use ratings as a way to filter through mass-downloaded subscription music: I flag the music I want to delete with a 1-star rating.
 
Even more of a problem is the migration process.  The upgrade will automatically map any track rated above 2 stars as good and all others as bad.  What isn’t clear is how the new ratings will be managed and stored.  At the moment the Zune software stores ratings in the track itself.  There could be some very unhappy audiophiles with lost track ratings next week if the upgrade decides to rewrite the song files with its new scoring mechanism.

Technology Evangelism

Scott Barnes has an interesting entry about evangelism.  In it he quotes Guy Kawasaki’s The Art of Evangelism: "Look for agnostics, ignore atheists".

Firstly, I agree with this point entirely – but I feel it should be renamed to "Look for agnostics, ignore theists".  What Guy is really saying here is that if somebody already has a Tool/Product affiliation then it’s hard to persuade them to use another tool instead.  Taking the religious analogy further – I would say that atheists do not believe in affiliation with a product or tool.  They have ‘no belief’. 

To me, the analogy to religion is a little stretched.  Religious evangelicals try to persuade people to believe in something for which there is no evidence; they preach faith.  Technology evangelists, on the other hand, preach the merits of tools and products based on evidence – which is very different.  Who would adopt a technology based on blind faith?

