Ultraviolet: In Need of Common Identity

One of the biggest problems with DRM in the past has been that each DRM system had its own ecosystem: its own set of devices that you could play the media on.  Your purchases were tied back to the vendor you bought from.  If that vendor goes under, or decides to switch businesses, that's it: you're on your own.  Sure, this happens with physical media too (optical discs, HD DVD, etc.) but it's far less volatile than the start-up-heavy world of the internet.

The audio world has given up on DRM completely: outside of streaming and all-you-can-eat models, if you own the media then it has to be DRM free.  The world of movies has hung onto the DRM model, mainly because the files are larger and most people want to rent a movie rather than own it forever.  That said, everybody has favorites that they want to call upon at any time.  The studios have got together to create a consortium that allows people to share their virtual library of movie ownership across different media vendors.  Companies like Flixster and Vudu allow you to purchase movies and add these purchases to your digital locker in the sky.  They also let you redeem special codes that come with certain DVDs or Blu-rays so that you can watch those movies online too.

That's the theory anyway, and it's early days.  The reality is that Ultraviolet makes you juggle two or three accounts to watch your movies online.  The problem is that each vendor uses its own username and password system, and Ultraviolet itself uses yet another.  The process of redeeming codes involves creating accounts for the studio and linking those accounts back to your Ultraviolet account so that you can watch anything.  Simple enough, but is this really necessary?

Take Dolphin Tale: inside the disc case there is a little UV slip with details about redeeming the movie streaming rights.  This involves going to a special URL (in this case Flixster's), creating a Flixster account with a unique username and password, creating an Ultraviolet account with another username and password, then linking them together.  After all this you enter the UV code and it tells you that the movie has been added to your library.  This is a Blu-ray movie, so you would expect to have the HD movie in your library, right?  Wrong.  The UV rights only include streaming the SD version of the movie.
 
Take another movie: Cowboys and Aliens.  This is a Universal movie, but the process is similar.  This time the special URL takes you to the Universal web site, where you are asked to create a Universal account.  This seems to be in beta, but even so the account name restrictions are silly.  You create an account name (not an email address) and the password is restricted to 6-8 characters!  So now you have to remember your username, your email and an artificially shortened password.  Again, you have to link your accounts together as before.
 
So, how do you watch the movies?  On the Xbox 360 you can download the Vudu video app (Vudu is owned by Walmart).  This allows you to link your UV account, and all your movies show up instantly in the 'My Library' section.
 
So, for two movies, that's four new accounts and a fair bit of juggling and linking.  Sure, this won't be the case for every new movie, and it should be fairly painless if the movie is from the same studio, but it may be enough to put off a lot of people.  On the plus side, the movies synchronized very quickly, but again, only with SD viewing rights.
 
The answer to all this, of course, is to use a single identity system.  Pick your identity provider: Facebook, Google, Microsoft Live, Twitter.  Any of these could be used to authenticate the user once for all the accounts.  Devices are getting smarter and starting to integrate support for identity.  For example, granting your UV account access to your Windows Live credentials would allow you to play on an Xbox that is already logged in with a gamertag associated with the same account.
 
What's frustrating is that some of the account set-up processes did involve Facebook integration.  They weren't using Facebook as an authentication system; instead they were asking permission to post rental activity on your wall.  In one case, there was an option to 'log in with Facebook', but the account sign-up process went on to ask for a new, separate password.  This is missing the point.
 
This may be a trust issue.  After all, some of the accounts (like Vudu) provided the ability to rent and buy new movies.  If this is the case, then surely the solution is to use an external authentication system to provide read-only access to your digital locker and provide a more secure account with rental and purchase options.
 
Ultraviolet shows a lot of potential.  But it also shows the need for better adoption of identity systems.
 

Pan and Zoom in WPF

DeepZoom is a cool feature of Silverlight 2.0 that lets the user zoom and pan around an image while optimizing the bandwidth and how much of the image is downloaded.  The UI metaphor is potentially quite powerful, even outside of image viewing.  In WPF, with its scalable vector content, panning and zooming around ad-hoc rendered content could have several uses, even without the dynamic image loading.

This quick post details how to achieve this in WPF with a simple ContentControl.  It borrows some functionality from Jaime Rodriguez's excellent DeepZoom Primer.

The entire pan and zoom functionality can be achieved with a single transform group.  I use a class derived from ContentControl with two transforms: a ScaleTransform for zooming and a TranslateTransform for panning.  This transform group then pans and zooms the content of the content control (much like a ScrollViewer).  The initialization is done in code:

this.source = VisualTreeHelper.GetChild(this, 0) as FrameworkElement;
this.translateTransform = new TranslateTransform();
this.zoomTransform = new ScaleTransform();
this.transformGroup = new TransformGroup();
this.transformGroup.Children.Add(this.zoomTransform);
this.transformGroup.Children.Add(this.translateTransform);
this.source.RenderTransform = this.transformGroup;

The DoZoom function modifies these transforms based on the parameters sent.  This is similar to Jaime’s DoZoom function:

/// <summary>Zoom into or out of the content.</summary>
/// <param name="deltaZoom">Factor to multiply the zoom level by.</param>
/// <param name="mousePosition">Logical mouse position relative to the
/// original content.</param>
/// <param name="physicalPosition">Actual mouse position on the screen
/// (relative to the parent window).</param>
public void DoZoom(double deltaZoom, Point mousePosition, Point physicalPosition)
{
    double currentZoom = this.zoomTransform.ScaleX;
    currentZoom *= deltaZoom;
    this.translateTransform.BeginAnimation(TranslateTransform.XProperty,
        CreateZoomAnimation(-1 * (mousePosition.X * currentZoom - physicalPosition.X)));
    this.translateTransform.BeginAnimation(TranslateTransform.YProperty,
        CreateZoomAnimation(-1 * (mousePosition.Y * currentZoom - physicalPosition.Y)));
    this.zoomTransform.BeginAnimation(ScaleTransform.ScaleXProperty,
        CreateZoomAnimation(currentZoom));
    this.zoomTransform.BeginAnimation(ScaleTransform.ScaleYProperty,
        CreateZoomAnimation(currentZoom));
}
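The CreateZoomAnimation helper isn't shown above, but a minimal version might look like this (my own sketch: the 300ms duration and HoldEnd fill behavior are assumptions, not necessarily what the original code used):

```csharp
// A plausible CreateZoomAnimation: animate from the current value to the
// target value over a short, fixed duration so the zoom feels smooth.
private DoubleAnimation CreateZoomAnimation(double toValue)
{
    return new DoubleAnimation(
        toValue,
        new Duration(TimeSpan.FromMilliseconds(300)),
        FillBehavior.HoldEnd);
}
```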

The rest of the code simply hooks the mouse events up to the DoZoom function.  This is all self-contained within the ContentControl, so using this pan and zoom functionality is simply a matter of adding the control and filling in the content.
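For reference, the wheel hookup might look something like this inside the derived control (a sketch using the fields from the snippets above; the 1.2 zoom factor is arbitrary):

```csharp
// Zoom in or out around the mouse position when the wheel turns.
protected override void OnMouseWheel(MouseWheelEventArgs e)
{
    base.OnMouseWheel(e);
    double deltaZoom = e.Delta > 0 ? 1.2 : 1.0 / 1.2;
    // Position relative to the content, and relative to this control.
    // Note: if the content is already scaled, the logical position may
    // need dividing by the current zoom to match DoZoom's expectations.
    Point mousePosition = e.GetPosition(this.source);
    Point physicalPosition = e.GetPosition(this);
    this.DoZoom(deltaZoom, mousePosition, physicalPosition);
    e.Handled = true;
}
```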

It's still missing some functionality, like the navigation overlay.  It would also be nice to raise events as the zoom level changes, so you could adjust the detail of the rendered content accordingly.

Source code can be found here

Updated this post and source code link here: http://blogs.windowsclient.net/joeyw/archive/2009/06/02/pan-and-zoom-updated.aspx


On Silverlight/HTML5–maybe WPF is the winner

A lot has been written over the weekend about Microsoft’s client technology ‘shift of focus’.

There was huge Twitter feedback at the PDC when the focus was IE9 and HTML5.  Then came Mary Jo Foley’s interview with Bob Muglia: http://www.zdnet.com/blog/microsoft/microsoft-our-strategy-with-silverlight-has-shifted/7834?tag=mantle_skin;content

Which caused a lot of feedback: http://silverlighthack.com/post/2010/10/29/PDC-2010-Top-5-Reasons-Why-Microsoft-Completely-Screwed-up-their-web-strategy-with-HTML-5.aspx

and a lot of maybe exaggerated press reaction: http://techcrunch.com/2010/10/30/rip-silverlight-on-the-web/

So, what should Microsoft do next with its client platform? What should be the path for the next few years?  These are some of the facts of the situation:

  • HTML5 will be supported well across the next versions of the major browsers.  Thanks to much more investment in testing and conformance, standards support should be better than before.  But not perfect.
  • Old browsers will be around for a while.  Enterprises take a while to migrate.  But these tend to be on Windows based PCs.
  • WebApps and desktop apps are converging, albeit slowly.
  • Proprietary app-store ‘Apps’, typically on devices, are more powerful than websites.
  • It is very costly for IT shops and customers to support several, orthogonal platforms.
  • The Mac is getting an App Store, where dependencies on runtimes (even Java and Flash) are not allowed.  The App Store may restrict non-managed app installs.  Windows will likely follow suit.
  • The capability of .NET + Silverlight exceeds that of HTML5/CSS3/JavaScript.

So it's clear that, long term, the capability of the browser as a platform is increasing.  But the browser will remain limited in its access to platform-specific features, and slow to pick up new hardware capabilities.

The question is how to build for 'reach' (HTML) while still being able to extend to native platform features.  For example, what if Windows 8 were released with Kinect motion sensing?  How would my HTML5 application be extended to use motion gestures?

Also, HTML's runtime capability may be improving, but the development productivity is not.  Years of investment have gone into the .NET framework as a very mature client development platform.  This investment cannot be ignored in a switch to HTML5.  Would developers really be willing to say goodbye to MVVM, RIA Services, LINQ, C# async and the rest?

So, what to do?

  • Combine Silverlight and WPF into one product.  If Silverlight has a phone-specific version then it's not truly cross-platform anyway; it has essentially become WPF Lite.  One brand for one platform: call the combined platform Silverlight and fold in the WPF/.NET 4 client profile features.
  • Tier Silverlight using different capabilities.  One profile for WP7, another for Mac/Windows and yet another for Windows only (replacing WPF).
  • Add a new target platform: HTML5/CSS3/JavaScript.  Use an approach similar to Google's GWT, translating core parts of Silverlight to their HTML equivalents.  This could start with the basics (Script# for C#, Styles to CSS, layout panels to CSS3).  Any conversion is better than nothing.  This effort could even be open sourced (work with the Mono guys).

With this, the strategy is clear.  For reach, Silverlight is just a framework used to target HTML5 in the browser, no plugin required.  Cost-sensitive IT shops could target all modern devices from a single Silverlight core.  They could even use MonoTouch to take that code native on iOS.

For desktop scenarios the app store is key.  For Windows that should mean full .NET Silverlight – given that the app store will likely require some runtime.  For the Mac app-store any Silverlight based apps would need to be statically linked as a dynamic runtime dependency is not allowed.

For the phone and other Windows devices, the Silverlight phone tier is the strategy.  Will HTML5 take over as the client platform of choice?  Maybe for some applications, but it shouldn't matter.  This is all about multi-targeting.

So, in effect this raises the profile of WPF at the cost of Silverlight.  Other than repositioning, the only piece missing is something to translate existing XAML/.NET assets into something usable in an HTML5 world.  Adobe has started investing in this and Google has a lot invested in GWT already.  It makes sense to do the same and unify the client stack under a single brand.


WPF ItemsControl Virtualization and Fast Moving Data Sets

As some have commented, GUI object virtualization is an important part of WPF.  It allows WPF to only create a small subset of UI element objects when binding to a much larger set of data objects.  The subset usually relates to the elements visible on the screen.  These UI objects can also be reused as the user scrolls through a large data set, reducing pressure on the garbage collector.  This post covers a WPF sample demonstrating the behavior of the ItemsControl and looks at the possibility of using UI virtualization to optimize model/view updates for a fast moving data set.

In the finance industry data objects can update very quickly.  It's common to have a data set of around 3,000 objects receiving over 7,000 updates per second.  The challenge is to update the data model efficiently without impairing the user experience.  Sometimes it's possible to throttle updates to the client process through an intermediary server, but in some cases the updates need to be real-time.

One technique used when binding to a fast-moving data model is to avoid sending update events when the data is not visible, i.e. when no visible UI elements are hooked up to the data.  A possible way of detecting whether a data object is being viewed is whether its INotifyPropertyChanged.PropertyChanged event has been wired up.  If an ItemsControl is bound to a collection of data objects supporting this interface, the virtualization of the UI objects will cause this event to be hooked and unhooked depending on whether the data is being displayed.  In theory, a fast-moving data set could simply use the status of this event to test whether it needs to notify the presentation tier that data has changed.  Realistically, this technique needs to be combined with other optimizations such as conflating and queuing the data updates.  But an effective 'I am being observed' flag would be a really useful way of optimizing the protocol between the model and the view.
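A data object can expose such a flag by implementing the event with explicit add/remove accessors.  A minimal sketch (the class and property names here are my own, not from the sample):

```csharp
using System.ComponentModel;

// Sketch of a data object that tracks whether anyone is listening to
// PropertyChanged.  IsObserved can be checked before pushing updates.
public class TickingInstrument : INotifyPropertyChanged
{
    private PropertyChangedEventHandler propertyChanged;

    public event PropertyChangedEventHandler PropertyChanged
    {
        add { this.propertyChanged += value; }
        remove { this.propertyChanged -= value; }
    }

    // True if at least one handler (e.g. a bound UI element) is attached.
    public bool IsObserved
    {
        get { return this.propertyChanged != null; }
    }

    private double price;
    public double Price
    {
        get { return this.price; }
        set
        {
            this.price = value;
            var handler = this.propertyChanged;
            if (handler != null) // skip the event entirely when unobserved
            {
                handler(this, new PropertyChangedEventArgs("Price"));
            }
        }
    }
}
```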

To demonstrate the behavior and test the usefulness of this event I've posted a small sample application.  It shows a ListBox bound to 1000 items (of type MyObject).  The MyObject class contains a static property: a separate collection that represents the hooked status of each object's INotifyPropertyChanged.PropertyChanged event.  MyObject intercepts the hooking and unhooking of this event and updates the HookStatus collection.

The ListBox is bound to the MyObject collection so that we can test the behavior of the ItemsControl and how it hooks and unhooks the MyObject class as the virtual UI objects are created and reused.  A custom ItemsControl is used to show the separate hooked status collection.  This ItemsControl is bound to the HookedStatusCollection and simply shows a red pixel if the MyObject is hooked up, and an empty pixel if it’s not.  The 1000 element collection is represented by the 1000 pixels across the top of the window.  The project is linked below (VS2008, .NET 3.5 binary included).

http://cid-fcb8a93dfc444f40.skydrive.live.com/embedrowdetail.aspx/Public/ItemsControlNotification/TestBinding.zip

As can be seen by running the application, the ItemsControl is very generous when it comes to creating UI objects.  It's very easy to scroll through the ListBox and hook up nearly all the observed objects.  This means that the state of this event is not a good indicator of the visibility of the data.  I need to investigate a little further to see how much code is executed when an invisible UI object receives an update event from its source object.  The overhead may be minimal, but avoiding invoking the event at all would be better.

Ideally this behavior should be configurable.  In some cases it makes sense to be more conservative with UI objects, especially when the cost of keeping these objects is more than the memory that they use.  If there was a way of setting guideline parameters for the maximum number of UI objects that an ItemsControl should maintain then we could optimize some lists for showing real-time data updates. 


Animating Layout in an ItemsControl

Last year Karsten Januszewski posted a blog entry about ‘layout to layout animation’ called Phenomenological Layout.  In his code he took a background Panel and replicated the positions of elements onto a Canvas.  On this canvas he was able to apply animations and reuse whatever Panel the developer decides is best for their layout.

I wanted to use a similar idea but focus on an ItemsControl as the basis of the animation.  I also wanted to try hooking up the LayoutUpdated event to efficiently set up and animate each element.  The idea is a data-bound ItemsControl that animates as the layout changes, using whatever layout scheme I want.  Even a re-sort of the elements in the ItemsControl should animate the elements into their new order.

The problem with the layout engine that the Panel classes provide is that it's very difficult to hook up events to find out what the Panel is doing.  The behavior isn't easily observable: there's no simple event providing the child element, the old location/size and the new location/size.  If the base Panel class had this event then layout-to-layout animation would be easy; it could even be done through XAML and Expression Blend.

The solution I opted for adds this functionality with a utility class that is easy to call from a custom ItemsControl.  The LayoutUpdated event is hooked up using a surrogate object, and the derived ItemsControl holds a reference to a container of these surrogates.  When items are added to the ItemsControl (seen below using the PrepareContainerForItemOverride virtual function), the ItemsControl calls out to the container and hooks up the layout event so the surrogate can react when its element is resized or repositioned.

protected override void PrepareContainerForItemOverride(DependencyObject element, object item)
{
    base.PrepareContainerForItemOverride(element, item);
    this.container.Add(new Surrogate(this, element, item));
}

The surrogate object uses the LayoutUpdated event of the new element and wires up its own event handler.  The big problem with this event is that it carries no indication of what is being updated, by whom, or to where, so I use the surrogate to keep hold of this information.  The wire-up process in the surrogate's constructor looks like this:

public Surrogate(ItemsControl itemsControl, FrameworkElement element, object item)
{
    :: ::
    element.LayoutUpdated += new EventHandler(this.target_LayoutUpdated);
    if (!(this.element.RenderTransform is TranslateTransform))
    {
        this.element.RenderTransform = new TranslateTransform(); // set up the render transform
    }
    :: :: // etc
}

Now everything is wired up, it's simply a matter of setting the animation in motion when a layout change occurs:

public void target_LayoutUpdated(object sender, EventArgs e)
{
    ::
    // get the new position of the element
    Point newPosition = this.Element.TranslatePoint(new Point(0, 0), this.Container.animatedItemsControl as ItemsControl);
    TranslateTransform translateTransform = this.Element.RenderTransform as TranslateTransform;
    // offset this position by the current render transform offset
    newPosition = new Point(newPosition.X - translateTransform.X, newPosition.Y - translateTransform.Y);
    // check for rounding errors
    Point deltaToOldPosition = new Point(this.Destination.X - newPosition.X, this.Destination.Y - newPosition.Y);
    if (Math.Abs(deltaToOldPosition.X) < 1 && Math.Abs(deltaToOldPosition.Y) < 1)
    {
        return; // nothing to do
    }

    // we now have the new and old positions - so animate the render transform
    // from where it was to zero, because the element has already been repositioned
    translateTransform.BeginAnimation(TranslateTransform.XProperty,
        new DoubleAnimation(deltaToOldPosition.X, 0.0, new Duration(animationTime), FillBehavior.HoldEnd));
    translateTransform.BeginAnimation(TranslateTransform.YProperty,
        new DoubleAnimation(deltaToOldPosition.Y, 0.0, new Duration(animationTime), FillBehavior.HoldEnd));
    :: :: // etc
}

This isn't the exact code, but the key points are here.  The same adjustment can also be used for the size transformation: the process is the same, feeding the delta from the old size to the new size into the render transform.  The elements themselves can be recycled by the ItemsControl, so it's important to unwire the association between the object and the element using the ClearContainerForItemOverride override.
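The unhooking side might be sketched as follows, using ItemsControl's standard ClearContainerForItemOverride virtual (the container's Remove method is my own hypothetical name for whatever detaches the surrogate's LayoutUpdated handler):

```csharp
protected override void ClearContainerForItemOverride(DependencyObject element, object item)
{
    // Detach the surrogate's LayoutUpdated handler so a recycled element
    // doesn't animate on behalf of the item it used to represent.
    this.container.Remove(element, item); // hypothetical: unhooks and discards the surrogate
    base.ClearContainerForItemOverride(element, item);
}
```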

This all gives the necessary functionality to animate elements off the back of the mysterious LayoutUpdated event by giving it some context.  As a byproduct we also have the current location of any child element given the bound object that is being templated in the ItemsControl.  With this we can now build a generic ItemsControl that throws RoutedEvents for layout changes that can be used with Triggers in animation and even animate between ItemsControls by looking up the location of a bound object.


Zune Firmware Update – rating feature reduced

Zune Insider reported yesterday that the new Zune firmware removes the regular 5-star rating system and replaces it with a 'good' or 'bad' rating.  That means we've gone from five levels to just two.  The stated reason for removing the feature is to make rating simpler and add more parity across users' ratings, I assume so that ratings can be aggregated and reported through some social website.
 
This argument doesn't really hold, as the same parity issues still apply.  What counts as Good or Bad varies by user.  Some users may download only their favorite tunes and rate just 10% as Good.  Others may download lots of tracks and mark most of them as Good.  When aggregating ratings there will be nothing to distinguish an excellent track from a quite good one.
 
The real problem, though, is the downgraded functionality.  Like a lot of users, I use ratings to set up automatic playlists based on genre and artist.  I also use ratings to filter through mass-downloaded subscription music, flagging the tracks I want to delete with a one-star rating.
 
Even more of a problem is the migration process.  The upgrade will automatically map any track rated above 2 stars to Good and all others to Bad.  What isn't clear is how the new ratings will be stored.  At the moment the Zune software stores the rating in the track itself.  There could be some very unhappy audiophiles with lost track ratings next week if the upgrade decides to rewrite the song files with its new scoring mechanism.

Technology Evangelism

Scott Barnes has an interesting entry about evangelism.  In it he quotes Guy Kawasaki's The Art of Evangelism: "Look for agnostics, ignore atheists".

Firstly, I agree with this point entirely – but I feel it should be renamed to "Look for agnostics, ignore theists".  What Guy is really saying here is that if somebody already has a Tool/Product affiliation then it’s hard to persuade them to use another tool instead.  Taking the religious analogy further – I would say that atheists do not believe in affiliation with a product or tool.  They have ‘no belief’. 

To me, the analogy to religion is a little stretched.  Religious evangelicals try to persuade people to believe in something for which there is no evidence; they preach faith.  Technology evangelists preach the merits of tools and products based on evidence, which is very different.  Who would adopt a technology on blind faith?


Silverlight 1.1 Feedback – What to include on the GUI stack?

The news from MIX07 this week has been very interesting.  After watching some of the breakout sessions I thought I would answer Nick Kramer and provide some feedback as to which controls and functionality to include in the final Silverlight 1.1 release.  The decision here is between supporting something at the core level, supporting it as a standard additional DLL or leaving it to the community.  Of course, the more interop between WPF and Silverlight the better, but the size of the download needs to be kept in mind. 

So, in general, my inclination would be to use the following as guidance:

  • Include as much of the plumbing as possible.  Adding alternative, competing plumbing solutions on top of Silverlight could drift the design direction away from the solutions used in WPF.
  • Ship the controls as an additional DLL.  When I say controls I mean button, textbox, listbox, combo etc..  These file sizes should be very small if the infrastructure is shipped in the core product.
  • Work out a way of caching signed DLLs for reuse by other apps.  The assembly resolver should be able to check an assembly cache for already loaded assemblies.  Nothing as fancy as the GAC, just a dynamic cache of loaded assemblies on a best efforts basis.

So, specifically, taking the above…

  • Include in the core the infrastructure for ItemsControl, including the virtualization of elements, Panel templates etc.
  • Include DataBinding, Styling and TemplateBinding.  These are must-haves for the MVC pattern to work in WPF and Silverlight.
  • Include basic Panel support, additional Panels could be added from other DLLs (imported with ListBoxes etc).
  • Include the basic controls where the data that they model is unique – not the behavior.  So, an ItemsControl should be included but the ListBox, ComboBox and even ListView can be included as a sample or left to the community.  Abstract controls providing the base classes are more important than covering the basic traditional Windows controls.

I would also suggest including a new version of the assembly linker that intelligently links referenced assemblies together and removes unused functions.  Without this I think third-party control support is going to be limited to source-code distribution only.  Who will want to include a 10MB grid control if they are only using 10% of the functionality?


RSS Readers – Paperboy vs Google Reader

Linking to Rob Relyea again, this time he talks about  "Using Ricciolo PaperBoy 0.2 (WPF RSS Reader)".

With the IE7 RSS platform, writing an RSS reader has got a lot easier.  Rob mentions that he typically uses IE7 to read his feeds, and this is one thing I don't understand.  My OPML file has about 200 feeds, most of which update maybe twice per month.  IE7 is next to useless for reading these because it doesn't aggregate news.  The same fault applies to Ricciolo Paperboy.  To me, a news reader without aggregation is just like browsing my favorites but without the HTML formatting.

There is RikReader – http://11011.net/software/rikreader – which is also written in WPF, uses the IE7 feed engine and supports aggregation.  It's not bad either, and comes complete with its own HTML-to-flow-document converter.  Like Paperboy, though, it's a little unstable with some feeds.

After using Google Reader for the past six months, I need to add hosted reading state to the list of requirements.  That means that I want a central server to know what I’ve read and what I haven’t.  I also want to share items and flag items of interest.  Also, like with Google Reader, I want to be able to read these feeds on my cell phone.  So here’s my must-have / nice-have list:

 

  • Good reading experience – i.e. better than Google Reader’s HTML effort.
  • Aggregation (river of news) – preferably tagged, not hierarchical.
  • Hosted reading state – ability to share reading state across machines and devices (as in Google Reader).
  • Easy to install – Click Once or XBAP. 
  • Blog writing integration – e.g. integrate with Live Writer or similar.
  • Comment support – ability to show comments and subscribe to comment feeds (if they exist).
  • Browse support – ability to view Memes and browse blog entries and trackbacks graphically (like Times Reader).
  • Mobile support – integrated reading state (so I can see which feeds have new items).
  • Podcast/vidcast support – enclosure support.  IE7 kind of has this already, but the enclosures go to the temporary internet directory.  A news reader is a good chance to fix this: just copy those enclosures to a preconfigured directory and do some transcoding in the background.
  • Offline reading – something that Google Reader doesn’t do.

So, all this could be added to Paperboy, and it should be fairly easy to do (apart from the hosting, but even that could be made generic with some sort of polled, external internet-based file).  The only thing that stops me delving into CodePlex is that I'm convinced Microsoft has a product waiting in the wings to compete with Google Reader.  The Microsoft Max team did some of this already, and the Live.com home page is very much based on RSS.  The (Times) Reader SDK must be capable of rendering your own personalized newspaper based on your feed database.  Live Writer has quite a following, and although it's a WinForms app, I don't think it would take much to integrate it into the mix.

So is the market that Google Reader holds worth going for?  From a purely financial perspective, probably not.  For secondary effects, like winning the minds of influencers and the Web 2.0 community, it has more merit.  It might be essential for Microsoft that somebody like Robert Scoble uses Microsoft technology over Adobe's or Google's.


Ten Quibbles with WPF

Rob Relyea asks for feedback on WPF.  In response, I’ve put together some of my top 10 pet grievances.  I have to highlight that this list is relatively minor.  The structure and modularity of WPF stands out over and above other frameworks… there is a lot of potential here. 

  1. DataTrigger versus Trigger -  I've never really understood why these two needed to be modeled differently.  A DataTrigger can be used against non-DependencyProperties, and a Trigger can be used between different parts of the visual tree (usually with DPs).  But does that distinction really need to be surfaced to the developer?  Why not just call it a Trigger and let the framework work out the details?  What's more annoying is that the properties on the two elements are different.  This inconsistency is evident in a few areas of WPF.
     
  2. Expressing Inter-object References – the DataTrigger, Trigger, Storyboard, Timeline elements and others all suffer from inconsistent ways of referencing other objects.  In WPF you often need to relate one object to another.  It could be for Binding, or for animating a property or triggering an action based on a value being set.  The problem is that there are different ways to express these relationships in WPF. 

    Storyboard and DataTriggers use TargetName, the name of the element being manipulated.  Binding uses various ways, including matching on type, name, relative path, etc.  The DataContext is a pure and simple reference, through which you can use a resource look-up or set directly in code.  My point is that there should be one way of expressing inter-object references, and that's it.  Having to name elements is not good, the PropertyPath object is useful, and the expressiveness of the Binding element is very good.  We need one solution, the best of all worlds, used consistently everywhere.
     

  3. Inconsistently Behaved Events – some of the events in WPF do not act in accordance with the CLR guidelines.  For example, the LayoutChanged event is fired when any element changes its position in the window.  Even if you only hook a TextBox five levels down the visual tree – where it never moves – you still receive a LayoutChanged event every time some other element is moved, resized, etc.  What’s worse, the ‘sender’ parameter for LayoutChanged is always null.  This makes the event next to useless… which leads nicely on to the next problem.
     
  4. Storyboard and Timeline Completed events – these are hard to manage in some circumstances.  The problem is that there is no way to link the event invocation back to the object being animated, or to the animation object itself.  WPF actually clones the Timeline or Storyboard before the animation is started – and there are no properties (that I can see) that will give you access to the object being animated.  This makes ad-hoc triggered animations on collections of objects difficult to manage.  The solution I use is to set a generated ‘name’ on the Timeline and use a Dictionary to look it up in the event handler. 
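
     A sketch of that workaround in C# (the member names here are mine; the key point is that Timeline.Name survives the clone, so the Completed handler can map the sender clock’s Timeline back to its target):

     ```csharp
     // Side table: generated timeline name -> element being animated.
     private readonly Dictionary<string, UIElement> _animTargets =
         new Dictionary<string, UIElement>();
     private int _nextId;

     private void StartFade(UIElement target)
     {
         var anim = new DoubleAnimation(0.0, new Duration(TimeSpan.FromSeconds(1)));
         anim.Name = "anim" + _nextId++;       // generated name survives cloning
         _animTargets[anim.Name] = target;
         anim.Completed += OnFadeCompleted;
         target.BeginAnimation(UIElement.OpacityProperty, anim);
     }

     private void OnFadeCompleted(object sender, EventArgs e)
     {
         // sender is the clock; its Timeline still carries the generated name
         var clock = (AnimationClock)sender;
         UIElement target = _animTargets[clock.Timeline.Name];
         _animTargets.Remove(clock.Timeline.Name);
         // per-target clean-up goes here
     }
     ```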
     
  5. Panels – I think one thing that would make WPF easier to learn would be to simplify the role of each element.  I wish all Panels were called XXXXPanel.  I also wish that their role was restricted to laying out elements, rather than being part of the visual tree.  Setting the background color of a Panel just doesn’t make sense and blurs its real purpose.  It’s little things like this that make it harder for new users to adopt the technology.
     
  6. Custom animations – this is still an area that I haven’t spent much time digging around in; I’ve only reused other people’s hard work.  Every time I see somebody hook into the render override to calculate the new animation position (rather than using the Dispatcher) it just feels so Win32.  Is it not possible to write your own Timeline-derived class that would perform custom animation based on the scene contents?
     
  7. More transparency – I’m talking about tracing what WPF and MILCore are doing.  When a texture needs reloading, it would be nice to see it logged somewhere, along with the reason why.  Why was a re-render necessary, and so on?  The solution here may be ETW, but I’m not sure we have the tools and a supporting MSDN article yet.
     
  8. Deployment – the story is better now that IE7 is being rolled out.  The install still takes too long and is painful for the average XP user.  I’m hoping Microsoft will start using its own technology and we will see a wider install base (something more than the now-defunct Microsoft Max).  What’s concerning is what comes next.  With WPF 1.5 being talked about, how much is this going to hurt…?  Please, no more reboots.
     
  9. Mid-life crisis – another concern is memory bloat.  The nature of WPF apps makes them prime candidates for mid-life crisis: objects are held long enough to survive a generation 1 collection, but cleared down and recreated frequently enough for this to become a real problem.  Generation 2 collections are bad for some kinds of applications; they are also bad when users check their process list to see which applications are being greedy with memory.  DataBinding and the way ItemsControls work help to a certain extent, but would it be better to use a memory pool for all objects rather than newing them straight off the heap?
     
  10. GPU versus CPU – one big plus point of WPF is its utilization of the GPU.  This is a big win over the competition.  One thing I don’t understand is the lack of GPU acceleration for certain operations.  Bitmap effects are one that springs to mind – why isn’t a pixel shader used here on hardware that supports it?  Is it concern over driver quality?  I would like to see more use of the shader model in the future – particle effects and other GPU features.  It makes great eye candy for demos.
     
  11. Plus one more – Guidance Needed – what was great about MFC and VB was that there was a set way of doing things.  Basically everybody wanted to look like Office or Explorer.  It was actually quite hard to be anything else – if you wanted to change the behavior of one of the common controls it could take weeks of effort.  We now have the tools to give us much more freedom, but there are only a handful of real-life applications and no new established look and feel.  The Web (in some ways) suffered from the same problem – standards were gradually established over time.  We need the next ‘Microsoft Office’ or ‘Amazon’ or ‘CNET’ for WPF applications.  You could just mimic Win32 – but then why not just use Win32? 

Lastly, please fix the MSDN feedback site.  It looks like the WPF team has stopped reviewing the feedback.  The versions of WPF listed include the July CTP and Beta 2 – but nothing later!  The list of operating systems doesn’t even include Vista.  It would be nice to submit feedback, especially now that this technology is getting into more developers’ hands.

