
How to build a self-descriptive web API [part I]

Some time ago I spoke at a Microsoft user group about subject-oriented programming and web services that speak natural language. Now that I have some time, I can explain how to build your web-facing API to be readable by humans rather than by robots. So, let’s start.

Robot is not human

First of all, let’s decide how our API should look. A “usual” WCF web endpoint looks like this:

http://mywonderfulhost/Service.svc?op=GetUserNamesByEmailAddress&email=joe@doe.com&format=json

All this means is that we have a WCF service, we are calling the operation GetUserNamesByEmailAddress with an email address parameter, and the output should be JSON formatted. This is the obvious way to build a web API: for robots to consume. But we want to be humans and expose a human-friendly web façade.

http://mywonderfulhost/json/getUser?joe@doe.com

This looks much better and passes exactly the same information to the service. So how is this done? First of all, let’s get rid of the annoying Service.svc. This can be done in various ways, but one of the better ones is an HttpModule.

We create a class implementing IHttpModule and, when a request begins, “translate” it from the human version to the robot version.

public class ProxyFormatter : IHttpModule {

   // every "human" URL gets rewritten onto this single WCF endpoint
   private const string _handler = "~/Service.svc";

   public void Init(HttpApplication context) {
      context.BeginRequest += _onBeginRequest;
   }

   public void Dispose() { }

   private void _onBeginRequest(object sender, EventArgs e) {
      var ctx = HttpContext.Current;
      if (!ctx.Request.AppRelativeCurrentExecutionFilePath.Contains(_handler)) {
         if (ctx.Request.HttpMethod == "GET") {
            // RemoveFirst is a small string extension method; "~/json/getUser"
            // becomes the path info passed to the service
            var method = ctx.Request.AppRelativeCurrentExecutionFilePath.RemoveFirst("~/");
            var args = ctx.Request.QueryString.ToString();
            ctx.RewritePath(_handler, method, args, false);
         }
      }
   }
}

Also, while we are already here, let’s make the service consumable from other origins too. Just add OPTIONS method handling and we’re done.

private void _onBeginRequest(object sender, EventArgs e) {
   var ctx = HttpContext.Current;
   // AllowedHosts is an optional configuration property of the module
   ctx.Response.AddHeader("Access-Control-Allow-Origin", AllowedHosts ?? "*");
   if (ctx.Request.HttpMethod == "OPTIONS") {
      // CORS preflight: answer it ourselves, no need to bother the service
      ctx.Response.AddHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
      ctx.Response.AddHeader("Access-Control-Allow-Headers", "Content-Type, Accept");
      ctx.Response.End();
   } else {
      if (!ctx.Request.AppRelativeCurrentExecutionFilePath.Contains(_handler)) {
         if (ctx.Request.HttpMethod == "GET") {
            var method = ctx.Request.AppRelativeCurrentExecutionFilePath.RemoveFirst("~/");
            var args = ctx.Request.QueryString.ToString();
            ctx.RewritePath(_handler, method, args, false);
         }
      }
   }
}
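For the module to take effect at all, it has to be registered in web.config. A minimal sketch (the type name MyApp.ProxyFormatter is an assumption based on the class above; IIS 7 integrated pipeline shown):

<system.webServer>
  <modules>
    <add name="ProxyFormatter" type="MyApp.ProxyFormatter, MyApp" />
  </modules>
</system.webServer>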

The next step is to parse the URL to extract the output format and the requested operation. All the information we need is inside WebOperationContext.Current.IncomingRequest. All we have to do now is parse it.

var req = WebOperationContext.Current.IncomingRequest;
if (!_getMethodInfo(req.UriTemplateMatch, out format, out method)) {
   WebOperationContext.Current.SetError(HttpStatusCode.PreconditionFailed, "Wrong request format. The correct format is: /operation/format (json|xml)");
   return null;
} else {
   // handle the correct request
}

Inside _getMethodInfo we’ll count the segments, find the proper output format, and deliver the verdict.

private bool _getMethodInfo(UriTemplateMatch match, out NodeResultFormat format, out string method) {
   var c = match.RelativePathSegments.Count;
   var f = Enum.GetNames(typeof(NodeResultFormat)).FirstOrDefault(n => n.EqualsIgnoreCase(match.RelativePathSegments.Last()));
   if (f.NotEmpty()) {
      format = (NodeResultFormat)Enum.Parse(typeof(NodeResultFormat), f);
      method = match.RelativePathSegments.Take(c - 1).ToArray().Join(".");
      return true;
   }
   format = NodeResultFormat.Unknown;
   method = string.Empty;
   return false;
}
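For reference, NodeResultFormat is assumed here to be a plain enum along these lines (it is not shown in the original listing):

public enum NodeResultFormat {
   Unknown,
   Json,
   Xml
}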

Now we know what output format is expected and what method was called by the consumer. So the next task is to “humanize” method names and parameters. The following methods all do exactly the same thing, but require different arguments in the query:

  • GetUserNamesByEmailAddress (select name from users where email=…)
  • GetUserNamesByLastLogin (select name from users where lastLogin=…)
  • GetUserNamesByOrganizationAndFirstAndLastName (select name from users where organization like … and firstName like … and…)
  • GetUserNamesByUserId (select name from users where uid=…)
  • GetUserNames (select name from users)

So, to make the end human’s life easier, we’ll create a helper data structure to hold all those possible values.

public class UserInfo {
   public string Email { get; set; }
   public DateTime LastLogin { get; set; }
   public string Organization { get; set; }
}
This class will be used only to hold input data (internally, we’ll figure out what kind of value was sent and try to match it to the data structure). This will let us infer which exact method should be called to bring the information.

In our particular case, a simple regex looking for “whatever@wherever”, like /.+@.+\..+/i, tells us to execute the GetUserNamesByEmailAddress override on the backend. If we find something like getUsers?1232234323 or getUsers?15-2-2013, we can be sure that GetUserNamesByLastLogin should be used.
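A minimal sketch of that dispatch idea (the resolver class, its name and its heuristics are mine, for illustration only):

using System;
using System.Text.RegularExpressions;

static class UserMethodResolver {
   // Classify the raw query argument to pick the proper GetUserNamesBy* override.
   public static string Resolve(string rawArg) {
      if (string.IsNullOrEmpty(rawArg))
         return "GetUserNames";                           // no filter at all
      if (Regex.IsMatch(rawArg, @".+@.+\..+", RegexOptions.IgnoreCase))
         return "GetUserNamesByEmailAddress";             // looks like an email
      long ticks;
      DateTime when;
      if (long.TryParse(rawArg, out ticks) || DateTime.TryParse(rawArg, out when))
         return "GetUserNamesByLastLogin";                // timestamp or date
      return "GetUserNamesByOrganizationAndFirstAndLastName";
   }
}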

This way we can handle all the common cases for the human customer and start simplifying our own life too, for example by creating self-descriptive automatic handlers in this method. But… we’ll speak about it next time.

Have a nice day (or night) and be good humans.

Installation of Windows Phone SDK 8.0 on Windows 7

In order to ignite my comeback to the community, I decided to start with Windows Phone development. First of all, I downloaded WPSDK 8 and started installing it on my good old Windows 7 x64 machine.


What’s the hack? Why does it want me to install Windows 8 with all those tiles on my screen? Digging deeper into the reasons, I found that the root cause of this strange requirement is the WP (not WordPress, Windows Phone) emulator, which takes advantage of Hyper-V technology on Windows 8. But who cares about it. Real heroes can live without emulators. So, let’s start hacking the WPSDK installer.

Let’s see first what WPExpress_full.exe is. 52 61 72 21 1a 07 00! SFX module detected; there’s a CAB inside. Let’s see:


Very nice. Let’s take a look at the main file (0). This is a WiX installer, so let’s unpack this MSI pack.


Interesting… a custom UI using Burn and ManagedUx from the WiX SDK. In other words, without recreating the WiX project it is almost impossible to rebuild the installer. So even if

<UxBlocker ShortName="CheckX64runningWin2008ServerOrWin8" Type="Stop" Condition="(VersionNT < v6.1) OR ((VersionNT = v6.1) AND (NTProductType < 3)) OR (NOT VersionNT64)" DisplayText="#loc.Win8X64Block"/>

can be changed, we’ll be unable to recompile it. Let’s try another way.

Inside manifest.xml we can find a list of all the packages with their sources. So we can download all of them and install them one by one, using the order from the manifest.

<Payload Id="ssceruntime_x64_msi" FilePath="packages\SSCE40\SSCERuntime_x64-enu.exe" FileSize="2638632" Hash="E33F355F5E83D93099A732E2ECE02E07818B2696" CertificateRootPublicKeyIdentifier="D37F6D0F2894D56049061A44596FFA88CBFD1B5B" CertificateRootThumbprint="19F8F76F4655074509769C20349FFAECCECD217D" DownloadUrl="http://go.microsoft.com/fwlink/?LinkId=257082&amp;clcid=0×409" Packaging="external" SourcePath="packages\SSCE40\SSCERuntime_x64-enu.exe" /><Payload Id="vcRuntimeMinimum_x64" FilePath="packages\vcRuntimeMinimum_amd64\vc_runtimeMinimum_x64.msi" FileSize="155648" Hash="CA08E6E42C30B01D27738E9F3191BEFF4C183D42" CertificateRootPublicKeyIdentifier="D37F6D0F2894D56049061A44596FFA88CBFD1B5B" CertificateRootThumbprint="19F8F76F4655074509769C20349FFAECCECD217D" DownloadUrl="http://go.microsoft.com/fwlink/?LinkId=257083&amp;clcid=0×409" Packaging="external" SourcePath="packages\vcRuntimeMinimum_amd64\vc_runtimeMinimum_x64.msi" /><Payload Id="vcRuntimeAdditional_x86" FilePath="packages\vcRuntimeAdditional_x86\vc_runtimeAdditional_x86.msi" FileSize="155648" Hash="0BEB1DB386D9E75E68C9E35EA2C426548570DDBB" CertificateRootPublicKeyIdentifier="D37F6D0F2894D56049061A44596FFA88CBFD1B5B" CertificateRootThumbprint="19F8F76F4655074509769C20349FFAECCECD217D" DownloadUrl="http://go.microsoft.com/fwlink/?LinkId=257085&amp;clcid=0×409" Packaging="external" SourcePath="packages\vcRuntimeAdditional_x86\vc_runtimeAdditional_x86.msi" />
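If you prefer to automate that, here is a small sketch (mine, not part of the SDK) that lists each Payload’s target path and download URL from manifest.xml:

using System;
using System.Linq;
using System.Xml.Linq;

class ManifestReader {
   static void Main() {
      // manifest.xml as extracted from the installer; the attribute names
      // match the Payload entries shown above
      var doc = XDocument.Load("manifest.xml");
      foreach (var p in doc.Descendants().Where(e => e.Name.LocalName == "Payload")) {
         Console.WriteLine("{0} <- {1}",
            (string)p.Attribute("FilePath"),
            (string)p.Attribute("DownloadUrl"));
      }
   }
}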

This method works. However, once I had everything installed I found some other annoyances: the XAML editor crashes Blend (but the old-school default XML editor method still works :)) and the Windows Phone 8 emulator is non-functional.

In the end we have WPSDK 7.1 fully functional, an XML editor for XAML, a non-functional emulator and, voilà, a working WPSDK 8.0 running on Windows 7.

Is it a good or a bad thing? You decide. But, as always, do not be greedy, and let developers work with their operating systems. Do not try to make people reinstall their work machines for the sake of one SDK.

Quick IT tip: How to build bootable USB stick

Because of my main job and the lack of human resources there, I invest less and less in the community. Thus I lost my MVP title. Sorry, guys. A ton of management tasks in a big company also keeps me from actual coding. However, I am still able to find some time for doing “real” things, such as Windows Embedded Standard 2011 image building. So today I will explain how to build a bootable USB flash disk with a couple of simple commands and without using special utilities.

 The world’s first USB drive, by Trek Technology

Why use a bootable USB stick instead of a regular CD or DVD-ROM? Well, it is more convenient, takes less storage, is faster, and is fully reusable. So let’s start.

1. Insert a USB flash drive :)
2. Run a command prompt shell as Administrator (just in case: typing “cmd” in the Start menu search box and pressing Ctrl+Shift+Enter runs it elevated)
3. Type “diskpart” to run the Microsoft DiskPart utility.

C:\Windows\system32>diskpart

Microsoft DiskPart version 6.1.7600
Copyright (C) 1999-2008 Microsoft Corporation.
On computer: TAMIRK-DEV

4. List your disks by typing “list disk” or, for those who like it shorter (like me), “lis dis”

DISKPART> lis dis

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          149 GB  1024 KB
  Disk 1    Online           75 GB     2 GB
  Disk 2    Online         3814 MB      0 B
  Disk 3    No Media           0 B      0 B
  Disk 4    No Media           0 B      0 B
  Disk 5    Online           14 GB      0 B

5. Identify your flash drive (in my case it is Disk 5)
6. Select this drive to mark it for work by using the “select disk 5” or “sel dis 5” command

DISKPART> sel dis 5

Disk 5 is now the selected disk.

7. Clean it (this will delete everything on the disk, so be careful) by using the “clean” or “cle” command.

DISKPART> cle

DiskPart succeeded in cleaning the disk.

8. Create a primary partition: “create partition primary” or “cre par pri”

DISKPART> cre par pri

DiskPart succeeded in creating the specified partition.

9. Select the new partition: “select partition 1” or “sel par 1”

DISKPART> sel par 1

Partition 1 is now the selected partition.

10. Mark it as the Active partition: “active” or “act”

DISKPART> act

DiskPart marked the current partition as active.

11. Format it: “format fs=ntfs quick” or “for fs=ntfs quick”

DISKPART> for fs=ntfs quick

  100 percent completed

DiskPart successfully formatted the volume.

12. And finally my favorite command: “assign” or “ass” to mark it ready and create a mount point

DISKPART> ass

DiskPart successfully assigned the drive letter or mount point.

13. Exit: “exit” or “exi” to return to the command shell

DISKPART> exi

Leaving DiskPart…

Now your thumb drive is ready and bootable, so you can start copying files from the ISO image onto it.
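By the way, the whole sequence can be dropped into a plain text file and fed to DiskPart in one shot; a sketch (disk number 5 is from my machine, double-check yours before running):

REM usb-boot.txt -- run with: diskpart /s usb-boot.txt
select disk 5
clean
create partition primary
select partition 1
active
format fs=ntfs quick
assign
exit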

Another option is to work with volumes rather than with disks. The only difference is in steps 4-6: instead of “lis dis” use “lis vol”, and instead of “sel dis” use “sel vol”. Maybe it is a more convenient way to work, because you can identify partitions by labels and sizes rather than by sizes only.

DISKPART> lis vol

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 0     E                       DVD-ROM         0 B  No Media
  Volume 1     G                       DVD-ROM         0 B  No Media
  Volume 2         System Rese  NTFS   Partition    100 MB  Healthy    System
  Volume 3     C                NTFS   Partition     68 GB  Healthy    Boot
  Volume 4     D   DATA         NTFS   Partition     80 GB  Healthy
  Volume 5     F   READYBOOST   FAT    Removable   3812 MB  Healthy
  Volume 6     H                       Removable       0 B  No Media
  Volume 7     I                       Removable       0 B  No Media
  Volume 8     K                NTFS   Removable     14 GB  Healthy

If you have already copied your image onto the disk, you can update the MBR by using a special utility called BootSect.exe, shipped with the WAIK. In our case (Windows 7 Embedded), you’ll have to update the master boot code to use BOOTMGR (Vista and up) rather than NTLDR (XP and down):

BOOTSECT.EXE /NT60 K: /mbr

We’re done. Have a good day and be good people. Additional information from the USB core guys at MS can be found on their brand new blog (hope it will be kept up to date).

And finally, just so you know, here is how CDs are made, by Discovery Channel.

Real singleton approach in WPF application

One of the most common problems in WPF is memory/processor time consumption. Yes, WPF is a rather greedy framework. It becomes even greedier when using unmanaged resources, such as memory-mapped files or interop images. To take care of this, you can implement the singleton pattern for the application and share a single unmanaged instance among different application resources. So today we’ll try to create one large in-memory dynamic bitmap and share it between different instances of WPF controls. Let’s start.

The Singleton

First of all, let’s create our single-instance source. The pattern is straightforward: create a class implementing INotifyPropertyChanged, give it a private constructor, and add a static member that returns the single instance of the class.

public class MySingleton : INotifyPropertyChanged {

   // backing fields (the bitmap source itself is created in _init, shown below)
   private static MySingleton _instance;
   private InteropBitmap _source;

   #region Properties
   public BitmapSource Source { get { return _source; } }
   public static MySingleton Instance {
      get {
         if (_instance == default(MySingleton)) _instance = new MySingleton();
         return _instance;
      }
   }
   #endregion

   #region ctor
   private MySingleton() { _init(); }
   #endregion

Now we need to create this single instance inside our XAML program. For this, we have the great markup extension x:Static:

<Window.DataContext>
    <x:StaticExtension Member="l:MySingleton.Instance" />
</Window.DataContext>

Now we need a way to do all the dirty work inside MySingleton and keep the classes using it as simple as possible. For this purpose we’ll register a class handler to catch all GotFocus routed events, check the target of the event, and rebind the single instance to the newly focused element. How to do this? Simple as 1-2-3.

Create a class handler:

EventManager.RegisterClassHandler(typeof(FrameworkElement), FrameworkElement.GotFocusEvent, (RoutedEventHandler)_onAnotherItemFocused);

Check whether the selected and focused item is of the right type:

private void _onAnotherItemFocused(object sender, RoutedEventArgs e) {
   // watch IsSelected on the item that just got focus; the lambda body is shown next
   DependencyPropertyDescriptor
      .FromProperty(ListBoxItem.IsSelectedProperty, typeof(ListBoxItem))
      .AddValueChanged(sender, (s, ex) => {

and reset the binding inside it:

var item = s as ListBoxItem;
var img = item.Content as Image;
if (_current != null && _current.Target is Image && _current.Target != img) {
   ((Image)_current.Target).ClearValue(Image.SourceProperty);
}
if (img != null) {
   _current = new WeakReference(img);
   img.SetBinding(Image.SourceProperty, _binding);
}

We’re almost done. Now a bit of grease to make the source bitmap shine:

var count = (uint)(_w * _h * 4);
// 0x04 = PAGE_READWRITE, 0xF001F = FILE_MAP_ALL_ACCESS
var section = CreateFileMapping(new IntPtr(-1), IntPtr.Zero, 0x04, 0, count, null);
_map = MapViewOfFile(section, 0xF001F, 0, 0, count);
_source = Imaging.CreateBitmapSourceFromMemorySection(section, _w, _h, PixelFormats.Bgr32, (int)(_w * 4), 0) as InteropBitmap;
_binding = new Binding {
   Mode = BindingMode.OneWay,
   Source = _source
};
CompositionTarget.Rendering += (s, e) => { _invalidate(); };

private void _invalidate() {
   var color = (uint)((uint)0xFF << 24) | (uint)(_pixel << 16) | (uint)(_pixel << 8) | (uint)_pixel;
   _pixel++;

   unsafe {
      uint* pBuffer = (uint*)_map.ToPointer(); // _map is the IntPtr returned by MapViewOfFile
      int _pxs = (_w * _h);
      for (var i = 0; i < _pxs; i++) {
         pBuffer[i] = color;
      }
   }
   _source.Invalidate();
   OnPropertyChanged("Source");
}
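Two pieces the snippets above rely on but do not show: the kernel32 P/Invoke signatures and the INotifyPropertyChanged plumbing behind OnPropertyChanged. A standard sketch of both (DllImport needs using System.Runtime.InteropServices):

[DllImport("kernel32.dll", SetLastError = true)]
private static extern IntPtr CreateFileMapping(IntPtr hFile, IntPtr lpAttributes,
   uint flProtect, uint dwMaximumSizeHigh, uint dwMaximumSizeLow, string lpName);

[DllImport("kernel32.dll", SetLastError = true)]
private static extern IntPtr MapViewOfFile(IntPtr hFileMappingObject, uint dwDesiredAccess,
   uint dwFileOffsetHigh, uint dwFileOffsetLow, uint dwNumberOfBytesToMap);

public event PropertyChangedEventHandler PropertyChanged;

private void OnPropertyChanged(string propertyName) {
   var handler = PropertyChanged;
   if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
}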

And we’re done. The usage of this approach is very simple: there is no usage at all. Everything happens automagically inside the MySingleton class; all you need is to set the static data context and add images.

<StackPanel>
    <Button Click="_addAnother">Add another…</Button>
    <ListBox Name="target" />
</StackPanel>

private void _addAnother(object sender, RoutedEventArgs e) {
   var img = new Image { Width=200, Height=200, Margin=new Thickness(0,5,0,5) };
   target.Items.Add(img);
   this.Height += 200;
}

To summarize: in this article we learned how to use singletons as data sources for a XAML application, how to reuse one instance across WPF, how to attach to routed events externally, and how to handle dependency property changes from outside the owner class. Have a nice day and be good people.

Source code for this article (21k) >>

To see it work, press the “Add another…” button a number of times and then start selecting the images used as listbox items. Pay attention to the working set of the application: because only one bitmap instance is in use, it does not grow.

INotifyPropertyChanged auto wiring or how to get rid of redundant code

For the last week, most of the WPF disciples have been discussing how to get rid of the hardcoded property name string inside INotifyPropertyChanged implementations while keeping automatic properties and working WPF binding. The thread was started by Karl Shifflett, who proposed an interesting method of using StackFrame for this task. During this thread other methods were proposed, including code snippets, R#, the Observer pattern, the Cinch framework, static reflection, weak references and others. I also proposed the method we’re using for our classes and promised to blog about it. So the topic today is how to use PostSharp to wire up an automatic implementation of the INotifyPropertyChanged interface based on automatic setters only.

My 5 ¢

So, I want my code to look like this:

public class AutoWiredSource {
   public double MyProperty { get; set; }
   public double MyOtherProperty { get; set; }
}

while still being fully notified about any change to any property, and able to bind to those properties.

<StackPanel DataContext="{Binding Source={StaticResource source}}">
    <Slider Value="{Binding Path=MyProperty}" />
    <Slider Value="{Binding Path=MyProperty}" />
</StackPanel>

How to achieve it? How to make the compiler replace my code with the following?

private double _MyProperty;
public double MyProperty {
   get { return _MyProperty; }
   set {
      if (value != _MyProperty) {
         _MyProperty = value; OnPropertyChanged("MyProperty");
      }
   }
}
public event PropertyChangedEventHandler PropertyChanged;
internal void OnPropertyChanged(string propertyName) {
   if (string.IsNullOrEmpty(propertyName)) throw new ArgumentNullException("propertyName");

   var handler = PropertyChanged as PropertyChangedEventHandler;
   if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
}

Simple: use aspect-oriented programming to inject a set of instructions into the compiled assembly.

First of all, we have to build an attribute that will be used for marking classes that require change tracking. This attribute should be a combined (compound) aspect that includes all the aspects used for change tracking. All we’re doing here is finding all setter methods and adding the notification aspect to them.

[Serializable, DebuggerNonUserCode, AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class, AllowMultiple = false, Inherited = false),
MulticastAttributeUsage(MulticastTargets.Class, AllowMultiple = false, Inheritance = MulticastInheritance.None, AllowExternalAssemblies = true)]
public sealed class NotifyPropertyChangedAttribute : CompoundAspect {
   public int AspectPriority { get; set; }

   public override void ProvideAspects(object element, LaosReflectionAspectCollection collection) {
      Type targetType = (Type)element;
      collection.AddAspect(targetType, new PropertyChangedAspect { AspectPriority = AspectPriority });
      foreach (var info in targetType.GetProperties(BindingFlags.Public | BindingFlags.Instance).Where(pi => pi.GetSetMethod() != null)) {
         collection.AddAspect(info.GetSetMethod(), new NotifyPropertyChangedAspect(info.Name) { AspectPriority = AspectPriority });
      }
   }
}

The next aspect is the change-tracking composition aspect, which composes the INotifyPropertyChanged implementation into the target class:

[Serializable]
internal sealed class PropertyChangedAspect : CompositionAspect {
   public override object CreateImplementationObject(InstanceBoundLaosEventArgs eventArgs) {
      return new PropertyChangedImpl(eventArgs.Instance);
   }

   public override Type GetPublicInterface(Type containerType) {
      return typeof(INotifyPropertyChanged);
   }

   public override CompositionAspectOptions GetOptions() {
      return CompositionAspectOptions.GenerateImplementationAccessor;
   }
}

And the next one, the most interesting, we will put onto method boundaries for tracking. There are some highlights here. First, we do not want to fire the PropertyChanged event if the actual value did not change, thus we handle the method on its entry for the check and on its successful exit for the notification.

[Serializable]
internal sealed class NotifyPropertyChangedAspect : OnMethodBoundaryAspect {
   private readonly string _propertyName;

   public NotifyPropertyChangedAspect(string propertyName) {
      if (string.IsNullOrEmpty(propertyName)) throw new ArgumentNullException("propertyName");
      _propertyName = propertyName;
   }

   public override void OnEntry(MethodExecutionEventArgs eventArgs) {
      var targetType = eventArgs.Instance.GetType();
      var setSetMethod = targetType.GetProperty(_propertyName);
      if (setSetMethod == null) throw new AccessViolationException();
      var oldValue = setSetMethod.GetValue(eventArgs.Instance,null);
      var newValue = eventArgs.GetReadOnlyArgumentArray()[0];
      if (Equals(oldValue, newValue)) eventArgs.FlowBehavior = FlowBehavior.Return; // value comparison; == on boxed values would compare references
   }

   public override void OnSuccess(MethodExecutionEventArgs eventArgs) {
      var instance = eventArgs.Instance as IComposed<INotifyPropertyChanged>;
      var imp = instance.GetImplementation(eventArgs.InstanceCredentials) as PropertyChangedImpl;
      imp.OnPropertyChanged(_propertyName);
   }
}

We’re almost done; all we have to do is create the class that implements INotifyPropertyChanged, with an internal method we can usefully call:

[Serializable]
internal sealed class PropertyChangedImpl : INotifyPropertyChanged {
   private readonly object _instance;

   public PropertyChangedImpl(object instance) {
      if (instance == null) throw new ArgumentNullException("instance");
      _instance = instance;
   }

   public event PropertyChangedEventHandler PropertyChanged;

   internal void OnPropertyChanged(string propertyName) {
      if (string.IsNullOrEmpty(propertyName)) throw new ArgumentNullException("propertyName");

      var handler = PropertyChanged as PropertyChangedEventHandler;
      if (handler != null) handler(_instance, new PropertyChangedEventArgs(propertyName));
   }
}

We’re done. The last thing is to reference the PostSharp Laos and Public assemblies and tell the compiler to use the PostSharp targets (inside your project file (*.csproj)):

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
<PropertyGroup>
  <DontImportPostSharp>True</DontImportPostSharp>
</PropertyGroup>
<Import Project="PostSharp\PostSharp-1.5.targets" />

Now we’re done. We can use clean syntax like the following to make all properties with public setters traceable. The only disadvantage is that you’ll have to drag two PostSharp files along with your project. But after all, it is much more convenient than manual change-notification tracking all over your project.

[NotifyPropertyChanged]
public class AutoWiredSource {
   public double MyProperty { get; set; }
}

Have a nice day and be good people. Also, try to think what other extremely useful things can be done with PostSharp (or any other aspect-oriented engine).

Source code for this article (1,225 KB)>>

TFS licensing model demystification, or what should I buy for my company in order not to step on a licensing mine?

Microsoft loves cumbersome licensing models. This is not because of their evil-heartedness, but because it makes it possible to get more from bigger companies and less from smaller ones. However, when you face the real decision about how many, and what kind of, licenses you have to purchase, you get stuck. Today we’ll try to make things clearer, at least for Team Foundation Server and Visual Studio, which are the very basis for any software house that develops with Microsoft technologies.

The cumbersomeness of the TFS licensing model
© image for cumbersomeness proposal via Willy-Peter Schaub of SA Architect

To make things even simpler, let’s assume that we do not need the TFS Workgroup edition (which is a special edition for teams of 5 TFS users or fewer) and that we are not using TFS Device CALs (as opposed to a User CAL, this Client Access License permits one device to be used by any number of users; this kind of CAL is good for kiosks rather than for development environments). Also, the Test Load Agent needs its own license. So now, under all those assumptions, let’s start.

To work with TFS we need:

  1. One or more Team Foundation Server licenses
  2. More than one Visual Studio client (editions can vary)
  3. Optionally, one or more Software Assurance packages, which can be licensed separately or together with an MSDN subscription
  4. … and some other optional tools

TFS Licensing

Each instance of TFS needs its own license. Even if you have a mirrored deployment of TFS, you need a server license for each instance. You also need a separate license if you are running the TFS data tier on a SQL Server cluster or using TFS Proxy. It should be clear that in addition to the TFS license you’ll need Windows Server and SQL Server licenses (if the SQL Server is used especially for TFS). You can also put the data tier on an existing SQL Server; in this case you need only another TFS license, without SQL.

You do not need an additional Team Foundation Server license for the machine running the Team Foundation Build services. This machine also does not need another CAL, except the one for the system user that initiates builds.

To summarize: each instance of TFS needs a server license in addition to CALs and other server licenses (such as Windows, SQL, SharePoint, IIS, etc.).

Client Access License

In addition to the server license, you also need a CAL for each user who reads from and writes to TFS. Several Visual Studio editions include a CAL:

  • Visual Studio 2008 Team Suite
  • Visual Studio 2008 Architecture edition
  • Visual Studio 2008 Development edition
  • Visual Studio 2008 Test edition
  • Visual Studio 2008 Database edition

Visual Studio 2008 Professional does not include a CAL. So each contributor needs one of the Visual Studio editions that includes a CAL. The TFS client can be installed alongside any of those editions and does not require an additional license.

You do not need an additional license when you are using TFS only to:

  • Create work items, bugs, etc.
  • Query for work items
  • Update work items

In other words, product definition people, system analysts, managers and “bug filers” do not require an additional CAL. Note that they will probably need proper Microsoft Office licenses to use Excel or Project for this; however, they can also use TFS Web Access (browser) or any other 3rd-party tool without purchasing a separate CAL.

Also, you need only one CAL per server product. In other words, if you are using TFS on Windows Server, you do not need both a TFS CAL and a Windows Server CAL. Those CALs also cover all earlier versions of the products in use.

To summarize: a TFS user does not need an additional CAL if he has a proper license for Visual Studio Team Suite, or if he uses TFS only for bug/issue tracking.

Software assurance vs. MSDN

MSDN is more expensive than SA (Software Assurance); however, it includes SA and provides some benefits by allowing access to several Microsoft products for development and testing purposes.

There are two different MSDN editions: Professional and Premium. The difference between them (besides price) is that the Premium edition includes the Windows Server systems and Microsoft Office. Thus, with the Professional edition you get Software Assurance for Visual Studio 2008 Professional, while with Premium you get it for all the other editions.

Let’s simulate the results

For a small software house with 10 developers (two architects, 1 DBA and 3 QA engineers), two product definition guys, and a manager, we’ll need (in addition to OS, other server, and Office licenses):

  • 1 TFS license
  • 2 Visual Studio 2008 Architecture edition
  • 1 Visual Studio 2008 Database edition
  • 3 Visual Studio 2008 Test edition
  • 4 Visual Studio 2008 Development edition
  • 1<n<10 MSDN Premium licenses (n being the number of employees who need one for testing or development purposes)
  • 10-n SA licenses (if SA is required)
  • An additional CAL for the build machine

I think it has now become a bit clearer. For additional information regarding the TFS licensing model, please refer to the Visual Studio Team System 2008 Licensing White Paper or ask your local licensing expert at Microsoft.

Visual Studio debugger related attributes cheat sheet

There are some debugger-oriented attributes in .NET; however, 70% of developers do not even know they exist, and 95% have no idea what they do and how to use them. Today we’ll try to shed light on what those attributes do and how to get the best out of using them.

First of all, let’s define what we want to get from the debugger in VS:

Term         What it actually does
Step Into    Steps into the immediate child (that is what F11 does in the standard VS layout)
Step Over    Skips to any depth (that is what F10 does)
Step Deeper  Steps into, bypassing code marked with a certain attribute
Run Through  Steps into, but only one level. All lower levels will be stepped over.

Now that we have our set of terms, we can learn what JMC means. It is not a famous whisky brand or another car company. It is the Just My Code option, checked or unchecked in the “Options” dialog inside Visual Studio.


Now it is the attributes’ turn. There are four attributes (that I know about) related to the debugger and used by me for efficient programming: DebuggerHidden, DebuggerNonUserCode, DebuggerStepThrough and DebuggerStepperBoundary. We will use only the first three. DebuggerStepperBoundary is the most secret attribute; it is relevant only when debugging in a multithreaded environment. It is used to avoid the misleading effect that might appear when a context switch is made within code that has DebuggerNonUserCode applied. In other words, use it when you need to step through in Thread A while keeping Thread B running at the same time.

So let’s see the effects that occur when you use those debugger attributes, try to Step Into a place where such an attribute is applied, or set a breakpoint there. When Just My Code (JMC) is checked, all those attributes behave the same: they Step Deeper. However, when JMC is turned off (as in my picture) they begin to behave differently.

Attribute            Step Into    Breakpoint
DebuggerHidden       Step Deeper  Step Deeper
DebuggerNonUserCode  Step Into    Step Into
DebuggerStepThrough  Step Deeper  Step Into

As you can see, in this case

  • DebuggerNonUserCode is respected both for F11 (Step Into) and for Breakpoints
  • DebuggerStepThrough is respected only for Breakpoints
  • DebuggerHidden is not respected at all, just like when JMC is checked.

Bottom line: if you want people to decide whether or not to enter your hidden methods, use the DebuggerNonUserCode attribute. If you prefer them not to even know those methods exist, use DebuggerHidden. If you want them to be able to put Breakpoints and stop on them, but keep running without explicit action, use DebuggerStepThrough.
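To make the table concrete, here is where those attributes actually go (a trivial sketch; the class and method names are mine):

using System.Diagnostics;

class Helpers {
   [DebuggerHidden]
   static void CompletelyInvisible() { }   // F11 and breakpoints pass by

   [DebuggerNonUserCode]
   static void PolitelySkipped() { }       // stepped over, but reachable with JMC off

   [DebuggerStepThrough]
   static void SteppedThrough() { }        // stepped over, yet breakpoints inside still hit
}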

Have a nice day and be good people. Happy, other-developer-friendly debugging.

Small bonus: to visualize your struct, class, delegate, enum, field, property or even assembly for the debugger user, you can use the DebuggerDisplay attribute (you put executable expressions into {}, for example [DebuggerDisplay("Value = {X}:{Y}")]).
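For example, a minimal sketch (the Point type and its properties are mine, for illustration):

using System.Diagnostics;

[DebuggerDisplay("Value = {X}:{Y}")]
class Point {
   public int X { get; set; }
   public int Y { get; set; }
}
// the debugger now shows "Value = 3:5" instead of the type name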

Thanks to Boris for deep investigation

How to calculate CRC in C#?

First of all, I want to beg your pardon for the posting frequency lately. I’m completely understaffed and have a ton of things to do at my job. That’s why today I’ll just write a quick post about checksum calculation in C#. It might be very useful for any of you working with devices or external systems.

BIOS CRC error on an old ThinkPad

CRC, the Cyclic Redundancy Check, is an algorithm widely used in different communication protocols and in packing and packaging algorithms to assure the robustness of data. The idea behind it is simple: calculate a unique checksum (frame check sequence) for each data frame, based on its content, and stick it at the end of each meaningful message. Once the data is received, it’s possible to perform the same calculation and compare results: if the results match, the message is OK.

There are two common kinds of CRC: 16 and 32 bit. There are also less used checksums of 8 and 64 bits. The idea is to append a string of zeros to the frame and divide it, modulo two, by a generator polynomial containing one more bit than the checksum to be generated. This is very similar to performing a bit-wise XOR operation over the frame, and the remainder is actually our CRC.

In many industries, the polynomial is first used to create CRC lookup tables, which are then applied for performance reasons. The classic polynomials are 0xA001 for 16 bit (the reversed form of CRC-16’s 0x8005) and 0x04C11DB7 for 32 bit, defined by IEEE 802.3. Since we’re in C#, processing bytes least-significant-bit first, we’ll use reversed polynomials: 0x8408 for 16 bit (the reversed CCITT polynomial) and 0xEDB88320 for 32 bit. Those are the polynomials we’re going to use in our sample.
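If you want to double-check the “reversed” relationship between the 32-bit constants, a tiny sketch does it (Reverse32 is mine):

static uint Reverse32(uint v) {
   uint r = 0;
   for (int i = 0; i < 32; i++) {
      r = (r << 1) | (v & 1);   // shift the lowest bit of v into r
      v >>= 1;
   }
   return r;
}
// Reverse32(0x04C11DB7) returns 0xEDB88320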

So let’s start. Because a CRC is a hash algorithm after all, we can derive our classes from the System.Security.Cryptography.HashAlgorithm class.

public class CRC16 : HashAlgorithm {
public class CRC32 : HashAlgorithm {

Then, upon first creation, we’ll generate hash tables with CRC values to enhance future performance. It’s all about a table of values for bytes from 0 to 255, so we should calculate it only once and can then use it statically.

[CLSCompliant(false)]
public CRC16(ushort polynomial) {
   HashSizeValue = 16;
   _crc16Table = (ushort[])_crc16TablesCache[polynomial];
   if (_crc16Table == null) {
      _crc16Table = CRC16._buildCRC16Table(polynomial);
      _crc16TablesCache.Add(polynomial, _crc16Table);
   }
   Initialize();
}

[CLSCompliant(false)]
public CRC32(uint polynomial) {
   HashSizeValue = 32;
   _crc32Table = (uint[])_crc32TablesCache[polynomial];
   if (_crc32Table == null) {
      _crc32Table = CRC32._buildCRC32Table(polynomial);
      _crc32TablesCache.Add(polynomial, _crc32Table);
   }
   Initialize();
}

Then let’s calculate the tables:

private static ushort[] _buildCRC16Table(ushort polynomial) {
   // 256 values representing ASCII character codes.
   ushort[] table = new ushort[256];
   for (ushort i = 0; i < table.Length; i++) {
      ushort value = 0;
      ushort temp = i;
      for (byte j = 0; j < 8; j++) {
         if (((value ^ temp) & 0x0001) != 0) {
            value = (ushort)((value >> 1) ^ polynomial);
         } else {
            value >>= 1;
         }
         temp >>= 1;
      }
      table[i] = value;
   }
   return table;
}

private static uint[] _buildCRC32Table(uint polynomial) {
   uint crc;
   uint[] table = new uint[256];

   // 256 values representing ASCII character codes.
   for (int i = 0; i < 256; i++) {
      crc = (uint)i;
      for (int j = 8; j > 0; j--) {
         if ((crc & 1) == 1)
            crc = (crc >> 1) ^ polynomial;
         else
            crc >>= 1;
      }
      table[i] = crc;
   }

   return table;
}

The result will look like this for 32 bits:

        0x00, 0x31, 0x62, 0x53, 0xC4, 0xF5, 0xA6, 0x97,
        0xB9, 0x88, 0xDB, 0xEA, 0x7D, 0x4C, 0x1F, 0x2E,
        0x43, 0x72, 0x21, 0x10, 0x87, 0xB6, 0xE5, 0xD4,
        0xFA, 0xCB, 0x98, 0xA9, 0x3E, 0x0F, 0x5C, 0x6D,
        0x86, 0xB7, 0xE4, 0xD5, 0x42, 0x73, 0x20, 0x11,
        0x3F, 0x0E, 0x5D, 0x6C, 0xFB, 0xCA, 0x99, 0xA8,
        0xC5, 0xF4, 0xA7, 0x96, 0x01, 0x30, 0x63, 0x52,
        0x7C, 0x4D, 0x1E, 0x2F, 0xB8, 0x89, 0xDA, 0xEB,
        0x3D, 0x0C, 0x5F, 0x6E, 0xF9, 0xC8, 0x9B, 0xAA,
        0x84, 0xB5, 0xE6, 0xD7, 0x40, 0x71, 0x22, 0x13,
        0x7E, 0x4F, 0x1C, 0x2D, 0xBA, 0x8B, 0xD8, 0xE9,
        0xC7, 0xF6, 0xA5, 0x94, 0x03, 0x32, 0x61, 0x50,
        0xBB, 0x8A, 0xD9, 0xE8, 0x7F, 0x4E, 0x1D, 0x2C,
        0x02, 0x33, 0x60, 0x51, 0xC6, 0xF7, 0xA4, 0x95,
        0xF8, 0xC9, 0x9A, 0xAB, 0x3C, 0x0D, 0x5E, 0x6F,
        0x41, 0x70, 0x23, 0x12, 0x85, 0xB4, 0xE7, 0xD6,
        0x7A, 0x4B, 0x18, 0x29, 0xBE, 0x8F, 0xDC, 0xED,
        0xC3, 0xF2, 0xA1, 0x90, 0x07, 0x36, 0x65, 0x54,
        0x39, 0x08, 0x5B, 0x6A, 0xFD, 0xCC, 0x9F, 0xAE,
        0x80, 0xB1, 0xE2, 0xD3, 0x44, 0x75, 0x26, 0x17,
        0xFC, 0xCD, 0x9E, 0xAF, 0x38, 0x09, 0x5A, 0x6B,
        0x45, 0x74, 0x27, 0x16, 0x81, 0xB0, 0xE3, 0xD2,
        0xBF, 0x8E, 0xDD, 0xEC, 0x7B, 0x4A, 0x19, 0x28,
        0x06, 0x37, 0x64, 0x55, 0xC2, 0xF3, 0xA0, 0x91,
        0x47, 0x76, 0x25, 0x14, 0x83, 0xB2, 0xE1, 0xD0,
        0xFE, 0xCF, 0x9C, 0xAD, 0x3A, 0x0B, 0x58, 0x69,
        0x04, 0x35, 0x66, 0x57, 0xC0, 0xF1, 0xA2, 0x93,
        0xBD, 0x8C, 0xDF, 0xEE, 0x79, 0x48, 0x1B, 0x2A,
        0xC1, 0xF0, 0xA3, 0x92, 0x05, 0x34, 0x67, 0x56,
        0x78, 0x49, 0x1A, 0x2B, 0xBC, 0x8D, 0xDE, 0xEF,
        0x82, 0xB3, 0xE0, 0xD1, 0x46, 0x77, 0x24, 0x15,
        0x3B, 0x0A, 0x59, 0x68, 0xFF, 0xCE, 0x9D, 0xAC

Now, all we have to do upon each request is look up the related value in this table and XOR it:

protected override void HashCore(byte[] buffer, int offset, int count) {
   // note: the upper bound is offset + count, not count alone
   for (int i = offset; i < offset + count; i++) {
      ulong ptr = (_crc & 0xFF) ^ buffer[i];
      _crc >>= 8;
      _crc ^= _crc32Table[ptr];
   }
}

new public byte[] ComputeHash(Stream inputStream) {
   byte[] buffer = new byte[4096];
   int bytesRead;
   while ((bytesRead = inputStream.Read(buffer, 0, 4096)) > 0) {
      HashCore(buffer, 0, bytesRead);
   }
   return HashFinal();
}

protected override byte[] HashFinal() {
   byte[] finalHash = new byte[4];
   ulong finalCRC = _crc ^ _allOnes;
   finalHash[0] = (byte)((finalCRC >> 0) & 0xFF);
   finalHash[1] = (byte)((finalCRC >> 8) & 0xFF);
   finalHash[2] = (byte)((finalCRC >> 16) & 0xFF);
   finalHash[3] = (byte)((finalCRC >> 24) & 0xFF);
   return finalHash;
}
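Putting it together, a quick usage sketch (assuming the CRC32 class above exposes the constructor and ComputeHash shown, and that Initialize presets _crc to all ones):

using System;
using System.IO;
using System.Text;

class Demo {
   static void Main() {
      var crc = new CRC32(0xEDB88320);   // reversed IEEE 802.3 polynomial
      byte[] data = Encoding.ASCII.GetBytes("Hello CRC");
      byte[] checksum = crc.ComputeHash(new MemoryStream(data));
      Console.WriteLine(BitConverter.ToString(checksum));
   }
}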

We’re done. Have a good time and be good people. Also, I want to thank Boris for helping me with this article. He promised to write here some day…

Source code for this article

WPF Line-Of-Business labs and Silverlight vs. Flash

A small update today (mostly interesting links)… During my last “Smart Client” session I was asked about WPF LOB application development labs. There are two full labs I know about:

Both labs include the WPF Ribbon and DataGrid; Southridge also comes with an M-V-VM design sample and some other interesting features. It seems to me that some parts of those labs can easily be used “as-is” for production-level applications, as was done with the SCE starter kit, which turned into TimesReader (which, by the way, has a free version again).

Line of Business Hands-On-Lab Material

For those who are still trying to decide what to use for their next killer app, I propose reading the following article from Jordan, which compares Silverlight and Flash. Then see the Composite Application Guidance on using Prism for Silverlight development. Here is a video of its usage by Adam Kinney from Channel 9:

Prism for Silverlight

Have a nice day and be good people

Slides and decks from Smart Client Development session

A great thank-you to everybody who attended yesterday’s “Smart Client development” session. As promised, please find the slides and decks from this session below.
