
Recursive delete from IsolatedStorage and other time savers for Windows Phone development

As you probably noticed, I have returned to the consulting field, so I started to publish here again. Today we’ll speak about some quick extension methods that help me work with Windows Phone. We’ll start with IsolatedStorageFile (frankly, I do not know why they called it “File”).

First of all, let’s make a method to delete the directory recursively, including files inside.

public static void DeleteDirectory(this IsolatedStorageFile store, string dir, bool recursive) {
            if (store.DirectoryExists(dir)) {
                if (recursive) {
                    foreach (var directory in store.GetDirectoryNames(Path.Combine(dir, "*"))) {
                        store.DeleteDirectory(Path.Combine(dir, directory), true);
                    }
                    foreach (var file in store.GetFileNames(Path.Combine(dir, "*"))) {
                        store.DeleteFile(Path.Combine(dir, file));
                    }
                }
                store.DeleteDirectory(dir);
            }
        }

Usage is rather straightforward (like the regular IsolatedStorageFile.DeleteDirectory method, but with an additional parameter):

var store = IsolatedStorageFile.GetUserStoreForApplication();
store.DeleteDirectory("MyDirectory", true);

With recursive == true it will delete everything inside the directory and then the directory itself; without it, it will work exactly like the original method.

As you know, the Windows Phone API works only with XDocument and System.Xml.Linq, which is a good thing, but sometimes it is nasty to get the value of an attribute you are not sure is there. So I wrote a couple of methods to make the syntax cleaner.

public static string GetAttribute(this XElement element, string name) {
            if (element.HasAttribute(name)) return element.Attribute(name).Value;
            else return string.Empty;
        }

public static bool HasAttribute(this XElement element, string name) {
            return element.HasAttributes && element.Attribute(name) != null;
        }

Usage is simple:

var items = itm.Descendants("item").Where(e => e.GetAttribute("type") != "default");
foreach (var item in items) {
    var pid = item.GetAttribute("id");
    // ...
}
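These extensions compile against plain System.Xml.Linq, so their fallback behavior can be checked outside the phone. The item markup below is invented for illustration:

```csharp
using System;
using System.Xml.Linq;

public static class XmlExtensions {
    public static bool HasAttribute(this XElement element, string name) {
        return element.HasAttributes && element.Attribute(name) != null;
    }

    // Returns the attribute value, or an empty string when the attribute is missing.
    public static string GetAttribute(this XElement element, string name) {
        return element.HasAttribute(name) ? element.Attribute(name).Value : string.Empty;
    }
}

public class XmlExtensionsDemo {
    public static void Main() {
        var item = XElement.Parse("<item id=\"42\" />");
        Console.WriteLine(item.GetAttribute("id"));                   // prints "42"
        Console.WriteLine(item.GetAttribute("type") == string.Empty); // prints "True"
    }
}
```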

Another problem is the complicated syntax of visual tree manipulations. Thus I wrote some handy methods to handle it.

Check whether the tree contains an object

public static bool IsInVisualTree(this DependencyObject element, DependencyObject treeRoot) {
            return element.GetVisualAncestors().Contains(treeRoot);
        }

Getting all visual ancestors of an element out of the visual tree on the phone:

public static IEnumerable<DependencyObject> GetVisualAncestors(this DependencyObject element) {
            DependencyObject candidate = null;
            while (element != null) {
                candidate = VisualTreeHelper.GetParent(element);
                if (candidate == null) {
                    var asFe = element as FrameworkElement;
                    if (asFe != null)
                        candidate = asFe.Parent;
                }
                element = candidate;
                if (element != null)
                    yield return element;
            }

        }

…and descendants for the element inside the tree.

public static IEnumerable<DependencyObject> GetVisualDescendants(this DependencyObject element) {
            int childCount = VisualTreeHelper.GetChildrenCount(element);
            for (int i = 0; i < childCount; i++) {
                var child = VisualTreeHelper.GetChild(element, i);
                yield return child;
                foreach (var descendant in child.GetVisualDescendants()) {
                    yield return descendant;
                }
            }
        }

 

Here are some usage examples

if (originalSource.GetVisualAncestors().OfType<ItemBase>().Any()) { … }

itemsHost = this.GetVisualDescendants().OfType<Panel>().Where(p => p is PanelBase).FirstOrDefault();

Since I currently have a number of Windows Phone projects, I’ll probably add more handy methods as I go. As for you, you can put these together into your own syntactic sugar collection and make your work easier.

Be good people and have a nice day.

What the f***k code

Recently I had to go over some code in one of our systems. There I found real gems of production code, worth sharing with the developer community. Note that most of this code is not “dead code” and works perfectly.

READER ADVISORY: The following post contains objectionable code snippets and might not be suitable for all individuals.

My code is compiling!

Disclaimer: I changed some snippets to protect the possible authors from punishment.

Let’s start with small unit tests. This one is the really ultimate test!

// if this works for this this works for sure!
topic = typeof(TestRequest).FullName + "." + "2.2.2.2.2.2";

Also this one is brutal.

// Prevent the code from exiting before double the timeout time
bool ok = wait.WaitOne(_bus.DefaultResponseTimeout * 2 * 1000);
Assert.IsTrue(ok == true);
Assert.IsNull(error ?? null);

Sometimes, people need a definitive and proven way to eliminate an annoying file URI scheme.

string emptyString = string.Empty;
do {
    if (File.Exists(asmPath)) break;
}
while (asmPath.Replace("file", emptyString).Replace(":", emptyString).Replace("\\", emptyString).Replace("/", emptyString).Replace("//", emptyString) != emptyString);
// now assembly name is clean and we can load it

There are also cases of a mistrustful GUID.

message.AddField(FIELD_OWNER_ID, _ownerId);
while (string.Compare(_ownerId, Guid.NewGuid().ToString()) == 0) _ownerId = Guid.NewGuid().ToString();
message.AddField(FIELD_OWNER_ID, _ownerId);

In certain cases, they need threads to sleep for one second, no matter what.

try {
   Thread.Sleep(1000);
} catch {
   try {
      Thread.Sleep(1000);
   } catch {
      try {
         Thread.Sleep(1000);
      } catch {
         try {
            Thread.Sleep(1000);
         } catch {
            // Gracefully exit the thread
            break;
         }// catch
      }
   }
   // Gracefully exit the thread
   break;
} // catch

Graceful shutdown and correct memory management are the key to successful programming.

~Bus() {
try {
  try { this.dispatcher.Destroy(); } catch (Exception) { }
  try { this.queue.Destroy(); } catch (Exception) { }
  try { this.transport.Destroy(); } catch (Exception) { }
  try { this.transport1.Destroy(); } catch (Exception) { }
  try { Environment.Close(); } catch (Exception) { }
} catch (Exception) {
      // Avoid crashing the process..
   } // catch
}

public void Dispose() {
   Dispose(true);
   GC.SuppressFinalize(this);
}

protected virtual void Dispose(bool disposing) {
try {
   try { this.dispatcher.Destroy(); } catch (Exception) { }
   try { this.queue.Destroy(); } catch (Exception) { }
   try { this.transport.Destroy(); } catch (Exception) { }
   try { this.transport1.Destroy(); } catch (Exception) { }
   try { Environment.Close(); } catch (Exception) { }
} catch (Exception) {
      // Need to assure the process ends
   } // catch
}

I am not a violent psychopath and I am not going to ask anybody to maintain this code, but it was surely built with a lot of brute force and thousands of slaves, like the Egyptian pyramids.

Happy Passover to everyone.

gefilte fish hack


Self installable and runnable service or how to make generic service and console hybrid

Frankly, I thought that basic things in Windows development, such as “debuggability” and “installability” of services, had changed over the last 10 years of development environments. However, I was disappointed to discover that nothing has actually changed. You still cannot build a service that is easy to debug (from the command line) and that can also be installed without special additional tools.


Even the ServiceName/ServiceNameInstaller trick is specific to the current assembly and cannot be used if your base class is not the one you are actually running. This is not the only approach; there are other methods, but they are too complicated to use in a simple project.

So I decided to write a quick basic service which can be used as a common base for self-installable and debuggable service development. Let’s start.

First of all we need an abstract service base:

public abstract class ServiceProvider : ServiceBase

Then its identification for derived classes:

public static string Name;
public static string ProviderKind;

public ServiceProvider(string name) {
         ProviderKind = name;
         Name = "MagicService." + ProviderKind;
         ServiceName = Name;
      }

Now, a method to start it from the hosting application (e.g. a command prompt), or run it as a service if installed:

/// <summary>Starts the provider service in interactive mode.</summary>
public void Start(string[] args) {
   if (Environment.UserInteractive) {
      OnStart(args);
   } else {
      ServiceBase.Run(this);
   }
}

But how do we install it? Normally, if you put in a class derived from Installer and marked with RunInstaller, InstallUtil.exe can instantiate it and install or uninstall the service.

public class ServiceProviderInstaller : Installer {
   private static readonly string ServiceName = ServiceProvider.Name;
  
   public ServiceProviderInstaller() {
      var processInstaller = new ServiceProcessInstaller {
         Account = ServiceAccount.LocalSystem
      };

      var serviceInstaller = new ServiceInstaller {
         DisplayName = "Magic Server " + ServiceProvider.ProviderKind + " Provider",
         Description = "Process the interface to the Magic service " + ServiceProvider.ProviderKind + " provider.",
         ServiceName = ServiceName,
         StartType = ServiceStartMode.Automatic,
      };
   
      this.Installers.Add(processInstaller);
      this.Installers.Add(serviceInstaller);
   }
}

But this works only if the installer is defined in the same assembly as the service, and the service itself can be run. In our case, this is not true: the service is abstract, and we allow running it from any other assembly that references the base one. So what do we do? Simple! Let’s create our own installer. We will create a private instance of the installer inside the actual service itself and pass it as an additional installer to a basic TransactedInstaller. Also, we’ll use the calling (i.e. the actually running) assembly as the service reference.

/// <summary>Installs the provider service in interactive mode.</summary>
public void Install() {
   if (Environment.UserInteractive) {
      var ti = new TransactedInstaller();
      var spi = new ServiceProviderInstaller();
      ti.Installers.Add(spi);
      var path = "/assemblypath=" + Assembly.GetEntryAssembly().Location;
      var ctx = new InstallContext("", new string[] { path });
      ti.Context = ctx;
      ti.Install(new Hashtable());
   }
}

We’ll do the uninstaller the same way:

/// <summary>Uninstalls the provider service in interactive mode.</summary>
public void Uninstall() {
   if (Environment.UserInteractive) {
      var ti = new TransactedInstaller();
      var spi = new ServiceProviderInstaller();
      ti.Installers.Add(spi);
      var path = "/assemblypath=" + Assembly.GetEntryAssembly().Location;
      var ctx = new InstallContext("", new string[] { path });
      ti.Context = ctx;
      ti.Uninstall(null);
   }
}

We are almost done. The only problem left is the Component Designer, which wants to run whenever you click on any class derived from ServiceBase. I know the Visual Studio developers wanted to make our life easier, but this designer (especially one that cannot instantiate abstract classes) is very annoying. In order to get rid of it, we can override the DesignerCategory of the class and tell VS that it is not a ServiceComponent anymore. To do this, all we need is one small attribute set on our classes.

[System.ComponentModel.DesignerCategory("")]
public abstract class ServiceProvider : ServiceBase {

[RunInstaller(true), System.ComponentModel.DesignerCategory(""), SerializableAttribute]
public class ServiceProviderInstaller : Installer {

 

Take into account that the attribute should be referenced by its full name (System.ComponentModel.DesignerCategory) in order to help Visual Studio resolve the reference quickly.

We are done. Let’s put everything together and see what we have and how to use it.

/// <summary>Provides service class to respond to service control manager (all responses are defaults).</summary>
[System.ComponentModel.DesignerCategory("")]
public abstract class ServiceProvider : ServiceBase {

   public static string Name;
   public static string ProviderKind;

   public ServiceProvider(string name) {
      ProviderKind = name;
      Name = "Magic.Provider." + ProviderKind;
      ServiceName = Name;
   }

   /// <summary>Starts the provider service in interactive mode.</summary>
   public void Start(string[] args) {
      if (Environment.UserInteractive) {
         OnStart(args);
      } else {
         ServiceBase.Run(this);
      }
   }

   /// <summary>Installs the provider service in interactive mode.</summary>
   public void Install() {
      if (Environment.UserInteractive) {
         var ti = new TransactedInstaller();
         var spi = new ServiceProviderInstaller();
         ti.Installers.Add(spi);
         var path = "/assemblypath=" + Assembly.GetEntryAssembly().Location;
         var ctx = new InstallContext("", new string[] { path });
         ti.Context = ctx;
         ti.Install(new Hashtable());
      }
   }

   /// <summary>Uninstalls the provider service in interactive mode.</summary>
   public void Uninstall() {
      if (Environment.UserInteractive) {
         var ti = new TransactedInstaller();
         var spi = new ServiceProviderInstaller();
         ti.Installers.Add(spi);
         var path = "/assemblypath=" + Assembly.GetEntryAssembly().Location;
         var ctx = new InstallContext("", new string[] { path });
         ti.Context = ctx;
         ti.Uninstall(null);
      }
   }
}

[RunInstaller(true), System.ComponentModel.DesignerCategory(""), SerializableAttribute]
public class ServiceProviderInstaller : Installer {
   private static readonly string ServiceName = ServiceProvider.Name;
  
   public ServiceProviderInstaller() {
      var processInstaller = new ServiceProcessInstaller {
         Account = ServiceAccount.LocalSystem
      };

      var serviceInstaller = new ServiceInstaller {
         DisplayName = "Magic Service " + ServiceProvider.ProviderKind + " Provider",
         Description = "Process the interface to the Magic service " + ServiceProvider.ProviderKind + " provider.",
         ServiceName = ServiceName,
         StartType = ServiceStartMode.Automatic,
      };
   
      this.Installers.Add(processInstaller);
      this.Installers.Add(serviceInstaller);
   }

   protected override void OnCommitted(IDictionary savedState) {
      base.OnCommitted(savedState);
      var c = new ServiceController(ServiceName);
      c.Start();
   }
}

In order to use it, just reference the hosting assembly and inherit this class:

public class SampleService : ServiceProvider {

   /// <summary>Creates a new <see cref="SampleService"/> instance.</summary>
   public SampleService()
      : base("Sample") {
   }
}

And run it from command prompt:

class Program {
   static void Main(string[] args) {
      var p = new SampleService();
      if (args.Length > 0) {
         if (args[0] == "/i"){
            p.Install();
            return;
         }
         if (args[0] == "/u") {
            p.Uninstall();
            return;
         }
      }
      p.Start(null);
      Console.Read();
      p.Stop();
   }
}

We are done. The only remaining thing is how to prevent the Component Designer from appearing for the derived (SampleService) class. For now, I have found no way to do this and have asked the collective intelligence to help me with it. Once I have an answer, I will update it here.

Be good people and have a good day!

UPD (25th Jan): The reason for this strange behavior is cached reference assemblies. If you reference your base assembly before setting the DesignerCategory attribute, you’ll have to remove the reference and add it again in all consuming projects after you set it. Another piece of evidence of Visual Studio developers’ laziness.

RSA private key import from PEM format in C#

First of all, I want to apologize for not writing. On the one hand, it is not good for me to disappear from the development community’s horizons; on the other hand, I am investing all my time into our better future, which is a good thing. Too many things were done during the last two years. The good news is that we have already delivered whatever was promised, and we know for sure that we are able to deliver even more in the future. But let’s get down to business. First, I have a huge pipeline of interesting articles to share with you; second, some people from my team have also decided to contribute to the community and write the Better Place development team blog. There is not too much there yet, but that is only a matter of time.

Today we’ll speak about security: about how to import an OpenSSL private key into a .NET application and use it alongside an X509 public certificate to establish a TLS connection with asymmetric encryption and a two-phase certificate handshake.


Let’s start from the very beginning. What is SSL? SSL is a secure way to communicate, where transferred data is encrypted using a one-time, per-session cipher. There are different implementations of such a connection. The most famous one is the one all of you use when connecting to https://someting… When doing this, your browser asks the remote side to provide its public certificate in order to check it against a local “authority” you trust. If everything is OK and the host named in the remote certificate is the host you are speaking with, your browser allows communication after both sides agree on the one-time cipher for encryption.

You can implement this mode of SSL very easily using the SslStream class in .NET, as easy as 1-2-3.
1) Resolve the host and open a TcpClient connection to it:

var host = new IPHostEntry();
try {
   host = Dns.GetHostEntry(RemoteAddress.DnsSafeHost);
} catch (SocketException soe) {
   if (soe.SocketErrorCode == SocketError.HostNotFound) {
      host.HostName = RemoteAddress.DnsSafeHost;
   }
}

Client.Connect(host.HostName, RemoteAddress.Port);

2) Initialize an SSL-encrypted stream over it, providing a validation callback for the remote certificate:

var stream = new SslStream(Client.GetStream(), true, _validateCertificate);

3) Ask for authentication:

stream.AuthenticateAsClient(host.HostName);

Inside the remote certificate validation callback, you should decide what to do if something bad happened during the negotiation phase.

private readonly RemoteCertificateValidationCallback _validateCertificate = (sender, certificate, chain, sslPolicyErrors) => {
   var result = sslPolicyErrors == SslPolicyErrors.None;
   if (!result) {
      var err = new StringBuilder();
      err.AppendFormat("Unable to establish security connection due to {0}. Error chain:", sslPolicyErrors);
      foreach (var el in chain.ChainElements) {
         foreach (var s in el.ChainElementStatus) {
            err.AppendFormat("{0} - {1}", el.Certificate.Subject, s.StatusInformation);
         }
      }
      Log.Warn(err.ToString());
   }
   return result;
};

So far, so good. Now, if everything is OK, just use SslStream as a regular stream to write to and read from the socket. All the other complicated things will be done by .NET.

However, this is only part of the game. Now the real thing comes: what if you want to be more secure and want your server to be able to validate that the local client is one it can trust? This scenario is often used in closed networks, where the server side (or any other provisioning entity) can assure that every client is well known, and it is able to provide a certificate to each of them. For this scenario we also have a solution in the SslStream implementation, which takes this ability, defined by the TLS RFC, into account. All we need is to use another override of the SslStream constructor, which receives a callback for the client certificate selection logic, and the authentication override which receives the prepared client certificates.

var stream = new SslStream(Client.GetStream(), true, _validateCertificate, _selectCertificate);
stream.AuthenticateAsClient(host.HostName, _clientCerts, SslProtocols.Ssl3, false);

Inside the local certificate selection logic, you receive the remote end’s choice algorithm and should return the most secure client certificate you have:

private readonly LocalCertificateSelectionCallback _selectCertificate = (sender, target, localCerts, remoteCert, issuers) => {
….
return securestCert;
}

Also, you should prepare the local certificate collection provided as input to the negotiation method. This one is simple too. All you need is a standard X509 certificate (or several). Usually, such certificates are provided by an uber-secure-unix-seriose-unbreakable-machine, which uses OpenSSL to export the generated keys. This means that in most cases your public certificate will look like this inside:

Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number: 268436473 (0x100003f9)
        Signature Algorithm: md5WithRSAEncryption
        Issuer: O=UBER, OU=RD/emailAddress=ca@ubersecurity.org, L=TLV, ST=Israel, C=IL, CN=ca
        Validity
            Not Before: May 25 11:26:50 2011 GMT
            Not After : May 24 11:26:50 2012 GMT
        Subject: C=IL, ST=Israel, O=UBER, OU=SEC, CN=UberSecurity
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (1024 bit)
                Modulus (1024 bit):
                    … some random HEX numbers …
                Exponent: 65537 (0x10001)
    Signature Algorithm: md5WithRSAEncryption
        … some other random HEX numbers …
-----BEGIN CERTIFICATE-----
… some BASE64 random characters here …
-----END CERTIFICATE-----

This format is called PEM (Privacy Enhanced Mail). It is the most common and easiest format for secure text transfer. Such a file can be easily imported and used by the X509Certificate class as follows:

var clientCert = X509Certificate.CreateFromCertFile("myCert.pem");

That’s all. All you need now is to add this certificate to a certificate collection (_clientCerts in this case) and return it when the _selectCertificate delegate is called.
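Under the armor, PEM is just base64 between BEGIN/END lines. Here is a toy, self-contained sketch of pulling the payload out (illustrative only, not what X509Certificate does internally; it ignores PEM headers):

```csharp
using System;
using System.Linq;
using System.Text;

public static class PemDemo {
    // Collects the base64 payload between the BEGIN/END armor lines.
    public static string ExtractBody(string pem) {
        return string.Concat(pem
            .Split('\n')
            .Select(l => l.Trim())
            .Where(l => l.Length > 0 && !l.StartsWith("-----")));
    }

    public static void Main() {
        var pem = "-----BEGIN CERTIFICATE-----\nSGVsbG8h\n-----END CERTIFICATE-----";
        var body = ExtractBody(pem);
        Console.WriteLine(body); // SGVsbG8h
        Console.WriteLine(Encoding.ASCII.GetString(Convert.FromBase64String(body))); // Hello!
    }
}
```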

Looks simple and secure? It is, but there is a small BUT in all this. Real security experts coming from the OpenSSL world often do not want to put the client’s private key (the key that will be used for outgoing traffic encryption) inside the client certificate, and prefer to provide it securely via another channel.

Now you are asking what I am talking about? Let me explain:

When SSL uses an asymmetric encryption algorithm, the local side uses a private key to encrypt outgoing traffic. Once it trusts the other side (by validating the remote certificate), it sends the local public key to the remote side, which uses it to decrypt the information. So far we have three entities: a public key, a private key, and a certificate. There is a method commonly used by the industry to minimize transit problems: pack the public certificate and the wrapped public key inside the same store and send them together. If we want to go even further, we can also store the private key securely inside the same store. Does that sound insecure? Not quite. First of all, in most cases the private key is encrypted with a special passphrase known only to the side the certificate is intended for; second, the public key and the certificate’s own hash values are used to encrypt it even better. So we get the big advantage of a compact, well-known package format (key pair + certificate) together with a high security level.

However, people coming from the OpenSSL world do not trust this method too much (they call it “evil empire bought the patent”) and often provide the encrypted private key separately. This key is transferred in PEM format; however, this time it is not the standard one, but a specific one designed by OpenSSL geeks. Even though they call it RSA format, it has almost no relation to it.

Such key looks as following:

-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,…some geeky HEX here …

… some BASE64 random characters here …

-----END RSA PRIVATE KEY-----

Looks simple? Do not hurry. .NET has no built-in method to read this format, so we’ll have to write one, based on the OpenSSL specification. Let’s start.

First of all, the “well-known headers”:

private const string _begin = "-----BEGIN ";
private const string _end = "-----END ";
private const string _private = "PRIVATE KEY";
private const string _public = "PUBLIC KEY";
private const string _rsaPublic = "RSA PUBLIC KEY";

Next, read the text inside the file:

using (var reader = new StringReader(data)) {
   var line = reader.ReadLine();
   if (line.NotNull() && line.StartsWith(_begin)) {
      line = line.Substring(_begin.Length);
      var idx = line.IndexOf('-');
      if (idx > 0) {
         var type = line.Before(idx);
         return _loadPem(reader, type, passKey);
      }
   }
   throw new ArgumentException("This is not valid PEM format", "data", new FormatException("PEM start identifier is invalid or not found."));
}
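NotNull, Before and Substitute above are the author’s own string helpers. With only framework string methods, the type extraction from the armor line looks roughly like this (the helper name is mine):

```csharp
using System;

public static class PemHeaderDemo {
    private const string Begin = "-----BEGIN ";

    // Pulls the object type out of a PEM armor line, e.g. "RSA PRIVATE KEY".
    public static string ExtractType(string line) {
        if (!line.StartsWith(Begin)) return string.Empty;
        var rest = line.Substring(Begin.Length);
        var idx = rest.IndexOf('-');
        return idx > 0 ? rest.Substring(0, idx) : string.Empty;
    }

    public static void Main() {
        Console.WriteLine(ExtractType("-----BEGIN RSA PRIVATE KEY-----")); // RSA PRIVATE KEY
    }
}
```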

…and read headers:

var end = _end + type;
var headers = new _pemHeaders();
var line = string.Empty;
var body = new StringBuilder();
while ((line = reader.ReadLine()) != null && line.IndexOf(end) == -1) {
   var d = line.IndexOf(':');
   if (d >= 0) {
      // header
      var n = line.Substring(0, d).Trim();
      if (n.StartsWith("X-")) n = n.Substring(2);
      var v = line.After(d).Trim();
      if (!headers.ContainsKey(n)) {
         headers.Add(n, v);
      } else {
         throw new FormatException("Duplicate header {0} in PEM data.".Substitute(n));
      }

When the headers are ready, we need to read the body, which is base64-encoded:

   } else {
      // body
      body.Append(line);
   }
}
if (line == null) {
   throw new FormatException("PEM end identifier is invalid or not found.");
}
if (body.Length % 4 != 0) {
   throw new FormatException("PEM data is invalid or truncated.");
}

return _createPem(type, headers, Convert.FromBase64String(body.ToString()), passkey);

And now, based on the headers, we can decode the body. For simplicity, we’ll decode only the most common encryptions for the key:

type = type.Before(type.Length - _private.Length).Trim();
var pType = headers.TryGet("Proc-Type");
if (pType == "4,ENCRYPTED") {
   if (passkey.IsEmpty()) {
      throw new ArgumentException("Passkey is mandatory for encrypted PEM object");
   }

   var dek = headers.TryGet("DEK-Info");
   var tkz = dek.Split(',');
   if (tkz.Length > 1) {
      var alg = new _alg(tkz[0]);
      var saltLen = tkz[1].Length;
      var salt = new byte[saltLen / 2];
      for (var i = 0; i < saltLen / 2; i++) {
         var pair = tkz[1].Substring(2 * i, 2);
         salt[i] = Byte.Parse(pair, NumberStyles.AllowHexSpecifier);
      }

      body = _decodePem(body, passkey, alg, salt);
      if (body != null) {
         return _decodeRsaPrivateKey(body);
      }
   } else {
      throw new FormatException("DEK information is invalid or truncated.");
   }
}
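The DEK-Info salt loop above (two hex characters per output byte) can be exercised in isolation; ParseHex is a hypothetical standalone copy of that loop:

```csharp
using System;
using System.Globalization;

public static class SaltDemo {
    // Mirrors the DEK-Info salt loop: two hex characters per output byte.
    public static byte[] ParseHex(string hex) {
        var result = new byte[hex.Length / 2];
        for (var i = 0; i < result.Length; i++) {
            result[i] = byte.Parse(hex.Substring(2 * i, 2), NumberStyles.AllowHexSpecifier);
        }
        return result;
    }

    public static void Main() {
        Console.WriteLine(BitConverter.ToString(ParseHex("0A1B2C3D"))); // 0A-1B-2C-3D
    }
}
```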

For simplification, we’ll support only the most common encryption algorithm: 3DES in CBC mode. In general, an RSA private key can be encrypted with AES, Blowfish, DES/Triple DES, or RC2.

private static byte[] _decodePem(byte[] body, string passkey, _alg alg, byte[] salt) {
   if (alg.AlgBase != _alg.BaseAlg.DES_EDE3 || alg.AlgMode != _alg.Mode.CBC) {
      throw new NotSupportedException("Only 3DES-CBC keys are supported.");
   }
   var des = _get3DesKey(salt, passkey);
   if (des == null) {
      throw new ApplicationException("Unable to calculate 3DES key for decryption.");
   }
   var rsa = _decryptRsaKey(body, des, salt);
   if (rsa == null) {
      throw new ApplicationException("Unable to decrypt RSA private key.");
   }
   return rsa;
}

And the decryption itself:

private static byte[] _decryptRsaKey(byte[] body, byte[] desKey, byte[] iv) {
   byte[] result = null;
   using (var stream = new MemoryStream()) {
      var alg = TripleDES.Create();
      alg.Key = desKey;
      alg.IV = iv;
      try {
         using (var cs = new CryptoStream(stream, alg.CreateDecryptor(), CryptoStreamMode.Write)) {
            cs.Write(body, 0, body.Length);
            cs.Close();
         }
         result = stream.ToArray();
      } catch (CryptographicException) {
         // rethrow, preserving the original stack trace
         throw;
      } catch (Exception ex) {
         Log.Exception(ex, Severity.Info, "Failed to write crypto stream.");
      }
   }
   return result;
}
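The same TripleDES-plus-CryptoStream pattern can be sanity-checked with a self-contained round trip (the key and IV below are throwaway demo values; never use such keys for real data):

```csharp
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class TripleDesDemo {
    // Round-trips a buffer through 3DES-CBC with the same
    // TripleDES + CryptoStream pattern as in _decryptRsaKey.
    public static byte[] Apply(byte[] input, byte[] key, byte[] iv, bool encrypt) {
        using (var alg = TripleDES.Create())
        using (var stream = new MemoryStream()) {
            alg.Key = key;
            alg.IV = iv;
            var transform = encrypt ? alg.CreateEncryptor() : alg.CreateDecryptor();
            using (var cs = new CryptoStream(stream, transform, CryptoStreamMode.Write)) {
                cs.Write(input, 0, input.Length);
            }
            return stream.ToArray();
        }
    }

    public static void Main() {
        var key = Enumerable.Range(1, 24).Select(i => (byte)i).ToArray(); // demo key only
        var iv = new byte[8];                                             // demo IV only
        var secret = Encoding.ASCII.GetBytes("attack at dawn");
        var roundTrip = Apply(Apply(secret, key, iv, true), key, iv, false);
        Console.WriteLine(Encoding.ASCII.GetString(roundTrip)); // attack at dawn
    }
}
```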

The 3DES key itself is derived from the salt and the passkey:

private static byte[] _get3DesKey(byte[] salt, string passkey) {
   var HASHLENGTH = 16;
   var m = 2; // 2 iterations for at least 24 bytes
   var c = 1; // 1 hash for Open SSL
   var k = new byte[HASHLENGTH * m];

   var pk = Encoding.ASCII.GetBytes(passkey);
   var data = new byte[salt.Length + pk.Length];
   Array.Copy(pk, data, pk.Length);
   Array.Copy(salt, 0, data, pk.Length, salt.Length);
   var md5 = new MD5CryptoServiceProvider();
   byte[] result = null;
   var hash = new byte[HASHLENGTH + data.Length];
  
   for (int i = 0; i < m; i++) {
      if (i == 0) {
         result = data;
      } else {
         Array.Copy(result, hash, result.Length);
         Array.Copy(data, 0, hash, result.Length, data.Length);
         result = hash;
      }

      for (int j = 0; j < c; j++) {
         result = md5.ComputeHash(result);
      }
      Array.Copy(result, 0, k, i * HASHLENGTH, result.Length);
   }
   var dk = new byte[24]; //final key
   Array.Copy(k, dk, dk.Length);
   return dk;
}
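What _get3DesKey implements is essentially OpenSSL’s EVP_BytesToKey with MD5 and one hash round: D1 = MD5(passkey + salt), D2 = MD5(D1 + passkey + salt), and the key is the first 24 bytes of D1 + D2. A standalone restatement for sanity checks (not the production code):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class KeyDemo {
    // Standalone version of the derivation above (EVP_BytesToKey, MD5, 1 round):
    // D1 = MD5(passkey || salt), D2 = MD5(D1 || passkey || salt), key = (D1 || D2)[0..24].
    public static byte[] Derive3DesKey(byte[] salt, string passkey) {
        using (var md5 = MD5.Create()) {
            var pk = Encoding.ASCII.GetBytes(passkey);
            var data = new byte[pk.Length + salt.Length];
            Array.Copy(pk, data, pk.Length);
            Array.Copy(salt, 0, data, pk.Length, salt.Length);

            var d1 = md5.ComputeHash(data);              // first 16 key bytes
            var round2 = new byte[d1.Length + data.Length];
            Array.Copy(d1, round2, d1.Length);
            Array.Copy(data, 0, round2, d1.Length, data.Length);
            var d2 = md5.ComputeHash(round2);            // source of the last 8 bytes

            var key = new byte[24];                      // 3DES needs 24 of the 32
            Array.Copy(d1, key, 16);
            Array.Copy(d2, 0, key, 16, 8);
            return key;
        }
    }

    public static void Main() {
        var salt = new byte[] { 1, 2, 3, 4, 5, 6, 7, 8 };
        var key = Derive3DesKey(salt, "secret");
        Console.WriteLine(key.Length); // 24
    }
}
```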

Once we have decoded the body, we can create an RSACryptoServiceProvider from it for our SslStream to use. Oh yeah, some crazy math here:

using (var ms = new MemoryStream(body)) {
   using (var reader = new BinaryReader(ms)) {
      try {
         var tb = reader.ReadUInt16(); // LE: x30 x81
         if (tb == 0x8130) {
            reader.ReadByte(); // fw 1
         } else if (tb == 0x8230) {
            reader.ReadInt16(); // fw 2
         } else {
            return null;
         }

         tb = reader.ReadUInt16(); // version
         if (tb != 0x0102) {
            return null;
         }
         if (reader.ReadByte() != 0x00) {
            return null;
         }

         var MODULUS = _readInt(reader);
         var E = _readInt(reader);
         var D = _readInt(reader);
         var P = _readInt(reader);
         var Q = _readInt(reader);
         var DP = _readInt(reader);
         var DQ = _readInt(reader);
         var IQ = _readInt(reader);

         var result = new RSACryptoServiceProvider();
         var param = new RSAParameters {
            Modulus = MODULUS,
            Exponent = E,
            D = D,
            P = P,
            Q = Q,
            DP = DP,
            DQ = DQ,
            InverseQ = IQ
         };
         result.ImportParameters(param);
         return result;

      } catch (Exception ex) {
         Log.Exception(ex);
      } finally {
         reader.Close();
      }
   }
}

A couple of helper methods to read the DER-encoded integers, and we're done:

private static Func<BinaryReader, byte[]> _readInt = r => {
   var s = _getIntSize(r);
   return r.ReadBytes(s);
};

private static Func<BinaryReader, int> _getIntSize = r => {
   byte lb = 0x00;
   byte hb = 0x00;
   int c = 0;
   var b = r.ReadByte();
   if (b != 0x02) { // not an INTEGER tag
      return 0;
   }
   b = r.ReadByte();

   if (b == 0x81) {
      c = r.ReadByte(); // size in one byte
   } else if (b == 0x82) {
      hb = r.ReadByte(); // size in two bytes
      lb = r.ReadByte();
      byte[] m = { lb, hb, 0x00, 0x00 };
      c = BitConverter.ToInt32(m, 0);
   } else {
      c = b; // size encoded in the byte itself
   }

   while (r.ReadByte() == 0x00) { // skip high zero bytes
      c -= 1;
   }
   r.BaseStream.Seek(-1, SeekOrigin.Current); // last byte read was not zero, step back
   return c;
};
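The length handling above follows DER's definite-length rules: a single byte below 0x80 is the length itself, 0x81 announces one following length byte, and 0x82 announces two (big-endian). A standalone sketch of just that decoding step (a hypothetical helper, not part of the article's sources):

```csharp
using System;

public static class DerLength {
    // Decodes a DER definite length starting at data[offset]
    // (the byte right after the 0x02 INTEGER tag).
    public static int Decode(byte[] data, int offset) {
        byte b = data[offset];
        if (b == 0x81) return data[offset + 1];                       // one length byte follows
        if (b == 0x82) return (data[offset + 1] << 8) | data[offset + 2]; // two bytes, big-endian
        return b;                                                     // short form: the byte is the length
    }
}
```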

We're done; all that remains is to construct our private key and pack it for SslStream use. For this purpose we have X509Certificate's big brother, X509Certificate2:

var cert = new X509Certificate2(File.ReadAllBytes("myCert.pem")) {
  PrivateKey = FromPem(Encoding.ASCII.GetString(File.ReadAllBytes("myKey.pem")), _sslPrivateKeyPasskey)
};

Now, when you supply cert as the client certificate, SslStream will use the private key to encrypt the outgoing stream, provide the public key for the remote side to encrypt its incoming stream, and present the certificate for remote-side identification.

We're done. Be good people and subscribe to our dev blog; it promises to be one of the most interesting blogs for those who are not satisfied with the way Windows works and want to pimp it a bit.

Source code for this article (4 KB) >>

P.S. If you got an invitation from Microsoft Israel to the "Be what's next" event next Wednesday the 22nd, it is highly recommended to come and see me (and other large ISVs) speak about the solutions we built. If you did not get an invitation and you are an MS partner, please contact the local DPE guys. This event is for certain ISVs and by invitation only.


Real singleton approach in WPF application

One of the most common problems in WPF is memory/processor time consumption. Yes, WPF is a rather greedy framework, and it becomes even greedier when using unmanaged resources such as memory-mapped files or interop images. To take care of this, you can implement the singleton pattern for the application and share a single unmanaged instance among different application resources. So today we'll create one large in-memory dynamic bitmap and share it between different instances of WPF controls. Let's start.

The Singleton

First of all, let's create our single-instance source. The pattern is straightforward: create a class that implements INotifyPropertyChanged, give it a private constructor, and expose a static member that returns the single instance of the class.

public class MySingleton : INotifyPropertyChanged {

   #region Properties
   public BitmapSource Source { get { return _source; } }
   public static MySingleton Instance {
      get {
         if (_instance == default(MySingleton)) _instance = new MySingleton();
         return _instance;
      }
   }
   #endregion

   #region ctor
   private MySingleton() { _init(); }
   #endregion

Now we need to create this single instance inside our XAML. For this, we have the handy x:Static markup extension:

<Window.DataContext>
    <x:StaticExtension Member="l:MySingleton.Instance" />
</Window.DataContext>
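One caveat: the null check inside the Instance getter is not thread-safe. If the singleton may ever be touched from more than one thread, a Lazy&lt;T&gt;-based variant gives the same semantics safely. A minimal sketch (the class name is mine, not from the article):

```csharp
using System;

public sealed class LazySingleton {
    // Lazy<T> guarantees the factory runs exactly once, even under contention
    private static readonly Lazy<LazySingleton> _instance =
        new Lazy<LazySingleton>(() => new LazySingleton());

    public static LazySingleton Instance { get { return _instance.Value; } }

    private LazySingleton() { }
}
```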

Now we need a way to do all the dirty work inside MySingleton while keeping the classes that use it as simple as possible. For this purpose we'll register a class handler to catch all GotFocus routed events, check the target of the event, and rebind the single instance to the newly focused element. How? Simple as 1-2-3.

Create class handler

EventManager.RegisterClassHandler(typeof(FrameworkElement), FrameworkElement.GotFocusEvent, (RoutedEventHandler)_onAnotherItemFocused);

Check whether the selected and focused item is of the right type:

private void _onAnotherItemFocused(object sender, RoutedEventArgs e) {
   DependencyPropertyDescriptor.FromProperty(ListBoxItem.IsSelectedProperty, typeof(ListBoxItem)).AddValueChanged(sender, (s, ex) => {

and reset the binding:

var item = s as ListBoxItem;
var img = item.Content as Image;
if (_current != null && _current.Target is Image && _current.Target != img) {
   ((Image)_current.Target).ClearValue(Image.SourceProperty);
}
if (img != null) {
   _current = new WeakReference(img);
   img.SetBinding(Image.SourceProperty, _binding);
}

We're almost done; a bit of grease to make the source bitmap shine:

var count = (uint)(_w * _h * 4);
var section = CreateFileMapping(new IntPtr(-1), IntPtr.Zero, 0x04 /* PAGE_READWRITE */, 0, count, null);
_map = MapViewOfFile(section, 0xF001F, 0, 0, count);
_source = Imaging.CreateBitmapSourceFromMemorySection(section, _w, _h, PixelFormats.Bgr32, (int)(_w * 4), 0) as InteropBitmap;
_binding = new Binding {
   Mode = BindingMode.OneWay,
   Source = _source
};
CompositionTarget.Rendering += (s, e) => { _invalidate(); };

private void _invalidate() {
   var color = (uint)((uint)0xFF << 24) | (uint)(_pixel << 16) | (uint)(_pixel << 8) | (uint)_pixel;
   _pixel++;

   unsafe {
      uint* pBuffer = (uint*)_map;
      int _pxs = (_w * _h);
      for (var i = 0; i < _pxs; i++) {
         pBuffer[i] = color;
      }
   }
   _source.Invalidate();
   OnPropertyChanged("Source");
}

And we're done. The usage of this approach is very simple: there is no usage at all. Everything happens automagically inside the MySingleton class; all you need is to set the static data context and add images:

<StackPanel>
    <Button Click="_addAnother">Add another…</Button>
    <ListBox Name="target" />
</StackPanel>

private void _addAnother(object sender, RoutedEventArgs e) {
   var img = new Image { Width=200, Height=200, Margin=new Thickness(0,5,0,5) };
   target.Items.Add(img);
   this.Height += 200;
}

To summarize: in this article we learned how to use singletons as data sources for your XAML application, how to reuse a single resource across WPF, how to attach to routed events externally, and how to handle dependency property changes from outside the owner class. Have a nice day and be good people.

Source code for this article (21k) >>

To make it work, press the "Add another…" button a number of times and then start selecting the images used as listbox items. Pay attention to the working set of the application: because only one bitmap instance is in use, it does not grow.

INotifyPropertyChanged auto wiring or how to get rid of redundant code

For the last week, most of the WPF disciples have been discussing how to get rid of the hardcoded property-name string inside INotifyPropertyChanged implementations while keeping automatic properties and working WPF bindings. The thread was started by Karl Shifflett, who proposed an interesting method using StackFrame. During this thread other methods were proposed, including code snippets, R#, the Observer pattern, the Cinch framework, static reflection, weak references and others. I also proposed the method we're using for our classes and promised to blog about it. So the topic today is how to use PostSharp to wire up an automatic implementation of the INotifyPropertyChanged interface based on automatic setters only.

My 5 ¢

So, I want my code to look like this:

public class AutoWiredSource {
   public double MyProperty { get; set; }
   public double MyOtherProperty { get; set; }
}

while being fully notified about any change in any property, and able to bind to those properties.

<StackPanel DataContext="{Binding Source={StaticResource source}}">
    <Slider Value="{Binding Path=MyProperty}" />
    <Slider Value="{Binding Path=MyOtherProperty}" />
</StackPanel>

How to achieve this? How to make the compiler replace my code with the following?

private double _MyProperty;
public double MyProperty {
   get { return _MyProperty; }
   set {
      if (value != _MyProperty) {
         _MyProperty = value; OnPropertyChanged("MyProperty");
      }
   }
}
public event PropertyChangedEventHandler PropertyChanged;
internal void OnPropertyChanged(string propertyName) {
   if (string.IsNullOrEmpty(propertyName)) throw new ArgumentNullException("propertyName");

   var handler = PropertyChanged as PropertyChangedEventHandler;
   if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
}
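The expanded pattern can be exercised without any WPF at all; a minimal console-testable version (the class name is mine) shows both behaviors we care about, raising on a real change and staying silent when the value does not change:

```csharp
using System.ComponentModel;

public class ExpandedSource : INotifyPropertyChanged {
    private double _myProperty;

    public double MyProperty {
        get { return _myProperty; }
        set {
            // raise only on an actual change, exactly as the generated code should
            if (value != _myProperty) {
                _myProperty = value;
                OnPropertyChanged("MyProperty");
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    internal void OnPropertyChanged(string propertyName) {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```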

Simple: use aspect-oriented programming to inject a set of instructions into the compiled assembly.

First of all, we have to build an attribute that will be used to mark classes requiring change tracking. This attribute should be a compound aspect that bundles all the aspects used for change tracking. All we're doing here is finding all property setters and adding the notification aspect to them:

[Serializable, DebuggerNonUserCode, AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class, AllowMultiple = false, Inherited = false),
MulticastAttributeUsage(MulticastTargets.Class, AllowMultiple = false, Inheritance = MulticastInheritance.None, AllowExternalAssemblies = true)]
public sealed class NotifyPropertyChangedAttribute : CompoundAspect {
   public int AspectPriority { get; set; }

   public override void ProvideAspects(object element, LaosReflectionAspectCollection collection) {
      Type targetType = (Type)element;
      collection.AddAspect(targetType, new PropertyChangedAspect { AspectPriority = AspectPriority });
      foreach (var info in targetType.GetProperties(BindingFlags.Public | BindingFlags.Instance).Where(pi => pi.GetSetMethod() != null)) {
         collection.AddAspect(info.GetSetMethod(), new NotifyPropertyChangedAspect(info.Name) { AspectPriority = AspectPriority });
      }
   }
}

The next aspect is the change-tracking composition aspect, which injects the INotifyPropertyChanged implementation:

[Serializable]
internal sealed class PropertyChangedAspect : CompositionAspect {
   public override object CreateImplementationObject(InstanceBoundLaosEventArgs eventArgs) {
      return new PropertyChangedImpl(eventArgs.Instance);
   }

   public override Type GetPublicInterface(Type containerType) {
      return typeof(INotifyPropertyChanged);
   }

   public override CompositionAspectOptions GetOptions() {
      return CompositionAspectOptions.GenerateImplementationAccessor;
   }
}

And now the most interesting one, which we put on the setter's method boundary for tracking. One highlight: we do not want to fire the PropertyChanged event if the actual value did not change, so we handle the method on entry to compare values and on success to raise the event.

[Serializable]
internal sealed class NotifyPropertyChangedAspect : OnMethodBoundaryAspect {
   private readonly string _propertyName;

   public NotifyPropertyChangedAspect(string propertyName) {
      if (string.IsNullOrEmpty(propertyName)) throw new ArgumentNullException("propertyName");
      _propertyName = propertyName;
   }

   public override void OnEntry(MethodExecutionEventArgs eventArgs) {
      var targetType = eventArgs.Instance.GetType();
      var property = targetType.GetProperty(_propertyName);
      if (property == null) throw new AccessViolationException();
      var oldValue = property.GetValue(eventArgs.Instance, null);
      var newValue = eventArgs.GetReadOnlyArgumentArray()[0];
      // values arrive boxed, so compare with Equals rather than reference equality
      if (Equals(oldValue, newValue)) eventArgs.FlowBehavior = FlowBehavior.Return;
   }

   public override void OnSuccess(MethodExecutionEventArgs eventArgs) {
      var instance = eventArgs.Instance as IComposed<INotifyPropertyChanged>;
      var imp = instance.GetImplementation(eventArgs.InstanceCredentials) as PropertyChangedImpl;
      imp.OnPropertyChanged(_propertyName);
   }
}
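One subtlety in OnEntry: oldValue and newValue are boxed into object, so the == operator compares references, not values, and two boxes holding the same number are never reference-equal. Equals is what tells whether the setter actually changes anything. A quick standalone illustration:

```csharp
using System;

public static class BoxedEquality {
    // == on object operands is reference comparison
    public static bool ReferenceCompare(object a, object b) { return a == b; }
    // static Equals falls back to the value's own Equals override
    public static bool ValueCompare(object a, object b) { return Equals(a, b); }
}
```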

We're almost done; all we have to do is create the class that implements INotifyPropertyChanged and exposes an internal method to raise the event:

[Serializable]
internal sealed class PropertyChangedImpl : INotifyPropertyChanged {
   private readonly object _instance;

   public PropertyChangedImpl(object instance) {
      if (instance == null) throw new ArgumentNullException("instance");
      _instance = instance;
   }

   public event PropertyChangedEventHandler PropertyChanged;

   internal void OnPropertyChanged(string propertyName) {
      if (string.IsNullOrEmpty(propertyName)) throw new ArgumentNullException("propertyName");

      var handler = PropertyChanged as PropertyChangedEventHandler;
      if (handler != null) handler(_instance, new PropertyChangedEventArgs(propertyName));
   }
}

We're done. The last thing is to reference the PostSharp Laos and Public assemblies and tell the compiler to use the PostSharp targets (inside your project file, *.csproj):

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
<PropertyGroup>
  <DontImportPostSharp>True</DontImportPostSharp>
</PropertyGroup>
<Import Project="PostSharp\PostSharp-1.5.targets" />

Now we're done. We can use clean syntax like the following to make all properties with public setters traceable. The only disadvantage is that you'll have to ship two PostSharp files with your project, but it's still much more convenient than manual change notification all over your project.

[NotifyPropertyChanged]
public class AutoWiredSource {
   public double MyProperty { get; set; }
}

Have a nice day and be good people. Also, try to think what other extremely useful things can be done with PostSharp (or any other aspect-oriented engine).

Source code for this article (1,225 KB)>>

How to calculate CRC in C#?

First of all, I want to beg your pardon for the recent posting frequency. I'm completely understaffed and have a ton of things to do at my job. That's why today I'll just write a quick post about checksum calculation in C#. It might be very useful for any of you working with devices or external systems.

BIOS CRC Error for old thinkpad

CRC (Cyclic Redundancy Check) is an algorithm widely used in communication protocols and packing/packaging algorithms to assure robustness of data. The idea behind it is simple: calculate a unique checksum (frame check sequence) for each data frame, based on its content, and append it to the end of each meaningful message. Once the data is received, the same calculation is performed and the results compared; if they match, the message is intact.

The two most common kinds of CRC are 16 and 32 bit; there are also less-used 8 and 64 bit variants. The computation appends a string of zero bits to the frame (as many as there are bits in the checksum) and performs modulo-2 division by a generator polynomial one bit longer than the checksum to be generated. In practice this boils down to shifts and bit-wise XOR operations over the frame, and the remainder of the division is our CRC.

In many implementations a lookup table is generated from the polynomial first and then applied, for performance reasons. The common generator polynomials are 0x8005 for CRC-16 and 0x04C11DB7 for CRC-32 (the latter defined by IEEE 802.3). Because our C# code will shift right, processing the least significant bit first, we use the reversed (reflected) forms: 0xA001 for 16 bit and 0xEDB88320 for 32 bit. Those are the polynomials we're going to use in our sample.
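Before bringing in tables, the division can be done bit by bit. A minimal reflected CRC-16 with polynomial 0xA001 (the CRC-16/ARC variant, zero initial value, no final XOR) as a sketch; its well-known check value for the ASCII string "123456789" is 0xBB3D:

```csharp
using System;

public static class Crc16Bitwise {
    // Reflected (LSB-first) CRC-16, polynomial 0xA001, initial value 0x0000
    public static ushort Compute(byte[] data) {
        ushort crc = 0x0000;
        foreach (var b in data) {
            crc ^= b;                      // fold the next byte into the register
            for (int i = 0; i < 8; i++) {  // one modulo-2 division step per bit
                crc = (crc & 1) != 0
                    ? (ushort)((crc >> 1) ^ 0xA001)
                    : (ushort)(crc >> 1);
            }
        }
        return crc;
    }
}
```

The table-driven code below performs exactly these eight inner steps per byte, just precomputed once per byte value.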

So let's start. Because CRC is a hash algorithm after all, we can derive our classes from the System.Security.Cryptography.HashAlgorithm class.

public class CRC16 : HashAlgorithm {
public class CRC32 : HashAlgorithm {

Then, upon first creation, we'll generate lookup tables of CRC values to enhance future performance. The table holds values for every byte from 0 to 255, so we calculate it only once and then use it statically.

[CLSCompliant(false)]
public CRC16(ushort polynomial) {
   HashSizeValue = 16;
   _crc16Table = (ushort[])_crc16TablesCache[polynomial];
   if (_crc16Table == null) {
      _crc16Table = CRC16._buildCRC16Table(polynomial);
      _crc16TablesCache.Add(polynomial, _crc16Table);
   }
   Initialize();
}

[CLSCompliant(false)]
public CRC32(uint polynomial) {
   HashSizeValue = 32;
   _crc32Table = (uint[])_crc32TablesCache[polynomial];
   if (_crc32Table == null) {
      _crc32Table = CRC32._buildCRC32Table(polynomial);
      _crc32TablesCache.Add(polynomial, _crc32Table);
   }
   Initialize();
}

Then let’s calculate it

private static ushort[] _buildCRC16Table(ushort polynomial) {
   // one entry per possible byte value
   ushort[] table = new ushort[256];
   for (ushort i = 0; i < table.Length; i++) {
      ushort value = 0;
      ushort temp = i;
      for (byte j = 0; j < 8; j++) {
         if (((value ^ temp) & 0x0001) != 0) {
            value = (ushort)((value >> 1) ^ polynomial);
         } else {
            value >>= 1;
         }
         temp >>= 1;
      }
      table[i] = value;
   }
   return table;
}

private static uint[] _buildCRC32Table(uint polynomial) {
   uint crc;
   uint[] table = new uint[256];

   // one entry per possible byte value
   for (int i = 0; i < 256; i++) {
      crc = (uint)i;
      for (int j = 8; j > 0; j--) {
         if ((crc & 1) == 1)
            crc = (crc >> 1) ^ polynomial;
         else
            crc >>= 1;
      }
      table[i] = crc;
   }

   return table;
}

For 32 bits, the resulting table will look something like this (entries truncated to their low byte for brevity):

        0x00, 0x31, 0x62, 0x53, 0xC4, 0xF5, 0xA6, 0x97,
        0xB9, 0x88, 0xDB, 0xEA, 0x7D, 0x4C, 0x1F, 0x2E,
        0x43, 0x72, 0x21, 0x10, 0x87, 0xB6, 0xE5, 0xD4,
        0xFA, 0xCB, 0x98, 0xA9, 0x3E, 0x0F, 0x5C, 0x6D,
        0x86, 0xB7, 0xE4, 0xD5, 0x42, 0x73, 0x20, 0x11,
        0x3F, 0x0E, 0x5D, 0x6C, 0xFB, 0xCA, 0x99, 0xA8,
        0xC5, 0xF4, 0xA7, 0x96, 0x01, 0x30, 0x63, 0x52,
        0x7C, 0x4D, 0x1E, 0x2F, 0xB8, 0x89, 0xDA, 0xEB,
        0x3D, 0x0C, 0x5F, 0x6E, 0xF9, 0xC8, 0x9B, 0xAA,
        0x84, 0xB5, 0xE6, 0xD7, 0x40, 0x71, 0x22, 0x13,
        0x7E, 0x4F, 0x1C, 0x2D, 0xBA, 0x8B, 0xD8, 0xE9,
        0xC7, 0xF6, 0xA5, 0x94, 0x03, 0x32, 0x61, 0x50,
        0xBB, 0x8A, 0xD9, 0xE8, 0x7F, 0x4E, 0x1D, 0x2C,
        0x02, 0x33, 0x60, 0x51, 0xC6, 0xF7, 0xA4, 0x95,
        0xF8, 0xC9, 0x9A, 0xAB, 0x3C, 0x0D, 0x5E, 0x6F,
        0x41, 0x70, 0x23, 0x12, 0x85, 0xB4, 0xE7, 0xD6,
        0x7A, 0x4B, 0x18, 0x29, 0xBE, 0x8F, 0xDC, 0xED,
        0xC3, 0xF2, 0xA1, 0x90, 0x07, 0x36, 0x65, 0x54,
        0x39, 0x08, 0x5B, 0x6A, 0xFD, 0xCC, 0x9F, 0xAE,
        0x80, 0xB1, 0xE2, 0xD3, 0x44, 0x75, 0x26, 0x17,
        0xFC, 0xCD, 0x9E, 0xAF, 0x38, 0x09, 0x5A, 0x6B,
        0x45, 0x74, 0x27, 0x16, 0x81, 0xB0, 0xE3, 0xD2,
        0xBF, 0x8E, 0xDD, 0xEC, 0x7B, 0x4A, 0x19, 0x28,
        0x06, 0x37, 0x64, 0x55, 0xC2, 0xF3, 0xA0, 0x91,
        0x47, 0x76, 0x25, 0x14, 0x83, 0xB2, 0xE1, 0xD0,
        0xFE, 0xCF, 0x9C, 0xAD, 0x3A, 0x0B, 0x58, 0x69,
        0x04, 0x35, 0x66, 0x57, 0xC0, 0xF1, 0xA2, 0x93,
        0xBD, 0x8C, 0xDF, 0xEE, 0x79, 0x48, 0x1B, 0x2A,
        0xC1, 0xF0, 0xA3, 0x92, 0x05, 0x34, 0x67, 0x56,
        0x78, 0x49, 0x1A, 0x2B, 0xBC, 0x8D, 0xDE, 0xEF,
        0x82, 0xB3, 0xE0, 0xD1, 0x46, 0x77, 0x24, 0x15,
        0x3B, 0x0A, 0x59, 0x68, 0xFF, 0xCE, 0x9D, 0xAC

Now all we have to do on each request is look up the related value in this table and XOR it in:

protected override void HashCore(byte[] buffer, int offset, int count) {
   // note the upper bound: offset + count, not count alone
   for (int i = offset; i < offset + count; i++) {
      ulong ptr = (_crc & 0xFF) ^ buffer[i];
      _crc >>= 8;
      _crc ^= _crc32Table[ptr];
   }
}

new public byte[] ComputeHash(Stream inputStream) {
   byte[] buffer = new byte[4096];
   int bytesRead;
   while ((bytesRead = inputStream.Read(buffer, 0, 4096)) > 0) {
      HashCore(buffer, 0, bytesRead);
   }
   return HashFinal();
}

protected override byte[] HashFinal() {
   byte[] finalHash = new byte[4];
   ulong finalCRC = _crc ^ _allOnes;

   finalHash[0] = (byte)((finalCRC >> 0) & 0xFF);
   finalHash[1] = (byte)((finalCRC >> 8) & 0xFF);
   finalHash[2] = (byte)((finalCRC >> 16) & 0xFF);
   finalHash[3] = (byte)((finalCRC >> 24) & 0xFF);

   return finalHash;
}
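To sanity-check the whole pipeline, the pieces above can be condensed into a standalone CRC-32 (table generation, all-ones initial value, byte-wise table lookup, final complement) and checked against the well-known test vector: CRC-32 of the ASCII string "123456789" is 0xCBF43926. This is a sketch for verification, not the article's class:

```csharp
using System;

public static class Crc32Standalone {
    private static readonly uint[] _table = BuildTable(0xEDB88320);

    private static uint[] BuildTable(uint polynomial) {
        var table = new uint[256];
        for (uint i = 0; i < 256; i++) {
            uint crc = i;
            for (int j = 0; j < 8; j++)
                crc = (crc & 1) == 1 ? (crc >> 1) ^ polynomial : crc >> 1;
            table[i] = crc;
        }
        return table;
    }

    public static uint Compute(byte[] data) {
        uint crc = 0xFFFFFFFF;                        // start with all ones
        foreach (var b in data)
            crc = (crc >> 8) ^ _table[(crc ^ b) & 0xFF];
        return crc ^ 0xFFFFFFFF;                      // final complement
    }
}
```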

We're done. Have a good time and be good people. Also, I want to thank Boris for helping me with this article. He promised to write here some day…

Source code for this article

Making TFS better or what is TITS?

These days my team and I are working very hard toward a new version of "The System". This includes massive refactoring of all solutions and heavy work with TFS (not restricted to adding files, but also deleting, moving, etc.: in other words, all the stuff TFS does not really love). Because of this, we need a bunch of handy tools to make our dreams come true and to decrease the unnecessary number of clicks inside Team System Explorer and Visual Studio. You do not really think that we have no tools to make our everyday job easier? We have. However, we never packaged and released them. Let me introduce "TITS": Tools, Invaluable for Team System. I'm planning to release this suite as another open source project within a couple of months.

TITS - Tools, Invaluable for Team System

What does "TITS" include? First of all:

“QOF” – Quick Open File

QOF - Quick Open File

This tool is absolutely invaluable if you have big solutions, even though all it knows how to do is search. But wait, what's wrong with the built-in search of Visual Studio? First of all, it does not search solution items, or files that are in the solution directory but not in a project. It cannot fix your typos and errors either. Nor does it know how to jump you quickly to a found item in Solution Explorer or the source editor.

Basic set of QOF features:

  • No mouse – open any file
  • No mouse – locate any file in solution explorer
  • Highlighting found items
  • Multiple files open
  • Filter by source files only, resources, owner or any other kind of filters
  • Search inside TFS, including history, changesets, shelves (both private and public)
  • …and much much more

Next tool is:

“WIBREW” – Who Is Breaking What

WIBREW - Who is breaking what

An absolutely invaluable tool to know who is actually breaking which file inside TFS. For example, I do not want to lock files, but I still want to know who holds which file. TFS provides such a feature out of the box, however from the command prompt only. You can even add it as a macro. Like this:

WIBREW for poor people

However, it is not user friendly and hardly usable, 'cos it looks as follows:

WIBREW for poor people in action

You do not know what the developer is actually doing, where and why. With "WIBREW" you can know:

  • When the developer started to break files
  • What exactly he's doing
  • Whether the breaking file is locked or not
  • Where the developer breaks it (workspace and computer name of the user)
  • …and much much more

Another tool is:

“WITCH” – What I have To Check-in

If you ever worked with Team Force, you know what this tool does. It shows you a preview of all the changed files you'll check in. For some reason, TFS has no such feature. Let's imagine that your work method is to check out everything, change something and check in only the changed files. Up to here TFS does everything; however, if you want to preview the changeset (for example, in order to compare it with "WIBREW" output), you cannot. Here "WITCH" comes to help.

[Here should be a screenshot of “WITCH”, but it looks exactly the same as “WIBREW” with shameless blurring]

Another invaluable tool is:

“VOCUS” – VOid CUstom Settings for check in

This tool is absolutely UI-less. It allows developers to work with their own custom settings in Visual Studio, while on check-in and check-out it formats all documents according to predefined corporate settings (for example, indentation). How many times have you tried to merge files when all the difference was indentation or tab size? Well, this tool solves that problem.

VOCUS – VOid CUstom Settings for check in

It stores custom settings for each user (BTW, it also enables each developer to restore his settings fluently on any computer) and reformats documents on check-in toward the corporate settings, and on check-out toward the developer's custom settings.

“SHMOC” – SHow MOre Code

This is not actually a tool that works with TFS; rather, it works with your Visual Studio development environment. It's UI-less as well and lets you hide and restore all docking windows in VS. It lets you write in "Dark Room" mode (a full-screen, distraction-free environment) and return to Visual Studio with one button press. It can also change the VS color scheme, if required.

“SHMOC” – SHow MOre Code

There are some other tools that should be inside this suite; however, I still have no names for them :) Also, if you have something interesting and want to contribute it to this suite, you're highly welcome.

PS: This blog is about code, but this post is the 6th in a row without even one line of code, so I have to fix that as soon as possible. Thus, I'll show how WIBREW works under the hood. In other words, a small example of how to work with the TFS API from a Visual Studio plugin.

First of all, as in any VS plugin, you need to acquire DTE2 application object:

_applicationObject = (DTE2)application;
_addInInstance = (AddIn)addInInst;

Once you have it, you need to detect which TFS server you're working with and what the user credentials for this session are. The common problem with the WIBREW-for-poor-men approach was working over VPN (when your connected session exists only inside VS): each time you tried to run it, you had to enter your domain credentials, which is a very inconvenient way to work.

To prevent this, let's ask the environment for Team Foundation information:

private TeamFoundationServerExt _tfsExt;

_tfsExt = (TeamFoundationServerExt)_applicationObject.GetObject("Microsoft.VisualStudio.TeamFoundation.TeamFoundationServerExt");

Also, you can be notified when your work project context changes. To do this, just subscribe to the ProjectContextChanged event and handle it:

_tfsExt.ProjectContextChanged += OnProjectContextChanged;

public void OnProjectContextChanged(object sender, EventArgs e) {
   if (!string.IsNullOrEmpty(_tfsExt.ActiveProjectContext.ProjectName)) {
      // we have an active Team project context here
   }
}
Now that we know we have our active project context, all we have to do is ask about changes:

private VersionControlExt _vcExt;

_vcExt = (VersionControlExt)_applicationObject.GetObject("Microsoft.VisualStudio.TeamFoundation.VersionControl.VersionControlExt");

Inside the VersionControlExt object you have the following self-descriptive properties and methods: FindChangeSet, History, PendingChanges, SolutionWorkspace, etc. However, it works only with the TFS Solution Explorer. To handle pending changes for the project without tickling TFS, we can use its internal methods. The only difference is in the references. To work with the Visual Studio TFS explorer methods, you should reference Microsoft.VisualStudio.TeamFoundation.dll, Microsoft.VisualStudio.TeamFoundation.Client.dll and Microsoft.VisualStudio.TeamFoundation.VersionControl.dll, while to work with the TFS API directly, use Microsoft.TeamFoundation.dll, Microsoft.TeamFoundation.Client.dll and Microsoft.TeamFoundation.VersionControl.dll from [PROGRAM FILES]\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\. Just like this:

VersionControlServer _vcs;

_vcs = (VersionControlServer)_server.GetService(typeof(VersionControlServer));

var _sets = _vcs.QueryPendingSets(new[] { new ItemSpec(serverPath, RecursionType.Full) }, null, null);

foreach (PendingSet set in _sets) {
   // get everything you need here
}
We're done. It's very easy to work with Team System from inside Visual Studio. It's also very easy to build useful tools that, for some reason, were not built by Microsoft :)

Have a nice day, be good people and wait for me to beautify the sources before releasing them as another open source application.

Nifty time savers for WPF development

I just published an article on Code Project that explains how to use my latest FM USB library for building a real-world software radio receiver with WPF. There I referenced some nifty WPF time savers I'm using for everyday development. So today I want to share those code pieces with you.

Software radio receiver screenshot

Binding time savers

Want to use the following syntax to set a binding in code?

Presets.SetBinding(ListBox.ItemsSourceProperty, _device, "Presets");

This piece of code sets a binding on the Presets DependencyObject (a ListBox), connecting the ListBox.ItemsSource dependency property with the "Presets" property of the CLR object _device. How is it done? Simple, as usual:

[DebuggerStepThrough]
public static BindingExpressionBase SetBinding(this DependencyObject target, DependencyProperty dp, object source, string path) {
   Binding b = new Binding(path);
   b.Source = source;
   return BindingOperations.SetBinding(target, dp, b);
}

But what to do when we need a converter? Simple:

[DebuggerStepThrough]
public static BindingExpressionBase SetBinding(this DependencyObject target, DependencyProperty dp, object source, string path, IValueConverter converter) {
   Binding b = new Binding(path);
   b.Source = source;
   b.Converter = converter;
   return BindingOperations.SetBinding(target, dp, b);
}

However, to use this method we need to create a special object that implements IValueConverter. Why not do it generically? Like this:

SignalTransform.SetBinding(ScaleTransform.ScaleYProperty, _device.RDS,"SignalStrength", new ValueConverter<byte, double>(b => { return 1-(b / 36d); }));

But we need this special handy ValueConverter class. What's the problem? Here comes the king:

public class ValueConverter<TIN, TOUT> : IValueConverter {

   public ValueConverter(Func<TIN, TOUT> forwardConversion) {
      ForwardConversion = forwardConversion;
   }

   public ValueConverter(Func<TIN, TOUT> forwardConversion, Func<TOUT, TIN> reverseConversion) {
      ForwardConversion = forwardConversion;
      ReverseConversion = reverseConversion;
   }

   public Func<TIN, TOUT> ForwardConversion { get; set; }

   public Func<TOUT, TIN> ReverseConversion { get; set; }
   public object Convert(object value, Type targetType, object parameter, CultureInfo culture) {
      try {
         var in1 = Object.ReferenceEquals(value, DependencyProperty.UnsetValue) ? default(TIN) : (TIN)value;
         return ForwardConversion(in1);
      } catch {
         return Binding.DoNothing;
      }
   }

   public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) {
      try {
         var out1 = Object.ReferenceEquals(value, DependencyProperty.UnsetValue) ? default(TOUT) : (TOUT)value;
         return ReverseConversion(out1);
      } catch {
         return Binding.DoNothing;
      }
   }

}

Isn't it really simple? But what to do with the ugly App.Current.Dispatcher.BeginInvoke((SendOrPostCallback)delegate(object o)…? Use the dispatcher time savers.

Dispatcher time savers

Don't you ever want to do this in order to switch context between the UI thread and other application threads in WPF?

this.Dispatch(() => { /* DO SOMETHING IN UI THREAD */ });

Now you can (with a default or preset DispatcherPriority):

public static DispatcherOperation Dispatch(this DispatcherObject sender, Action callback) { return sender.Dispatch(DispatcherPriority.Normal, callback); }

public static DispatcherOperation Dispatch(this DispatcherObject sender,  DispatcherPriority priority, Action callback) {
   if (sender.Dispatcher == null) return null;
   if (sender.Dispatcher.CheckAccess()) {
      callback();
      return null;
   } else {
      return sender.Dispatcher.BeginInvoke(priority, callback);
   }
}

Nice, isn't it? But what to do if we do not want to set a binding, but do want to be notified about property changes of dependency objects?

Bindingless handlers time saver

Let's assume that we have a "Tune" UIElement which has an Angle property but does not expose a PropertyChanged event (like the Rotary custom control by the Expression Blend team… designers, you know… :)

However, I want to be able to add a handler for the Angle dependency property’s change event and do something when it changes. Like this:

Tune.AddValueChanged(RotaryControl.RotaryControl.AngleProperty, (s, ex) => {
   _device.Tune(Tune.Angle > _prevTune);
   _prevTune = Tune.Angle;
});

Here comes our time saver for this purpose:

public static void AddValueChanged(this DependencyObject sender, DependencyProperty property, EventHandler handler) {
   DependencyPropertyDescriptor.FromProperty(property, sender.GetType()).AddValueChanged(sender, handler);
}

But if we add a handler, we should be able to remove it too:

public static void RemoveValueChanged(this DependencyObject sender, DependencyProperty property, EventHandler handler) {
   DependencyPropertyDescriptor.FromProperty(property, sender.GetType()).RemoveValueChanged(sender, handler);
}
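One caveat worth noting: DependencyPropertyDescriptor keeps a strong reference to the handler’s target, so a subscription that is never removed can keep the object alive. A sketch of pairing the two helpers (the handler body is illustrative):

```csharp
// Keep the handler in a field so the exact same delegate instance can be
// passed to RemoveValueChanged later; two separately created lambdas are
// different instances and will not unsubscribe.
EventHandler _onAngleChanged;

void Subscribe() {
   _onAngleChanged = (s, e) => Debug.WriteLine(Tune.Angle);
   Tune.AddValueChanged(RotaryControl.RotaryControl.AngleProperty, _onAngleChanged);
}

void Unsubscribe() {
   Tune.RemoveValueChanged(RotaryControl.RotaryControl.AngleProperty, _onAngleChanged);
}
```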

Now we are done with some of my nifty helpers. And last, but not least:

The all-time question: how to scale ranges

Here is how :)

public static double ToRange(this double value, double minSource, double maxSource, double minTarget, double maxTarget) {
   var sr = maxSource - minSource;
   var tr = maxTarget - minTarget;
   var ratio = sr / tr;
   return minTarget + ((value - minSource) / ratio);
}
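For instance, mapping an angle of 90 from a source range of [0, 180] onto the DirectSound volume range [-4000, 0] lands exactly halfway through the target range:

```csharp
// sr = 180, tr = 4000, ratio = 0.045
// -4000 + 90 / 0.045 = -2000
double volume = 90.0.ToRange(0, 180, -4000, 0);
```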

Now we can connect them and get something like this:

Volume.AddValueChanged(RotaryControl.RotaryControl.AngleProperty, (s, ex) => {
   DirectSoundMethods.Volume = (int)Volume.Angle.ToRange(Volume.CounterClockwiseMostAngle, Volume.ClockwiseMostAngle, -4000, 0);
});

Isn’t it brilliant?

Have a good day and be sure to read and rate my latest article on Code Project :) Be good people.

Source code for Silverlight 2 controls

Too much exciting news today. Shortly after the announcement of the Windows 7 beta download, I found that Joe Stegman, Seema Ramchandani, Andre Michaud, Jon Sheller, and other guys from the Silverlight team released the source code of the managed Silverlight controls included in System.Windows.dll, System.Windows.Controls.dll, and System.Windows.Controls.Data.dll. Get it; you have a lot to learn from this package.


Download Silverlight 2.0 controls source code >>
