
Quick IT tip: How to build bootable USB stick

Because of my main job and the lack of human resources there, I invest less and less in the community, and as a result I lost my MVP title. Sorry, guys. A ton of management tasks at a big company also keeps me away from actual coding. However, I can still find some time for “real” things, such as Windows Embedded Standard 2011 image building. So today I will explain how to build a bootable USB flash drive with a couple of simple commands and without using any special utilities.

 The world’s first USB drive, by Trek Technology

Why use a bootable USB drive instead of a regular CD or DVD? Well, it is more convenient, takes less space, is faster, and is fully reusable. So let’s start.

1. Insert USB flash drive :)
2. Run the command prompt as Administrator (just in case: the keyboard shortcut for “Run as Administrator” from the Start menu search box is Ctrl+Shift+Enter)
3. Type “diskpart” to run the Microsoft DiskPart utility.


Microsoft DiskPart version 6.1.7600
Copyright (C) 1999-2008 Microsoft Corporation.
On computer: TAMIRK-DEV

4. List your disks by typing “list disk” or, for those who like it shorter (like me), “lis dis”

DISKPART> lis dis

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          149 GB  1024 KB
  Disk 1    Online           75 GB     2 GB
  Disk 2    Online         3814 MB      0 B
  Disk 3    No Media           0 B      0 B
  Disk 4    No Media           0 B      0 B
  Disk 5    Online           14 GB      0 B

5. Identify your flash drive (in my case it is Disk 5).
6. Select this drive for further work by using the “select disk 5” or “sel dis 5” command

DISKPART> sel dis 5

Disk 5 is now the selected disk.

7. Clean it (this will delete everything on your disk drive, so be careful) by using “clean” or “cle” command.


DiskPart succeeded in cleaning the disk.

8. Create a primary partition – “create partition primary” or “cre par pri”

DISKPART> cre par pri

DiskPart succeeded in creating the specified partition.

9. Select the new partition – “select partition 1” or “sel par 1”

DISKPART> sel par 1

Partition 1 is now the selected partition.

10. Mark it as the active partition – “active” or “act”


DiskPart marked the current partition as active.

11. Format it – “format fs=ntfs quick” or “for fs=ntfs quick”

DISKPART> for fs=ntfs quick

  100 percent completed

DiskPart successfully formatted the volume.

12. And finally my favorite command – “assign” or “ass” – to assign a drive letter and create the mount point


DiskPart successfully assigned the drive letter or mount point.

13. Exit – “exit” or “exi” – to return to the command shell


Leaving DiskPart…

Now your thumb drive is ready and bootable, so you can start copying the files from your ISO image onto it.
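For repeat use, the whole procedure can also be collected into a DiskPart script and run non-interactively. This is just a sketch of the steps above: the disk number (5 here) is whatever “lis dis” reported for your flash drive on your machine, and “clean” wipes that disk, so double-check the number first.

```
rem usb.txt - run from an elevated prompt with: diskpart /s usb.txt
select disk 5
clean
create partition primary
select partition 1
active
format fs=ntfs quick
assign
exit
```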

Another option is to work with volumes rather than disks. The only difference is in steps 4-6: instead of “lis dis” use “lis vol”, and instead of “sel dis” use “sel vol”. This may be a more convenient way to work, because you can identify partitions by label as well as by size, rather than by size only.

DISKPART> lis vol

  Volume ###  Ltr  Label        Fs     Type        Size     Status     Info
  ----------  ---  -----------  -----  ----------  -------  ---------  --------
  Volume 0     E                       DVD-ROM         0 B  No Media
  Volume 1     G                       DVD-ROM         0 B  No Media
  Volume 2         System Rese  NTFS   Partition    100 MB  Healthy    System
  Volume 3     C                NTFS   Partition     68 GB  Healthy    Boot
  Volume 4     D   DATA         NTFS   Partition     80 GB  Healthy
  Volume 5     F   READYBOOST   FAT    Removable   3812 MB  Healthy
  Volume 6     H                       Removable       0 B  No Media
  Volume 7     I                       Removable       0 B  No Media
  Volume 8     K                NTFS   Removable     14 GB  Healthy

Once you have copied your image onto the disk, you can update the MBR by using a special utility called BootSect.exe, shipped with the WAIK. In our case (Windows 7 based embedded), you’ll have to update the master boot code to use BOOTMGR (Vista and up) rather than NTLDR (XP and down).
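Assuming the stick came up as drive K: (as in the volume listing above), the call would look like this; /nt60 writes the BOOTMGR-compatible boot code, while /nt52 would write the old NTLDR-style code:

```
bootsect /nt60 K:
```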


We’re done. Have a good day and be good people. Additional information from the USB core team at Microsoft can be found on their brand new blog (I hope it will be kept up to date).

And to finish, just so you know: here is how CDs are made, by Discovery Channel.

Real singleton approach in WPF application

One of the most common problems in WPF is memory and processor time consumption. Yes, WPF is a rather greedy framework, and it becomes even greedier when using unmanaged resources, such as memory-mapped files or interop images. To take care of this, you can implement the singleton pattern for the application and share a single unmanaged instance among different application resources. So today we’ll try to create one large in-memory dynamic bitmap and share it between different instances of WPF controls. Let’s start.

The Singleton

First of all, let’s create our single-instance source. The pattern is straightforward: create a class that implements INotifyPropertyChanged, give it a private constructor, and expose a static member that returns the single instance of the class.

public class MySingleton : INotifyPropertyChanged {

   private static MySingleton _instance;
   private BitmapSource _source;

   public event PropertyChangedEventHandler PropertyChanged;

   #region Properties
   public BitmapSource Source { get { return _source; } }
   public static MySingleton Instance {
      get {
         // lazily create the one and only instance
         if (_instance == default(MySingleton)) _instance = new MySingleton();
         return _instance;
      }
   }
   #endregion

   #region ctor
   private MySingleton() { _init(); }
   #endregion
}

Now we need to create this single instance of the class inside our XAML program. To do this, we have the handy x:Static markup extension:

    <x:StaticExtension Member="l:MySingleton.Instance" />

Now we need a way to do all the dirty work inside MySingleton and keep the classes using it as simple as possible. For this purpose we’ll register a class handler to catch all GotFocus routed events, check the target of the event, and rebind the single instance to the newly focused element. How do we do this? Simple as 1-2-3.

Create the class handler:

EventManager.RegisterClassHandler(typeof(FrameworkElement), FrameworkElement.GotFocusEvent, (RoutedEventHandler)_onAnotherItemFocused);

Check whether the selected and focused item is of the right type, and subscribe to its selection changes:

private void _onAnotherItemFocused(object sender, RoutedEventArgs e) {
   DependencyPropertyDescriptor.FromProperty(ListBoxItem.IsSelectedProperty, typeof(ListBoxItem))
      .AddValueChanged(sender, (s, ex) => { /* rebind here – the body is shown in the next snippet */ });
}

and reset the binding:

var item = s as ListBoxItem;
var img = item.Content as Image;
// detach the previously bound image, if it is a different one
if (_current != null && _current.Target is Image && _current.Target != img) {
   ((Image)_current.Target).ClearValue(Image.SourceProperty);
}
// bind the single shared source to the newly selected image
if (img != null) {
   _current = new WeakReference(img);
   img.SetBinding(Image.SourceProperty, _binding);
}

We’re almost done. Now a bit of grease to make the source bitmap shiny:

var count = (uint)(_w * _h * 4);
// allocate a shared memory section (0x04 = PAGE_READWRITE)
var section = CreateFileMapping(new IntPtr(-1), IntPtr.Zero, 0x04, 0, count, null);
// map it into our address space (0xF001F = FILE_MAP_ALL_ACCESS)
_map = MapViewOfFile(section, 0xF001F, 0, 0, count);
_source = Imaging.CreateBitmapSourceFromMemorySection(section, _w, _h, PixelFormats.Bgr32, (int)(_w * 4), 0) as InteropBitmap;
_binding = new Binding {
   Mode = BindingMode.OneWay,
   Source = _source
};
// repaint the shared bitmap on every frame
CompositionTarget.Rendering += (s, e) => { _invalidate(); };

private void _invalidate() {
   // pack the current gray level into a 32-bit Bgr32 pixel
   var color = ((uint)0xFF << 24) | ((uint)_pixel << 16) | ((uint)_pixel << 8) | (uint)_pixel;

   unsafe {
      uint* pBuffer = (uint*)_map;
      int _pxs = _w * _h;
      for (var i = 0; i < _pxs; i++) {
         pBuffer[i] = color;
      }
   }
}

And we’re done. The usage of this approach is very simple – there is no usage at all. Everything happens automagically inside the MySingleton class; all you need is to set the static data context and add images.

    <Button Click="_addAnother">Add another…</Button>
    <ListBox Name="target" />

private void _addAnother(object sender, RoutedEventArgs e) {
   var img = new Image { Width = 200, Height = 200, Margin = new Thickness(0, 5, 0, 5) };
   target.Items.Add(img);
   this.Height += 200;
}

To summarize: in this article we learned how to use a singleton as a data source for a XAML application, how to share it across WPF, how to hook routed events externally, and how to handle dependency property changes from outside the owner class. Have a nice day and be good people.

Source code for this article (21k) >>

To see it work, press the “Add another…” button a number of times and then start selecting the images used as listbox items. Pay attention to the working set of the application: because only one bitmap instance is in use, it does not grow.

INotifyPropertyChanged auto wiring or how to get rid of redundant code

For the last week, most of the WPF disciples have been discussing how to get rid of the hardcoded property name string inside INotifyPropertyChanged implementations while keeping automatic property syntax and working WPF binding. The thread was started by Karl Shifflett, who proposed an interesting method of using StackFrame for this task. During this thread other methods were proposed, including code snippets, R#, the Observer pattern, the Cinch framework, static reflection, weak references and others. I also proposed the method we’re using for our classes and promised to blog about it. So the topic today is how to use PostSharp to wire up an automatic implementation of the INotifyPropertyChanged interface based on automatic setters only.

My 5 ¢

So, I want my code to look like this:

public class AutoWiredSource {
   public double MyProperty { get; set; }
   public double MyOtherProperty { get; set; }
}

while still being notified about any change to any property, and still being able to bind to those properties.

<StackPanel DataContext="{Binding Source={StaticResource source}}">
    <Slider Value="{Binding Path=MyProperty}" />
    <Slider Value="{Binding Path=MyOtherProperty}" />
</StackPanel>

How do we achieve this? How do we make the compiler replace my code with the following?

private double _MyProperty;
public double MyProperty {
   get { return _MyProperty; }
   set {
      if (value != _MyProperty) {
         _MyProperty = value;
         OnPropertyChanged("MyProperty");
      }
   }
}

public event PropertyChangedEventHandler PropertyChanged;

internal void OnPropertyChanged(string propertyName) {
   if (string.IsNullOrEmpty(propertyName)) throw new ArgumentNullException("propertyName");

   var handler = PropertyChanged;
   if (handler != null) handler(this, new PropertyChangedEventArgs(propertyName));
}

Simple: use aspect-oriented programming to inject a set of instructions into the compiled assembly.

First of all, we have to build an attribute that will be used to mark classes requiring change tracking. This attribute is a combined (compound) aspect that bundles all the aspects used for change tracking. All we’re doing here is finding all the property setters and attaching an aspect to each:

[Serializable, DebuggerNonUserCode, AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class, AllowMultiple = false, Inherited = false),
MulticastAttributeUsage(MulticastTargets.Class, AllowMultiple = false, Inheritance = MulticastInheritance.None, AllowExternalAssemblies = true)]
public sealed class NotifyPropertyChangedAttribute : CompoundAspect {
   public int AspectPriority { get; set; }

   public override void ProvideAspects(object element, LaosReflectionAspectCollection collection) {
      Type targetType = (Type)element;
      // compose the INotifyPropertyChanged implementation into the class…
      collection.AddAspect(targetType, new PropertyChangedAspect { AspectPriority = AspectPriority });
      // …and put a boundary aspect on every public property setter
      foreach (var info in targetType.GetProperties(BindingFlags.Public | BindingFlags.Instance).Where(pi => pi.GetSetMethod() != null)) {
         collection.AddAspect(info.GetSetMethod(), new NotifyPropertyChangedAspect(info.Name) { AspectPriority = AspectPriority });
      }
   }
}

The next aspect is the change-tracking composition aspect, which is used to compose the implementation into the target class:

internal sealed class PropertyChangedAspect : CompositionAspect {
   public override object CreateImplementationObject(InstanceBoundLaosEventArgs eventArgs) {
      return new PropertyChangedImpl(eventArgs.Instance);
   }

   public override Type GetPublicInterface(Type containerType) {
      return typeof(INotifyPropertyChanged);
   }

   public override CompositionAspectOptions GetOptions() {
      return CompositionAspectOptions.GenerateImplementationAccessor;
   }
}

The next aspect is the most interesting one; we put it onto the method boundary for tracking. There are some highlights here. First, we do not want to fire the PropertyChanged event if the actual value did not change, so we handle the method both on entry (for the check) and on exit (for the notification).

internal sealed class NotifyPropertyChangedAspect : OnMethodBoundaryAspect {
   private readonly string _propertyName;

   public NotifyPropertyChangedAspect(string propertyName) {
      if (string.IsNullOrEmpty(propertyName)) throw new ArgumentNullException("propertyName");
      _propertyName = propertyName;
   }

   public override void OnEntry(MethodExecutionEventArgs eventArgs) {
      var targetType = eventArgs.Instance.GetType();
      var property = targetType.GetProperty(_propertyName);
      if (property == null) throw new AccessViolationException();
      var oldValue = property.GetValue(eventArgs.Instance, null);
      var newValue = eventArgs.GetReadOnlyArgumentArray()[0];
      // skip the setter (and the notification) when the value did not actually change;
      // note Equals rather than ==, because the boxed values are never reference-equal
      if (Equals(oldValue, newValue)) eventArgs.FlowBehavior = FlowBehavior.Return;
   }

   public override void OnSuccess(MethodExecutionEventArgs eventArgs) {
      var instance = eventArgs.Instance as IComposed<INotifyPropertyChanged>;
      var imp = instance.GetImplementation(eventArgs.InstanceCredentials) as PropertyChangedImpl;
      imp.OnPropertyChanged(_propertyName);
   }
}

We’re almost done. All that is left is to create the class that implements INotifyPropertyChanged, with an internal method for the aspect to call:

internal sealed class PropertyChangedImpl : INotifyPropertyChanged {
   private readonly object _instance;

   public PropertyChangedImpl(object instance) {
      if (instance == null) throw new ArgumentNullException("instance");
      _instance = instance;
   }

   public event PropertyChangedEventHandler PropertyChanged;

   internal void OnPropertyChanged(string propertyName) {
      if (string.IsNullOrEmpty(propertyName)) throw new ArgumentNullException("propertyName");

      var handler = PropertyChanged;
      if (handler != null) handler(_instance, new PropertyChangedEventArgs(propertyName));
   }
}

We’re done. The last thing is to reference the PostSharp Laos and Public assemblies and tell the compiler to use the PostSharp targets, inside your project file (*.csproj):

<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
<Import Project="PostSharp\PostSharp-1.5.targets" />

Now we’re done. We can use clean syntax like the following to make every property with a public setter traceable. The only disadvantage is that you’ll have to ship two PostSharp files with your project, but that is still much more convenient than manual change notification all over your project.

[NotifyPropertyChanged]
public class AutoWiredSource {
   public double MyProperty { get; set; }
}

Have a nice day and be good people. Also, try to think what other extremely useful things can be done with PostSharp (or any other aspect-oriented engine).

Source code for this article (1,225 KB)>>

TFS licensing model demystification or what should I buy for my company in order not to step on the licensing mine?

Microsoft loves cumbersome licensing models. This is not because of evil-heartedness, but because it makes it possible to get more from bigger companies and less from smaller ones. However, when it comes to the real decision about how many licenses, and of what kind, you have to purchase, you get stuck. Today we’ll try to make things clearer, at least for Team Foundation Server and Visual Studio, which are the very basic things for any software house that develops using Microsoft technologies.

Cumbersomeness of the TFS licensing model
© image for the “cumbersomeness” proposal via Willy-Peter Schaub of SA Architect

To make things even simpler, let’s assume that we do not need the TFS Workgroup edition (a special edition of TFS limited to 5 users) and that we are not using TFS Device CALs (as opposed to a User CAL, this Client Access License permits one device to be used by any number of users; this kind of CAL is good for kiosks rather than for development environments). Also, the Test Load Agent needs its own license. So now, under all those circumstances, let’s start.

To work with TFS we need:

  1. One or more Team Foundation Server licenses
  2. More than one Visual Studio client (editions can vary)
  3. Optionally, one or more Software Assurance agreements, which can be licensed separately or together with an MSDN subscription
  4. … and some other optional tools

TFS Licensing

Each instance of TFS needs its own license. Even if you have a mirrored deployment of TFS, you need a server license for each instance. You also need a separate license if you are running the TFS data tier on a SQL Server cluster or using TFS Proxy. I think it’s clear that in addition to the TFS license you’ll need Windows Server and SQL Server licenses (if the server is used especially for TFS). You can also put the data tier on an existing SQL Server; in this case you need only another TFS license, without SQL.

You do not need an additional Team Foundation Server license for the machine used for Team Foundation Build services. This machine also does not need another CAL, except the one used by the system account that initiates builds.

To summarize: each instance of TFS needs a server license, in addition to CALs and other server licenses (such as Windows, SQL, SharePoint, IIS, etc.).

Client Access License

In addition to the server license, you also need a CAL for each user that reads from and writes to TFS. Several Visual Studio editions include a CAL:

  • Visual Studio 2008 Team Suite
  • Visual Studio 2008 Architecture edition
  • Visual Studio 2008 Development edition
  • Visual Studio 2008 Test edition
  • Visual Studio 2008 Database edition

Visual Studio 2008 Professional does not include a CAL. So each contributor needs one of the Visual Studio editions that include a CAL. The TFS client can be installed alongside any of those editions and does not require an additional license.

You do not need additional license when you are using TFS for only:

  • Create work items, bugs, etc.
  • Query for work items
  • Update work items

In other words, product definition people, system analysts, managers, and “bug filers” do not require an additional CAL. Note that they will probably need proper Microsoft Office licenses to use Excel or Project for this; however, they can also use TFS Web Access (browser) or any other 3rd-party tool without purchasing a separate CAL.

Also, you need only one CAL per server product. In other words, if you are using TFS on Windows Server, you do not need both a TFS CAL and a Windows Server CAL. Those CALs also cover all earlier versions of the products in use.

To summarize: a TFS user does not need an additional CAL if he has a proper license for Visual Studio Team Suite (or another CAL-including edition), or uses TFS only for bug/issue tracking.

Software assurance vs. MSDN

MSDN is more expensive than SA (Software Assurance); however, it includes SA and provides extra benefits by allowing access to several Microsoft products for development and testing purposes.

There are two different MSDN editions – Professional and Premium. The difference between them (apart from price) is that the Premium edition includes Windows Server System products and Microsoft Office. Thus with the Professional edition you get Software Assurance for Visual Studio 2008 Professional, while with Premium you get it for all the other editions.

Let’s simulate the results

For a small software house with 10 developers (2 architects, 1 DBA, 3 QA and 4 plain developers), two product definition guys and a manager, we’ll need (in addition to OS, other server and Office licenses):

  • 1 TFS license
  • 2 Visual Studio 2008 Architecture edition
  • 1 Visual Studio 2008 Database edition
  • 3 Visual Studio 2008 Test edition
  • 4 Visual Studio 2008 Development edition
  • n MSDN Premium licenses, where 1 < n < 10 (n being the number of employees who need one for testing or development purposes)
  • 10-n SA licenses (if SA is required)
  • An additional CAL for the build machine

I think things are now a bit clearer. For additional information regarding the TFS licensing model, please refer to the Visual Studio Team System 2008 Licensing White Paper or ask your local licensing expert at Microsoft.

“Zone of Pain vs. Zone of Uselessness” or code analysis with NDepend

As for me, there are only two kinds of projects – hobbyists’ nifty tools, and systems (the scale may vary). The main difference between them is the ease of making changes and refactoring. In other words, how many other developers I have to persuade to do it, just because “In the final analysis, it’s their war” – JFK. But what can be a good reason for such fast talk? – “Your code sucks, or at least it ought to”.

“Every generation needs a new revolution” – Thomas Jefferson

So, in order to win such a revolution for “systems”, you absolutely need static analysis tools like NDepend. Those tools are not intended to be your advocates; they are intended to help you understand all the risks and approximate the amount of work that should be done to pull off another revolution.

Unfortunately, you cannot use such tools for a fair measurement of code quality, because the thresholds involved are just computer science rules of thumb: how do you decide that a “method is too big” or a “method is too complex”? However, you can (and should) use them for dependency risk detection. For example, in the following illustration you can clearly see that any change inside the BetterPlace.Core or BetterPlace.Model assemblies (and namespaces) can be painful.

Dependencies diagram

Now the only question is who is responsible for those modules, who uses them, and how long it will take to convince them to make a revolution.

 Dependency Matrix

From here you can start using CQL (Code Query Language), a SQL-like language invented by Patrick, the author of NDepend, for querying code elements and sections. By using it, you can define what “method is too big” means in terms of your project.
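As a sketch (NbLinesOfCode is one of NDepend’s built-in metrics; the threshold of 30 lines is just my illustration, not a universal truth), a CQL rule flagging oversized methods can look like this:

```
WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 30
```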


Or see where you could replace method overloads with the default (optional) arguments introduced in C# 4.0.
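To illustrate (the Log method below is a hypothetical example of mine, not code from the analyzed project), such a finding suggests collapsing an overload chain into one method with a default argument:

```csharp
// before: two overloads, one existing only to forward a default value
public void Log(string message) { Log(message, TraceLevel.Info); }
public void Log(string message, TraceLevel level) { /* write the entry */ }

// after (C# 4.0): a single method with a default argument
public void Log(string message, TraceLevel level = TraceLevel.Info) { /* write the entry */ }
```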


Then, when you have marked all the places and human targets for the revolution, you can start it. After you are done, you can even compare builds and measure the amount of work and the quality of the results achieved.

Application metrics

To finalize: I have just touched the tip of what a good static analysis tool can be used for. So get it, learn it, and use it not only when you need to make a revolution, but also during your application design and build process, to be aware of what the newly created monster will look like.

Download NDepend >>

Proper disclosure: on Apr 15, Patrick, the author of NDepend, asked me to review his tool and offered one license for evaluation. I told him that I did not need to evaluate it because I have been using it for a while (I also had a license of my own), and that I would be happy to write a review once I had a bit of time for it. Now that has happened. Thank you, Patrick, for such good work!

How to pass a technical interview at Better Place

If you follow me on Twitter, you probably know that we’re looking for developers. However, those who have interviewed here come away a little bit shocked: it is not a very standard interview for a software house. So how does it work?

Step zero – requirements and open positions

Even before somebody sends us his/her resume, there are requirements and open positions. Those requirements are very straightforward: “we need a developer for servers”, or “we need somebody to build UI”, or “we need a GIS guy”, etc. All open positions (at least for Israel) can always be found on our local website. These days we have two open positions on our team: super-fantastic developer and chronic headache reliever.

Tal, our real-time software integration engineer, who is learning Hebrew and looking for a new girlfriend

Now a bit about candidates. I have hired some developers during my career, and it has always worked the following way: there are only 10 kinds of developers – developers and non-developers. It never worked for me to hire somebody just because I had to hire somebody. It also never worked to hire a mythological “average” or “rather good” developer. So for me the segmentation is very binary: can write code, or cannot.

Step one – CV

Somebody told me that the only chance to hire a good developer via a CV is when the developer is a student. Nevertheless, the very first station is the HR department; they get a ton of CVs each day and filter them out using strange and mysterious heuristics. In most cases this works like a bad search engine – by keywords. So here is my very basic request to our HR team: please do not filter this way, because I work on those CVs like an SEO filter, lowering the rating for “multi-threaded Design Patterns”, “.NET 1,1.1,1.2,1.3, and dot.Net Remounting for Agile Ajax(tm) Web Services”, “deep knowledge and application of MFC, GUI, WinForms, C++, C#, HASP, SQL, ADO, OOP, ODP, IRC, TCP, HDTV, AX, DX, COM, VC++, API” and “Windows 3.11, 95, ME, 98, 98 SE, 2000, NT 3, NT 4.2, 2001, XP, XP SP1, SP2, SP3, 2008, 2010, Vista SP3 and DOS”.

Also, a candidate who lists “management responsibility for the 26 software engineers (6 teams) located at India (Bangalore). Resources allocation for $1M budget and restructuring planning based on defined priorities” probably does not want to be a developer, but a finance controller, because he was able to make 26 developers (even in India) work for $38K a year each, including materials, living, facilities, etc.

So a small tip for your CV: please write what you actually did (yes, it is no shame if you have never heard about OOP/ODP but know how and when a method should be virtual and when abstract). This way you won’t waste your own and your future employer’s time.

Step two – Phone call

If the CV is readable and clear to me, I call the person. Usually it takes under 5 minutes to decide whether or not to invite him/her to a frontal interview; however, sometimes it can take even half an hour. I do not ask everyone the same questions, because it is impossible to ask the same things of somebody who writes smart client applications and of somebody from the web planet. I almost never ask a candidate to repeat what he/she just told me; the only exception is when I do not understand the answer. When the candidate asks relevant questions, it is a big plus for him.

The general set of questions consists of three main sections: what exactly you did (even if it is clear from the CV), what you are looking for, and whether you want to work at Better Place.

The phone call ends in one of four directions: an invitation to a frontal interview (jump to step 3.1), an invitation to a technical exam (jump to step 3.2), homework to prepare (a non-blocking wait of a year or two), or “we’ll be in touch” – transferring the candidate, marked “negative”, to HR (end of the process). They know how to tell them.

Step three point one – Frontal interview

The frontal interview usually takes between an hour and an hour and a half. It consists of general CS questions like those, plus questions to understand whether the candidate is a developer or a “code monkey” – one who does not know the difference between a scalar and a vector, or how to convert float to int faster than by using “corporate frameworks developed by another team”. The rule of thumb is simple: if a question takes more than 10 seconds to answer, the candidate does not really know the topic. In spite of this, I allow about a minute for each answer.

The next section is related to the specific domain of the candidate. It might be an architectural overview of one of his/her solutions, or everything he/she knows about inheritance (why, how, multiple, virtual, public, etc.). Constructors/destructors/templates/generics/allocations/virtual inheritance is the next part. What “new” is and why we need it. How to clean up after yourself and how it works in real life (even if you thought it should be done automagically). A special bonus for those who know what interrupts and registers are and how things really work under the hood.

If the candidate is good enough, we keep going toward data structures. This includes linear/random access, hashes, trees (in different colors), search vs. seek vs. lookup, with a yield return bonus. What is special about “string”. Lazy invocation vs. evaluation, alloc/malloc (if relevant), lifetime.

If the candidate is especially good, we can start speaking the target language (sometimes it’s assembler, and sometimes it’s C# or even Python [I can speak some :) ]).

Step three point two – Technical exam

Sometimes, when it is not very clear whether the candidate can program, I ask him/her to write a small program from scratch without using standard libraries. For less senior developer positions I ask the candidate to find a number of bugs in real code (which includes deliberately planted bugs :) ). The code is different for different levels of programmers. In most cases the candidate can use any computer language he/she wants.

Preparing such an exam is hard work. First of all (and before it ever reaches candidates), we send it to all our developers in similar positions and ask them to do it. Then we measure the time and multiply it by 2. We also write down all the questions asked by the developers, and fix the description to answer the common ones.

The candidate gets a new, clean machine and an empty, silent room with air conditioning, and can use any information source he/she wants in order to do the exam: the internet, friends, books, etc. However, it is not as simple as it looks. Each candidate has a fixed time to write the program, and it takes longer to find a “similar” solution and adapt it than to do it yourself.

We allow three hours to finish the current exam (it took me about an hour to finish, and an hour and a half on average across the company). If the candidate finishes early, he/she gets a bonus point. If he/she wrote “almost the same code” – failed; “almost finished” or “almost works” – failed; “not enough time” – failed; two significant bugs – failed; “the question was not clear” – failed; “did not know whom to ask” – failed; etc.

It’s amazing how hard it is for some code monkeys not to use standard libraries. Sometimes it is too hard for bad programmers to understand that not all computer languages have arrays and collections; sometimes it’s too hard to realize that if you do not really understand what inheritance is, you have to write the same procedure more than once.

But the exam alone does not determine the future of the candidate. Sometimes (in very rare cases) we are wrong with this test and fail to filter out a bad programmer. It is very hard to lay him off later, and because of such a person you will not finish the project on time, or will go over budget. Thus we prefer not to hire rather than to fire. This is why we have an additional step.

Step three point three – Another interview

It is very important to emphasize that only about 0.05% of those who were at stage three point one make it here. This interview can take up to a full day, and you should come prepared. Plan and watch each of your steps. Be ready to speak with a number of people near flip charts, whiteboards, calculators, computers and other hardware, and be ready to defend your opinion.

Most of the people you’ll speak with will be very smart (not only your direct manager or future colleagues). Sometimes you’ll speak with two or three people simultaneously, each with his own very particular opinion. The questions may vary from basic calculations to mechanical and electronic schematics, or even the specifications of your car.

You can take a timeout; however, keep in mind that the people speaking with you are sacrificing their time and their projects, so use it considerately.

Step four – final

If you passed all three previous steps, you can consider yourself our future employee. Only once did we fail to hire a developer, and that was because of a financial/welfare issue. This reflects our stance on good developers: if you do not hire them, you and the whole company lose, not the developer. Each of them will find his or her place very soon – even a good developer with no CV at all, a student, or a foreign resident. The only way for a company to produce good results is to have good people.

Send me your CV or contact me at tamir [AT] to tell me that you are the one I’m looking for >>

Please do not send me your CV if it passes the regex of the first step – just send a message listing what you have done for the last 7 years.

Windows 7 – dry run or how things should be done to correct old mistakes

I have not written for a while (if you follow me on Twitter, you know why). Even so, today’s post will not be very informative. It is all about my impression of the latest builds of Windows 7, plus one job proposal. Have fun.

On January 13th, I expressed a rather bad opinion of Windows 7 (then in beta). Today, after most of the post-RC builds (currently 7260) on my work machine, I can say with great confidence – Microsoft learned from the beta errors, and it now works almost like an RTM should.



Installation takes less and less time from build to build (this is the 7th I’ve checked). With 7260 it took about 15 minutes. All hardware devices (including Intel AMT, PM45 and LM5) were found and installed correctly. Shortly after the installation it pulled in a bunch of security and device driver updates. It looks like Microsoft has no issues with Intel anymore (or they just decided to build the drivers themselves).

Hybrid graphics cards are still not supported, and it does not seem they will be before RTW. However, once the BIOS settings were set to prioritize it, Windows 7 correctly decided to use the discrete card rather than the on-board one.


The new taskbar still sucks, but you will get accustomed to it. At first it looks like it takes up all the valuable space on your desktop, but soon you’ll find it somehow comfortable to use (it really depends on how you work. Lately I changed the way I do things a bit [this is why you cannot see me in Live Messenger anymore], so it became rather good for me). Here is how it looks for me now.

My taskbar – in battery saver mode, of course

Yes, that is Chrome on the bar, and here is why:

‘Coz Internet Explorer becomes worse and worse

The only good thing I found about the IE8 shipped with W7 is its support for the Windows 7 taskbar. However, even this cannot defend it against its suckiness. It is slow, buggy, lacks functionality and is absolutely annoying. I love Firefox, but it has too much functionality for me these days: once I open it, I lose at least half an hour on twittering, reading RSS, etc. With Google Chrome this is not an issue, because it is a nimble program with only one function – browsing internet pages.

My last BSoD

Since last time, I have had no BSoDs. WDM is not eating 999.9 CPU hours anymore (like it did in an idle Vista), and I have had no compatibility issues – everything worked as expected on my machine. The only problem I had was IE, which decided not to die and got stuck as a running windowless process. You know how it is when an icon on the taskbar stops doing anything and just flashes red when you click on it.


Do it. Upgrade your OS as soon as possible and have fun.


We’re hiring! (Israel residents only)

Lead Software Development Engineer in Test

We are looking for a strong Software Development Engineer in Test who is passionate about testing rich client applications through both the UI and internal APIs. Responsibilities include developing test strategies, writing unit tests, UI automation and custom MSBuild scripts, performing problem isolation and analysis, and communicating with developers in support of debugging and investigation.

My group takes both individual and team success seriously, and is looking for the right person to join our team – a development team rather than a test team. What is my group doing?

The AutOS group is responsible for delivering the system you’ll have inside your next electric vehicle – one of the most important applications we have at Better Place for providing a consistent, transparent and fluent experience for the driver every day. Currently the application handles energy management, navigation, infotainment, road safety and many other aspects of enhancing your future driving. Come explore the exciting opportunities on the AutOS team, developing cutting-edge tools facing all customers of all Better Place EVs. Become a member of an outstanding team that strives for engineering excellence in improving the lives of all of us. The AutOS team uses the latest technologies and innovations to ensure delivery of the best possible experience for the driver.


Solid programming ability in managed programming using the .NET Framework (C#) with some experience in WPF or Silverlight.
Solid technical knowledge in Information Technology field, including hardware capabilities and performance evaluation and tests.
Strong problem solving and troubleshooting skills.
Knowledge of Team Foundation Server and MSBUILD scripting.
2+ years of experience in software testing, including designing and developing automation infrastructure.
Strong test aptitude.
Good communication skills and ability to work closely in a development team environment.
BA/BS or MS degree in Computer Science or equivalent field experience is required.

Think you want such a job? Send me your CV, a couple of words about yourself, and why you want to and are able to work with me at Better Place, with “Lead SDET application” in the subject. (If you do not get an answer from me within a week, your mail is in spam, so resend it using the contact form here.)

Have a nice day and be good people.

Visual Studio debugger related attributes cheat sheet

There are several debugger-oriented attributes in .NET; however, 70% of developers do not even know they exist, and 95% of those have no idea what they do and how to use them. Today we’ll try to shed some light on what those attributes do and how to get the most out of them.

First of all, let’s define what we want to get from the debugger in VS.

Term	What it actually does
Step Into	Steps into the immediate child (that is what F11 does in the standard VS layout)
Step Over	Skips calls at any depth (that is what F10 does)
Step Deeper	Steps into, bypassing code marked with a certain attribute
Run Through	Steps into, but only one level; all lower levels will be Stepped Over

Now that we have our set of terms, we can learn what JMC means. It is not a famous whisky brand or another car company – it is the Just My Code option, checked or unchecked in the “Options” dialog inside Visual Studio.


Next come the attributes. There are four attributes (that I know about) related to the debugger that I use for efficient programming: DebuggerHidden, DebuggerNonUserCode, DebuggerStepThrough and DebuggerStepperBoundary. We will use only the first three. DebuggerStepperBoundary is the most obscure one, relevant only when debugging in a multithreaded environment. It is used to avoid the misleading effect that may appear when a context switch occurs within code marked with DebuggerNonUserCode – in other words, when you need to Step Through in thread A while keeping thread B running at the same time.

So let’s see what happens when you use those debugger attributes and try to Step Into a place where the attribute is applied, or set a breakpoint there. When Just My Code (JMC) is checked, all those attributes behave the same – they Step Deeper. However, when JMC is turned off (as in my picture), they begin to behave differently.

Attribute Step Into Breakpoint
DebuggerHidden Step Deeper Step Deeper
DebuggerNonUserCode Step Into Step Into
DebuggerStepThrough Step Deeper Step Into

As you can see, in this case

  • DebuggerNonUserCode is respected both for F11 (Step Into) and for breakpoints
  • DebuggerStepThrough is respected only for breakpoints
  • DebuggerHidden is not respected at all – just as when JMC is checked.

Bottom line: if you want people to choose whether or not to enter your hidden methods – use the DebuggerNonUserCode attribute. If you prefer them not to even know those methods exist, use DebuggerHidden. If you want them to be able to put breakpoints in and stop on them, but otherwise keep running without explicit action – use DebuggerStepThrough.
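To illustrate, here is a minimal sketch (the method and class names are mine, not from the post) with each attribute applied; set a breakpoint inside each method, toggle JMC, and press F11 on the calls to observe the behaviors from the table:

```csharp
using System;
using System.Diagnostics;

static class Helpers
{
    // With JMC off, F11 still steps in here and breakpoints are hit.
    [DebuggerNonUserCode]
    public static int Twice(int x) => x * 2;

    // F11 steps over this method, but a breakpoint inside still stops.
    [DebuggerStepThrough]
    public static int Square(int x) => x * x;

    // The debugger never stops here, even on a breakpoint.
    [DebuggerHidden]
    public static int Negate(int x) => -x;
}

static class Program
{
    static void Main()
    {
        Console.WriteLine(Helpers.Twice(21));   // 42
        Console.WriteLine(Helpers.Square(5));   // 25
        Console.WriteLine(Helpers.Negate(7));   // -7
    }
}
```

The attributes change only how the debugger treats the methods; at runtime the code behaves exactly as if they were absent.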

Have a nice day and be good people. Happy other-developer-friendly debugging.

Small bonus: to customize how your struct, class, delegate, enum, field, property or even assembly appears in the debugger, you can use the DebuggerDisplay attribute (you can put executable expressions inside {}, for example [DebuggerDisplay("Value = {X}:{Y}")]).
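A minimal sketch of the bonus attribute (the Point type below is my own illustration, not from the post):

```csharp
using System.Diagnostics;

// In debugger windows (Locals, Watch, DataTips) a Point instance
// shows as: Value = 3:4 instead of the default type name.
[DebuggerDisplay("Value = {X}:{Y}")]
public struct Point
{
    public int X { get; set; }
    public int Y { get; set; }

    // Mirrors the debugger display for regular code as well.
    public override string ToString() => $"Value = {X}:{Y}";
}
```

The expressions inside {} are evaluated by the debugger against the instance, so keep them cheap and side-effect free.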

Thanks to Boris for the deep investigation.

How to calculate CRC in C#?

First of all, I want to beg your pardon for the post frequency lately. I am completely understaffed and have a ton of things to do at my job. That is why today I’ll just write a quick post about checksum calculation in C#. It might be very useful for anyone working with devices or external systems.

BIOS CRC error on an old ThinkPad

CRC – Cyclic Redundancy Check – is an algorithm widely used in communication protocols and in packing and packaging algorithms to assure the robustness of data. The idea behind it is simple: calculate a unique checksum (frame check sequence) for each data frame, based on its content, and stick it at the end of each meaningful message. Once the data is received, it is possible to perform the same calculation and compare the results – if they match, the message is OK.

The two most common kinds of CRC are 16 and 32 bit; there are also less-used 8 and 64 bit checksums. The calculation amounts to appending a string of zeros to the frame, equal in number to the checksum bits, and performing modulo-2 division by a generator polynomial containing one more bit than the checksum to be generated. In practice this reduces to bit-wise XOR and shift operations over the frame, and the remainder is our CRC.
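The shift-and-XOR division can be sketched bit by bit before introducing tables. Here is a minimal standalone CRC-16 (names mine, using the reflected 0x8408 polynomial mentioned later in the post, zero initial value, no final XOR):

```csharp
using System;
using System.Text;

static class Crc16Bitwise
{
    // Bit-by-bit modulo-2 division: for each input byte, shift the
    // register right and XOR in the polynomial whenever the low bit is set.
    public static ushort Compute(byte[] data, ushort polynomial = 0x8408)
    {
        ushort crc = 0;
        foreach (byte b in data)
        {
            crc ^= b;
            for (int i = 0; i < 8; i++)
            {
                if ((crc & 1) != 0)
                    crc = (ushort)((crc >> 1) ^ polynomial);
                else
                    crc >>= 1;
            }
        }
        return crc;
    }
}
```

With these parameters the well-known check value of the ASCII string "123456789" is 0x2189; the table-driven code below produces the same results, just faster.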

In most implementations, the polynomial is first used to build a CRC lookup table, which is then applied for performance reasons. The classic polynomials are 0x1021 (CCITT) for 16 bits and 0x04C11DB7 (defined by IEEE 802.3) for 32 bits. Since we process bytes least-significant-bit first, we use their reflected versions: 0x8408 for 16 bits and 0xEDB88320 for 32 bits. Those are the polynomials we are going to use in our sample.

So let’s start. Because a CRC is a hash algorithm after all, we can derive our classes from the System.Security.Cryptography.HashAlgorithm class.

public class CRC16 : HashAlgorithm {
public class CRC32 : HashAlgorithm {

Then, upon first creation, we generate lookup tables of CRC values to improve future performance. It is just a table of values for the bytes from 0 to 255, so we calculate it only once and then cache it statically.

public CRC16(ushort polynomial) {
    HashSizeValue = 16;
    _crc16Table = (ushort[])_crc16TablesCache[polynomial];
    if (_crc16Table == null) {
        _crc16Table = CRC16._buildCRC16Table(polynomial);
        _crc16TablesCache.Add(polynomial, _crc16Table);
    }
}

public CRC32(uint polynomial) {
    HashSizeValue = 32;
    _crc32Table = (uint[])_crc32TablesCache[polynomial];
    if (_crc32Table == null) {
        _crc32Table = CRC32._buildCRC32Table(polynomial);
        _crc32TablesCache.Add(polynomial, _crc32Table);
    }
}

Then let’s calculate the table values:

private static ushort[] _buildCRC16Table(ushort polynomial) {
    // 256 values, one for each possible byte.
    ushort[] table = new ushort[256];
    for (ushort i = 0; i < table.Length; i++) {
        ushort value = 0;
        ushort temp = i;
        for (byte j = 0; j < 8; j++) {
            if (((value ^ temp) & 0x0001) != 0) {
                value = (ushort)((value >> 1) ^ polynomial);
            } else {
                value >>= 1;
            }
            temp >>= 1;
        }
        table[i] = value;
    }
    return table;
}

private static uint[] _buildCRC32Table(uint polynomial) {
    uint crc;
    uint[] table = new uint[256];

    // 256 values, one for each possible byte.
    for (int i = 0; i < 256; i++) {
        crc = (uint)i;
        for (int j = 8; j > 0; j--) {
            if ((crc & 1) == 1)
                crc = (crc >> 1) ^ polynomial;
            else
                crc >>= 1;
        }
        table[i] = crc;
    }

    return table;
}

For 32 bits, the resulting table values will look like this:

        0x00, 0x31, 0x62, 0x53, 0xC4, 0xF5, 0xA6, 0x97,
        0xB9, 0x88, 0xDB, 0xEA, 0x7D, 0x4C, 0x1F, 0x2E,
        0x43, 0x72, 0x21, 0x10, 0x87, 0xB6, 0xE5, 0xD4,
        0xFA, 0xCB, 0x98, 0xA9, 0x3E, 0x0F, 0x5C, 0x6D,
        0x86, 0xB7, 0xE4, 0xD5, 0x42, 0x73, 0x20, 0x11,
        0x3F, 0x0E, 0x5D, 0x6C, 0xFB, 0xCA, 0x99, 0xA8,
        0xC5, 0xF4, 0xA7, 0x96, 0x01, 0x30, 0x63, 0x52,
        0x7C, 0x4D, 0x1E, 0x2F, 0xB8, 0x89, 0xDA, 0xEB,
        0x3D, 0x0C, 0x5F, 0x6E, 0xF9, 0xC8, 0x9B, 0xAA,
        0x84, 0xB5, 0xE6, 0xD7, 0x40, 0x71, 0x22, 0x13,
        0x7E, 0x4F, 0x1C, 0x2D, 0xBA, 0x8B, 0xD8, 0xE9,
        0xC7, 0xF6, 0xA5, 0x94, 0x03, 0x32, 0x61, 0x50,
        0xBB, 0x8A, 0xD9, 0xE8, 0x7F, 0x4E, 0x1D, 0x2C,
        0x02, 0x33, 0x60, 0x51, 0xC6, 0xF7, 0xA4, 0x95,
        0xF8, 0xC9, 0x9A, 0xAB, 0x3C, 0x0D, 0x5E, 0x6F,
        0x41, 0x70, 0x23, 0x12, 0x85, 0xB4, 0xE7, 0xD6,
        0x7A, 0x4B, 0x18, 0x29, 0xBE, 0x8F, 0xDC, 0xED,
        0xC3, 0xF2, 0xA1, 0x90, 0x07, 0x36, 0x65, 0x54,
        0x39, 0x08, 0x5B, 0x6A, 0xFD, 0xCC, 0x9F, 0xAE,
        0x80, 0xB1, 0xE2, 0xD3, 0x44, 0x75, 0x26, 0x17,
        0xFC, 0xCD, 0x9E, 0xAF, 0x38, 0x09, 0x5A, 0x6B,
        0x45, 0x74, 0x27, 0x16, 0x81, 0xB0, 0xE3, 0xD2,
        0xBF, 0x8E, 0xDD, 0xEC, 0x7B, 0x4A, 0x19, 0x28,
        0x06, 0x37, 0x64, 0x55, 0xC2, 0xF3, 0xA0, 0x91,
        0x47, 0x76, 0x25, 0x14, 0x83, 0xB2, 0xE1, 0xD0,
        0xFE, 0xCF, 0x9C, 0xAD, 0x3A, 0x0B, 0x58, 0x69,
        0x04, 0x35, 0x66, 0x57, 0xC0, 0xF1, 0xA2, 0x93,
        0xBD, 0x8C, 0xDF, 0xEE, 0x79, 0x48, 0x1B, 0x2A,
        0xC1, 0xF0, 0xA3, 0x92, 0x05, 0x34, 0x67, 0x56,
        0x78, 0x49, 0x1A, 0x2B, 0xBC, 0x8D, 0xDE, 0xEF,
        0x82, 0xB3, 0xE0, 0xD1, 0x46, 0x77, 0x24, 0x15,
        0x3B, 0x0A, 0x59, 0x68, 0xFF, 0xCE, 0x9D, 0xAC

Now, all we have to do upon each request is look up the related value in this table and XOR it:

protected override void HashCore(byte[] buffer, int offset, int count) {
    for (int i = offset; i < offset + count; i++) {
        ulong ptr = (_crc & 0xFF) ^ buffer[i];
        _crc >>= 8;
        _crc ^= _crc32Table[ptr];
    }
}

new public byte[] ComputeHash(Stream inputStream) {
    byte[] buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = inputStream.Read(buffer, 0, 4096)) > 0) {
        HashCore(buffer, 0, bytesRead);
    }
    return HashFinal();
}

protected override byte[] HashFinal() {
    byte[] finalHash = new byte[4];
    ulong finalCRC = _crc ^ _allOnes;

    finalHash[0] = (byte)((finalCRC >> 0) & 0xFF);
    finalHash[1] = (byte)((finalCRC >> 8) & 0xFF);
    finalHash[2] = (byte)((finalCRC >> 16) & 0xFF);
    finalHash[3] = (byte)((finalCRC >> 24) & 0xFF);

    return finalHash;
}

We are done. Have a good time and be good people. Also, I want to thank Boris for helping me with this article. He promised to write here some day…
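For a quick sanity check, here is a compact, self-contained table-driven CRC-32 in the same spirit as the classes above (the Crc32Demo name is mine, not from the article). With the reflected 0xEDB88320 polynomial, an all-ones initial register and a final inversion, the well-known check value of the ASCII string "123456789" is 0xCBF43926:

```csharp
using System;
using System.Text;

static class Crc32Demo
{
    static readonly uint[] Table = BuildTable(0xEDB88320);

    // Precompute the 256-entry lookup table once.
    static uint[] BuildTable(uint polynomial)
    {
        var table = new uint[256];
        for (uint i = 0; i < 256; i++)
        {
            uint crc = i;
            for (int j = 0; j < 8; j++)
                crc = (crc & 1) == 1 ? (crc >> 1) ^ polynomial : crc >> 1;
            table[i] = crc;
        }
        return table;
    }

    public static uint Compute(byte[] data)
    {
        uint crc = 0xFFFFFFFF;                 // start with all ones
        foreach (byte b in data)
            crc = (crc >> 8) ^ Table[(crc ^ b) & 0xFF];
        return crc ^ 0xFFFFFFFF;               // final inversion
    }
}

// Crc32Demo.Compute(Encoding.ASCII.GetBytes("123456789")) == 0xCBF43926
```

If the classes from this post produce the same value for the same input, the table generation and the byte loop are wired up correctly.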

Source code for this article

Book review: C# 2008 and 2005 Threaded Programming

A couple of weeks ago, Packt Publishing asked me to review Gastón C. Hillar’s book “C# 2008 and 2005 Threaded Programming: Beginner’s Guide”. They sent me a copy, and today I’m ready to write a review. But before I start, I want to apologize to the publisher and the author for the impartial review.


First of all, you should understand that this book is about how it is possible (for its author) to write four programs (with awful user interfaces) using different classes from the System.Threading namespace, rather than about what multithreaded programming is and how to achieve the best performance by utilizing multiple CPUs. Your own programs will not run faster after reading this book, but you’ll probably learn (if you did not know before) how to use the , , , and classes. There is also a small chapter about thread context switching for UI thread delegate invocation, and about the parallel extensions.

There are some technical misconceptions and errors in this book, but they are not its major problem. The problem is that while reading it I kept asking myself whom this book is aimed at. The language style is somewhere between blog chatter (better than mine) and MSDN-style documentation. I admit I don’t quite know how to categorize it; the author writes in a style that is just bizarre (even more bizarre than mine in this blog :) ). Overall, it sounds like a conversation between two beginner-level programmers trying to explain to each other why they use a certain coding convention in C#.

Another half of this 395-page book is just copy-pasted stuff from Visual Studio (including its default tabulation and indentation). Here is one representative example of such copy/paste:

// Disable the Start button
butStart.Enabled = false;
// Enable the Start button
butStart.Enabled = true;

// Some very useful property, which is used as a private member for another public property
private int priVeryUserfulProperty;

public int VeryUserfulProperty
{
    get { return priVeryUserfulProperty; }
    set { priVeryUserfulProperty = value; }
}

Verdict: a not very exemplary introduction to some classes inside the System.Threading namespace, for fellow students who prefer reading blogs to books and documentation, and who do not want to understand how things work under the hood, but just want to write something and forget it.

3- out of 5 on my scale. This book is not all bad, but it is apparently suitable for a very specific audience, which definitely excludes me.




