Behaviour Driven Development (BDD) & tools for .Net

I had read about behaviour driven development (BDD) quite some time back. However, it was more at a conceptual stage then, so I was looking forward to some new and actionable information on it.

For those who are new to BDD, here are a few links to get up to speed.

  1. BDD Wiki Page
  2. BehaviourDriven.Org home page
  3. Dan North’s article on introduction to BDD

What prompted me to write this post was a recent tweet by @ScottGu pointing to an article by Rajesh Pillai (BDD using SpecFlow on ASP.NET MVC Application). While searching for similar material, I also found an article with the same title but slightly more theoretical content by Steve Sanderson (Behavior Driven Development (BDD) with SpecFlow and ASP.NET MVC).
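
For readers who want a concrete picture of what those SpecFlow articles describe, here is a minimal sketch of a Gherkin scenario and its C# step bindings. The shopping-cart feature, the Cart class and the step texts are invented for illustration and are not taken from either article; only the TechTalk.SpecFlow namespace and the [Binding]/[Given]/[When]/[Then] attributes are SpecFlow's own.

A feature file (e.g. Cart.feature):

    Feature: Shopping cart
      Scenario: Adding an item updates the total
        Given an empty cart
        When I add an item priced 100
        Then the cart total should be 100

And the matching step bindings in C#:

    using NUnit.Framework;
    using TechTalk.SpecFlow;

    [Binding]
    public class CartSteps
    {
        // Cart is a made-up domain class, defined below only to keep the sketch self-contained.
        private Cart _cart;

        [Given(@"an empty cart")]
        public void GivenAnEmptyCart()
        {
            _cart = new Cart();
        }

        [When(@"I add an item priced (\d+)")]
        public void WhenIAddAnItemPriced(int price)
        {
            _cart.Add(price);
        }

        [Then(@"the cart total should be (\d+)")]
        public void ThenTheCartTotalShouldBe(int expected)
        {
            Assert.AreEqual(expected, _cart.Total);
        }
    }

    public class Cart
    {
        public int Total { get; private set; }

        public void Add(int price)
        {
            Total += price;
        }
    }

SpecFlow generates a test from the scenario and matches each plain-English step to a binding method through the regular expression on its attribute, which is what makes the behaviour description itself executable.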


Universal Phonetic Script Needed

I tweeted just a few days back (twitter: @hemant_sathe) about a Tamil minister changing the spelling of his name by removing the zhagram and replacing it with l: Azhagiri to Alagiri. The primary reason for this is that there are three different pronunciations of L in Tamil, and there is no way they can all be represented in English, or rather in the Latin script. So Tamil people started using zh for the third pronunciation, which is not at all intuitive for a non-Tamil person. As a result, most of the country pronounces Tamil names containing “zh” differently. It is an altogether different matter that Tamil/South Indian people spell most of their names differently from the rest of India. I had a friend whose father’s name was spelled Santhirasekaran, which elsewhere would have been spelled Chandrashekharan. I have seen four different spellings of Tyagaraya (of T. Nagar fame in Chennai) on the four nameplates of shops in the same building. The zhagram problem, however, is quite genuine.

Are the people crazy or is the script crazy?

At first I laughed at these crazy ways of spelling names, but then I wondered: is it really the Tamil people who are funny, or is it the script in which they have to write their names? The answer is obvious. Despite being one of the most popular languages in the world, English, and for that matter many other European languages, still use the same old Latin script, and this script has serious limitations. Though some languages like German use it quite strictly, much like a phonetic script, you cannot escape the fact that the script is limited. Most Indian scripts, on the other hand, are almost fully phonetic: you speak what you write. In English, by contrast, spellings as different as red, raid, read and greyed all end up with almost the same pronunciation for the “red” part.

Non-phonetic scripts are a big hurdle in mobile computing

Speech-to-text is still an emerging technology, and it is not as widely popular as it should have been by now, primarily because with scripts like Latin most of the effort and intelligence is wasted on finding the right context of a word and then using the appropriate spelling; the tool needs a huge dictionary to parse through. A phonetic script for English, and for other languages with the same problem, could take today’s computing to new heights. Imagine: no more spellings. No more spelling bee contests. Just vacate a large portion of your brain for something more important than remembering spellings. What is more, it would give a tremendous boost to managing all the computers across the world using verbal commands. This would remove the need for keyboards, making gadgets more portable than ever. It is not a simple journey, but it is achievable. Despite all the attempts to popularize computers and internet access, it was the mobile phone, and that too the voice call, that made a revolution in today’s India.

One universal script or multiple scripts

Are all phonetic scripts good enough for today’s computing needs? Not really. Scripts like Devnagari need to reverse the order of glyphs for the short “ee” and short “oo”, and some languages need glyphs to be written one below the other. I would also go so far as to say that scripts like Chinese or Japanese are way too crammed. For today’s modern languages we need a script which is as loose as Latin and as phonetic as possible; we need to come up with a new script or modify the current ones. Is it possible to have one universal script? It may be, but it would be very difficult to implement, because come what may there will be some part of the world where a particular letter has a variant not covered by the script, scripts are written in different directions (left-to-right, right-to-left, vertical), and we may end up with thousands of sound glyphs. What we can do is restrict the number of scripts and ask multiple languages to start moving towards a unified script. Such attempts have been made successfully across the world in history. Some language fanatics may get hurt in the process, but remember we are not discarding the language, just a script. We can also club scripts based on style and satisfy egos: Devnagari, Gujrati, Bengali, etc. are similar scripts, and we could have a combined set of glyphs for them. The same can be done for all four south Indian scripts, and we would have just two or three scripts across India while still retaining all 20+ languages.

So I am eager to see this funny script I am using disappear from the world and be replaced by a better one. How about you?

Developing WBS – Effort vs Result

When we start working on a project we are very focused on the process that we follow. For most IT projects following the waterfall model, the phases are much the same: requirements, design, development, system testing, acceptance. Even in large projects we have a time sequence in the form of iterations and phases, and within each phase we follow similar patterns.

Estimates are made beforehand based on the high-level requirements. They get refined once the detailed requirements are done in the first phase, but are rarely adjusted for the actual work. Glen Alleman writes about the confusion between effort and results; it is a good article which explains how to move the focus from effort to the actual delivery of the project. What he suggests is to create the WBS around completing a requirement rather than around a phase of the project: for example, instead of top-level items like “Design” and “Testing” that span every feature, each top-level item is a feature that is done only when it has been designed, built and tested. One advantage of this is that you get a pretty good view of how much work you have really completed. It also helps you work out the dependencies between various features.

My experience with project management suggests using UML for requirements management. I used the Enterprise Architect UML modelling tool for this some time back. One of the great uses of this tool is being able to model requirements and map them to actual code components. This gives clear traceability between components and requirements, so when a change to the system is required it is much easier to find the impacted areas.

However, I do not agree with some of the exclusions Glen speaks about. Some WBS activities are required to ensure that the product is delivered and that people issues are well taken care of. We do need a holistic view of the product in the early work; we cannot break the WBS into a set of features before we have addressed some key architectural issues. Also, the tooling required for the project is indicated by the design, and if a tool has a wider effect on the project it needs to be considered in the WBS. But apart from these, all the other points are very valuable for delivering the product.