Saturday, August 27, 2005

      Action Speaks Louder Than Words

We often try to argue our way through, failing to realise that it is not what we say that speaks for us; it is what we do. In fact, it is what we do or have done that matters. It is what has actually happened, and that is a credible claim in itself.

In an interview, a candidate will say all sorts of good things to credit himself, just so that he can get the job. But how does he really perform under pressure? How well does he manage a team? What acts did he perform before to substantiate his words?

As bystanders, we often criticise others for making a wrong judgement or decision, but fail to realise that we could make the same mistakes ourselves when faced with similar circumstances.

Today, I was asked to choose between professional ethics and my job. My friend said that there isn't a choice. In fact, there is; we (humans) always have a choice. In the 1st Habit, 'Be Proactive', Stephen R. Covey states that between external circumstances and a person's reaction, we always have the freedom to choose that reaction. Sadly, in most cases, many allow the external circumstances to dictate their reaction, thinking that they "don't have a choice". Hence, they are left feeling inadequate and end up blaming the whole world for the undesirable outcomes; all except themselves, failing to see that they bear responsibility for the outcomes themselves (consciously or subconsciously).

When Anakin Skywalker was swayed to the 'Dark Side', he excused himself by saying he had to do it to save his wife, Padmé. He lamented that he didn't have a choice. When in fact, he did. Between righteousness and his fear of losing his loved ones, he forsook righteousness and gave in to that fear. He fell to the extent that he killed mercilessly, even children. In doing so, he forsook humanity. He betrayed his fellow Jedi and his mentor; in doing so, he forsook friendship and integrity. He did make a choice. He chose fear over all that he gave up. Implicitly, it meant that these were not as important as his fear of losing his loved ones. No matter how loudly Anakin argued his helplessness, he DID make a choice and acted upon it. Hence, it is not what one says that matters; it is what one does that matters more. For underlying each of those actions is a decision, whether it is made implicitly or explicitly.

"A Proactive person takes responsibility for their circumstances, their decisions and their reactions. In doing so, they CAN decide on their reactions". Yes we can decide on one thing which is within our control; ourselves. Can we stop George Bush from attacking Iraq? No. Can we stop the oil price hike? No. Can we decide on how we wish to react to these events? Yes.

Every day, we are faced with situations requiring us to decide between two choices. Again, many times, these decisions are made implicitly. So what influences these subtle decisions? It is the underlying values beneath each person's character that have the strongest influence. A family-oriented man will let career opportunities slip by, just so that he can spend more time with his family, whereas a career-oriented man will sacrifice his family time to build a career. Choices are made every day, in many subtle ways; and these choices manifest themselves as actions that speak for what a person is, however much he may say otherwise.

Back to my dilemma; action does speak louder than words. I have to make a choice now. I don't need to argue the case to anyone. My actions will speak for themselves. In my case, my actions will have to speak for what I believe in: my values.




Friday, August 19, 2005

      The Case For Code Reuse

The case for code reuse is a broad one, and we have tried to achieve it since the early days of programming. In procedural languages, we used functions / procedures as a means of reuse; then there were include directives in ASP scripts. Rather rudimentary, if you ask me. These means did work to a certain extent, but also introduced many issues of their own.

The article 'Code Reuse in the Enterprise' describes reuse as going beyond the reuse of code, and I quote:
"... reuse has moved beyond 'code reuse' to include a wide range of assets that can be used in multiple applications and projects. An organization's software assets could include any artifact related to the software development life cycle such as code components, Web services, patterns, models, frameworks, architectural guidelines, and process templates"

Yes, and in fact, that is the true value of reuse: not only at the code level, but also in the way we go about architecting and designing applications. The benefits that I see are threefold:
  • Increased development speed - as the development team overcomes the initial steep learning curve of adopting an architecture framework, it becomes easier for them to 'reuse' this knowledge on subsequent projects to achieve the same functionality, e.g. security, logging etc. In fact, the .NET Framework is such a case, albeit in a more general and broader context. Enterprises or software vendors can come up with their own frameworks, building on the .NET Framework and customizing it for their own application context. An example of such a framework would be the MS Enterprise Library.
  • Product stability - as the frameworks mature over time, reuse ensures that the functionality they provide is 'tried-and-tested', with bugs and issues already identified and rectified. This results in fewer bugs per line of code and, at the same time, less development time.
  • Consistency and Standardization - as design patterns are applied over and over again across projects, it becomes second nature for developers to adhere to a consistent coding standard, which then forms a de facto enterprise-level coding guideline. This consistency makes it easier for new developers to take over existing projects and increases the maintainability of the programs. Again, the .NET Framework does a very good job of applying these design patterns, so that developers building applications on top of the framework find it easy to carry knowledge from one part of the framework over to another. A good example, as highlighted in 'Discover the Design Patterns You're Already Using in the .NET Framework', is the Observer pattern, which is applied throughout the framework, allowing the decoupling of the event source and the event handler. It is consistently used in event handling behind UI triggers, both for Windows and Web applications (see the sketch after this list).
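To make the Observer point concrete, here is a minimal sketch of how .NET events decouple an event source from its handlers. This is my own example, not taken from the article or the Framework source; the StockTicker and Display names are hypothetical:

using System;

public class StockTicker
{
    // The event source exposes an event and knows nothing about its subscribers.
    public event EventHandler PriceChanged;

    public void UpdatePrice()
    {
        if (PriceChanged != null)
        {
            PriceChanged(this, EventArgs.Empty); // notify all registered observers
        }
    }
}

public class Display
{
    public void Subscribe(StockTicker ticker)
    {
        // The handler is attached here; the ticker never references Display directly.
        ticker.PriceChanged += new EventHandler(OnPriceChanged);
    }

    private void OnPriceChanged(object sender, EventArgs e)
    {
        Console.WriteLine("Price changed - refreshing display.");
    }
}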

I guess there are more benefits, but these are the ones most striking to me. However, be mindful that reuse is no 'silver bullet' for software development's woes: overrun projects, late delivery, requirements mismatch etc. Nonetheless, adoption of reuse is surely one of the key drivers in many of the success stories of software projects.




Friday, August 12, 2005

      Solution for Deployment of Enterprise Library with Built-in Instrumentation

As mentioned before, the EL now comes with built-in instrumentation and performance monitoring enabled by default. The funny thing is that during installation of the EL, the required performance counters are NOT registered by default; and that is only on the development platform. To compound the issue, the same counter-registration steps must be performed on the deployment platform, where, of course, administrative rights are required. Duh...!
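For context, registering a counter category in code looks roughly like the sketch below; the category and counter names here are made up for illustration and are not the actual EL counters. Creating a category writes to the registry, which is why administrative rights are needed:

using System.Diagnostics;

public class RegisterCountersSketch
{
    public static void Main()
    {
        // Hypothetical category and counter names, for illustration only.
        if (!PerformanceCounterCategory.Exists("My EntLib Counters"))
        {
            CounterCreationDataCollection counters = new CounterCreationDataCollection();
            counters.Add(new CounterCreationData(
                "Requests/sec",
                "Sample counter help text",
                PerformanceCounterType.RateOfCountsPerSecond32));

            // This call writes to the registry and therefore requires administrative rights.
            PerformanceCounterCategory.Create("My EntLib Counters", "Sample category help text", counters);
        }
    }
}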
In searching for solutions to the built-in instrumentation within EL, I found the following blogs that are very useful:

In instances where deploying with administrative rights is not an option, e.g. externally hosted solutions, the EL must be recompiled to disable the built-in instrumentation by removing certain conditional compilation constants.

This blog explains the steps required to achieve this, and I quote it here for ease of reference; a sketch of how these constants gate the instrumentation code follows the steps:

  • Open up the EnterpriseLibrary.sln and modify the Configuration Properties\Build\Conditional Constants of the EnterpriseLibrary.Common project.
  • Remove the USEWMI;USEEVENTLOG;USEPERFORMANCECOUNTER constants. By removing these constants, all of the internal Enterprise Library instrumentation will be disabled.
  • Recompile.
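To see why removing the constants works: the EL instrumentation calls are wrapped in conditional compilation blocks, conceptually along these lines. This is a simplified sketch of the idea, not the actual EL source, and the counter and event source names are made up:

using System.Diagnostics;

public class InstrumentationSketch
{
#if USEPERFORMANCECOUNTER
    // Hypothetical counter; only part of the build when the constant is defined.
    private PerformanceCounter requestsPerSec =
        new PerformanceCounter("My EntLib Counters", "Requests/sec", false);
#endif

    public void FireRequestCompleted()
    {
#if USEPERFORMANCECOUNTER
        requestsPerSec.Increment();
#endif
#if USEEVENTLOG
        // Removing USEEVENTLOG strips this call out of the compiled assembly.
        EventLog.WriteEntry("My EntLib Source", "Request completed.");
#endif
    }
}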

Of course, with this solution, we introduce additional configuration management overhead. Which applications work with instrumentation? Which ones don't? In addition, two versions of the EL assemblies must be maintained: with instrumentation and without.

Headache...




Saturday, August 06, 2005

      Multithreading with System.Collections.Hashtable

I have been working on multithreading recently and I will be blogging about issues pertaining to this over the next few blogs.

The Hashtable class in the .NET Framework is used for storing key-value pairs as a collection. It also implements the IEnumerable interface, so that we can use C#'s convenient 'foreach' to iterate through its elements. Under the hood, 'foreach' uses IEnumerator.MoveNext() to iterate through the collection.

When using a Hashtable in a multithreaded scenario, the MSDN documentation states the following about thread safety:
"Thread Safety

To support one or more writers, all operations on the Hashtable must be done through the wrapper returned by the Synchronized method.

Enumerating through a collection is intrinsically not a thread-safe procedure. Even when a collection is synchronized, other threads could still modify the collection, which causes the enumerator to throw an exception. To guarantee thread safety during enumeration, you can either lock the collection during the entire enumeration or catch the exceptions resulting from changes made by other threads."

Now, what this really means is this: the Hashtable supports multiple concurrent read operations; however, if the collection is modified during these iterations, e.g. via Hashtable.Add() or Hashtable.Remove(), an InvalidOperationException will be thrown by the enumerators on the reading threads.

For example:

using System;
using System.Collections;
using System.Threading;

public class TestMultithreadingHashtable
{
    private Hashtable table;

    public static void Main()
    {
        TestMultithreadingHashtable test = new TestMultithreadingHashtable();
        test.Run();
    }

    public void Run()
    {
        table = new Hashtable();
        // Insert a lot of elements so there is something to enumerate.
        for (int i = 0; i < 10000; i++)
        {
            table.Add(i, "element " + i);
        }

        Thread readThread = new Thread(new ThreadStart(ReadFromHashtable));
        Thread writeThread = new Thread(new ThreadStart(WriteToHashtable));

        readThread.Start();
        Thread.Sleep(500); // give the reader a head start (Sleep is static on Thread)
        writeThread.Start();
    }

    public void WriteToHashtable()
    {
        // Adding an element invalidates any existing IEnumerator.
        // No exception is thrown here on the write thread.
        table.Add("newKey", "newElement");
    }

    public void ReadFromHashtable()
    {
        // Use an enumerator to iterate through the elements.
        IEnumerator enumerator = table.GetEnumerator();
        while (enumerator.MoveNext())
        {
            // InvalidOperationException is thrown here, on the first MoveNext()
            // after the write thread has executed Add(), because the enumerator
            // is no longer valid.
            DictionaryEntry entry = (DictionaryEntry)enumerator.Current;
            Console.WriteLine("{0} = {1}", entry.Key, entry.Value);
        }
    }
} //end of class.

Upon more thorough digging in MSDN on IEnumerator, the following details clarify things a little.

"An enumerator remains valid as long as the collection remains unchanged. If changes are made to the collection, such as adding, modifying or deleting elements, the enumerator is irrecoverably invalidated and the next call to MoveNext or Reset throws an InvalidOperationException. If the collection is modified between MoveNext and Current, Current will return the element that it is set to, even if the enumerator is already invalidated.

The enumerator does not have exclusive access to the collection; therefore, enumerating through a collection is intrinsically not a thread-safe procedure. Even when a collection is synchronized, other threads could still modify the collection, which causes the enumerator to throw an exception... "

So to sum it up, multiple concurrent read operations on a Hashtable are fine. If there is an operation that modifies the content of the table, it must be synchronised against the read operations.

To achieve this, we can use the simple 'lock' (C#) or 'SyncLock' (VB.NET) mechanism, or, for better performance and fine-grained control, the ReaderWriterLock class. A sketch of the lock-based approach follows; maybe I will elaborate a little more on this next time.
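For example, here is a minimal sketch of my own (the syncRoot lock object and class name are hypothetical) showing 'lock' used so that a writer cannot invalidate an enumerator mid-iteration:

using System;
using System.Collections;

public class SynchronisedHashtableAccess
{
    private Hashtable table = new Hashtable();
    private readonly object syncRoot = new object(); // shared lock object

    public void Write(object key, object value)
    {
        // Writers take the same lock as readers before modifying the table.
        lock (syncRoot)
        {
            table.Add(key, value);
        }
    }

    public void ReadAll()
    {
        // Hold the lock for the entire enumeration so a writer cannot
        // invalidate the enumerator halfway through.
        lock (syncRoot)
        {
            foreach (DictionaryEntry entry in table)
            {
                Console.WriteLine("{0} = {1}", entry.Key, entry.Value);
            }
        }
    }
}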