Yariv Hammer's Code Site

Tuesday, February 28, 2006

Transactions in Enterprise Services

One of the most important features of COM+ is distributed transactions. In a local transaction we open a connection, perform a series of actions, and close the connection. In a distributed transaction we have several connections that must be synchronized, so that several related actions can be performed as one unit.
An example: We have two bank accounts, and we need to move $100 from one account to the other. To achieve that we need to decrease the amount of money in the first account and increase the amount of money in the second account. But what if one of the operations fails? There could be a network problem, an electrical power interruption, or a malfunction in the database. If we managed to increase the money in the second account but failed to decrease the money in the first account, we will lose money. We need a way to perform both operations as one unit: if one of them fails, they both fail; only if both succeed does the whole operation succeed. Transactions can help in those situations.

Two-Phase Commit
Microsoft's DTC (Distributed Transaction Coordinator) implements an algorithm called Two-Phase Commit. In a one-phase commit we perform some operations serially and, if we succeeded, commit the changes at the end. In our case, the distributed transaction is split into several local transactions, and COM+ coordinates them. First comes the Prepare phase: each component must indicate that it is prepared to commit. There is no actual commit yet. If one of the components aborts the transaction, we do not even enter the second phase. If all the components voted that they are prepared, we perform the Commit phase: the DTC waits for a Commit vote from every component. On failure or timeout, all the components are told to roll back the transaction and return to the state from before the transaction.
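The two phases can be sketched with a hypothetical coordinator. The names (IParticipant, Coordinator, Prepare) are illustrative only; in COM+ the DTC plays the coordinator role and the resource managers are the participants:

```csharp
using System.Collections.Generic;

// Illustrative sketch of Two-Phase Commit; the interface and class
// names are hypothetical -- in COM+ the DTC is the real coordinator.
interface IParticipant
{
    bool Prepare();   // phase 1: vote "prepared" or "abort"
    void Commit();    // phase 2: make the changes durable
    void Rollback();  // undo, returning to the pre-transaction state
}

class Coordinator
{
    public bool Run(List<IParticipant> participants)
    {
        // Phase 1: every participant must vote "prepared".
        foreach (IParticipant p in participants)
        {
            if (!p.Prepare())
            {
                // A single abort vote aborts everyone; phase 2 never starts.
                foreach (IParticipant q in participants) q.Rollback();
                return false;
            }
        }
        // Phase 2: all voted "prepared", so tell everyone to commit.
        foreach (IParticipant p in participants) p.Commit();
        return true;
    }
}
```
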

In COM+ we use the DTC. SQL Server and MSMQ are examples of services which support Two-Phase Commit, and are therefore valid Resource Managers. For other systems, which do not support Two-Phase Commit, COM+ offers the CRM (Compensating Resource Manager) mechanism. It helps us implement a class that will manage the resource the COM+ way.

The Requirements of Transactions - ACID
Atomicity - The transaction performs several operations that are treated as one unit: all succeed or all fail.
Consistency - Information is not lost during the transaction. If the transaction was committed, we are in a stable, consistent state. If the transaction rolls back, we are in the state prior to the transaction.
Isolation - There is no outside interference during the transaction. Components which do not participate in the transaction cannot access the resources while the transaction is in action. To achieve this we must have some synchronization mechanism (COM+ provides that too, of course).
Durability - Failures can be recovered from. In case of a failure the system, once working again, should be in a consistent state. The DTC uses a logging mechanism to achieve that.

Setting Up Transactions Using COM+ Configuration Tool
Each component, in the Transaction tab, has the following options:
Disabled - The component is not built with a technology in which transactions are supported.
Not Supported - The component does not participate in any transaction.
Supported - The component will participate in a transaction only if the creating component is in a transaction.
Required - The component will always participate in a transaction. If the calling component is in a transaction, the component will enlist in that transaction. If the caller does not participate in a transaction, the component will start its own transaction.
Requires New - The component always starts a new transaction.

When you program a transaction, the first step should be to take a pen and paper (or your favourite CASE tool) and draw the components and their relations. Let's do an example: we have a BankManager class with a MoveFunds method, which should move money from one account to another. So Account will have AddFunds and RemoveFunds methods. We will set BankManager to Requires New and Account to Supported. This way, whenever we call MoveFunds we start a new transaction. BankManager uses Accounts, so AddFunds and RemoveFunds participate in the transaction. If one of the methods fails, the whole transaction fails, and the money is restored to the first account.
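That design can be sketched in code as follows. This is a sketch only: the method bodies are placeholders, and the [AutoComplete] attribute used here is described later in this article:

```csharp
using System;
using System.EnterpriseServices;

// Sketch: Requires New on the manager, Supported on the account,
// matching the drawing above. The account store is a stand-in.
[Transaction(TransactionOption.Supported)]
public class Account : ServicedComponent
{
    [AutoComplete]
    public void AddFunds(int account, decimal amount) { /* update the DB */ }

    [AutoComplete]
    public void RemoveFunds(int account, decimal amount) { /* update the DB */ }
}

[Transaction(TransactionOption.RequiresNew)]
public class BankManager : ServicedComponent
{
    [AutoComplete]
    public void MoveFunds(int from, int to, decimal amount)
    {
        Account account = new Account();
        account.RemoveFunds(from, amount); // throws on failure -> abort
        account.AddFunds(to, amount);      // both run in one transaction
    }
}
```
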

The cost of transactions is the management of flags. JITA (Just-In-Time Activation) is turned on when you use transactions: each object is created when it is called in the transaction and released when it is no longer used. This ensures isolation. Another mandatory feature is synchronization.
To support JITA there is a Done flag. To support consistency there is a Consistent flag. There are also performance costs, so it is better not to include too many components in the same transaction.

Programming Transactions
When you want your component to participate in a transaction, you mark it with the [Transaction] attribute. The properties of this attribute correspond to the Transactions tab in the COM+ Configuration Tool: the TransactionOption enumeration contains a value for each transaction support option (Required, Supported, and so on). Take for example the case of RequiresNew: any call to any method of the class will start a new transaction. You can also set the Isolation and Timeout properties.

At the beginning of a transaction the Consistent flag of every component is set to true. Once we make a change in state which makes it no longer consistent, we set the flag to false. The Done flag is set to false in every component. Once we call the SetComplete method at the end of the operation, the Done flag is set to true. If we call the SetAbort method, the Done flag will be true, but the Consistent flag will be false, and the whole transaction will be rolled back. Once every component has called SetComplete, the transaction succeeds. It takes only one component to abort for the whole transaction to fail.
We can have an intermediate state in which there is a problem, but the caller component might be able to fix it. For example, the database failed to respond, but the caller can start it up and the transaction can continue. In this case we can call DisableCommit, which leaves both the Done and Consistent flags false. The object still lives, and at any later point the component will be able to call SetComplete.

The ContextUtil class has static properties and methods that can help us manage the transactions.
- SetComplete - Done = true, Consistent = true. Commit the transaction.
- SetAbort - Done = true, Consistent = false. Abort the transaction.
- EnableCommit - Done = false, Consistent = true. The transaction can be committed, but the object is not deactivated yet.
- DisableCommit - Done = false, Consistent = false. The transaction should not commit (but this can still be changed).
- DeactivateOnReturn - Controls the Done flag.
- MyTransactionVote - Controls the Consistent flag.
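Setting the two flag properties directly is equivalent to calling the methods above. A sketch (the class name and the validity check are placeholders):

```csharp
using System.EnterpriseServices;

// Sketch: voting with the flag properties instead of SetComplete/SetAbort.
// "balanceIsValid" stands in for real business logic.
[Transaction(TransactionOption.Required)]
public class AuditedAccount : ServicedComponent
{
    public void Adjust(int account, decimal amount)
    {
        bool balanceIsValid = amount >= 0; // placeholder check

        // Consistent flag: our vote on the outcome of the transaction.
        ContextUtil.MyTransactionVote =
            balanceIsValid ? TransactionVote.Commit : TransactionVote.Abort;

        // Done flag: deactivate this JITA object when the method returns.
        ContextUtil.DeactivateOnReturn = true;
    }
}
```
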

class Account : ServicedComponent
{
    public void AdjustBalance(int account, decimal amount)
    {
        try
        {
            // ... update the account balance ...
            ContextUtil.SetComplete();
        }
        catch
        {
            ContextUtil.SetAbort();
            throw;
        }
    }
}

You can have a shortcut by using the [AutoComplete] attribute. SetComplete will automatically be called for you when the method ends, unless an exception is thrown, in which case SetAbort is called for you:

[AutoComplete]
public void AdjustBalance(int account, decimal amount)
{
    if (fail)
        throw new Exception("Insufficient funds");
}

Any component that is instantiated inside the methods PrepareTransfer or ExecuteTransfer with its transaction setting on Supported or Required will join the same transaction, will need to vote SetComplete for the transaction to succeed, and will be created according to the JITA rules.

I showed how transactions work, what the costs are, how to configure them in the COM+ Configuration Tool, and how to program them.
I showed the Transaction attribute attached to the ServicedComponent, the usage of ContextUtil to vote Complete or Abort, and the usage of the AutoComplete attribute as a shortcut.

Sunday, February 26, 2006

Getting Started With Enterprise Services

COM+ components are called Serviced Components in .NET.
In this article I will show how to create a very simple Serviced Component, how to use this component from a .NET application, and how to configure the component from the COM+ Configuration Tool.
In addition, I will discuss some aspects of using COM+ from within the .NET framework.

Your First Enterprise Service
- Open VS.NET 2003.
- Create a new class library (I will use C# in this tutorial), called HelloEnterpriseServices.
- Add a reference to System.EnterpriseServices.dll
- We must sign the assembly with a strong name (it is not mandatory to place it in the GAC):
In the .NET command prompt, in the folder of the class library, type "sn -k keys.snk".
In the AssemblyInfo.cs file change the following:
[assembly: AssemblyKeyFile("..\\..\\keys.snk")]
Build the class library.
- Add a class called HelloService as follows:
using System.EnterpriseServices;

namespace HelloEnterpriseServices
{
    public class HelloService : ServicedComponent
    {
        public string SayHello(string name)
        {
            return "Hello " + name;
        }
    }
}
- Build the project.

Congratulations. You have just created your first enterprise service.

Some Explanations:
In our class library we can place as many classes as we want. Some of the classes may inherit from the ServicedComponent class. All the classes are managed, and act the same as any other .NET class. The classes which derive from ServicedComponent will be registered as COM+ components, and the class library will be registered as a COM+ application.

If you check the COM+ Configuration Tool now, you still won't see the new application.

Next we will fix some security issues. COM+ provides a service for role-based security. It is good practice to use it, but if you do not wish to do so, you should open the AssemblyInfo file and make the following changes:
- Add using System.EnterpriseServices in the beginning of the file.
- Add [Assembly: ApplicationAccessControl(false)] at the end.
- Build again.
In the COM+ Configuration Tool, in the application properties, in the Security tab, you can see a checkbox "Enforce access checks for this application". You don't want this to be checked, unless you set up the security, because the application will throw an exception when loaded by clients.

Consuming the Serviced Component
Add a new Windows Application to the solution, called HelloEnterpriseServicesApp.
First add a reference to the previous HelloEnterpriseServices library. You must also add a reference to System.EnterpriseServices.

Next we will use our HelloService class. It is used as any other .NET class.
- Add a button to Form1. Double-click on it.
- Add using HelloEnterpriseServices; to the form code.
- Add the following code to the button click event handler:

private void button1_Click(object sender, System.EventArgs e)
{
    HelloService service = new HelloService();
    MessageBox.Show(service.SayHello("World"));
}

- Set the windows application project to be the startup project of the solution.

Run the solution. The form shows right away. The first click on the button takes a couple of seconds; at this time the COM+ component is registered. Clicking the button again shows the message box much faster. This kind of registration is called "Lazy Registration". I will show another method soon.

If you check the COM+ Configuration Tool (right-click and select Refresh), you will see HelloEnterpriseServices. It is a Library application (as you can see by the icon). If you expand the application, and then Components, you will see the HelloService component. Expand it further, into Interfaces, and you will not find the SayHello method. That is because we did not define a .NET interface for the service.
If you right-click the HelloService component and select Properties, you will see 2 GUIDs (in the General tab): one is the CLSID and the other is for the Application. You must understand that by using COM+ from within .NET you go back to the time of COM and "DLL Hell": the classes are registered in the registry, you cannot do side-by-side deployment, and you cannot easily version the component. The .NET assembly itself, however, keeps all the benefits of .NET assemblies.

Improving the Component
Consider the following steps a MUST-DO. You will save yourself some headaches.
First we will add an interface to our component:
using System.EnterpriseServices;

namespace HelloEnterpriseServices
{
    public interface IHello
    {
        string SayHello(string name);
    }

    public class HelloService : ServicedComponent, IHello
    {
        public string SayHello(string name)
        {
            return "Hello " + name;
        }
    }
}
If you wish, you can place the interface in a different assembly. You can also define multiple interfaces for the component.

Next step will be to put the GUIDs ourselves. It is a good practice to force COM+ to use our GUIDs, in order to make sure that the applications that are installed are indeed ours.
In Visual Studio .NET, open the Tools menu and select Create GUID. Select Registry Format, and click Copy.
Add the following attribute to the interface:
[Guid("B906C925-D72F-4d8d-B8D5-672D7C2E595A")]
The GUID here is just an example. Be sure to remove the curly brackets ({}) from what the tool generated.
You can do the same process to add a GUID to the class (A different GUID).

Generate another GUID, and in the AssemblyInfo file add the following attribute:
[assembly: ApplicationID("Guid here")]
This will set the Application GUID.

Other useful attributes you can use:
[assembly: ApplicationName("Hello Application")] - for a user-friendly name.
[assembly: ApplicationActivation(ActivationOption.Library)] - to set a Library or Server application.
[assembly: Description("Here is a nice description you will see in the COM+ Configuration Tool")]
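Put together, the AssemblyInfo.cs additions might look like the following sketch (the GUID is a placeholder; generate your own):

```csharp
using System.EnterpriseServices;

// Example AssemblyInfo.cs additions; the GUID below is a placeholder.
[assembly: ApplicationID("B906C925-D72F-4d8d-B8D5-672D7C2E595A")]
[assembly: ApplicationName("Hello Application")]
[assembly: ApplicationActivation(ActivationOption.Library)]
[assembly: Description("Here is a nice description you will see in the COM+ Configuration Tool")]
[assembly: ApplicationAccessControl(false)] // role-based security off, as above
```
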

Rebuild the class library.
Before we continue, delete the previous HelloEnterpriseServices application (right-click on it and select "delete").

I will show you how to register the application from the command prompt:
go to the folder of the dll (using "cd fullpath") and type
regsvcs HelloEnterpriseServices.dll
Later, when you update your component, you can call
regsvcs HelloEnterpriseServices.dll /reconfig
The regsvcs tool wraps the .NET dll with COM, registers it in the registry, and creates a tlb file you can call from a native language (as COM).

In the COM+ Configuration Tool (after you refresh) you will see the new application ID, the description, and the user-friendly application name in the application properties. When you expand the interfaces of the HelloService component, you will see the IHello interface, and underneath it the SayHello method.

The attributes we placed in the AssemblyInfo file (such as ApplicationActivation, which sets Server/Library) are only relevant when we deploy. After deployment anyone can reconfigure the application to their liking (that's the whole point of COM+).

When you create newer versions you do not need to create new GUIDs, unless of course you change the interface (which is not recommended, as you will need to support old clients). Remember that there is no side-by-side activation of COM components (unlike .NET assemblies).

Each ServicedComponent inherits from ContextBoundObject. When an object wants, for example, to participate in a transaction, it must know the transaction ID. This parameter is part of the context of the object. Each COM+ feature we use makes the context of the object heavier, with more parameters. The object context is the set of services provided by COM+ to the object.
Some objects have contexts more similar to each other than others, and the interaction between them is smoother. Where the contexts differ significantly there will be interception: a context switch. Your goal is to keep the contexts as similar as possible, which causes less interception and improves performance.

The handling of contexts is completely transparent in .NET. Any interaction with COM+ from inside your code is done through a class called ContextUtil. This stateless class has only static methods and properties. You will need this class when you program transactions, object pooling, and other COM+ services.

COM+ Applications vs. .NET Assemblies
The COM+ application is a frame for the logical management of components. It is not a single unit of deployment, as opposed to the .NET dll. You can create new COM+ applications with components from different .NET assemblies. If you design your components correctly, as reusable black boxes, you will have a very powerful way of using the same components in different COM+ applications.
In the COM+ Configuration Tool, right-click COM+ Applications and select New Application. This starts the COM+ Application Installation Wizard. You can install a pre-built application, choosing an msi file you exported earlier using the Export option, or you can create an empty application: enter a name ("HelloEnterpriseServices2") and select the type of application (Server or Library). Select Interactive User at this point. You will see the empty application under COM+ Applications. Now you can add a component (select New Component to open another wizard). You can install new components from dll or tlb files, or use one of the components already installed on your computer. When you are done, you can export it all and create an msi file to install everything in one shot on another computer. Note that you will need to do proper ID management.

There are very few services that will force you to use a Server application (for example: a Queued Component must be in a different process). It is usually more efficient to use Library applications.

There might be some issues when using several versions or copies of .NET assemblies. Although it is not mandatory, placing the .NET assembly in the GAC will make everybody use the same copy of the assembly, which might help you when debugging and deploying: you will always know the exact copy of the assembly the COM+ application is using.

Another issue you should consider is the usage of properties. COM does not support properties as .NET does, so each property is translated into a get and a set method. I would suggest using getter and setter methods in the first place, so you will not have different interfaces for managed and native components.

In this article I showed how to build the minimum enterprise service possible. I showed the usage of GUID and interfaces. I showed how to consume the enterprise service in managed code. I discussed some deployment issues.

In the next article I will show how to use transactions in managed code.

Monday, February 20, 2006

Introduction To Enterprise Services (COM+) In .NET

COM+ is a service provided by Windows (it is not a .NET feature), similar to other services such as IIS. Despite common belief, COM+ is not COM.
The main goal of COM+ is to provide a management environment for processes. The services provided by COM+ are required in distributed applications (such as client/server applications), especially ones with numerous calls.
By using COM+ in the .NET environment (where it is called "Enterprise Services"), we benefit from the advantages of both the CLR and managed environment and the native environment of COM+, including integration with old legacy applications.

Almost none of the services provided by COM+ are supplied by the .NET framework, and without COM+ you would need to program them yourself. You don't need to know anything about COM development, and using Enterprise Services in .NET is quite easy.

A Short History Lesson
COM is a component development protocol that predates .NET. When the need for distributed applications emerged, it was upgraded into DCOM (Distributed COM), a protocol for running across different machines or processes. For example, you could write an ActiveX EXE, an application that could be called from remote clients; the interfacing was done with DCOM. This protocol is lightweight, in a binary format, without a lot of overhead. It is suitable for local networks, especially when the computers on the network are well known in advance.

However, some new services were required to provide long-term stability, correct management of resources, and usage of resources on demand. For example: you could set up a server application in which every call created a new process on the server, causing problems with scalability and with the synchronization of clients. Another required service is distributed transactions. For example: the server updates a database in a local transaction, and there is a dependency on another server which updates a database in its own local transaction. Updating a database strongly depends on a connection, a resource the server has no control over; if there is a failure, both updates should be rolled back to the previous state. Windows has a service called the DTC (Distributed Transaction Coordinator) that can be called to request the transaction service.

In Windows NT a new service was created, called MTS (Microsoft Transaction Server), that wrapped the DTC. On request, MTS supplies a new process, dllhost, that can host dlls. Instead of writing an ActiveX EXE, we write an ActiveX dll (which is a COM component), and when it is hosted in the dllhost process it automatically works over DCOM. It is possible to connect remotely to a dll hosted by dllhost, and dllhost manages the resources. One of the services is distributed transactions.

In Windows 2000 MTS became built-in, and some other services were added, all wrapped together as COM+.

COM+ in .NET
In the .NET environment we create a managed library (dll). If we wish to use COM+ in the dll, the dll is wrapped (automatically) with COM, so that native components are able to connect to it.
Note: As I wrote above, you can use COM+ to communicate between processes, and even between machines, using DCOM. But in .NET we have other technologies for interprocess communication (Remoting, Web Services, etc.). It is not recommended to use COM+ just to perform RPC. A good architecture would be to call the server using Web Services (for example), and utilize the COM+ services on the server locally, without the client's knowledge.
In fact, when you call a Web Service, it is hosted in IIS, which is a server. This is similar to the COM+ approach, where the application is hosted in the dllhost process. For communication, IIS is the better choice.

In .NET a COM+ component is called a Serviced Component or a Configured Component. When the component is already active, maybe even installed at a client, it can be configured administratively using a user-friendly tool supplied with Windows. For example: COM+ has a feature called Just-In-Time Activation (JITA), and say we wrote a Serviced Component using this feature. We installed the software at the client, and now we can turn the feature on and off with the COM+ Configuration Tool, by checking a checkbox. No code required. No XML or other configuration files. No registry or environment variables. It all comes in the box.

The COM+ Configuration Tool
Enough with the dry discussions. Time to see it in action.
The COM+ Configuration Tool is located at Control Panel -> Administrative Tools -> Component Services.
On the left tree view, expand Component Services -> Computers -> My Computer -> COM+ Applications.

You can see all the COM+ applications installed on your computer. Soon we will add a component there. Right-clicking any application and selecting Properties will open the configuration window. There are many tabs with a lot of configurations you can change, without even knowing what the application does (so be careful!). You don't even need to restart the application: clicking Apply will configure the application while it is running.

If you expand an application even further, you can see Components. Under Components you can see all the classes registered with COM+ (remember that not all the applications here are .NET). Under each component you can see its Interfaces. You can configure a component, an interface, and even a method (by selecting Properties).

The data that you configure in the COM+ Configuration Tool is stored in a database called the "COM+ Catalog". This database is managed by Windows, and you can even access it programmatically. The data stored in the COM+ Catalog is the metadata of all the applications and components.

What Services Are Provided By COM+
Let's start exploring the Properties window, in order to better understand which features are supported by COM+. As I describe each feature, try to think how you would achieve it using your .NET programming skills (without COM+, of course).

Application Recycling
The application will be restarted automatically depending on configurable conditions, such as a Lifetime Limit (restart every period of time), a Memory Limit (protection against memory leaks), a Call Limit, and more. The recycling is done in a smart way; for example, COM+ waits until there are no more clients of the application. This can be configured in the "Pooling&Recycling" tab.
For more information: COM+ Application Recycling Concepts

Just-In-Time Activation
It often happens that a client holds on to an object it does not need right now. The object may be expensive in resources, so resources are wasted. JITA helps manage resources more efficiently. When the client finishes working with the object, COM+ automatically deactivates the object, freeing all its resources. When the client uses the object again, it is activated once more by COM+. The client uses the object without even knowing that JITA is applied. Notice that the object should be stateless, or it should be able to store and restore its state.
This configuration is per component, in the Activation tab, under Activation Context: select Don't Force Activation Context and check Enable Just-In-Time Activation.
For more information: COM+ Just-in-Time Activation Concepts
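JITA can also be requested in code rather than in the tool. A sketch (the class name and return value are placeholders):

```csharp
using System.EnterpriseServices;

// Sketch: requesting JITA declaratively. The component should be
// stateless, since COM+ may deactivate it between calls.
[JustInTimeActivation(true)]
public class PriceService : ServicedComponent
{
    public decimal GetPrice(string productId)
    {
        // DeactivateOnReturn = true tells COM+ this instance is done
        // and can be deactivated when the method returns.
        ContextUtil.DeactivateOnReturn = true;
        return 9.99m; // placeholder value
    }
}
```
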

Please note that some of the services of COM+ depend on other services. For example: when working with transactions we must enable JITA, in order to support atomicity.

When we use JITA, we must take the tradeoff into consideration: each call to a JITA object implicitly activates and deactivates it (which takes time), but we benefit from proper management of resources (the object will not hold unused resources).

Object Pooling
We can have a pool of objects, initialized when the application started. Whenever a client creates a new instance, it gets an object from the pool, without the cost of creating the new instance. When the client releases the object, it goes back to the pool.
You can enable Object Pooling for a component in the Activation tab: select Enable Object Pooling. You can set a minimum and a maximum pool size. The minimum number of objects is created when the application starts (slowing your application's startup). When all pooled objects are in use and a new instance is requested, it is created and added to the pool. Once the pool has reached its maximum size, the creation of new objects fails (after a timeout, which is configurable too).
You can configure the object pooling while the application is running of course, and profile the performance in order to fine-tune the startup time, memory usage and performance of the application.
For more information: COM+ Object Pooling Concepts
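The same settings can be declared in code. A sketch (the class name and the pool sizes are examples only):

```csharp
using System.EnterpriseServices;

// Sketch: declarative pooling limits; the numbers are examples only,
// to be tuned by profiling as described above.
[ObjectPooling(MinPoolSize = 2, MaxPoolSize = 10, CreationTimeout = 5000)]
[JustInTimeActivation(true)]
public class ConnectionWorker : ServicedComponent
{
    // CanBePooled tells COM+ whether this instance may return to
    // the pool after deactivation.
    protected override bool CanBePooled()
    {
        return true;
    }
}
```
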

Notice the possibility of enabling both Object Pooling and JITA: whenever an object is needed it is activated from the pool, and when finished it is sent back to the pool. This speeds up the activation/deactivation process of JITA.

Transactions
I will not describe here what a transaction is. I will only give a short and simple example: A starts a transaction, calling B and C. B fails during execution. Even if A and C succeeded, they must all fail and roll back to the last consistent state.
COM+ handles the transactions for you (using DTC). You will need to perform some actions in code in order for this to work.
For more information: COM+ Transactions Concepts

In order to configure transactions, go to the Transactions tab of your component, and select one of the options for Transaction Support.

Synchronization (Concurrency)
When several components share resources in a multithreaded environment, we must keep the resources consistent. If our components share one logical activity, we can set up the Synchronization service of COM+, so that it will lock a component against any component that is not in the activity.
For example: say we have 4 components A, B, C, and D. A starts a thread and calls B, which in turn calls C. D will not be able to access A, B, or C, and will be blocked until the activity is over.
For more information: COM+ Synchronization Concepts

Note: This is a very heavy mechanism in a multithreaded environment. If B has two methods, and A only calls one of them, D will still not be able to call any of B's methods. On the other hand, this forces you to design and implement a well-behaved multithreaded application, which might be a good thing.

In order to configure Synchronization, go to the Concurrency tab of your component, and select one of the options for Synchronization Support. If you use transactions, you must also use Synchronization (COM+ will configure this for you).
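Declaratively, this looks like the following sketch (the class and its shared state are placeholders):

```csharp
using System.EnterpriseServices;

// Sketch: declarative synchronization. Only callers inside the same
// activity may enter; others block until the activity completes.
[Synchronization(SynchronizationOption.Required)]
[Transaction(TransactionOption.Required)]
public class InventoryUpdater : ServicedComponent
{
    private int stock = 100; // placeholder shared state

    [AutoComplete]
    public void Reserve(int quantity)
    {
        stock -= quantity; // safe: COM+ serializes access per activity
    }
}
```
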

Queued Component
MSMQ is one of the services provided by Windows; it provides message queueing services administrated by Windows. There is good integration between COM+ and MSMQ, in the form of the Queued Components service.
COM+ can intercept any call to a method of the component and serialize it into a message in a queue. The component, in its own time, will handle the call: COM+ deserializes the message back into a proper method call, and the server performs the action. This mechanism is a good solution for asynchronous method calls. It will not work well with synchronous calls; the server should not return a value, throw exceptions, and so on (it is possible, but not trivial).
This mechanism is good when the client should not rely on the service's availability.
For more information: COM+ Queued Components Concepts

Note: You can use MSMQ from .NET independently of COM+, using the System.Messaging namespace. For more information: Accessing Message Queues

You can configure Queueing services of an Interface, in the Queueing tab, by checking the Queued option.
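In code, queuing is declared on the interface and enabled for the application. A sketch with hypothetical names:

```csharp
using System.EnterpriseServices;

// Sketch: the assembly-level attribute would go in AssemblyInfo.cs.
[assembly: ApplicationQueuing(Enabled = true, QueueListenerEnabled = true)]

// Calls through a queued interface must be one-way (no return values,
// no out parameters) so they can be recorded as MSMQ messages.
[InterfaceQueuing]
public interface IOrderSink
{
    void SubmitOrder(string orderXml);
}

public class OrderSink : ServicedComponent, IOrderSink
{
    public void SubmitOrder(string orderXml)
    {
        // process the order whenever the message is dequeued
    }
}
```
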

Loosly Coupled Events
COM+ introduces a new concept of event handling. Usually a publisher raises an event and a subscriber's method is executed, which implies coupling between the publisher and the subscriber. In COM+, publishers provide events and subscribers register for these events, but COM+ is the mediator between them. The benefit of this approach is that the subscriber does not know which publisher raised the event.
The service is good when the publishers and subscribers do not know each other (they may not even know that the others exist). It is also good when there are a number of publishers for the same event, and it is not important which one raised it.
For more information: COM+ Events Concepts
Subscriptions to events are configured in the COM+ Configuration Tool.

Role-Based Security
COM+ provides a mechanism in which you can define which users can initialize components, call a method, or even perform a custom operation. Instead of selecting users for each task, you define roles and grant rights to each role. Users are mapped into roles using the COM+ Configuration Tool, using the Windows built-in authentication system.
For more information: COM+ Security Concepts
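Roles can also be declared and checked in code. A sketch (the class and the "Manager" role name are examples; users are mapped to the role in the tool):

```csharp
using System;
using System.EnterpriseServices;

// Sketch: declaring a role and checking it at runtime.
// "Manager" is an example role name.
[SecurityRole("Manager")]
public class PayrollService : ServicedComponent
{
    public void ApproveRaise(int employeeId)
    {
        // Programmatic check on top of the declarative one.
        if (!SecurityCallContext.CurrentCall.IsCallerInRole("Manager"))
            throw new UnauthorizedAccessException("Managers only");
        // ... approve the raise ...
    }
}
```
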

Object Construction
You can send a string message to a component when it is constructed. This mechanism resembles the command-line arguments of an application. For example, you can pass a connection string, a path, or several arguments. The advantage is that the string is configurable from the COM+ tool (in the Activation tab).
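In code this mechanism looks roughly like the following sketch (the class name and the default connection string are examples):

```csharp
using System.EnterpriseServices;

// Sketch: receiving the construction string configured in the
// Activation tab. The default value here is only an example.
[ConstructionEnabled(Default = "server=localhost;database=Bank")]
public class Repository : ServicedComponent
{
    private string connectionString;

    // COM+ calls Construct with the configured string on activation.
    protected override void Construct(string constructString)
    {
        connectionString = constructString;
    }
}
```
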

Compensating Resource Managers (CRM)
In order to participate in a COM+ distributed transaction, the resources involved must be managed by a resource manager. SQL Server and MSMQ, for example, are both resource managers, so you can use them from within a transaction.
Now take the case of writing to a file: how can you roll that back? If you want the file-writing action to be coordinated in a transaction, you must create a Compensating Resource Manager for that file. COM+ provides the infrastructure to create a class with an interface that supports the CRM. By implementing this class you can bring any resource into the transaction.
For more information: COM+ Compensating Resource Manager Concepts
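A sketch of the shape such a class pair might take in .NET, using the System.EnterpriseServices.CompensatingResourceManager namespace. The file-handling details are illustrative only:

```csharp
// A sketch of a CRM worker/compensator pair. The logging format and
// the file logic are assumptions for illustration.
using System;
using System.EnterpriseServices;
using System.EnterpriseServices.CompensatingResourceManager;

// The worker writes a durable log record describing the action before
// performing it, so the compensator can undo it if the transaction aborts.
[Transaction(TransactionOption.Required)]
public class FileWorker : ServicedComponent
{
    public void AppendLine(string path, string line)
    {
        Clerk clerk = new Clerk(typeof(FileCompensator),
                                "File CRM", CompensatorOptions.AllPhases);
        clerk.WriteLogRecord(path + "|" + line); // what we are about to do
        clerk.ForceLog();                        // flush the record before acting

        using (System.IO.StreamWriter w = System.IO.File.AppendText(path))
            w.WriteLine(line);
    }
}

// The compensator is driven by the DTC during the two-phase commit.
public class FileCompensator : Compensator
{
    public override bool AbortRecord(LogRecord rec)
    {
        // Undo the logged action (compensation is resource-specific;
        // here we would remove the line recorded in rec.Record).
        return true; // true = the record can be forgotten
    }
}
```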

Shared Property Manager (SPM)
COM+ provides a mechanism to share state between components.
For more information: COM+ Shared Property Manager Concepts
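A sketch of a component using the SPM to keep a shared counter; the group and property names are mine:

```csharp
// Sharing state between components with the Shared Property Manager.
// Group/property names are illustrative.
using System;
using System.EnterpriseServices;

public class Counter : ServicedComponent
{
    public int NextValue()
    {
        PropertyLockMode lockMode = PropertyLockMode.SetGet;
        PropertyReleaseMode releaseMode = PropertyReleaseMode.Standard;
        bool groupExists, propertyExists;

        // Create (or open) a property group shared by all components
        // in this COM+ application.
        SharedPropertyGroupManager manager = new SharedPropertyGroupManager();
        SharedPropertyGroup group = manager.CreatePropertyGroup(
            "Counters", ref lockMode, ref releaseMode, out groupExists);
        SharedProperty counter = group.CreateProperty("Next", out propertyExists);

        int value = propertyExists ? (int)counter.Value : 0;
        counter.Value = value + 1; // visible to every component in the app
        return value;
    }
}
```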

You can create a SOAP web service wrapper around your application (you will need IIS installed for this). This feature automatically creates a Web Service facade that can be used to connect to the COM+ application remotely.
All you need to do is select the Uses SOAP checkbox in the Activation tab of the application.

Server and Library Components
COM+ applications can be in the form of Library applications or Server applications. It is configurable from the COM+ Configuration Tool, in the Activation tab of an application.
Server applications have icons shaped as boxes, while Library applications have round-shaped icons.
A Server application is hosted by dllhost and runs as a separate process; all its instances run in the same dllhost. A Library application runs in the process of the client.
You can start a Server application by right-clicking the application and selecting Start, and stop it the same way. All the running Server applications can be monitored in the Running Processes node under COM+ Applications. Each running application has a number in brackets: this is the process ID of the dllhost hosting the application (you can see the process in the Task Manager).

When you use Library applications (the default) you gain all the non-distributed services of COM+. For example, you can perform transactions inside the process without the cost of DCOM communication. If you use a Server application, on the other hand, you lose some performance because of the inter-process communication, but you gain centralization of the services inside one process. For example, you might want to put all the services on the server side, and use the Shared Property Manager to share data between clients.

If you use a Server application on a remote machine, you must consider the deployment of the components on the clients. It is good practice to avoid this situation: use other, more suitable technologies, such as Web Services, to handle the communication, and limit the usage of COM+ to within the boundaries of the machine.

Exporting the Configuration
For server applications it is very important to be able to save the current settings in order to load them later. For that you have the Export option in the context menu of the application. A handy wizard guides you through the process: you select a path, click OK, and an msi file is generated. Double-clicking it installs the COM+ component on the machine and applies the configuration that was saved.

A very powerful feature is to save the component as an Application Proxy. This way you can install the application on machines that should connect to the server remotely.

All of the services which are provided by COM+ are fully documented in MSDN: COM+ Services
In the next article we will create a COM+ component in .NET and deploy it.

Friday, February 17, 2006

Administrative Configuration of Applications

This article is one of a series of articles exploring .NET Assemblies.
Exploring .NET Assemblies
Multi-File Assemblies
Strong Named Assemblies
Placing Assemblies in the Global Assembly Cache (GAC)
Administrative Configuration of Applications
The articles are intended for programmers who have been using .NET for a while and wish to know more about .NET assemblies.

.NET applications compile into executable assemblies. The standard way to configure .NET applications is through the built-in mechanism of configuration files. Those files are XML files with a specific format. The exact structure of those files can be seen here.
.NET configuration files hold information about versioning, references, Remoting and more, including custom configurations added by the user (here is a nice tutorial about the subject: Article From CodeGuru).

In order to create the configuration file yourself, you add to the project a new item of type Application Configuration File. The item that is added to the project is called App.config (for web applications it is called Web.config). When you build the project, the App.config file is copied into the target directory and renamed to [exename].exe.config. It must be located in the same directory as the executable, with that specific name. When administrators want to edit this file, they can open the [exename].exe.config file in an XML editor and, if they know what they are doing, change the configuration. Note that whenever you build the application, the settings are overwritten by the original App.config file.
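For illustration, a minimal App.config might look like this (the key name and value are hypothetical; code would read the value with ConfigurationSettings.AppSettings["ConnectionString"]):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <!-- A hypothetical custom setting of the application -->
    <add key="ConnectionString" value="server=(local);database=Bank" />
  </appSettings>
</configuration>
```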

Administrative Configuration
The problem with letting administrators configure Xml files, is that they must be familiar with Xml in general, and with .NET configuration files in particular. It would have been nice to have a tool that can manage the configuration of the application without the need to open Xml files.

Such a tool exists, and is supplied when installing the .NET Framework: Control Panel->Administrative Tools->Microsoft .NET Framework 1.1 Configuration. Under the Applications section, you can select "Add an Application to Configure" and then choose from the list of applications installed on the computer. By selecting "View the Application's Properties" you can set the Garbage Collector mode, Publisher Policy, and the relative search path for additional assemblies. I will not discuss the first two here. By setting a relative search path you can specify directories where referenced assemblies may be located (this is called "probing"). For example, you might wish to place all the dlls in a subfolder called "references"; in that case enter the string "references" (of course you will need to deploy the application this way). The relative path must be underneath the executable (you cannot put "..\" in the relative search path). As soon as you save your settings, an [exename].exe.config file is created (if the file already exists it is simply modified).
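Under the hood, the relative search path the tool saves is a probing element in the configuration file. A hand-written equivalent for the "references" subfolder example might look like this:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <!-- Look for referenced assemblies in the "references" subfolder
           beneath the executable's directory -->
      <probing privatePath="references" />
    </assemblyBinding>
  </runtime>
</configuration>
```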

There is a lot of strength in using this tool. There is a central repository of all managed applications. The configuration can be done by administrators (who know what they are doing, of course) rather than by the programmer. If the assembly is deployed properly, the configuration need not impact the application's performance or correctness. And the deployment can be decided administratively rather than being fixed (as a user you might want to place files in a different location).

Note that everything you can configure with the tool, can be configured manually by changing the Xml.

More Configurations
The "Managed Configured Assemblies" link shows you all the references you can configure. By default no assemblies are listed. By right-clicking and selecting "Add...", you can select assemblies to configure: from the GAC, from the Framework assemblies the application uses, or manually (by typing the name of the assembly and the public key token you can see in the manifest of the application). Once you select the assembly, you can configure several aspects of the reference: Publisher Policy (not discussed here), Binding Policy (you can select the version of the reference), and Codebases (you can set the full path to the assembly, including full URLs).

I would especially like to discuss the Binding Policy. It often happens that you deploy an application and then provide a new version of a dll. You do not need to recompile the exe and deploy it again: if you did not change the interface of the dll, the exe should work fine. When you deploy the new version of the dll, the old version can coexist with the new one (for example, by placing both in the GAC). But you still need to tell the exe on the client to bind to the new version of the dll. This is done using the Binding Policy page: you enter the version the exe is currently using, and the new version you want the exe to use (you can also specify a range of versions).

You can also configure the Remoting (when necessary), but I will not discuss this in this scope.

Machine Configuration
There is a set of Xml configuration files for the .NET framework, located in your Windows folder under C:\Windows\Microsoft.NET\Framework\v1.1.4322\CONFIG.

The file machine.config contains a lot of global configuration, some of which provides defaults for applications running on the machine. Avoid changing this file as much as possible. Some of the settings can be overridden in the application configuration files.
Discussing the exact contents of the machine.config file is out of scope in this article.

Wednesday, February 15, 2006

Placing Assemblies in the Global Assembly Cache (GAC)

This article is one of a series of articles exploring .NET Assemblies.
Exploring .NET Assemblies
Multi-File Assemblies
Strong Named Assemblies
Placing Assemblies in the Global Assembly Cache (GAC)
Administrative Configuration of Applications
The articles are intended for programmers who have been using .NET for a while and wish to know more about .NET assemblies.

The Global Assembly Cache (GAC) is a repository of .NET assemblies. Assemblies which are stored in the GAC are meant to be shared by several applications on the computer. Good examples of such assemblies are the .NET Framework assemblies (System.Data.dll, System.Xml.dll, etc).

In .NET the deployment process is called XCopy deployment: you just copy the output folder from one place to another and everything works without the need to register stuff on the computer. The Registry is avoided altogether in .NET, and in future Windows versions, it might disappear.

By default, when you reference a dll and build the project, VS.NET copies the dll to the output folder where the exe is generated (unless the Copy Local property of the reference is set to false). You can also place the dll in a subfolder named after the dll without a problem. If the referenced assembly is in the GAC, it appears in the .NET tab of the Add Reference dialog, and the Copy Local property is set to false by default; the exe will then know to look for the assembly in the GAC.
There is a very good article about how the runtime locates referenced assemblies here:

Placing an Assembly in the GAC
There are a few ways to place an assembly in the GAC. First of all you must sign the assembly with a strong name (see the Strong Named Assemblies article).
One way is to simply copy the assembly file to the GAC folder, C:\WINDOWS\assembly (the path on my computer), by dragging it in Windows Explorer.
As you can see, this is not a regular folder: many assemblies appear several times with different versions (the public key token can also be seen). This is the solution to "Dll Hell": dlls with different versions can exist side by side, so old applications which reference the old versions still work.

Another way is to use the command line utility: gacutil /i MyAssembly.dll.

The last way is to use the administrative tools for .NET. Open Start->Control Panel->Administrative Tools->.NET Configuration 1.1 .
On the dialog select "Assembly Cache", and then click on "Add an Assembly to the Assembly Cache". Select the assembly and it will be added to the GAC. If you click on "View List of Assemblies in the Assembly Cache", you can see a different view of the GAC, and you can delete an assembly from the GAC (be careful). You can also delete the assembly from the windows folder.

Setting the Version of the Assembly
In order to version the assembly, open the AssemblyInfo file in the project and set the AssemblyVersion attribute: [assembly: AssemblyVersion("1.0.*")].
The version consists of four numbers separated by dots ([major version].[minor version].[build number].[revision]). The asterisk (*) tells the compiler to generate the two least significant numbers on every build. The two most significant numbers should be changed by you when you want to change versions. The version can be seen in the manifest of the assembly, and consequently in the GAC.
You can set additional attributes (AssemblyTitle, AssemblyDescription, etc.) to characterize the assembly. These attributes can be seen in Windows Explorer by right-clicking the assembly file, selecting "Properties" and opening the Version tab.

In order to change the version of the dll the exe refers to (for example, if there is now a new dll and the old exe still refers to the old version), we need to change the application configuration file of the exe. This file must be located in the same folder as the exe and named [exename].exe.config; if there is no such file, you can add it yourself. The configuration must contain a bindingRedirect element:
<bindingRedirect oldVersion="" newVersion=""/>
For more information you can look here: or in the link I have posted in the introduction.
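Put in context, a complete configuration file redirecting a hypothetical MyAssembly from version 1.0.0.0 to 2.0.0.0 might look like this (the assembly name, versions and public key token are placeholders; use the values from your own assembly's manifest):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Identity of the dll being redirected (placeholder values) -->
        <assemblyIdentity name="MyAssembly"
                          publicKeyToken="0123456789abcdef" />
        <bindingRedirect oldVersion="1.0.0.0" newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```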

Note: You can configure an application assembly through the administrative tool .NET Configuration 1.1 (in Control Panel) by adding an assembly to the Applications section. The tool will generate the App.Config for you.

Tuesday, February 14, 2006

Strong Named Assemblies

This article is one of a series of articles exploring .NET Assemblies.
Exploring .NET Assemblies
Multi-File Assemblies
Strong Named Assemblies
Placing Assemblies in the Global Assembly Cache (GAC)
Administrative Configuration of Applications
The articles are intended for programmers who have been using .NET for a while and wish to know more about .NET assemblies.

Digital Signature is the way to ensure that information was not altered since it was signed.
As developers we distribute our compiled assemblies to our clients. Both the developers and the clients want to be sure that the assemblies are exactly the ones that were originally distributed. A malicious programmer, for example, could hack into an assembly and alter it at will. The least we can do is make sure that no such thing happened before loading the assembly. It would be shameful, for example, if we ran an assembly signed by Microsoft that had been changed by someone without our knowledge.

An assembly which is digitally signed is called a Strongly-Named Assembly.

The Process of Signing the Assembly
In order to sign an assembly we need to do the following steps:
1. From the command line, run the line sn -k keys.snk where keys.snk is the name of the file with the keys. This step will generate a file containing a private key and a public key.
2. Open the project you want to sign in VS.NET, and in the AssemblyInfo file set the AssemblyKeyFile attribute with the path to the snk file:
[assembly: AssemblyKeyFile("..\\..\\keys.snk")]
Alternatively you can do this while linking in the command line:
al /out:MyAssembly.dll /keyfile:keys.snk

If we build the assembly, open it with ildasm and watch the manifest, we can see the public key; the public key is stored inside the assembly file.
The signature is placed near the end of the assembly and cannot be seen in the manifest.

The snk file MUST be kept (in a secure way). Without this file you will not be able to sign the assembly with the same key anymore; if you have already deployed your dll and there are existing executables referencing this assembly, those executables will no longer work unless they are deployed again with the newly signed dll. A malicious user with access to the snk file can make changes to the assembly and sign it on your behalf with the same key pair. Be careful.

How Does the Signing Process Work
Now that we know technically how to sign the assembly, lets try to understand what exactly happens, and then try to evaluate the costs of signing.

At compilation of the dll: the dll's IL code is hashed with a well-known hashing algorithm (SHA-1). The hashing algorithm produces a word with a constant number of bits. This word cannot be decrypted back into the assembly code, but it is statistically almost certain that only this dll produces this hash (finding other code that hashes to the same value as the original is not practical).
The hash is then encrypted with the RSA algorithm using the private key from the snk file. Only you hold the private key, so only you can produce this signature. As we saw before, the public key is distributed inside the resulting assembly, so it is known to everybody.

At compilation of the referencing assembly: for every strong-named assembly which is referenced, the public key of the referenced assembly is hashed, and the last 8 bytes of that hash form the public key token. You can see this token with ildasm, in the manifest of the referencing assembly, under the appropriate reference.

So far we have discussed compile-time actions done by the compiler; there are no performance implications yet.
At run-time: whenever a strong-named assembly is loaded, the public key stored in the dll is hashed first, and the result must match the public key token recorded in the reference. To validate the signature itself, the loader hashes the dll's contents; this is the expected result of the signature check. It then takes the digital signature stored in the dll and deciphers it with the RSA algorithm using the dll's public key. The result should equal the freshly computed hash.
If the comparison fails, we know for sure that the dll was tampered with, and the CLR refuses to load the assembly.
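Conceptually, the same hash-then-sign scheme can be sketched with the framework's own crypto classes. This is an illustration of the idea, not the CLR's actual strong-name code (class and variable names are mine):

```csharp
// A conceptual sketch of strong naming's hash-then-sign scheme:
// hash the bytes with SHA-1, sign the hash with the private key,
// verify with the public key.
using System;
using System.Security.Cryptography;
using System.Text;

public class SignVerifyDemo
{
    public static bool Run()
    {
        byte[] assemblyBytes = Encoding.ASCII.GetBytes("pretend IL code");

        using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider())
        {
            // "Compile time": sign the SHA-1 hash of the bytes
            // with the private key.
            byte[] signature = rsa.SignData(assemblyBytes,
                                            new SHA1CryptoServiceProvider());

            // "Load time": anyone holding the public key can check that
            // the bytes still hash to the value that was signed.
            return rsa.VerifyData(assemblyBytes,
                                  new SHA1CryptoServiceProvider(), signature);
        }
    }

    public static void Main()
    {
        Console.WriteLine("valid: " + Run());
    }
}
```

If a single byte of assemblyBytes is changed after signing, VerifyData returns false, which is exactly the tampering check described above.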

Why Do We Sign Assemblies?
The drawbacks of signing assemblies are obvious. First, there is an overhead every time the assembly is loaded: loading is somewhat slower (although not by much). Second, we need to look after the snk file or we will lose the private key.

The advantage is that by signing the assemblies we make the assembly trustworthy: the client and you can be sure the assembly was provided by you (of course, the signature has nothing to do with certifying what the assembly does, and the assembly can still be malicious if you programmed it so).

There are cases where you must sign assemblies: If you place them in the GAC, or if you develop an Enterprise Service (COM+). There are more cases.

It is a best practice to always sign assemblies.

Getting Rid of the Snk File
After generating the snk file (by calling sn -k), we can install the key pair into a named key container:
sn -i keys.snk AssemblyKey
Now, in the AssemblyInfo file, instead of setting the AssemblyKeyFile attribute (leave it an empty string), we set AssemblyKeyName:
[assembly: AssemblyKeyName("AssemblyKey")]
We can now use a string ("AssemblyKey" in this example) instead of an snk file.
The snk file can now be deleted from the computer; we no longer need it.

The sn utility puts the keys in the folder
"C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys" .

Delay Signing
In some scenarios, during development, the signing mechanism is unnecessary and inconvenient. In this case we can prepare the assembly for signing eventually, but for the time being tell the runtime not to validate the signature. When the appropriate time comes, and we wish to deploy the assembly, we will enable the signature validating. This process is called Delay Signing.
In order to delay sign the assembly:
1. Create an snk file: sn -k keys.snk.
2. Create an snk file which contains only the public key: sn -p keys.snk publicKey.snk
3. Go to AssemblyInfo file, and change the AssemblyDelaySign Attribute to true.
[assembly: AssemblyDelaySign(true)]
4. Change the AssemblyKeyFile attribute to point to the publicKey.snk file:
[assembly: AssemblyKeyFile("..\\..\\publicKey.snk")]
5. Tell the runtime to skip signature verification for the assembly: sn -Vr MyAssembly.dll

When we wish to stop the delay signing, and return to regular signing:
1. Re-sign the assembly with the original snk file: sn -R MyAssembly.dll ..\..\keys.snk
2. Enable the signature validation: sn -Vu MyAssembly.dll

Multi-File Assemblies

This article is one of a series of articles exploring .NET Assemblies.
Exploring .NET Assemblies
Multi-File Assemblies
Strong Named Assemblies
Placing Assemblies in the Global Assembly Cache (GAC)
Administrative Configuration of Applications
The articles are intended for programmers who have been using .NET for a while and wish to know more about .NET assemblies.

Assemblies are normally built into a single file (exe or dll). There is a feature to compile parts of the assembly into net-modules, and then build an assembly that consists of all the net-modules.

Creating Multi-File Assemblies
The feature is not supported by the IDE, so we must compile from the command line (using csc.exe or vbc.exe).
Let's assume we have the following code:
Class A (A.cs):
using System;
namespace ClassLibrary1
{
    public class A
    {
        public void foo() { }
    }
}
Class B (B.cs):
using System;
namespace ClassLibrary1
{
    public class B
    {
        public void foo()
        {
            A a = new A();
            a.foo();
        }
    }
}
As you can see, B uses class A.

Now start Command-Line (Start->Programs->.NET 2003->Tools->Command Line Prompt).
Change directory to the folder of the files ( cd MyFolder ).
In that folder we have a.cs and b.cs .
Type the following lines (one by one):
csc /t:module a.cs
csc /t:module b.cs /addmodule:a.netmodule
al /out:MyAssembly.dll /t:library a.netmodule b.netmodule

Let's go over the commands one by one:
csc /t:module a.cs
This line will compile a.cs (and any other file you might specify) into one netmodule file.
After the line is executed you can see in the folder a file called a.netmodule.
If you view the a.netmodule using ildasm, you can see that the structure of a.netmodule is similar to a regular assembly.

csc /t:module b.cs /addmodule:a.netmodule
As with class A, class B (which resides in b.cs) is compiled into a netmodule. The /addmodule switch adds a "reference" to a.netmodule. We need this because B uses class A: much like with dlls, we need a reference to every netmodule containing classes that are used in the netmodule we are compiling.
Again, a file called b.netmodule now exists.

al /out:MyAssembly.dll /t:library a.netmodule b.netmodule
al is the Assembly Linker. We use it to generate our dll assembly (named by the /out switch), specifying all the netmodules.
After running this command we see a file MyAssembly.dll in the folder. Let's run ildasm on this assembly: there is only a manifest, and no types. Inside the manifest you will see the references to the netmodules. The dll does not contain the types; it references them. This is important, because now we can replace a netmodule without recompiling the assembly.

Note: Because the IDE does not support multi-file assemblies, you will need to create batch files which compile the code for you. This can be quite tedious.

Advantages of Multi File Assemblies
Despite the obvious disadvantage stated above, there are some benefits to this kind of deployment:
1. We can use multiple languages inside one assembly. Each language should be compiled in its own netmodule. This can't be done otherwise.
2. Modularity - We can replace netmodules without affecting other netmodules.
3. Late Loading - The netmodule will only be loaded if and when used.
4. Less network traffic while deploying. Only the changed netmodule will be transferred.

This can also be useful when we have an infrastructure assembly (shared by many solutions) with only a subset of classes changing from solution to solution. A netmodule lets us recompile the changing part without recompiling the other parts of the assembly.

Exploring .NET Assemblies

This article is one of a series of articles exploring .NET Assemblies.
Exploring .NET Assemblies
Multi-File Assemblies
Strong Named Assemblies
Placing Assemblies in the Global Assembly Cache (GAC)
Administrative Configuration of Applications
The articles are intended for programmers who have been using .NET for a while and wish to know more about .NET assemblies.

Assemblies are the smallest deployment units possible in the .NET environment. Assembly is the term for a .NET executable (.exe) or class library (dll) file. A .NET assembly contains two main parts: the manifest and the IL code. The manifest contains all the metadata of the assembly: the version and identity of the assembly, public keys, references and more. The IL code is a binary representation of the intermediate language, which is very similar to assembly language. The fact that the file contains IL code, and not C# or VB.NET, enables us to program in any .NET language we want (currently there are more than 20 managed languages), and even combine languages. All code is compiled into IL, which looks the same no matter what language you programmed in.

There is not much difference between an exe file and a dll file. The obvious difference is that an exe can be executed while a dll cannot. Other than that, the only difference is the Main method in the exe, which is the entry point of the assembly. There is nothing else to distinguish between the two. You can even reference an exe just as you reference a dll (it is not supported by the .NET 2003 IDE, though; you can use the csc.exe compiler to add references to exe files).

Solving The Dll Hell
A known problem with native dlls and COM components was called "Dll Hell". In short, multiple versions of a component could not be stored on the machine at the same time, which caused problems in deployment.
.NET solves the problem by putting the version inside the assembly's manifest. This way, two assemblies with different versions can co-exist on the same computer.
Another advantage of .NET is the ease of deployment. In the simplest scenario, to run the application you place the exe with all referenced dlls in the same folder. With a simple copy of files the application can be installed in a different location; no registry entries or additional installations are needed.

Exploring the Assembly
There is a very nice tool to explore .NET assemblies called ildasm. To run it, open the .NET Command Prompt (from the Start->Programs->.NET 2003->.NET Tools folder) and type "ildasm". Running "ildasm /adv" shows more properties of the assembly. Opening an assembly (any .NET exe or dll file) shows its contents in a tree view. First we can see the manifest (double-click it): the references, followed by the assembly info (as it appears in the AssemblyInfo attributes), resources and other information.
After the manifest we can see the assembly's types. Each type can be expanded to show all its methods, members, properties and events. By double-clicking a method, for example, we can see its IL code.

Another option is View->MetaInfo->Show!, which displays all the metadata exposed by the Reflection mechanism in .NET, among other things such as the list of constant strings used in the assembly. You can use File->Dump to save the assembly contents to a file.

A very nice (and free) tool is Reflector. This tool is a must. It has a nicer UI than ildasm, and it can show you the code in C# or VB.NET. So you can view dlls and exes that you did not write, and it will even decompile the code for you. You can even see the code of .NET Framework dlls, such as System.IO, System.Xml, etc. Tons of add-ins are available for it as well.

Everyone Can See Your Code
Yes, this is true: you cannot hide your code. Everyone with ildasm or Reflector can disassemble your code and read it.
First, this is not such a big problem. Even native exe and dll files can be disassembled with some effort. Working under the assumption that any hacker can see your code eventually, Microsoft did not bother making disassembly hard.
What you can do is make the code hard to read. For this there is a tool called Dotfuscator (which is installed with .NET); it makes the code less readable.

Feel free to use everything here. Add links to my site if you wish.

Do not copy anything to other sites without adding link to here.

All the contents of the site belong to Yariv Hammer.