F#: Disabling SSL Certificate validation

Yesterday I wanted to download some content off a website with F#, but unfortunately the website's certificate had expired.

    let result = 
        try
            let request = 
                "https://somewebsite/with/expired/ssl/certificate/data.json?paramx=1&paramy=2"
                |> WebRequest.Create

            let response = 
                request.GetResponse ()

            // parse data
            let parsed = "..." 

            Ok parsed
        with
        | ex ->      
            Error ex

If we execute this, result will be an Error containing the following exception:

ex.Message:
"The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel."

ex.InnerException.Message:
"The remote certificate is invalid according to the validation procedure."

So how do we fix this?

The solution is to set the following code at startup of the application (or at least before the first call):

ServicePointManager.ServerCertificateValidationCallback <- 
    new RemoteCertificateValidationCallback(fun _ _ _ _ -> true)

Note that you should NOT do this, because it skips certificate validation entirely!
Also, this applies to ALL calls; if you want it for one specific call, you need to make some changes.

First of all, it doesn't work with WebRequest.Create; you need to use WebRequest.CreateHttp, or cast the WebRequest to HttpWebRequest, because the property we need, ServerCertificateValidationCallback, is only available on HttpWebRequest, not on WebRequest. The resulting code looks like this:

            let request = 
                "https://somewebsite/with/expired/ssl/certificate/data.json?paramx=1&paramy=2"
                |> WebRequest.CreateHttp

            request.ServerCertificateValidationCallback <- new RemoteCertificateValidationCallback(fun _ _ _ _ -> true)

            let response = 
                request.GetResponse ()

Again, don’t do this in production!

If need be, do it on a single HttpWebRequest, like the last example, and write a callback that ignores only the expiration error while leaving the rest of the validation in place.
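A sketch of what that could look like, building on the HttpWebRequest example above (untested, and the flag checks are my own take): only the NotTimeValid chain status is tolerated, everything else still fails validation:

    open System.Net
    open System.Net.Security
    open System.Security.Cryptography.X509Certificates

    let request =
        "https://somewebsite/with/expired/ssl/certificate/data.json?paramx=1&paramy=2"
        |> WebRequest.CreateHttp

    request.ServerCertificateValidationCallback <-
        RemoteCertificateValidationCallback(fun _ _ chain errors ->
            if errors = SslPolicyErrors.None then
                // the certificate is fully valid, nothing to ignore
                true
            elif errors = SslPolicyErrors.RemoteCertificateChainErrors then
                // tolerate only expired / not-yet-valid certificates;
                // any other chain problem still fails validation
                chain.ChainStatus
                |> Array.forall (fun status -> status.Status = X509ChainStatusFlags.NotTimeValid)
            else
                // name mismatch, certificate not available, ... all still fail
                false)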

Code on Github!

Be careful with npm shrinkwrap

Recently we had the issue that we depended on version x of an npm package. In the course of time this package was updated by the author, and the update contained a critical bug, which broke our AWS deployments. Locally this was no issue, because we had a version installed that satisfied the version requirements.

In order to prevent issues like this we looked at npm shrinkwrap. This writes a file called npm-shrinkwrap.json, which ‘freezes’ all of the package versions installed in the current project.

Now this is dangerous, as we just found out.

The issue arises when an author decides to delete a package from the feed. Delete, you ask? Yes, delete: gone, no trace, nothing.

What's the problem, you might ask? Surely you'd notice the next time you run npm install?

Not really.

Imagine you’re using Elastic Beanstalk, which, based on certain triggers, can spawn a new instance of a server, or delete one.

Now today you release your application to your servers, and you shrinkwrap your packages.

Before you do that, you obviously clear your local npm cache (located in %appdata%\npm-cache) and your local node_modules. Then you do an npm install to verify every package installs correctly, and you do a few test runs, maybe on a local server. Then you package it and send it off to AWS.
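On Windows, that clean-room test boils down to something like this (a sketch; run it from the root of your project):

rmdir /s /q node_modules
npm cache clean
npm install
npm shrinkwrap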

All runs well, you’re happy, and your boss is happy.

Next week, for whatever reason, you get a high load on your servers. Elastic Beanstalk decides to add one more instance.

And then stuff starts to break. You get emails that its health is degraded. Then you get emails that its health is severe.

At 2 a.m. you open your laptop and start looking at the logs. There you find something along the lines of:

  npm ERR! Linux 3.14.48-33.39.amzn1.x86_64
  npm ERR! argv "/usr/bin/iojs" "/usr/bin/npm" "install"
  npm ERR! node v2.4.0
  npm ERR! npm  v2.13.0
  
  npm ERR! version not found: node-uuid@1.4.4
  npm ERR! 
  npm ERR! If you need help, you may report this error at:
  npm ERR!     <https://github.com/npm/npm/issues>
  
  npm ERR! Please include the following file with any support request:
  npm ERR!     /app/npm-debug.log

What? You tested locally? What happened?

Okay, you fire up a console window. You make a test dir. You run npm install node-uuid@1.4.4.

All goes well. Or does it?

Let’s look at the output:

C:\__SOURCES>mkdir test

C:\__SOURCES>cd test

C:\__SOURCES\test>npm install node-uuid@1.4.4
npm http GET https://registry.npmjs.org/node-uuid/1.4.4
npm http 404 https://registry.npmjs.org/node-uuid/1.4.4
node-uuid@1.4.4 node_modules\node-uuid

Notice the 404? I didn’t… But it’s important!

Now here's what happened: locally I had node-uuid@1.4.4 in my cache, so npm took that one, even though the package had disappeared from the registry.

However, my new instance on Elastic Beanstalk had no such cache. That's why it failed.

So, solutions:

  • Be careful when you shrinkwrap. Stuff might break in the future, as authors delete packages
  • Create a private feed, that you curate
  • As a package author, don’t delete packages. Just don’t. Other people might depend on you.

DynamoDb & updating objects: it doesn't react like SQL!

Today I stumbled upon the following bug:

We had an object with some properties that we wanted to update, but only if a certain property of that object is not set, i.e. it should be null.

{
    "Id": 1 // Id is the HashKey
}

In this case we wanted to update the object with Id 1, and set an attribute called Foo to "Bar".

To do this I wrote the following JavaScript, using the aws-sdk:

function updateObject(id) {
    var dynamodb = new AWS.DynamoDB();

    // note: updateItem takes a single params object (the table name here is
    // illustrative), and the low-level client needs typed attribute values
    dynamodb.updateItem({
            TableName: "Objects",
            Key: {
                Id: { N: id.toString() }
            },
            UpdateExpression: "SET Foo = :value",
            ExpressionAttributeValues: {
                ":value": { S: "Bar" }
            },
            ConditionExpression: "attribute_not_exists(Foo)"
        }, function(error, data) {
            if (error) {
                // TODO check that the error is a ConditionalCheckFailedException, in
                // which case the Condition failed, otherwise something else might be off.
                console.log("Error");
            } else {
                console.log("All good, we've updated the object");
            }
        }
    );
}

Perfect!

Now assume we have items with Ids 1 through 12 in our table, where half of them already have the Foo attribute, so we should get 50% Error and 50% All good, ... (which is the case).

However, what do we expect when we update an item with Id 13?

My mind talks (or at least used to talk) SQL when thinking about a database, and in SQL, updating something that is not there doesn't do anything.

Consider the following table:

CREATE TABLE Test(
  Id INT NOT NULL,
  Foo NVARCHAR(255) NULL
)

With the following query:

INSERT INTO Test (Id, Foo) VALUES (1, NULL), (2, N'Bar'), (3, NULL)
GO

--SELECT * FROM Test
--GO

UPDATE Test SET Foo = 'Bar' WHERE Id = 1 AND Foo IS NULL
IF @@ROWCOUNT = 1
BEGIN
  SELECT N'1 updated, set Foo to Bar'
END
ELSE
BEGIN
  SELECT N'1 not updated, Foo was already set'
END
GO

--SELECT * FROM Test
--GO

UPDATE Test SET Foo = 'Bar' WHERE Id = 2 AND Foo IS NULL
IF @@ROWCOUNT = 1
BEGIN
  SELECT N'2 updated, set Foo to Bar'
END
ELSE
BEGIN
  SELECT N'2 not updated, Foo was already set'
END

--SELECT * FROM Test
--GO
UPDATE Test SET Foo = 'Bar' WHERE Id = 7 AND Foo IS NULL -- 7 Doesn't exist!
IF @@ROWCOUNT = 1
BEGIN
  SELECT N'7 updated, set Foo to Bar'
END
ELSE
BEGIN
  SELECT N'7 not updated, because 7 doesn''t exist!'
END

This will print, along with some empty result sets, the following:

1 updated, set Foo to Bar
2 not updated, Foo was already set
7 not updated, because 7 doesn't exist!

Now, that knowledge in SQL doesn’t apply to DynamoDb.

While testing with some non-existing Ids we saw that our updates reported success. That's not how it should be.

Let's take another look at the documentation, and this time actually read the first line:

Edits an existing item’s attributes, or adds a new item to the table if it does not already exist.

(emphasis mine).

[Image: Homer, who also has issues updating his objects in DynamoDb]

Doh!

So we need to guard ourselves against updates on non-existing items. How do we do that? Let's extend our ConditionExpression. Start from the original code and change the ConditionExpression as shown below:

function updateObject(id) {
    var dynamodb = new AWS.DynamoDB();

    dynamodb.updateItem({
            TableName: "Objects", // again illustrative
            Key: {
                Id: { N: id.toString() }
            },
            UpdateExpression: "SET Foo = :value",
            ExpressionAttributeValues: {
                ":id": { N: id.toString() },
                ":value": { S: "Bar" }
            },
            // make sure the object we're updating actually has
            // :id as Id; the side-effect of this is that if no such item
            // is found, it will throw a ConditionalCheckFailedException,
            // which is what we want
            ConditionExpression: "Id = :id AND attribute_not_exists(Foo)"
        }, function(error, data) {
            if (error) {
                // TODO check that the error is a ConditionalCheckFailedException, in
                // which case the Condition failed, otherwise something else might be off.
                console.log("Error");
            } else {
                console.log("All good, we've updated the object");
            }
        }
    );
}
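As an aside, a common alternative guard (equivalent in effect, though not what we used at the time) is to condition on the key attribute itself, which saves you the extra :id expression value:

// require that an item with this key already exists, and that Foo is not set yet
ConditionExpression: "attribute_exists(Id) AND attribute_not_exists(Foo)"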

Comments? Sound off below!

Topshelf install, PowerShell and Get-Credentials

In the project I'm currently working on we use PowerShell scripts for configuration and build execution.

This means that if you get a new laptop, or a new member joins the team, or even when you need to change your Windows password, you just run the script again and it sets everything up in the correct locations & with the correct credentials.

The credentials were a problem though.

When installing a Topshelf service with the --interactive parameter (we need to install under the current user, not System), it will prompt you for your credentials for each service you want to install. For one service it's fine, for 2 it's already boring, for 3, … You get the point.

We initially used the following command line to install the services:

. $pathToServiceExe install --interactive --autostart

To fix this we will pass the username and password to $pathToServiceExe ourselves with the -username and -password arguments. We should also omit --interactive.

First gotcha here: when reading the documentation, it says one must specify the arguments in this format:

. $pathToServiceExe install --autostart -username:username -password:password

However, this is not the case. You must not separate the command line argument and its value with a :.

Now, we don't want to hardcode the username & password in the setup script.

So let’s get the credentials of the current user:

$credentialsOfCurrentUser = Get-Credential -Message "Please enter your username & password for the service installs"

Next up we should extract the username & password from the $credentialsOfCurrentUser variable, as we need them in clear text (a potential security risk!).

One can do this in 2 ways. Either by getting the NetworkCredential from the PSCredential with GetNetworkCredential():

$networkCredentials = $credentialsOfCurrentUser.GetNetworkCredential();
$username = ("{0}\{1}") -f $networkCredentials.Domain, $networkCredentials.UserName # change this if you want the user@domain syntax, it will then have an empty Domain and everything will be in UserName. 
$password = $networkCredentials.Password

Notice the $username caveat.

Or, by not converting it to a NetworkCredential:

# notice the UserName contains the Domain AND the UserName, no need to extract it separately
$username = $credentialsOfCurrentUser.UserName

# a little more work for the password
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($credentialsOfCurrentUser.Password)
$password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)
[System.Runtime.InteropServices.Marshal]::ZeroFreeBSTR($BSTR) # free the unmanaged copy of the password

Notice the extra code to retrieve the $password in plain-text.

I would recommend combining both: use the NetworkCredential for the $password, but the regular PSCredential for the $username, as then you're not dependent on how your user enters his username.

So the best version is:

$credentialsOfCurrentUser = Get-Credential -Message "Please enter your username & password for the service installs" 
$networkCredentials = $credentialsOfCurrentUser.GetNetworkCredential();
$username = $credentialsOfCurrentUser.UserName
$password = $networkCredentials.Password

Now that we have those variables we can pass them on to the install of the Topshelf exe:

. $pathToServiceExe install -username `"$username`" -password `"$password`" --autostart

Notice the backticks (`) to ensure the double quotes are escaped.

In this way you can install all your services and only prompt your user for his credentials once!
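Putting it all together, the whole thing could look like this (a sketch; the service paths are made up for illustration):

# prompt once
$credentialsOfCurrentUser = Get-Credential -Message "Please enter your username & password for the service installs"
$networkCredentials = $credentialsOfCurrentUser.GetNetworkCredential()
$username = $credentialsOfCurrentUser.UserName
$password = $networkCredentials.Password

# install many
$servicePaths = @(
    "C:\Services\ServiceA\ServiceA.exe",
    "C:\Services\ServiceB\ServiceB.exe"
)

foreach ($pathToServiceExe in $servicePaths) {
    . $pathToServiceExe install -username `"$username`" -password `"$password`" --autostart
}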

When frameworks try to be smart, AngularJS & Expressions

One of my colleagues just discovered this bug/feature in AngularJS: using ng-if on a scope property containing the string "no" will evaluate to false.

HTML:

<div ng-app>
    <div ng-controller="yesNoController">
        <div ng-if="yes">Yes is defined, will display</div>
        <div ng-if="no">No is defined, but will not display on Angular 1.2.1</div>
        <div ng-if="notDefined">Not defined, will not display</div>
     </div>
</div>

JavaScript:

function yesNoController($scope) {
    $scope.yes = "yes";
    $scope.no = "no";
    $scope.notDefined = undefined;
}

Will print:

Yes is defined, will display

Let's read the documentation on expressions, to see where this case is covered.

.

.

.

.

Can you find it?

Neither can I.

Fix?

Use the double bang:

        <!-- ... -->
        <div ng-if="!!no">No is defined, but we need to add a double bang for it to parse correctly</div>
        <!-- ... -->

JSFiddle can be found here.

For those who care, it’s not a JS thing:

// execute this line in a console
alert("no" ? "no evaluated as true" : "no evaluated as false"); // will alert "no evaluated as true"
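It's an Angular thing: up to version 1.2.x, ng-if ran its expression result through an internal toBoolean() helper (removed in Angular 1.3), which looked roughly like this paraphrase of the 1.2 source:

// approximate reconstruction of Angular 1.2's internal toBoolean()
function toBoolean(value) {
    if (typeof value === 'function') {
        value = true;
    } else if (value && value.length !== 0) {
        var v = ('' + value).toLowerCase();
        value = !(v == 'f' || v == '0' || v == 'false' ||
                  v == 'no' || v == 'n' || v == '[]');
    } else {
        value = false;
    }
    return value;
}

toBoolean("no"); // false -- which is why the div never renders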

Enabling dynamic compression (gzip) for WebAPI and IIS

A lot of code on the internet refers to writing custom ActionFilters, or even HttpHandlers that will compress your return payload for you.

For example, see this package (whose name implies that it is from Microsoft, but whose description then says it is not Microsoft).

At the time of writing, the above-linked package even throws an error when you return a 200 OK without a body…

But in the end, it’s very simple to enable compression on your IIS server without writing a single line of code:

You first need to install the IIS Dynamic Content Compression module:

[Screenshot: installing the Dynamic Content Compression feature]

Or, if you’re a command line guy, execute the following command in an elevated CMD:

dism /online /Enable-Feature /FeatureName:IIS-HttpCompressionDynamic

Next up you need to configure Dynamic Content Compression to compress the mime types application/json and application/json; charset=utf-8.

To do this, execute the following commands in an elevated CMD:

cd c:\Windows\System32\inetsrv

appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json',enabled='True']" /commit:apphost
appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost

This adds the 2 mime types to the list of types the module is allowed to compress. Validate that they were added with this command:

appcmd.exe list config -section:system.webServer/httpCompression

Validate that the 2 mimetypes are there and enabled:

[Screenshot: validate that application/json and application/json; charset=utf-8 are compressable]

And lastly, you’ll probably need to restart the Windows Process Activation Service.

It's best to do this through the UI, because I have yet to find a way in CMD to restart a service cleanly (I can't seem to start the services that are dependent on the one we just restarted).

In services.msc you’ll need to search for Windows Process Activation Service. Restart it.

[Screenshot: restart the Windows Process Activation Service]

Obviously there are more settings available, take a look at the httpCompression Element settings page.

I recommend reading about at least these 2 (a tuning sketch for the first follows the list):

  • dynamicCompressionDisableCpuUsage
  • noCompressionForProxies
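For example, dynamicCompressionDisableCpuUsage is the CPU percentage above which IIS temporarily stops compressing dynamic responses. A sketch of tuning it with appcmd (the values shown here are just the documented defaults):

cd c:\Windows\System32\inetsrv

appcmd.exe set config -section:system.webServer/httpCompression /dynamicCompressionDisableCpuUsage:90 /dynamicCompressionEnableCpuUsage:50 /commit:apphost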

Good luck,

-Kristof

The impact of SqlDataReader.GetOrdinal on performance

I recently had a discussion about the impact of SqlDataReader.GetOrdinal on the execution of a SqlClient.SqlCommand. I decided to run some code to measure the difference, because I think that's the only way to form a decent opinion. This is the code I used to run a certain query 1000 times:

private void InvokeQuery(Action<SqlDataReader> mapObject)
{
    Stopwatch stopwatch = Stopwatch.StartNew();

    for (int i = 0; i < Iterations; i++)
    {
        using (var sqlCommand = new SqlCommand(this._query, this._sqlConnection))
        {
            using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader())
            {
                while (sqlDataReader.Read()) // Read() advances row by row; NextResult() would skip to the next result set
                {
                    mapObject(sqlDataReader);
                }
            }
        }
    }

    stopwatch.Stop();

    Debug.WriteLine("Running {0} queries took {1} milliseconds!", Iterations, stopwatch.ElapsedMilliseconds);
}

mapObject either uses the ordinals directly, or fetches them based on the column names. Also, I moved everything inside of the for loop to ensure nothing could be reused between queries. Here are the mapObject Actions, first with GetOrdinal:

Action<SqlDataReader> mapSalesOrderHeader = sqlDataReader =>
{
    int salesOrderID = sqlDataReader.GetOrdinal("SalesOrderID");
    int revisionNumber = sqlDataReader.GetOrdinal("RevisionNumber");
    int orderDate = sqlDataReader.GetOrdinal("OrderDate");
    int dueDate = sqlDataReader.GetOrdinal("DueDate");
    int shipDate = sqlDataReader.GetOrdinal("ShipDate");
    int status = sqlDataReader.GetOrdinal("Status");
    int onlineOrderFlag = sqlDataReader.GetOrdinal("OnlineOrderFlag");
    int salesOrderNumber = sqlDataReader.GetOrdinal("SalesOrderNumber");
    int purchaseOrderNumber = sqlDataReader.GetOrdinal("PurchaseOrderNumber");
    int accountNumber = sqlDataReader.GetOrdinal("AccountNumber");
    int customerID = sqlDataReader.GetOrdinal("CustomerID");
    int salesPersonID = sqlDataReader.GetOrdinal("SalesPersonID");
    int territoryID = sqlDataReader.GetOrdinal("TerritoryID");
    int billToAddressID = sqlDataReader.GetOrdinal("BillToAddressID");
    int shipToAddressID = sqlDataReader.GetOrdinal("ShipToAddressID");
    int shipMethodID = sqlDataReader.GetOrdinal("ShipMethodID");
    int creditCardID = sqlDataReader.GetOrdinal("CreditCardID");
    int creditCardApprovalCode = sqlDataReader.GetOrdinal("CreditCardApprovalCode");
    int currencyRateID = sqlDataReader.GetOrdinal("CurrencyRateID");
    int subTotal = sqlDataReader.GetOrdinal("SubTotal");
    int taxAmt = sqlDataReader.GetOrdinal("TaxAmt");
    int freight = sqlDataReader.GetOrdinal("Freight");
    int totalDue = sqlDataReader.GetOrdinal("TotalDue");
    int comment = sqlDataReader.GetOrdinal("Comment");
    int rowguid = sqlDataReader.GetOrdinal("rowguid");
    int modifiedDate = sqlDataReader.GetOrdinal("ModifiedDate");

    var temp = new SalesOrderHeader(
        salesOrderID: sqlDataReader.GetInt32(salesOrderID),
        revisionNumber: sqlDataReader.GetInt16(revisionNumber),
        orderDate: sqlDataReader.GetDateTime(orderDate),
        dueDate: sqlDataReader.GetDateTime(dueDate),
        shipDate: sqlDataReader.GetDateTime(shipDate),
        status: sqlDataReader.GetInt16(status),
        onlineOrderFlag: sqlDataReader.GetBoolean(onlineOrderFlag),
        salesOrderNumber: sqlDataReader.GetString(salesOrderNumber),
        purchaseOrderNumber: sqlDataReader.GetString(purchaseOrderNumber),
        accountNumber: sqlDataReader.GetString(accountNumber),
        customerID: sqlDataReader.GetInt32(customerID),
        salesPersonID: sqlDataReader.GetInt32(salesPersonID),
        territoryID: sqlDataReader.GetInt32(territoryID),
        billToAddressID: sqlDataReader.GetInt32(billToAddressID),
        shipToAddressID: sqlDataReader.GetInt32(shipToAddressID),
        shipMethodID: sqlDataReader.GetInt32(shipMethodID),
        creditCardID: sqlDataReader.GetInt32(creditCardID),
        creditCardApprovalCode: sqlDataReader.GetString(creditCardApprovalCode),
        currencyRateID: sqlDataReader.GetInt32(currencyRateID),
        subTotal: sqlDataReader.GetDecimal(subTotal),
        taxAmt: sqlDataReader.GetDecimal(taxAmt),
        freight: sqlDataReader.GetDecimal(freight),
        totalDue: sqlDataReader.GetDecimal(totalDue),
        comment: sqlDataReader.GetString(comment),
        rowguid: sqlDataReader.GetGuid(rowguid),
        modifiedDate: sqlDataReader.GetDateTime(modifiedDate)
        );
};

And without GetOrdinal:

Action<SqlDataReader> mapSalesOrderHeader = sqlDataReader =>
{
    new SalesOrderHeader(
        salesOrderID: sqlDataReader.GetInt32(0),
        revisionNumber: sqlDataReader.GetInt16(1),
        orderDate: sqlDataReader.GetDateTime(2),
        dueDate: sqlDataReader.GetDateTime(3),
        shipDate: sqlDataReader.GetDateTime(4),
        status: sqlDataReader.GetInt16(5),
        onlineOrderFlag: sqlDataReader.GetBoolean(6),
        salesOrderNumber: sqlDataReader.GetString(7),
        purchaseOrderNumber: sqlDataReader.GetString(8),
        accountNumber: sqlDataReader.GetString(9),
        customerID: sqlDataReader.GetInt32(10),
        salesPersonID: sqlDataReader.GetInt32(11),
        territoryID: sqlDataReader.GetInt32(12),
        billToAddressID: sqlDataReader.GetInt32(13),
        shipToAddressID: sqlDataReader.GetInt32(14),
        shipMethodID: sqlDataReader.GetInt32(15),
        creditCardID: sqlDataReader.GetInt32(16),
        creditCardApprovalCode: sqlDataReader.GetString(17),
        currencyRateID: sqlDataReader.GetInt32(18),
        subTotal: sqlDataReader.GetDecimal(19),
        taxAmt: sqlDataReader.GetDecimal(20),
        freight: sqlDataReader.GetDecimal(21),
        totalDue: sqlDataReader.GetDecimal(22),
        comment: sqlDataReader.GetString(23),
        rowguid: sqlDataReader.GetGuid(24),
        modifiedDate: sqlDataReader.GetDateTime(25));
};

With GetOrdinal the results are:

[Screenshot: CreateWithGetOrdinal timings]

And without:

[Screenshot: CreateWithoutGetOrdinal timings]

As you can see, the performance difference is so low that I honestly don't think you should sacrifice the readability and maintainability of your code for a mere 82 milliseconds over 1000 queries. Readability speaks for itself: you no longer talk in ints. As for maintainability, consider the following: if your query's columns change and you forget to update your code, GetOrdinal will throw an IndexOutOfRangeException, instead of maybe an InvalidCastException or, if you're really unlucky, silently reading the wrong column and breaking your code's behavior… One sidenote to add:

GetOrdinal performs a case-sensitive lookup first. If it fails, a second, case-insensitive search occurs (a case-insensitive comparison is done using the database collation). Unexpected results can occur when comparisons are affected by culture-specific casing rules. For example, in Turkish, the following example yields the wrong results because the file system in Turkish does not use linguistic casing rules for the letter ‘i’ in “file”. The method throws an IndexOutOfRange exception if the zero-based column ordinal is not found.

GetOrdinal is kana-width insensitive.

So do watch out with casing and your culture rules. Good luck, and let me know your opinion!
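If you do want to shave off even that small cost, a middle ground (a sketch, not part of the benchmark above) is to resolve the ordinals once per result set instead of once per row:

using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader())
{
    // pay the name lookup once per query...
    int salesOrderID = sqlDataReader.GetOrdinal("SalesOrderID");
    int orderDate = sqlDataReader.GetOrdinal("OrderDate");

    while (sqlDataReader.Read())
    {
        // ...and use the cached ordinals for every row
        int id = sqlDataReader.GetInt32(salesOrderID);
        DateTime orderedOn = sqlDataReader.GetDateTime(orderDate);
        // map to your object here
    }
}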

PS: the project itself is hosted on GitHub, you can find it here!

TransactionScope & SqlConnection not rolling back? Here’s why…

A while back we ran into an issue with one of our projects where we executed an erroneous query (a missing DELETE statement), which left the database in an inconsistent state.

Which is weird, considering the fact that we use a TransactionScope.

After some digging around I found the behavior I wanted, and how to write it in correct C#.

Allow me to elaborate.

Consider a database with 3 tables:

T2 --> T1 <-- T3

Where both T2 and T3 link to an entity in T1, thus we cannot delete rows from T1 that are still referenced from T2 or T3.

I jumped to C# and started playing with some code, and discovered the following (mind you, each piece of code is actually supposed to throw an exception and abort):

This doesn’t use a TransactionScope, thus leaving the database in an inconsistent state:

using (var sqlConnection = new SqlConnection(ConnectionString))
{
    sqlConnection.Open();

    using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
    {
        sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 
        // DELETE FROM T1 will cause violation of integrity, because rows from T2 are still using rows from T1.

        sqlCommand.ExecuteNonQuery();
    } 
}

Now I wanted to wrap this in a TransactionScope, so I tried this:

using (var sqlConnection = new SqlConnection(ConnectionString))
{
    sqlConnection.Open();

    using (var transactionScope = new TransactionScope())
    {
        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 

            sqlCommand.ExecuteNonQuery();
        }

        transactionScope.Complete();
    }
}

Well, guess what: this essentially fixes nothing. Upon completion of the ExecuteNonQuery(), the database is left in the same inconsistent state: T3 was empty, which shouldn't happen, since the delete from T1 failed.

So what is the correct behavior?

Well, it doesn't matter whether you create the TransactionScope or the SqlConnection first, as long as you Open() the SqlConnection inside of the TransactionScope. A SqlConnection enlists in the ambient transaction at the moment it is opened, so a connection opened before the scope exists never joins the transaction:

using (var transactionScope = new TransactionScope())
{
    using (var sqlConnection = new SqlConnection(ConnectionString))
    {
        sqlConnection.Open();

        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 

            sqlCommand.ExecuteNonQuery();
        }

        transactionScope.Complete();
    }
}

Or the inverse (swapping the declaration of the TransactionScope and SqlConnection):

using (var sqlConnection = new SqlConnection(ConnectionString))
{
    using (var transactionScope = new TransactionScope())
    {
        sqlConnection.Open();

        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 

            sqlCommand.ExecuteNonQuery();
        }

        transactionScope.Complete();
    }
}

I wrote the test cases in a project on GitHub, which you can download, compile and run as tests yourself!

https://github.com/CSharpFan/transaction-scope

Have a good one,

-Kristof

About a dictionary, removing and adding items, and their order.

I had a weird problem today using a Dictionary. The process involved removing and adding data, and then printing it. I assumed the dictionary was ordered. I was wrong! Let me show you:

var dictionary = new Dictionary<int, string>();

dictionary.Add(5, "The");
dictionary.Add(7, "quick");
dictionary.Add(31, "brown");
dictionary.Add(145, "fox");

dictionary.Remove(7); // remove the "quick" entry

After a while I added another entry to the dictionary:

dictionary.Add(423, "jumps");

While printing this data I discovered an oddity.

dictionary
    .ToList()
    .ForEach(e => Console.WriteLine("{0} => {1}", e.Key, e.Value));

What do you expect the output of this to be? Probably this:

5 => The
31 => brown
145 => fox
423 => jumps

However the actual result was this:

5 => The
423 => jumps
31 => brown
145 => fox

The documentation tells us the following:

For purposes of enumeration, each item in the dictionary is treated as a KeyValuePair<TKey, TValue> structure representing a value and its key. The order in which the items are returned is undefined.

Interested in the actual behavior, I looked at the source code of Dictionary here.

If you look closely, first at Remove and then at Add (and subsequently Insert), you can see that when you remove an item, the dictionary keeps a reference (in freeList) to the freed entry, and the next Add reuses that slot instead of appending at the end.

What's even weirder is the behavior when you delete 2 entries, and then add 2 others:

var dictionary = new Dictionary<int, string>();

dictionary.Add(5, "The");
dictionary.Add(7, "quick");
dictionary.Add(31, "brown");
dictionary.Add(145, "fox");

dictionary.Remove(7); // remove the "quick" entry
dictionary.Remove(31); // also remove the "brown" entry

dictionary.Add(423, "jumps");
dictionary.Add(534, "high");

dictionary
    .ToList()
    .ForEach(e => Console.WriteLine("{0} => {1}", e.Key, e.Value));

Which yields:

5 => The
534 => high
423 => jumps
145 => fox

But for that you'll need to look at line 340 and further! The free list is last-in-first-out: 423 reused the slot freed by 31, and 534 then reused the slot freed by 7.

So what have we learned? Don't rely on enumeration order unless the documentation explicitly guarantees it!
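And if you do need a deterministic order, make it explicit. A quick sketch (the first option needs System.Linq):

// option 1: impose an order at enumeration time
dictionary
    .OrderBy(e => e.Key)
    .ToList()
    .ForEach(e => Console.WriteLine("{0} => {1}", e.Key, e.Value));

// option 2: a SortedDictionary keeps its keys sorted at all times
var sorted = new SortedDictionary<int, string>(dictionary);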

Have a good one!

Default values and overloads are not the same!

Consider the following class from the Awesome(r) library, using a default parameter.

public class Foo
{
    public void DoCall(int timeout = 10)
    {
        /* awesome implementation goes here */
    }
}

You get the dll and use that class & method in your code, like this:

Foo foo = new Foo();

foo.DoCall();

Can’t get much easier than this right?

Then the Awesome(r) library gets updated:

public class Foo
{
    public void DoCall(int timeout = 20)
    {
        /* awesome implementation goes here */
    }
}

Notice that the default value has changed. You assume that if you just overwrite the dll in production, you will get the new behavior.

Nope. You need to recompile. Let me show you why: the problem with default values is that they are compiled into the call site, so the developer of the Awesome(r) library is no longer in control of them.

Let’s take a look at an excerpt of the IL where we create a new Foo and call DoCall without specifying timeout:

  IL_0000:  newobj     instance void AwesomeLibrary.Foo::.ctor()
  IL_0005:  stloc.0
  IL_0006:  ldloc.0
  IL_0007:  ldc.i4.s   10
  IL_0009:  callvirt   instance void AwesomeLibrary.Foo::DoCall(int32)
  IL_000e:  ret

This is a release build.

Notice how the value 10 gets pushed onto the stack (ldc.i4.s 10), and how the next instruction calls DoCall. The default comes from the caller's assembly, not from the library.

This is a big danger in public APIs, and this is why the developer of Awesome(r) library should have used an overload instead of a default parameter:

public class Foo
{
    public void DoCall()
    { 
        this.DoCall(20); 
    }

    public void DoCall(int timeout)
    {
        /* awesome implementation goes here */
    }
}
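For comparison, here's the IL a release build should emit for the same call site against the overload version (a sketch I derived by hand, so take the offsets as approximate): the constant 10 is gone from the caller, and control over the default is back where it belongs:

  IL_0000:  newobj     instance void AwesomeLibrary.Foo::.ctor()
  IL_0005:  stloc.0
  IL_0006:  ldloc.0
  IL_0007:  callvirt   instance void AwesomeLibrary.Foo::DoCall()
  IL_000c:  ret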

This ensures that when a new version of the Awesome(r) library is released, and that release is backwards API compatible, it can just be dropped in without you having to recompile your whole codebase (but you should still test it 😛).