AWS & Encryption keys: Revert manually edited policy

Since we started working with AWS, we have occasionally done things that, looking back, weren’t the best approach.

One of those things was manually applying Key Policies on Encryption Keys.

It currently looks like this:

Manually edited key policy

Notice the sentence:

We’ve detected that the policy document for this key has been manually edited. You may now edit the document directly to make changes to permissions.

This causes a number of issues: for example, you can no longer view grants through the UI, nor can you easily add and remove Key Administrators. While the API still lets you modify the grants, that wasn’t enough for the simple changes we wanted to make while testing and operating our products.

Because you cannot delete or reset keys in AWS, you have to find another way.

I do have another key that shows me the UI I want, where I can modify Key Administrators and Key Usage.

So, what do we do? We fetch the policy from a key that still shows the correct UI and see whether we can apply it to our ‘broken’ key. (Spoiler: it works.)

Should you not have a ‘working’ key and not want to create a new one just for this exercise (you can’t delete a key afterwards, so I completely understand), scroll down to the full policy below.

First, let’s get the ARN of a working key: navigate to the Encryption Keys section in the IAM Management Console, set your region, select your key, and copy the ARN:

Find the ARN

So, how do we get that correct policy? Let’s use Python with boto3.

First of all, make sure you have credentials configured in

%userprofile%\.aws\credentials

If you don’t, please follow the steps here.

Next up is ensuring we have boto3 installed. Fire up a cmd window and execute the following:

pip install boto3

When that’s done, we can open Python and ask that key for its policy.

import boto3

kms = boto3.client("kms")

policy = kms.get_key_policy(KeyId="THE ARN YOU JUST GOT FROM A WORKING KEY", PolicyName="default")["Policy"]

print(policy)

Two things here:

  1. Do paste in the correct ARN!
  2. Why default as the policy name? That’s the only one KMS supports.

That policy is a JSON string. It’s full of \n gibberish, so let’s trim that out (in the same window, we reuse that policy variable):

import json

json.dumps(json.loads(policy))

Which should give you this beautiful JSON document:

'{"Version": "2012-10-17", "Id": "key-consolepolicy-2", "Statement": [{"Action": "kms:*", "Principal": {"AWS": "arn:aws:iam::************:root"}, "Resource": "*", "Effect": "Allow", "Sid": "Enable IAM User Permissions"}, {"Action": ["kms:Describe*", "kms:Put*", "kms:Create*", "kms:Update*", "kms:Enable*", "kms:Revoke*", "kms:List*", "kms:Get*", "kms:Disable*", "kms:Delete*"], "Resource": "*", "Effect": "Allow", "Sid": "Allow access for Key Administrators"}, {"Action": ["kms:DescribeKey", "kms:GenerateDataKey*", "kms:Encrypt", "kms:ReEncrypt*", "kms:Decrypt"], "Resource": "*", "Effect": "Allow", "Sid": "Allow use of the key"}, {"Action": ["kms:ListGrants", "kms:CreateGrant", "kms:RevokeGrant"], "Resource": "*", "Effect": "Allow", "Condition": {"Bool": {"kms:GrantIsForAWSResource": true}}, "Sid": "Allow attachment of persistent resources"}]}'

(!) Notice the single quotes at the beginning and the end. You DON’T want those. Also notice that I’ve removed my Account Id (replaced by asterisks), so if you’re copy-pasting, make sure you replace them with your own Account Id, which you can find here (middle, Account Id, 12-digit number).

Now let’s go back to our broken key and paste our just-retrieved working policy into the policy field.

Hit the save button and, lo and behold, we’re back to the original UI.
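
If you’d rather apply the policy through the API instead of pasting it into the console, here is a minimal sketch that reuses the kms client and the policy variable from above (the KeyId is a placeholder you need to replace with your broken key’s ARN):

# apply the retrieved policy to the 'broken' key
kms.put_key_policy(
    KeyId="THE ARN OF YOUR BROKEN KEY",
    PolicyName="default",
    Policy=json.dumps(json.loads(policy)),
)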

Success!

Had any issues? Fire off in the comments!

Topshelf install, PowerShell and Get-Credentials

In the project I’m currently working on, we use PowerShell scripts for configuration and build execution.

This means that if you get a new laptop, a new member joins the team, or you need to change your Windows password, you just run the script again and it sets everything up in the correct locations and with the correct credentials.

The credentials were a problem though.

When installing a Topshelf service with the --interactive parameter (we need to install under the current user, not System), it prompts you for your credentials for every service you want to install. For one service that’s fine, for two it’s already tedious, for three… you get the point.

We initially used the following command line to install the services:

. $pathToServiceExe --install --interactive --autostart

To fix this, we pass the username and password to $pathToServiceExe ourselves with the -username and -password arguments, and we omit --interactive.

First gotcha here: the documentation says one must specify the arguments in this format:

. $pathToServiceExe --install --autostart -username:username -password:password

However, this is not the case: you must not separate the command-line argument and its value with a :.

Now, we don’t want to hardcode the username & password in the setup script.

So let’s get the credentials of the current user:

$credentialsOfCurrentUser = Get-Credential -Message "Please enter your username & password for the service installs"

Next up, we extract the username & password from the $credentialsOfCurrentUser variable, as we need them in clear text (potential security risk!).

One can do this in two ways. The first is getting the NetworkCredential from the PSCredential with GetNetworkCredential():

$networkCredentials = $credentialsOfCurrentUser.GetNetworkCredential();
$username = ("{0}\{1}") -f $networkCredentials.Domain, $networkCredentials.UserName # change this if you want the user@domain syntax, it will then have an empty Domain and everything will be in UserName. 
$password = $networkCredentials.Password

Notice the $username caveat.

Or, by not converting it to a NetworkCredential:

# notice the UserName contains the Domain AND the UserName, no need to extract it separately
$username = $credentialsOfCurrentUser.UserName

# little more for the password
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($credentialsOfCurrentUser.Password)
$password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

Notice the extra code to retrieve the $password in plain-text.

I would recommend combining both: use the NetworkCredential for the $password, but the regular PSCredential for the $username, as then you’re not dependent on how your user enters their username.

So the best version is:

$credentialsOfCurrentUser = Get-Credential -Message "Please enter your username & password for the service installs" 
$networkCredentials = $credentialsOfCurrentUser.GetNetworkCredential();
$username = $credentialsOfCurrentUser.UserName
$password = $networkCredentials.Password

Now that we have those variables we can pass them on to the install of the Topshelf exe:

. $pathToServiceExe install -username `"$username`" -password `"$password`" --autostart

Notice the backticks (`) to ensure the double quotes are escaped.

This way you can install all your services and only prompt your user for credentials once!
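
Putting it all together, here is a sketch of how this could look for multiple services (the service paths are hypothetical, adjust them to your own):

$credentialsOfCurrentUser = Get-Credential -Message "Please enter your username & password for the service installs"
$networkCredentials = $credentialsOfCurrentUser.GetNetworkCredential();
$username = $credentialsOfCurrentUser.UserName
$password = $networkCredentials.Password

# hypothetical list of Topshelf service executables
$servicePaths = @("C:\Services\ServiceA.exe", "C:\Services\ServiceB.exe", "C:\Services\ServiceC.exe")

foreach ($pathToServiceExe in $servicePaths)
{
    # same install command as above; the credentials are asked for only once
    . $pathToServiceExe install -username `"$username`" -password `"$password`" --autostart
}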

When frameworks try to be smart, AngularJS & Expressions

One of my colleagues just discovered this bug/feature in AngularJS: using an ngIf on the string "no" will evaluate to false.

HTML:

<div ng-app>
    <div ng-controller="yesNoController">
        <div ng-if="yes">Yes is defined, will display</div>
        <div ng-if="no">No is defined, but will not display on Angular 1.2.1</div>
        <div ng-if="notDefined">Not defined, will not display</div>
     </div>
</div>

JavaScript:

function yesNoController($scope) {
    $scope.yes = "yes";
    $scope.no = "no";
    $scope.notDefined = undefined;
}

Will print:

Yes is defined, will display

Let’s read the documentation on expressions to see where this case is covered.

.

.

.

.

Can you find it?

Neither can I.

Fix?

Use the double bang:

        <!-- ... -->
        <div ng-if="!!no">No is defined, but we need to add a double bang for it to parse correctly</div>
        <!-- ... -->

JSFiddle can be found here.

For those who care, it’s not a JS thing:

// execute this line in a console
alert("no" ? "no evaluated as true" : "no evaluated as false"); // will alert "no evaluated as true"
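
If you control the controller, another option is to not put bare strings on the scope when you really mean a boolean; a small sketch of that idea (the coercion is done in the controller instead of the template):

function yesNoController($scope) {
    // coerce to real booleans up front so the template doesn't need the double bang
    $scope.yes = !!"yes";          // true
    $scope.no = !!"no";            // true -- any non-empty string is truthy in plain JS
    $scope.notDefined = undefined; // still falsy
}

ng-if then receives a real boolean and behaves as you’d expect.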

Enabling dynamic compression (gzip) for WebAPI and IIS

A lot of code on the internet refers to writing custom ActionFilters, or even HttpHandlers that will compress your return payload for you.

For example, see this package (whose name implies that it is from Microsoft, but which then states it is not).

At the time of writing, the above-linked package even throws an error when you return a 200 OK without a body…

But in the end, it’s very simple to enable compression on your IIS server without writing a single line of code:

You first need to install the IIS Dynamic Content Compression module:

Dynamic Content Compression

Or, if you’re a command line guy, execute the following command in an elevated CMD:

dism /online /Enable-Feature /FeatureName:IIS-HttpCompressionDynamic

Next up, you need to configure Dynamic Content Compression to compress

application/json

and

application/json; charset=utf-8

To do this, execute the following commands in an elevated CMD:

cd c:\Windows\System32\inetsrv

appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json',enabled='True']" /commit:apphost
appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost

This adds the two MIME types to the list of types the module is allowed to compress. Validate that they were added with this command:

appcmd.exe list config -section:system.webServer/httpCompression

Validate that the two MIME types are there and enabled:

Validate that application/json and application/json; charset=utf-8 are compressable

And lastly, you’ll probably need to restart the Windows Process Activation Service.

Best is to do this through the UI, because I have yet to find a good way in CMD to restart a service (you can’t seem to start the services that are dependent on the one you just restarted).

In services.msc, search for Windows Process Activation Service and restart it.

Restart WPAS
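
To verify the compression end to end from a client, you can request your API with an Accept-Encoding header and inspect the response headers; a quick sketch assuming curl is available and using a hypothetical endpoint URL:

curl -s -o NUL -D - -H "Accept-Encoding: gzip" http://localhost/api/values

If everything is set up correctly, the dumped headers should contain Content-Encoding: gzip.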

Obviously there are more settings available, take a look at the httpCompression Element settings page.

I recommend reading about at least these two:

  • dynamicCompressionDisableCpuUsage
  • noCompressionForProxies

Good luck,

-Kristof

Update on handedness (menu location)

A while back I wrote about how to change the handedness (which seems to be the correct term, instead of the dreadful ‘menu on the wrong side with a touch screen’).

I got my hands on a machine that exhibited the previously mentioned problem. However, Tablet PC Settings wasn’t installed, so we couldn’t open the tab.

After searching the bowels of the internet I found the following shell shortcut:

shell:::{80F3F1D5-FECA-45F3-BC32-752C152E456E}

Entering this in Winkey+R, or in the Windows 7/8(.1) search box, will open the Tablet PC Settings and, if you don’t have a touch screen, default to the Other tab, where you can change the handedness of your menus!

Menus appear to the right of your hand.

Have a good one,

-Kristof

The impact of SqlDataReader.GetOrdinal on performance

I recently had a discussion about the impact of SqlDataReader.GetOrdinal on the execution of a SqlClient.SqlCommand. I decided to run some code to measure the difference, because I think that’s the only way to form a decent opinion. This is the code I used to run a certain query 1000 times:

private void InvokeQuery(Action mapObject)
{
    Stopwatch stopwatch = Stopwatch.StartNew();

    for (int i = 0; i < Iterations; i++)
    {
        using (var sqlCommand = new SqlCommand(this._query, this._sqlConnection))
        {
            using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader())
            {
                while (sqlDataReader.Read())
                {
                    mapObject(sqlDataReader);
                }
            }
        }
    }

    stopwatch.Stop();

    Debug.WriteLine("Running {0} queries took {1} milliseconds!", Iterations, stopwatch.ElapsedMilliseconds);
}

mapObject either uses the ordinal directly or fetches it based on the column name. I also moved everything inside the for loop to ensure nothing could be reused between queries. Here are the mapObject Actions, first the one with GetOrdinal:

Action<SqlDataReader> mapSalesOrderHeader = sqlDataReader =>
{
    int salesOrderID = sqlDataReader.GetOrdinal("SalesOrderID");
    int revisionNumber = sqlDataReader.GetOrdinal("RevisionNumber");
    int orderDate = sqlDataReader.GetOrdinal("OrderDate");
    int dueDate = sqlDataReader.GetOrdinal("DueDate");
    int shipDate = sqlDataReader.GetOrdinal("ShipDate");
    int status = sqlDataReader.GetOrdinal("Status");
    int onlineOrderFlag = sqlDataReader.GetOrdinal("OnlineOrderFlag");
    int salesOrderNumber = sqlDataReader.GetOrdinal("SalesOrderNumber");
    int purchaseOrderNumber = sqlDataReader.GetOrdinal("PurchaseOrderNumber");
    int accountNumber = sqlDataReader.GetOrdinal("AccountNumber");
    int customerID = sqlDataReader.GetOrdinal("CustomerID");
    int salesPersonID = sqlDataReader.GetOrdinal("SalesPersonID");
    int territoryID = sqlDataReader.GetOrdinal("TerritoryID");
    int billToAddressID = sqlDataReader.GetOrdinal("BillToAddressID");
    int shipToAddressID = sqlDataReader.GetOrdinal("ShipToAddressID");
    int shipMethodID = sqlDataReader.GetOrdinal("ShipMethodID");
    int creditCardID = sqlDataReader.GetOrdinal("CreditCardID");
    int creditCardApprovalCode = sqlDataReader.GetOrdinal("CreditCardApprovalCode");
    int currencyRateID = sqlDataReader.GetOrdinal("CurrencyRateID");
    int subTotal = sqlDataReader.GetOrdinal("SubTotal");
    int taxAmt = sqlDataReader.GetOrdinal("TaxAmt");
    int freight = sqlDataReader.GetOrdinal("Freight");
    int totalDue = sqlDataReader.GetOrdinal("TotalDue");
    int comment = sqlDataReader.GetOrdinal("Comment");
    int rowguid = sqlDataReader.GetOrdinal("rowguid");
    int modifiedDate = sqlDataReader.GetOrdinal("ModifiedDate");

    var temp = new SalesOrderHeader(
        salesOrderID: sqlDataReader.GetInt32(salesOrderID),
        revisionNumber: sqlDataReader.GetInt16(revisionNumber),
        orderDate: sqlDataReader.GetDateTime(orderDate),
        dueDate: sqlDataReader.GetDateTime(dueDate),
        shipDate: sqlDataReader.GetDateTime(shipDate),
        status: sqlDataReader.GetInt16(status),
        onlineOrderFlag: sqlDataReader.GetBoolean(onlineOrderFlag),
        salesOrderNumber: sqlDataReader.GetString(salesOrderNumber),
        purchaseOrderNumber: sqlDataReader.GetString(purchaseOrderNumber),
        accountNumber: sqlDataReader.GetString(accountNumber),
        customerID: sqlDataReader.GetInt32(customerID),
        salesPersonID: sqlDataReader.GetInt32(salesPersonID),
        territoryID: sqlDataReader.GetInt32(territoryID),
        billToAddressID: sqlDataReader.GetInt32(billToAddressID),
        shipToAddressID: sqlDataReader.GetInt32(shipToAddressID),
        shipMethodID: sqlDataReader.GetInt32(shipMethodID),
        creditCardID: sqlDataReader.GetInt32(creditCardID),
        creditCardApprovalCode: sqlDataReader.GetString(creditCardApprovalCode),
        currencyRateID: sqlDataReader.GetInt32(currencyRateID),
        subTotal: sqlDataReader.GetDecimal(subTotal),
        taxAmt: sqlDataReader.GetDecimal(taxAmt),
        freight: sqlDataReader.GetDecimal(freight),
        totalDue: sqlDataReader.GetDecimal(totalDue),
        comment: sqlDataReader.GetString(comment),
        rowguid: sqlDataReader.GetGuid(rowguid),
        modifiedDate: sqlDataReader.GetDateTime(modifiedDate)
        );
};

And without GetOrdinal:

Action<SqlDataReader> mapSalesOrderHeader = sqlDataReader =>
{
    new SalesOrderHeader(
        salesOrderID: sqlDataReader.GetInt32(0),
        revisionNumber: sqlDataReader.GetInt16(1),
        orderDate: sqlDataReader.GetDateTime(2),
        dueDate: sqlDataReader.GetDateTime(3),
        shipDate: sqlDataReader.GetDateTime(4),
        status: sqlDataReader.GetInt16(5),
        onlineOrderFlag: sqlDataReader.GetBoolean(6),
        salesOrderNumber: sqlDataReader.GetString(7),
        purchaseOrderNumber: sqlDataReader.GetString(8),
        accountNumber: sqlDataReader.GetString(9),
        customerID: sqlDataReader.GetInt32(10),
        salesPersonID: sqlDataReader.GetInt32(11),
        territoryID: sqlDataReader.GetInt32(12),
        billToAddressID: sqlDataReader.GetInt32(13),
        shipToAddressID: sqlDataReader.GetInt32(14),
        shipMethodID: sqlDataReader.GetInt32(15),
        creditCardID: sqlDataReader.GetInt32(16),
        creditCardApprovalCode: sqlDataReader.GetString(17),
        currencyRateID: sqlDataReader.GetInt32(18),
        subTotal: sqlDataReader.GetDecimal(19),
        taxAmt: sqlDataReader.GetDecimal(20),
        freight: sqlDataReader.GetDecimal(21),
        totalDue: sqlDataReader.GetDecimal(22),
        comment: sqlDataReader.GetString(23),
        rowguid: sqlDataReader.GetGuid(24),
        modifiedDate: sqlDataReader.GetDateTime(25));
};

With GetOrdinal the results are:

CreateWithGetOrdinal

And without:

CreateWithoutGetOrdinal

As you can see, the performance difference is so small that I honestly don’t think you should sacrifice the readability and maintainability of your code for a mere 82 milliseconds over 1000 queries. Readability speaks for itself: you no longer talk in ints. And for maintainability, consider the following: if your query columns change and you forget to update your code, GetOrdinal will throw an IndexOutOfRangeException, instead of maybe an InvalidCastException or, if you’re really unlucky, silently reading another column and breaking your code’s behavior… One sidenote to add:

GetOrdinal performs a case-sensitive lookup first. If it fails, a second, case-insensitive search occurs (a case-insensitive comparison is done using the database collation). Unexpected results can occur when comparisons are affected by culture-specific casing rules. For example, in Turkish, the following example yields the wrong results because the file system in Turkish does not use linguistic casing rules for the letter ‘i’ in “file”. The method throws an IndexOutOfRange exception if the zero-based column ordinal is not found.

GetOrdinal is kana-width insensitive.

So do watch out with casing and your culture rules. Good luck, and let me know your opinion!
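
If you are still worried about the lookup cost in tight loops, a common middle ground (not part of the benchmark above, just a sketch reusing the sqlCommand from earlier) is to resolve the ordinals once per reader and reuse the resulting ints for every row:

// Sketch: resolve each ordinal once per SqlDataReader, then reuse the ints for every row.
using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader())
{
    int salesOrderID = sqlDataReader.GetOrdinal("SalesOrderID");
    int orderDate = sqlDataReader.GetOrdinal("OrderDate");
    // ... resolve the remaining columns the same way

    while (sqlDataReader.Read())
    {
        int id = sqlDataReader.GetInt32(salesOrderID);
        DateTime orderedOn = sqlDataReader.GetDateTime(orderDate);
        // map the rest of the row here
    }
}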

PS: the project itself is hosted on GitHub, you can find it here!

Menu on the wrong side with a touch screen?

When you’re reading this you probably have a touch screen.

So, I never use my touch screen. Almost never. But I did notice that my menus in Windows (from a menu bar, not a ribbon) no longer appeared (when possible) on the right side of the clicked menu item.

Like this:

Menu appears on left side of the menu toolbar item.

Goosebumps. Something is off. It took me a while to realize it:

The menu expanded to the left!

So, what is this? I can’t remember exactly what I searched for, but the change you need to make is in Tablet PC Settings.

When your menus are expanded to the left you’ll see something like this:

Menus appear to the left of your hand.

This is different from the default that I’ve been used to since I’ve been using Windows 95.

Change it to ‘Left-handed’:

Menus appear to the right of your hand.

Hit apply, and restart any offending programs, open a menu and enjoy:

Menu appears on right side of the menu toolbar item.

I can rest easy again…

TransactionScope & SqlConnection not rolling back? Here’s why…

A while back we ran into an issue with one of our projects where we executed an erroneous query (missing DELETE statement) and then left the database in an inconsistent state.

Which is weird, considering the fact that we use a TransactionScope.

After some digging around I found the behavior I wanted, and how to write it in correct C#.

Allow me to elaborate.

Consider a database with 3 tables:

T2 --> T1 <-- T3

Both T2 and T3 reference rows in T1, so we cannot delete rows from T1 that are still referenced from T2 or T3.

I jumped into C# and started playing with some code, and discovered the following (mind you, each piece of code is actually supposed to throw an exception and abort):

This doesn’t use a TransactionScope, thus leaving the database in an inconsistent state:

using (var sqlConnection = new SqlConnection(ConnectionString))
{
    sqlConnection.Open();

    using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
    {
        sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 
        // DELETE FROM T1 will cause violation of integrity, because rows from T2 are still using rows from T1.

        sqlCommand.ExecuteNonQuery();
    } 
}

Now I wanted to wrap this in a TransactionScope, so I tried this:

using (var sqlConnection = new SqlConnection(ConnectionString))
{
    sqlConnection.Open();

    using (var transactionScope = new TransactionScope())
    {
        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 

            sqlCommand.ExecuteNonQuery();
        }

        transactionScope.Complete();
    }
}

Well, guess what: this essentially fixes nothing. Upon completion of ExecuteNonQuery(), the database is left in the same inconsistent state: T3 was emptied, which shouldn’t happen since the delete from T1 failed.

So what is the correct behavior?

Well, it doesn’t matter whether you create the TransactionScope or the SqlConnection first, as long as you Open() the SqlConnection inside the TransactionScope (the connection only enlists in the ambient transaction if that transaction is active when the connection is opened):

using (var transactionScope = new TransactionScope())
{
    using (var sqlConnection = new SqlConnection(ConnectionString))
    {
        sqlConnection.Open();

        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 

            sqlCommand.ExecuteNonQuery();
        }

        transactionScope.Complete();
    }
}                                                                                                                           

Or the inverse (swapping the declaration of the TransactionScope and SqlConnection):

using (var sqlConnection = new SqlConnection(ConnectionString))
{
    using (var transactionScope = new TransactionScope())
    {
        sqlConnection.Open();

        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 

            sqlCommand.ExecuteNonQuery();
        }

        transactionScope.Complete();
    }
}
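
If for some reason you have to Open() the connection before the scope exists, you can also enlist the already-open connection in the ambient transaction explicitly with SqlConnection.EnlistTransaction. A minimal sketch (this variant is not part of the test project below):

using (var sqlConnection = new SqlConnection(ConnectionString))
{
    sqlConnection.Open(); // opened outside any transaction, so no automatic enlistment

    using (var transactionScope = new TransactionScope())
    {
        // explicitly enlist the already-open connection in the ambient transaction
        sqlConnection.EnlistTransaction(System.Transactions.Transaction.Current);

        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;";

            sqlCommand.ExecuteNonQuery();
        }

        transactionScope.Complete();
    }
}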

I wrote the test cases in a project on GitHub, which you can download, compile and run as tests yourself!

https://github.com/CSharpFan/transaction-scope

Have a good one,

-Kristof

About a dictionary, removing and adding items, and their order.

I ran into a weird problem today using a Dictionary. The process involved removing and adding data, and then printing it. I assumed the dictionary was ordered. I was wrong! Let me show you:

var dictionary = new Dictionary<int, string>();

dictionary.Add(5, "The");
dictionary.Add(7, "quick");
dictionary.Add(31, "brown");
dictionary.Add(145, "fox");

dictionary.Remove(7); // remove the "quick" entry

After a while I added another entry to the dictionary:

dictionary.Add(423, "jumps");

While printing this data I discovered an oddity.

dictionary
    .ToList()
    .ForEach(e => Console.WriteLine("{0} => {1}", e.Key, e.Value));

What would you expect the output to be? Probably this:

5 => The
31 => brown
145 => fox
423 => jumps

However the actual result was this:

5 => The
423 => jumps
31 => brown
145 => fox

The documentation tells us the following:

For purposes of enumeration, each item in the dictionary is treated as a KeyValuePair<TKey, TValue> structure representing a value and its key. The order in which the items are returned is undefined.

Interested in the actual behavior I looked at the source code of Dictionary here.

If you look closely, first at Remove and then at Add (and subsequently Insert), you can see that when you remove an item, the dictionary keeps a reference (in freeList) to the freed ‘entry’, and the next Add reuses that slot.

What’s even weirder is the behavior when you delete 2 entries and then add 2 others:

var dictionary = new Dictionary<int, string>();

dictionary.Add(5, "The");
dictionary.Add(7, "quick");
dictionary.Add(31, "brown");
dictionary.Add(145, "fox");

dictionary.Remove(7); // remove the "quick" entry
dictionary.Remove(31); // also remove the "brown" entry

dictionary.Add(423, "jumps");
dictionary.Add(534, "high");

dictionary
    .ToList()
    .ForEach(e => Console.WriteLine("{0} => {1}", e.Key, e.Value));

Which yields:

5 => The
534 => high
423 => jumps
145 => fox

But for that you’ll need to look at line 340 and further!

So what have we learned? It’s not ordered unless MSDN tells you it is!
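
So if you do need a deterministic order, make it explicit instead of relying on insertion order; a minimal sketch (requires System.Linq, like the ToList() call above):

// order by key explicitly before printing
dictionary
    .OrderBy(entry => entry.Key)
    .ToList()
    .ForEach(e => Console.WriteLine("{0} => {1}", e.Key, e.Value));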

Have a good one!

Default values and overloads are not the same!

Consider the following class from the Awesome(r) library, which uses a default parameter.

public class Foo
{
    public void DoCall(int timeout = 10)
    {
        /* awesome implementation goes here */
    }
}

You reference the dll and use that class & method in your code, like this:

Foo foo = new Foo();

foo.DoCall();

Can’t get much easier than this right?

Then the Awesome(r) library gets updated:

public class Foo
{
    public void DoCall(int timeout = 20)
    {
        /* awesome implementation goes here */
    }
}

Notice that the default value has changed. You might assume that if you just overwrite the dll in production, you will get the new behavior.

Nope. You need to recompile. Let me show you: the problem with default values is that the developer of the Awesome(r) library is no longer in control of them.

Let’s take a look at an excerpt of the IL where we create a new Foo and call DoCall without specifying timeout:

  IL_0000:  newobj     instance void AwesomeLibrary.Foo::.ctor()
  IL_0005:  stloc.0
  IL_0006:  ldloc.0
  IL_0007:  ldc.i4.s   10
  IL_0009:  callvirt   instance void AwesomeLibrary.Foo::DoCall(int32)
  IL_000e:  ret

This is a release build.

Notice how on line 4 the value 10 gets pushed onto the stack, and the next line calls DoCall.

This is a big danger in public APIs, and this is why the developer of Awesome(r) library should have used an overload instead of a default parameter:

public class Foo
{
    public void DoCall()
    { 
        this.DoCall(20); 
    }

    public void DoCall(int timeout)
    {
        /* awesome implementation goes here */
    }
}

This ensures that when a new version of the Awesome(r) library is released AND that release is backwards API compatible, it can just be dropped in without you having to recompile your whole codebase (but you should still test it 😛).
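
For completeness, here is a sketch of roughly what the same call site compiles to against the overload version (mirroring the excerpt above): no constant is pushed onto the stack anymore, so the default value now lives, and can change, inside the library:

  IL_0000:  newobj     instance void AwesomeLibrary.Foo::.ctor()
  IL_0005:  stloc.0
  IL_0006:  ldloc.0
  IL_0007:  callvirt   instance void AwesomeLibrary.Foo::DoCall()
  IL_000c:  ret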