F#: Disabling SSL Certificate validation

Yesterday I wanted to download some content off a website with F#, however unfortunately the certificate of the website was expired.

    let result = 
        try
            let request = 
                "https://..." // placeholder for the site with the expired certificate
                |> WebRequest.Create

            let response = 
                request.GetResponse ()

            // parse data
            let parsed = "..." 

            Ok parsed
        with
        | ex ->
            Error ex

If we execute this, result will be an Error containing the following exception:

SSL certificate validation exception
"The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel."
"The remote certificate is invalid according to the validation procedure."

So how do we fix this?

The solution is to set the following code at startup of the application (or at least before the first call):

ServicePointManager.ServerCertificateValidationCallback <- 
    new RemoteCertificateValidationCallback(fun _ _ _ _ -> true)

Note that you should not actually do this, because it does not validate the certificate at ALL!
Also, this applies to ALL calls; if you only want it for a specific call you need to make some changes.

First of all, it doesn’t work with WebRequest.Create; you need to use WebRequest.CreateHttp, or cast the WebRequest to HttpWebRequest, as the property we need, ServerCertificateValidationCallback, is not available on WebRequest, only on HttpWebRequest. The resulting code looks like this:

            let request = 
                "https://..." // placeholder for the site with the expired certificate
                |> WebRequest.CreateHttp

            request.ServerCertificateValidationCallback <- new RemoteCertificateValidationCallback(fun _ _ _ _ -> true)

            let response = 
                request.GetResponse ()

Again, don’t do this in production!

If need be, do it on a single HttpWebRequest, like the last example, and write some code so that you ignore the expiration part, but leave in place the validation part.
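As a toy illustration of that idea (the names and the policy_errors set are my own, not the .NET API): accept the certificate only when expiry is the sole validation error the TLS stack reported.

```python
# Toy sketch of a selective validation callback - illustration only,
# not real TLS code. Accept the cert only when expiry is the sole error.
EXPIRED = "certificate_expired"

def accept_certificate(policy_errors):
    # policy_errors: set of error names; empty means the cert is fully valid.
    # We tolerate expiry, but never e.g. an untrusted root or name mismatch.
    return policy_errors <= {EXPIRED}

print(accept_certificate(set()))                        # valid cert -> True
print(accept_certificate({EXPIRED}))                    # only expired -> True
print(accept_certificate({EXPIRED, "untrusted_root"}))  # -> False
```

The real .NET callback receives the certificate, chain, and SslPolicyErrors flags; the same "subset of tolerated errors" check applies there.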

Code on Github!

Be careful with npm shrinkwrap

Recently we had the issue that we used version x of an npm package. However, in the course of time this package was updated by the author, and a newer version contained a critical bug, which broke our AWS deployments. Locally this was no issue, because we had a version installed that satisfied the version requirements.

In order to prevent issues like this we looked at npm shrinkwrap. This writes a file called npm-shrinkwrap.json which ‘freezes’ all of the package versions that are installed in the current project.

Now this is dangerous, as we just found out.

The issue arises when an author decides to delete a package from the feed. Delete, you ask? Yes, delete. Gone, no trace, nothing.

What’s the issue, you might ask? You’ll find out the next time you run npm install?

Not really.

Imagine you’re using Elastic Beanstalk, which, based on certain triggers, can spawn a new instance of a server, or delete one.

Now today you release your application to your servers, and you shrinkwrap your packages.

Before you do that, you obviously clear your local npm cache (located in %appdata%\npm-cache) and your local node_modules. Then you run an npm install to verify every package is correctly installed, and you do a few test runs, maybe on a local server. Then you package it and send it off to AWS.

All runs well, you’re happy, and your boss is happy.

Next week, for whatever reason, you get a high load on your servers. Elastic Beanstalk decides to add one more instance.

And then stuff starts to break. You get emails that its health is degraded. Then you get emails that its health is severe.

At 2 a.m. you open your laptop and start looking at the logs. There you find something along the lines of:

  npm ERR! Linux 3.14.48-33.39.amzn1.x86_64
  npm ERR! argv "/usr/bin/iojs" "/usr/bin/npm" "install"
  npm ERR! node v2.4.0
  npm ERR! npm  v2.13.0
  npm ERR! version not found: node-uuid@1.4.4
  npm ERR! 
  npm ERR! If you need help, you may report this error at:
  npm ERR!     <https://github.com/npm/npm/issues>
  npm ERR! Please include the following file with any support request:
  npm ERR!     /app/npm-debug.log

What? You tested locally? What happened?

Okay, you fire up a console window. You make a test dir. You run npm install node-uuid@1.4.4.

All goes well. Or does it?

Let’s look at the output:

C:\__SOURCES>mkdir test

C:\__SOURCES>cd test

C:\__SOURCES\test>npm install node-uuid@1.4.4
npm http GET https://registry.npmjs.org/node-uuid/1.4.4
npm http 404 https://registry.npmjs.org/node-uuid/1.4.4
node-uuid@1.4.4 node_modules\node-uuid

Notice the 404? I didn’t… But it’s important!

Now here’s what happened: locally I had node-uuid@1.4.4 in my cache, so npm took that one, even though the package had disappeared from the registry.

However: my new instance on Elastic Beanstalk didn’t. That’s why it failed.

So, solutions:

  • Be careful when you shrinkwrap. Stuff might break in the future, as authors delete packages
  • Create a private feed, that you curate
  • As a package author, don’t delete packages. Just don’t. Other people might depend on you.
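One way to catch this before Elastic Beanstalk does: check that every version pinned in npm-shrinkwrap.json still exists in the registry. A minimal sketch of the comparison logic in Python (the registry-fetching part is left out; `available` stands in for whatever the registry reports, and only top-level dependencies are checked):

```python
def find_unpublished(shrinkwrap, available):
    """Report shrinkwrap pins that no longer exist in the registry.

    shrinkwrap: a parsed npm-shrinkwrap.json (top-level dependencies only,
    for brevity - a real check would recurse into nested dependencies).
    available: package name -> set of versions the registry still offers.
    """
    missing = []
    for name, info in shrinkwrap.get("dependencies", {}).items():
        if info.get("version") not in available.get(name, set()):
            missing.append((name, info.get("version")))
    return missing

pins = {"dependencies": {"node-uuid": {"version": "1.4.4"}}}
registry = {"node-uuid": {"1.4.3", "1.4.5"}}  # 1.4.4 was deleted
print(find_unpublished(pins, registry))  # -> [('node-uuid', '1.4.4')]
```

Run something like this against the live registry as part of your deployment pipeline and you fail fast on your build server instead of at 2 a.m. on a fresh instance.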

DynamoDb & updating objects: it doesn’t react like SQL!

Today I stumbled upon the following bug:

We had an object with some properties that we wanted to update, but only if a certain property of that object is not set, i.e. it should be null.

    "Id": 1, // Id is the HashKey

In this case we wanted to update the object with Id 1, and set an attribute called Foo to "Bar"

To do this I wrote the following Javascript, using the aws-sdk:

function updateObject(id) {
    var dynamodb = new AWS.DynamoDB();

    dynamodb.updateItem({
        TableName: "Test", // your table name here
        Key: {
            Id: { N: id.toString() }
        },
        UpdateExpression: "SET Foo = :value",
        ExpressionAttributeValues: {
            ":value": { S: "Bar" }
        },
        ConditionExpression: "attribute_not_exists(Foo)"
    }, function(error, data) {
        if (error) {
            // TODO check that the error is a ConditionalCheckFailedException, in
            // which case the Condition failed, otherwise something else might be off.
        } else {
            console.log("All good, we've updated the object");
        }
    });
}

Now assume we have a range of 1 -> 12 in our table, where half of them already have the Foo attribute, so we should get 50% Error and 50% All good, ... (which is the case).

However, what do we expect when we update an item with Id 13?

My mind, which talks (or at least used to talk) SQL when thinking about a database, says that updating something that is not there doesn’t do anything.

Consider the following table:


With the following query:

INSERT INTO Test (Id, Foo) VALUES (1, NULL), (2, N'Bar'), (3, NULL)


UPDATE Test SET Foo = 'Bar' WHERE Id = 1 AND Foo IS NULL
IF @@ROWCOUNT = 1
  SELECT N'1 updated, set Foo to Bar'
ELSE
  SELECT N'1 not updated, Foo was already set'


UPDATE Test SET Foo = 'Bar' WHERE Id = 2 AND Foo IS NULL
IF @@ROWCOUNT = 1
  SELECT N'2 updated, set Foo to Bar'
ELSE
  SELECT N'2 not updated, Foo was already set'

UPDATE Test SET Foo = 'Bar' WHERE Id = 7 AND Foo IS NULL -- 7 Doesn't exist!
IF @@ROWCOUNT = 1
  SELECT N'7 updated, set Foo to Bar'
ELSE
  SELECT N'7 not updated, because 7 doesn''t exist!'

This will print, along with some empty result sets, the following:

1 updated, set Foo to Bar
2 not updated, Foo was already set
7 not updated, because 7 doesn't exist!

Now, that knowledge in SQL doesn’t apply to DynamoDb.

While testing some non-existing values, we saw that our code passed the test cases perfectly. That’s not how it should be.

Let’s take a look at the documentation again, and this time actually read the first line:

Edits an existing item’s attributes, or adds a new item to the table if it does not already exist.

(emphasis mine).

Image depicting Homer who also has issues updating his objects in DynamoDb

So we need to guard ourselves against updates on non-existing items? How do we do that? Let’s extend our ConditionExpression. Start by taking the original code, and change the ConditionExpression as highlighted:

function updateObject(id) {
    var dynamodb = new AWS.DynamoDB();

    dynamodb.updateItem({
        TableName: "Test", // your table name here
        Key: {
            Id: { N: id.toString() }
        },
        UpdateExpression: "SET Foo = :value",
        ExpressionAttributeValues: {
            ":id": { N: id.toString() },
            ":value": { S: "Bar" }
        },
        // make sure the object we're updating actually has
        // :id as Id, the side-effect of this is that if none of those
        // is found, it will throw a ConditionalCheckFailedException
        // which is what we want
        ConditionExpression: "Id = :id AND attribute_not_exists(Foo)"
    }, function(error, data) {
        if (error) {
            // TODO check that the error is a ConditionalCheckFailedException, in
            // which case the Condition failed, otherwise something else might be off.
        } else {
            console.log("All good, we've updated the object");
        }
    });
}
Comments? Sound off below!

AWS & Encryption keys: Revert manually edited policy

Since we’ve been working with AWS, we have sometimes done stuff that, looking back on it, wasn’t the best approach.

One of those things was manually applying Key Policies on Encryption Keys.

It looked like this:

Manually edited key policy

Notice the sentence:

We’ve detected that the policy document for this key has been manually edited. You may now edit the document directly to make changes to permissions.

This gives a lot of issues, for example, you cannot view grants anymore through the UI, nor can you easily add & remove Key Administrators. While the API allows you to modify the grants, that wasn’t enough for simple changes we’d like to make when testing / operating our products.

Because you cannot delete or reset keys in AWS, you have to find another way.

Luckily, I do have another key that shows me the UI I want, where I can modify Key Administrators and Key Usage.

So, what do we do? We fetch the policy from that key, apply it to our ‘broken’ key, and see if it works. (Spoiler: it does.)

Should you not have a ‘working’ key (as just described), and not want to create a new one for the sake of doing this (you can’t delete a key, so I completely understand), you can simply use the policy document shown further down.

First, let’s get the ARN of a working key, just navigate to the Encryption Key section in the IAM Management console, set your region and select your key, and copy the ARN:

Find the ARN

So, how do we get that correct policy? Let’s use Python with boto3.

First of all we make sure we have an account in


If you don’t, please follow the steps here.

Next up is ensuring we have boto3 installed. Fire up a cmd window and execute the following:

pip install boto3

When that’s done, we can open Python and ask that key for its policy.

import boto3

kms = boto3.client("kms")

policy = kms.get_key_policy(KeyId="THE ARN YOU JUST GOT FROM A WORKING KEY", PolicyName="default")["Policy"]

print policy

Two things here:

  1. Do paste in the correct ARN!
  2. Why default as the policy name? That’s the only one KMS supports.

That policy is a JSON string. It’s full of \n gibberish, so let’s trim that out (in the same window, we reuse that policy variable):

import json

json.dumps(json.loads(policy))
Which should give you this beautiful JSON document:

'{"Version": "2012-10-17", "Id": "key-consolepolicy-2", "Statement": [{"Action": "kms:*", "Principal": {"AWS": "arn:aws:iam::************:root"}, "Resource": "*", "Effect": "Allow", "Sid": "Enable IAM User Permissions"}, {"Action": ["kms:Describe*", "kms:Put*", "kms:Create*", "kms:Update*", "kms:Enable*", "kms:Revoke*", "kms:List*", "kms:Get*", "kms:Disable*", "kms:Delete*"], "Resource": "*", "Effect": "Allow", "Sid": "Allow access for Key Administrators"}, {"Action": ["kms:DescribeKey", "kms:GenerateDataKey*", "kms:Encrypt", "kms:ReEncrypt*", "kms:Decrypt"], "Resource": "*", "Effect": "Allow", "Sid": "Allow use of the key"}, {"Action": ["kms:ListGrants", "kms:CreateGrant", "kms:RevokeGrant"], "Resource": "*", "Effect": "Allow", "Condition": {"Bool": {"kms:GrantIsForAWSResource": true}}, "Sid": "Allow attachment of persistent resources"}]}'

(!) Notice the single quotes in the beginning and the end. You DON’T want those. Also notice that I’ve removed my Account Id (replaced by asterisks), so if you’re just copy pasting, make sure you replace them by your own Account Id, which you can find here (middle, Account Id, 12 digit number).

Now let’s go to our broken key again, and in the policy field we paste in our just-retrieved working policy.

Hit the save button, and lo and behold, we revert back to the original UI.
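The paste-into-the-console step can also be done through the API, with put_key_policy. A sketch: in real use, kms would be boto3.client("kms") and key_arn your broken key’s ARN (the ARN below is a made-up placeholder); here a stub client records the call so the sketch runs offline.

```python
def revert_key_policy(kms, key_arn, policy_json):
    # "default" is the only policy name KMS supports
    kms.put_key_policy(KeyId=key_arn, PolicyName="default", Policy=policy_json)

class StubKms:
    """Stand-in for boto3.client("kms"), recording the call."""
    def __init__(self):
        self.calls = []
    def put_key_policy(self, **kwargs):
        self.calls.append(kwargs)

kms = StubKms()
revert_key_policy(kms, "arn:aws:kms:eu-west-1:111122223333:key/example",
                  '{"Version": "2012-10-17"}')
print(kms.calls[0]["PolicyName"])  # -> default
```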


Had any issues? Sound off in the comments!

Topshelf install, PowerShell and Get-Credentials

In the project I’m currently working on, we use PowerShell scripts for configuration and build execution.

This means that if you get a new laptop, or a new member joins the team, or even when you need to change your Windows password, you just need to run the script again and it will set up everything in the correct locations & with the correct credentials.

The credentials were a problem though.

When installing a Topshelf service with the --interactive parameter (we need to install under the current user, not System) it will prompt you for your credentials for each service you want to install. For one, it’s fine, for 2, it’s already boring, for 3, … You get the point.

We initially used the following command line to install the services:

. $pathToServiceExe --install --interactive --autostart

To fix this we will give $pathToServiceExe the username and password ourselves with -username and -password. We should also omit --interactive.

First gotcha here: When reading the documentation, it says one must specify the commands in this format:

. $pathToServiceExe --install --autostart -username:username -password:password

However, this is not the case: you must not separate the command line argument and the value with a :, but with a space.

Now, we don’t want to hardcode the username & password in the setup script.

So let’s get the credentials of the current user:

$credentialsOfCurrentUser = Get-Credential -Message "Please enter your username & password for the service installs"

Next up we should extract the username & password of the $credentialsOfCurrentUser variable, as we need it in clear-text (potential security risk!).

One can do this in 2 ways, either by getting the NetworkCredential from the PSCredential with GetNetworkCredential():

$networkCredentials = $credentialsOfCurrentUser.GetNetworkCredential();
$username = ("{0}\{1}") -f $networkCredentials.Domain, $networkCredentials.UserName # change this if you want the user@domain syntax, it will then have an empty Domain and everything will be in UserName. 
$password = $networkCredentials.Password

Notice the $username caveat.

Or, by not converting it to a NetworkCredential:

# notice the UserName contains the Domain AND the UserName, no need to extract it separately
$username = $credentialsOfCurrentUser.UserName

# little more for the password
$BSTR = [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($credentialsOfCurrentUser.Password)
$password = [System.Runtime.InteropServices.Marshal]::PtrToStringAuto($BSTR)

Notice the extra code to retrieve the $password in plain-text.

I would recommend combining both, using the NetworkCredential for the $password, but the regular PSCredential for the $username as then you’re not dependent on how your user enters his username.

So the best version is:

$credentialsOfCurrentUser = Get-Credential -Message "Please enter your username & password for the service installs" 
$networkCredentials = $credentialsOfCurrentUser.GetNetworkCredential();
$username = $credentialsOfCurrentUser.UserName
$password = $networkCredentials.Password

Now that we have those variables we can pass them on to the install of the Topshelf exe:

. $pathToServiceExe install -username `"$username`" -password `"$password`" --autostart

Notice the backticks (`) to ensure the double quotes are escaped.

In this way you can install all your services and only prompt your user for his credentials once!

When frameworks try to be smart, AngularJS & Expressions

One of my colleagues just discovered this bug/feature in AngularJS: using an ngIf on a string "no" will result in false.


<div ng-app>
    <div ng-controller="yesNoController">
        <div ng-if="yes">Yes is defined, will display</div>
        <div ng-if="no">No is defined, but will not display on Angular 1.2.1</div>
        <div ng-if="notDefined">Not defined, will not display</div>
    </div>
</div>


function yesNoController($scope) {
    $scope.yes = "yes";
    $scope.no = "no";
    $scope.notDefined = undefined;
}

Will print:

Yes is defined, will display

Let’s read the documentation on expressions, to see where this case is covered.





Can you find it?

Neither can I.


Use the double bang:

        <!-- ... -->
        <div ng-if="!!no">No is defined, but we need to add a double bang for it to parse correctly</div>
        <!-- ... -->

JSFiddle can be found here.

For those who care, it’s not a JS thing:

// execute this line in a console
alert("no" ? "no evaluated as true" : "no evaluated as false"); // will alert "no evaluated as true"
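It is an Angular thing: if memory serves, Angular up to 1.2 passed expression results through an internal toBoolean helper (removed in 1.3) that coerced a handful of strings to false. A rough Python re-creation of that behavior, from memory rather than the actual source:

```python
def to_boolean(value):
    # Strings Angular <= 1.2's internal toBoolean treated as false
    # (an approximation, not copied from the Angular source).
    if isinstance(value, str):
        return value not in ("f", "0", "false", "no", "n", "[]", "")
    return bool(value)

print(to_boolean("yes"))  # True
print(to_boolean("no"))   # False - why ng-if="no" hides the element
```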

Enabling dynamic compression (gzip) for WebAPI and IIS

A lot of code on the internet refers to writing custom ActionFilters, or even HttpHandlers that will compress your return payload for you.

For example, see this package (whose name implies that it is from Microsoft, but which then says it’s not from Microsoft).

At the moment of writing the above-linked package even throws an error when you return a 200 OK without a body…

But in the end, it’s very simple to enable compression on your IIS server without writing a single line of code:

You first need to install the IIS Dynamic Content Compression module:

Dynamic Content Compression

Or, if you’re a command line guy, execute the following command in an elevated CMD:

dism /online /Enable-Feature /FeatureName:IIS-HttpCompressionDynamic

Next up you need to enable the Dynamic Content Compression module to compress the mimetypes that WebAPI returns:

application/json
application/json; charset=utf-8

To do this, execute the following commands in an elevated CMD:

cd c:\Windows\System32\inetsrv

appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json',enabled='True']" /commit:apphost
appcmd.exe set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json; charset=utf-8',enabled='True']" /commit:apphost

This adds the 2 mimetypes to the list of types the module is allowed to compress. Validate that they are added with this command:

appcmd.exe list config -section:system.webServer/httpCompression

Validate that the 2 mimetypes are there and enabled:

Validate that application/json and application/json; charset=utf-8 are compressable

And lastly, you’ll probably need to restart the Windows Process Activation Service.

Best is to do this through the UI, because I have yet to find a way in CMD to restart a service together with the services that depend on it.

In services.msc you’ll need to search for Windows Process Activation Service. Restart it.

Restart WPAS
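Once everything is restarted, you can sanity-check from code that responses actually come back compressed. A small helper (Python here, just for illustration) that decodes a raw response given its headers; in practice you’d feed it what you read off the wire after sending Accept-Encoding: gzip:

```python
import gzip

def decode_response(headers, body):
    """Return (was_compressed, decoded_body) for a raw HTTP response."""
    if headers.get("Content-Encoding", "").lower() == "gzip":
        return True, gzip.decompress(body)
    return False, body

# Simulate a compressed JSON response from IIS:
payload = b'{"ok": true}'
was_compressed, body = decode_response(
    {"Content-Encoding": "gzip"}, gzip.compress(payload))
print(was_compressed, body)  # True b'{"ok": true}'
```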

Obviously there are more settings available, take a look at the httpCompression Element settings page.

I recommend reading about at least these 2:

  • dynamicCompressionDisableCpuUsage
  • noCompressionForProxies

Good luck,


Update on handedness (menu location)

A while back I wrote about how to change the handedness (which seems to be the correct term, instead of the dreadful ‘Menu on the wrong side with a touch screen’).

I got a machine in my hands which exhibited the previously mentioned problem. However Tablet PC Settings weren’t installed, so we couldn’t open the tab.

After searching the bowels of the internet I found the following shell shortcut:


Putting this in Winkey+R, or in the Windows 7/8(.1) search box will open the Tablet PC Settings, and if you don’t have a touch screen, will default to the Other tab, where you can change the handedness of your menus!

Menus appear to the right of your hand.

Have a good one,


The impact of SqlDataReader.GetOrdinal on performance

I recently had a discussion about the impact of SqlDataReader.GetOrdinal on execution of a SqlClient.SqlCommand. I then decided to run some code to measure the difference, because I think that’s the only way to get a decent opinion. This is the code that I’ve used to run a certain query 1000 times:

private void InvokeQuery(Action<SqlDataReader> mapObject)
{
    Stopwatch stopwatch = Stopwatch.StartNew();

    for (int i = 0; i < Iterations; i++)
    {
        using (var sqlCommand = new SqlCommand(this._query, this._sqlConnection))
        using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader())
        {
            while (sqlDataReader.Read())
            {
                mapObject(sqlDataReader);
            }
        }
    }

    Debug.WriteLine("Running {0} queries took {1} milliseconds!", Iterations, stopwatch.ElapsedMilliseconds);
}

mapObject either uses the ordinal directly, or fetches the ordinal based on the column name. Also, I moved everything inside the for loop to ensure nothing could be reused between queries. Here are the mapObject Actions, first with GetOrdinal:

Action<SqlDataReader> mapSalesOrderHeader = sqlDataReader =>
{
    int salesOrderID = sqlDataReader.GetOrdinal("SalesOrderID");
    int revisionNumber = sqlDataReader.GetOrdinal("RevisionNumber");
    int orderDate = sqlDataReader.GetOrdinal("OrderDate");
    int dueDate = sqlDataReader.GetOrdinal("DueDate");
    int shipDate = sqlDataReader.GetOrdinal("ShipDate");
    int status = sqlDataReader.GetOrdinal("Status");
    int onlineOrderFlag = sqlDataReader.GetOrdinal("OnlineOrderFlag");
    int salesOrderNumber = sqlDataReader.GetOrdinal("SalesOrderNumber");
    int purchaseOrderNumber = sqlDataReader.GetOrdinal("PurchaseOrderNumber");
    int accountNumber = sqlDataReader.GetOrdinal("AccountNumber");
    int customerID = sqlDataReader.GetOrdinal("CustomerID");
    int salesPersonID = sqlDataReader.GetOrdinal("SalesPersonID");
    int territoryID = sqlDataReader.GetOrdinal("TerritoryID");
    int billToAddressID = sqlDataReader.GetOrdinal("BillToAddressID");
    int shipToAddressID = sqlDataReader.GetOrdinal("ShipToAddressID");
    int shipMethodID = sqlDataReader.GetOrdinal("ShipMethodID");
    int creditCardID = sqlDataReader.GetOrdinal("CreditCardID");
    int creditCardApprovalCode = sqlDataReader.GetOrdinal("CreditCardApprovalCode");
    int currencyRateID = sqlDataReader.GetOrdinal("CurrencyRateID");
    int subTotal = sqlDataReader.GetOrdinal("SubTotal");
    int taxAmt = sqlDataReader.GetOrdinal("TaxAmt");
    int freight = sqlDataReader.GetOrdinal("Freight");
    int totalDue = sqlDataReader.GetOrdinal("TotalDue");
    int comment = sqlDataReader.GetOrdinal("Comment");
    int rowguid = sqlDataReader.GetOrdinal("rowguid");
    int modifiedDate = sqlDataReader.GetOrdinal("ModifiedDate");

    var temp = new SalesOrderHeader(
        salesOrderID: sqlDataReader.GetInt32(salesOrderID),
        revisionNumber: sqlDataReader.GetInt16(revisionNumber),
        orderDate: sqlDataReader.GetDateTime(orderDate),
        dueDate: sqlDataReader.GetDateTime(dueDate),
        shipDate: sqlDataReader.GetDateTime(shipDate),
        status: sqlDataReader.GetInt16(status),
        onlineOrderFlag: sqlDataReader.GetBoolean(onlineOrderFlag),
        salesOrderNumber: sqlDataReader.GetString(salesOrderNumber),
        purchaseOrderNumber: sqlDataReader.GetString(purchaseOrderNumber),
        accountNumber: sqlDataReader.GetString(accountNumber),
        customerID: sqlDataReader.GetInt32(customerID),
        salesPersonID: sqlDataReader.GetInt32(salesPersonID),
        territoryID: sqlDataReader.GetInt32(territoryID),
        billToAddressID: sqlDataReader.GetInt32(billToAddressID),
        shipToAddressID: sqlDataReader.GetInt32(shipToAddressID),
        shipMethodID: sqlDataReader.GetInt32(shipMethodID),
        creditCardID: sqlDataReader.GetInt32(creditCardID),
        creditCardApprovalCode: sqlDataReader.GetString(creditCardApprovalCode),
        currencyRateID: sqlDataReader.GetInt32(currencyRateID),
        subTotal: sqlDataReader.GetDecimal(subTotal),
        taxAmt: sqlDataReader.GetDecimal(taxAmt),
        freight: sqlDataReader.GetDecimal(freight),
        totalDue: sqlDataReader.GetDecimal(totalDue),
        comment: sqlDataReader.GetString(comment),
        rowguid: sqlDataReader.GetGuid(rowguid),
        modifiedDate: sqlDataReader.GetDateTime(modifiedDate));
};

And without GetOrdinal:

Action<SqlDataReader> mapSalesOrderHeader = sqlDataReader =>
    new SalesOrderHeader(
        salesOrderID: sqlDataReader.GetInt32(0),
        revisionNumber: sqlDataReader.GetInt16(1),
        orderDate: sqlDataReader.GetDateTime(2),
        dueDate: sqlDataReader.GetDateTime(3),
        shipDate: sqlDataReader.GetDateTime(4),
        status: sqlDataReader.GetInt16(5),
        onlineOrderFlag: sqlDataReader.GetBoolean(6),
        salesOrderNumber: sqlDataReader.GetString(7),
        purchaseOrderNumber: sqlDataReader.GetString(8),
        accountNumber: sqlDataReader.GetString(9),
        customerID: sqlDataReader.GetInt32(10),
        salesPersonID: sqlDataReader.GetInt32(11),
        territoryID: sqlDataReader.GetInt32(12),
        billToAddressID: sqlDataReader.GetInt32(13),
        shipToAddressID: sqlDataReader.GetInt32(14),
        shipMethodID: sqlDataReader.GetInt32(15),
        creditCardID: sqlDataReader.GetInt32(16),
        creditCardApprovalCode: sqlDataReader.GetString(17),
        currencyRateID: sqlDataReader.GetInt32(18),
        subTotal: sqlDataReader.GetDecimal(19),
        taxAmt: sqlDataReader.GetDecimal(20),
        freight: sqlDataReader.GetDecimal(21),
        totalDue: sqlDataReader.GetDecimal(22),
        comment: sqlDataReader.GetString(23),
        rowguid: sqlDataReader.GetGuid(24),
        modifiedDate: sqlDataReader.GetDateTime(25));

With GetOrdinal the results are:


And without:


As you can see, the performance difference is so low that I honestly don’t think you should sacrifice the readability and maintainability of your code for a mere 82 milliseconds over 1000 queries. Readability speaks for itself: you don’t talk in ints anymore. As for maintainability, consider the following: if your query’s columns change and you forget to update your code, GetOrdinal will throw an IndexOutOfRangeException, instead of maybe an InvalidCastException or, if you’re really unlucky, reading another column and silently breaking your code’s behavior… One sidenote to add:
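The same trade-off shows up in other data-access APIs, and you can even get the best of both worlds by resolving ordinals from column names once per query, outside the row loop. As an illustration (Python’s sqlite3 here, not SqlClient):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE SalesOrderHeader (SalesOrderID INTEGER, SubTotal REAL)")
conn.executemany("INSERT INTO SalesOrderHeader VALUES (?, ?)",
                 [(i, i * 10.0) for i in range(100)])

cursor = conn.execute("SELECT SalesOrderID, SubTotal FROM SalesOrderHeader")

# Resolve ordinals by name once, up front - the equivalent of calling
# GetOrdinal outside the row loop instead of once per row.
ordinals = {col[0]: i for i, col in enumerate(cursor.description)}

total = sum(row[ordinals["SubTotal"]] for row in cursor)
print(total)  # 49500.0
```

You keep the readable names, pay the lookup cost once, and still index rows by position.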

GetOrdinal performs a case-sensitive lookup first. If it fails, a second, case-insensitive search occurs (a case-insensitive comparison is done using the database collation). Unexpected results can occur when comparisons are affected by culture-specific casing rules. For example, in Turkish, the following example yields the wrong results because the file system in Turkish does not use linguistic casing rules for the letter ‘i’ in “file”. The method throws an IndexOutOfRange exception if the zero-based column ordinal is not found.

GetOrdinal is kana-width insensitive.

So do watch out with cases, and your culture rules. Good luck, and let me know your opinion!

PS: the project itself is hosted on GitHub, you can find it here!

Menu on the wrong side with a touch screen?

When you’re reading this you probably have a touch screen.

So, I never use my touch screen. Almost never. But I did notice that by default my menus in Windows (from a menu bar, not a ribbon) appear (when possible) on the left side of the clicked menu item.

Like this:

Menu appears on left side of the menu toolbar item.

Goosebumps. Something is off. It took me a while to realize it:

The menu expanded to the left!

So, what is this? I can’t remember exactly what I searched for, but the change you need to make is in Tablet PC Settings.

When your menus are expanded to the left you’ll see something like this:

Menus appear to the left of your hand.

This is different from the default that I’ve been used to since I’ve been using Windows 95.

Change it to ‘Left-handed’:

Menus appear to the right of your hand.

Hit apply, and restart any offending programs, open a menu and enjoy:

Menu appears on right side of the menu toolbar item.

I can breathe easy again…