The impact of SqlDataReader.GetOrdinal on performance

I recently had a discussion about the impact of SqlDataReader.GetOrdinal on the execution of a SqlClient.SqlCommand. I then decided to run some code to measure the difference, because I think that’s the only way to form a decent opinion. This is the code that I’ve used to run a certain query 1000 times:

private void InvokeQuery(Action mapObject)
{
    Stopwatch stopwatch = Stopwatch.StartNew();

    for (int i = 0; i < Iterations; i++)
    {
        using (var sqlCommand = new SqlCommand(this._query, this._sqlConnection))
        {
            using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader())
            {
                while (sqlDataReader.Read())
                {
                    mapObject(sqlDataReader);
                }
            }
        }
    }

    stopwatch.Stop();

    Debug.WriteLine("Running {0} queries took {1} milliseconds!", Iterations, stopwatch.ElapsedMilliseconds);
}

mapObject either uses the ordinal directly, or fetches the ordinal based on the column name. I also moved everything inside the for loop to ensure nothing could be reused between queries. Here are the mapObject Actions, first with GetOrdinal:

Action<SqlDataReader> mapSalesOrderHeader = sqlDataReader =>
{
    int salesOrderID = sqlDataReader.GetOrdinal("SalesOrderID");
    int revisionNumber = sqlDataReader.GetOrdinal("RevisionNumber");
    int orderDate = sqlDataReader.GetOrdinal("OrderDate");
    int dueDate = sqlDataReader.GetOrdinal("DueDate");
    int shipDate = sqlDataReader.GetOrdinal("ShipDate");
    int status = sqlDataReader.GetOrdinal("Status");
    int onlineOrderFlag = sqlDataReader.GetOrdinal("OnlineOrderFlag");
    int salesOrderNumber = sqlDataReader.GetOrdinal("SalesOrderNumber");
    int purchaseOrderNumber = sqlDataReader.GetOrdinal("PurchaseOrderNumber");
    int accountNumber = sqlDataReader.GetOrdinal("AccountNumber");
    int customerID = sqlDataReader.GetOrdinal("CustomerID");
    int salesPersonID = sqlDataReader.GetOrdinal("SalesPersonID");
    int territoryID = sqlDataReader.GetOrdinal("TerritoryID");
    int billToAddressID = sqlDataReader.GetOrdinal("BillToAddressID");
    int shipToAddressID = sqlDataReader.GetOrdinal("ShipToAddressID");
    int shipMethodID = sqlDataReader.GetOrdinal("ShipMethodID");
    int creditCardID = sqlDataReader.GetOrdinal("CreditCardID");
    int creditCardApprovalCode = sqlDataReader.GetOrdinal("CreditCardApprovalCode");
    int currencyRateID = sqlDataReader.GetOrdinal("CurrencyRateID");
    int subTotal = sqlDataReader.GetOrdinal("SubTotal");
    int taxAmt = sqlDataReader.GetOrdinal("TaxAmt");
    int freight = sqlDataReader.GetOrdinal("Freight");
    int totalDue = sqlDataReader.GetOrdinal("TotalDue");
    int comment = sqlDataReader.GetOrdinal("Comment");
    int rowguid = sqlDataReader.GetOrdinal("rowguid");
    int modifiedDate = sqlDataReader.GetOrdinal("ModifiedDate");

    var temp = new SalesOrderHeader(
        salesOrderID: sqlDataReader.GetInt32(salesOrderID),
        revisionNumber: sqlDataReader.GetInt16(revisionNumber),
        orderDate: sqlDataReader.GetDateTime(orderDate),
        dueDate: sqlDataReader.GetDateTime(dueDate),
        shipDate: sqlDataReader.GetDateTime(shipDate),
        status: sqlDataReader.GetInt16(status),
        onlineOrderFlag: sqlDataReader.GetBoolean(onlineOrderFlag),
        salesOrderNumber: sqlDataReader.GetString(salesOrderNumber),
        purchaseOrderNumber: sqlDataReader.GetString(purchaseOrderNumber),
        accountNumber: sqlDataReader.GetString(accountNumber),
        customerID: sqlDataReader.GetInt32(customerID),
        salesPersonID: sqlDataReader.GetInt32(salesPersonID),
        territoryID: sqlDataReader.GetInt32(territoryID),
        billToAddressID: sqlDataReader.GetInt32(billToAddressID),
        shipToAddressID: sqlDataReader.GetInt32(shipToAddressID),
        shipMethodID: sqlDataReader.GetInt32(shipMethodID),
        creditCardID: sqlDataReader.GetInt32(creditCardID),
        creditCardApprovalCode: sqlDataReader.GetString(creditCardApprovalCode),
        currencyRateID: sqlDataReader.GetInt32(currencyRateID),
        subTotal: sqlDataReader.GetDecimal(subTotal),
        taxAmt: sqlDataReader.GetDecimal(taxAmt),
        freight: sqlDataReader.GetDecimal(freight),
        totalDue: sqlDataReader.GetDecimal(totalDue),
        comment: sqlDataReader.GetString(comment),
        rowguid: sqlDataReader.GetGuid(rowguid),
        modifiedDate: sqlDataReader.GetDateTime(modifiedDate)
        );
};

And without GetOrdinal:

Action<SqlDataReader> mapSalesOrderHeader = sqlDataReader =>
{
    new SalesOrderHeader(
        salesOrderID: sqlDataReader.GetInt32(0),
        revisionNumber: sqlDataReader.GetInt16(1),
        orderDate: sqlDataReader.GetDateTime(2),
        dueDate: sqlDataReader.GetDateTime(3),
        shipDate: sqlDataReader.GetDateTime(4),
        status: sqlDataReader.GetInt16(5),
        onlineOrderFlag: sqlDataReader.GetBoolean(6),
        salesOrderNumber: sqlDataReader.GetString(7),
        purchaseOrderNumber: sqlDataReader.GetString(8),
        accountNumber: sqlDataReader.GetString(9),
        customerID: sqlDataReader.GetInt32(10),
        salesPersonID: sqlDataReader.GetInt32(11),
        territoryID: sqlDataReader.GetInt32(12),
        billToAddressID: sqlDataReader.GetInt32(13),
        shipToAddressID: sqlDataReader.GetInt32(14),
        shipMethodID: sqlDataReader.GetInt32(15),
        creditCardID: sqlDataReader.GetInt32(16),
        creditCardApprovalCode: sqlDataReader.GetString(17),
        currencyRateID: sqlDataReader.GetInt32(18),
        subTotal: sqlDataReader.GetDecimal(19),
        taxAmt: sqlDataReader.GetDecimal(20),
        freight: sqlDataReader.GetDecimal(21),
        totalDue: sqlDataReader.GetDecimal(22),
        comment: sqlDataReader.GetString(23),
        rowguid: sqlDataReader.GetGuid(24),
        modifiedDate: sqlDataReader.GetDateTime(25));
};

With GetOrdinal the results are:

(screenshot: Debug output of the CreateWithGetOrdinal run)

And without:

(screenshot: Debug output of the CreateWithoutGetOrdinal run)

As you can see, the performance difference is so small that I honestly don’t think you should sacrifice the readability and maintainability of your code for a mere 82 milliseconds over 1000 queries. Readability speaks for itself: you no longer talk in ints. As for maintainability, consider the following: if your query’s columns change and you forget to update your code, GetOrdinal will throw an IndexOutOfRangeException, instead of maybe an InvalidCastException or, if you’re really unlucky, silently reading another column and breaking your code’s behavior…
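
If you want the best of both worlds, you can also call GetOrdinal once per result set and reuse the cached ordinals for every row. Here’s a minimal sketch of that idea (reusing the sqlCommand from the benchmark above, and only two columns for brevity); I haven’t measured this variant separately, so treat it as an assumption that it lands between the two approaches:

using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader())
{
    // resolve the ordinals once per result set...
    int salesOrderID = sqlDataReader.GetOrdinal("SalesOrderID");
    int totalDue = sqlDataReader.GetOrdinal("TotalDue");

    // ...and reuse the cached ints for every row
    while (sqlDataReader.Read())
    {
        Console.WriteLine("{0}: {1}",
            sqlDataReader.GetInt32(salesOrderID),
            sqlDataReader.GetDecimal(totalDue));
    }
}

One sidenote to add: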

GetOrdinal performs a case-sensitive lookup first. If it fails, a second, case-insensitive search occurs (a case-insensitive comparison is done using the database collation). Unexpected results can occur when comparisons are affected by culture-specific casing rules. For example, in Turkish, the following example yields the wrong results because the file system in Turkish does not use linguistic casing rules for the letter ‘i’ in “file”. The method throws an IndexOutOfRange exception if the zero-based column ordinal is not found.

GetOrdinal is kana-width insensitive.

So do watch out with cases, and your culture rules. Good luck, and let me know your opinion!
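
If you’re wondering what that Turkish ‘i’ remark is about, here’s a small illustration of culture-specific casing. Nothing SQL-specific, and the exact comparison GetOrdinal ends up doing depends on your database collation, so this is just to show the idea:

using System;
using System.Globalization;

class TurkishCasing
{
    static void Main()
    {
        var turkish = new CultureInfo("tr-TR");

        // In Turkish the uppercase of 'i' is 'İ' (dotted capital I), not 'I'.
        Console.WriteLine("file".ToUpper(turkish));      // FİLE
        Console.WriteLine("file".ToUpperInvariant());    // FILE

        // An ordinal case-insensitive comparison considers "file" and "FILE" equal...
        Console.WriteLine(string.Equals("file", "FILE", StringComparison.OrdinalIgnoreCase));

        // ...but a Turkish, culture-aware case-insensitive comparison does not.
        Console.WriteLine(turkish.CompareInfo.Compare("file", "FILE", CompareOptions.IgnoreCase) == 0);
    }
}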

PS: the project itself is hosted on GitHub, you can find it here!

TransactionScope & SqlConnection not rolling back? Here’s why…

A while back we ran into an issue with one of our projects where we executed an erroneous query (missing a DELETE statement) and left the database in an inconsistent state.

Which is weird, considering the fact that we use a TransactionScope.

After some digging around I found the behavior I wanted, and how to write it in correct C#.

Allow me to elaborate.

Consider a database with 3 tables:

T2 --> T1 <-- T3

Where both T2 and T3 link to an entity in T1, thus we cannot delete lines from T1 that are still referenced in T2 or T3.

I jumped to C# and started playing with some code, and discovered the following (mind you, each piece of code is actually supposed to throw an exception and abort):

This doesn’t use a TransactionScope, thus leaving the database in an inconsistent state:

using (var sqlConnection = new SqlConnection(ConnectionString))
{
    sqlConnection.Open();

    using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
    {
        sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 
        // DELETE FROM T1 will cause violation of integrity, because rows from T2 are still using rows from T1.

        sqlCommand.ExecuteNonQuery();
    } 
}

Now I wanted to wrap this in a TransactionScope, so I tried this:

using (var sqlConnection = new SqlConnection(ConnectionString))
{
    sqlConnection.Open();

    using (var transactionScope = new TransactionScope())
    {
        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 

            sqlCommand.ExecuteNonQuery();
        }

        transactionScope.Complete();
    }
}

Well guess what, this essentially fixes nothing. The database, upon completion of ExecuteNonQuery(), is left in the same inconsistent state: T3 was empty, which shouldn’t happen since the DELETE from T1 failed.

So what is the correct behavior?

Well, it doesn’t matter whether you create the TransactionScope or the SqlConnection first, as long as you Open() the SqlConnection inside of the TransactionScope:

using (var transactionScope = new TransactionScope())
{
    using (var sqlConnection = new SqlConnection(ConnectionString))
    {
        sqlConnection.Open();

        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 

            sqlCommand.ExecuteNonQuery();
        }

        transactionScope.Complete();
    }
}                                                                                                                           

Or the inverse (swapping the declaration of the TransactionScope and SqlConnection):

using (var sqlConnection = new SqlConnection(ConnectionString))
{
    using (var transactionScope = new TransactionScope())
    {
        sqlConnection.Open();

        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;"; 

            sqlCommand.ExecuteNonQuery();
        }

        transactionScope.Complete();
    }
}
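
If for some reason the connection really has to be opened before the scope exists, you can also enlist it in the ambient transaction explicitly via SqlConnection.EnlistTransaction. I haven’t added this variant to the test project, so consider it a sketch rather than something I’ve verified:

using (var sqlConnection = new SqlConnection(ConnectionString))
{
    sqlConnection.Open(); // opened before (and outside) any ambient transaction

    using (var transactionScope = new TransactionScope())
    {
        // explicitly join the ambient transaction created by the scope
        sqlConnection.EnlistTransaction(System.Transactions.Transaction.Current);

        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "USE [TransactionScopeTests]; DELETE FROM T3; DELETE FROM T1;";

            sqlCommand.ExecuteNonQuery();
        }

        transactionScope.Complete();
    }
}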

I wrote the test cases on a project on GitHub which you can download, compile and run as Tests for yourself!

https://github.com/CSharpFan/transaction-scope

Have a good one,

-Kristof

About a dictionary, removing and adding items, and their order.

I had a weird problem today using a Dictionary. The process involved removing and adding data, and then printing the data. I assumed it preserved insertion order. I was wrong! Let me show you:

var dictionary = new Dictionary<int, string>();

dictionary.Add(5, "The");
dictionary.Add(7, "quick");
dictionary.Add(31, "brown");
dictionary.Add(145, "fox");

dictionary.Remove(7); // remove the "quick" entry

After a while I added another line to the dictionary:

dictionary.Add(423, "jumps");

While printing this data I discovered an oddity.

dictionary
    .ToList()
    .ForEach(e => Console.WriteLine("{0} => {1}", e.Key, e.Value));

What do you expect the output of this to be?

5 => The
31 => brown
145 => fox
423 => jumps

However the actual result was this:

5 => The
423 => jumps
31 => brown
145 => fox

The documentation tells us the following:

For purposes of enumeration, each item in the dictionary is treated as a KeyValuePair<TKey, TValue> structure representing a value and its key. The order in which the items are returned is undefined.

Interested in the actual behavior, I looked at the source code of Dictionary here.

If you look closely, first at Remove and then at Add (and subsequently Insert), you can see that when you remove an item, the dictionary keeps a reference (in freeList) to the now-free entry, and the next Add will reuse that slot.

What’s even weirder is the behavior when you delete two entries and then add two others:

var dictionary = new Dictionary<int, string>();

dictionary.Add(5, "The");
dictionary.Add(7, "quick");
dictionary.Add(31, "brown");
dictionary.Add(145, "fox");

dictionary.Remove(7); // remove the "quick" entry
dictionary.Remove(31); // also remove the "brown" entry

dictionary.Add(423, "jumps");
dictionary.Add(534, "high");

dictionary
    .ToList()
    .ForEach(e => Console.WriteLine("{0} => {1}", e.Key, e.Value));

Which yields:

5 => The
534 => high
423 => jumps
145 => fox

This happens because the freed entries form a chain (each freed entry points to the previously freed one), so slots are reused in reverse order of removal: 423 takes the slot that 31 occupied, and 534 takes the slot that 7 occupied. To see that for yourself you’ll need to look at line 340 and onwards!

So what have we learned? Don’t assume any order unless MSDN tells you there is one!
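
If you do need a predictable order, pick a type that guarantees one instead of relying on Dictionary. A small sketch using SortedDictionary, which enumerates by key:

using System;
using System.Collections.Generic;

class OrderedAlternative
{
    static void Main()
    {
        var dictionary = new SortedDictionary<int, string>
        {
            { 5, "The" }, { 7, "quick" }, { 31, "brown" }, { 145, "fox" }
        };

        dictionary.Remove(7);
        dictionary.Add(423, "jumps");

        // Always enumerates in key order: 5, 31, 145, 423
        foreach (var e in dictionary)
        {
            Console.WriteLine("{0} => {1}", e.Key, e.Value);
        }
    }
}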

Have a good one!

Default values and overloads are not the same!

Consider the following class from the Awesome(r) library, using a default parameter.

public class Foo
{
    public void DoCall(int timeout = 10)
    {
        /* awesome implementation goes here */
    }
}

You reference the dll and use that class and method in your code, like this:

Foo foo = new Foo();

foo.DoCall();

Can’t get much easier than this right?

Then the Awesome(r) library gets updated:

public class Foo
{
    public void DoCall(int timeout = 20)
    {
        /* awesome implementation goes here */
    }
}

Notice that the default value has changed. You assume that when you just overwrite the dll in production, you will get the new behavior.

Nope. You need to recompile. Let me show you why: the problem with default values is that the developer of the Awesome(r) library is no longer in control of them.

Let’s take a look at an excerpt of the IL where we create a new Foo and call DoCall without specifying timeout:

  IL_0000:  newobj     instance void AwesomeLibrary.Foo::.ctor()
  IL_0005:  stloc.0
  IL_0006:  ldloc.0
  IL_0007:  ldc.i4.s   10
  IL_0009:  callvirt   instance void AwesomeLibrary.Foo::DoCall(int32)
  IL_000e:  ret

This is a release build.

Notice how the value 10 gets pushed onto the stack (ldc.i4.s 10), and how the next instruction calls DoCall.
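
In other words, your already compiled assembly effectively contains this C# (a hand-written equivalent of the IL above, not decompiler output):

Foo foo = new Foo();

foo.DoCall(10); // the old default, baked in when *your* code was compiled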

This is a big danger in public APIs, and this is why the developer of Awesome(r) library should have used an overload instead of a default parameter:

public class Foo
{
    public void DoCall()
    { 
        this.DoCall(20); 
    }

    public void DoCall(int timeout)
    {
        /* awesome implementation goes here */
    }
}

This ensures that when a new version of the Awesome(r) library is released, and that release is backwards API compatible, it can just be dropped in without you having to recompile your whole codebase (but you should still test it 😛 )

The behavior of FlagsAttribute is probably not what you suspect

Let’s create an enum:

enum Foo
{
    A,
    B,
    C,
    D
}

You add the FlagsAttribute:

[FlagsAttribute]
enum Foo
{
    A,
    B,
    C,
    D
}

Meaning you want to use the enum as a set of flags, so you can combine its members. For example:

Foo foo = Foo.B | Foo.C | Foo.D;

Later, you pass this value on, and you want to test for the presence of Foo.A:

// foo is the same foo as previous 
var hasA = (foo & Foo.A) == Foo.A;

Console.WriteLine("hasA: {0}", hasA);

You’d think that hasA is false. Is it? It’s not:

hasA: True

How come? Applying the FlagsAttribute doesn’t DO anything to the constants generated for your enum members: without explicit values, A is 0, B is 1, C is 2 and D is 3, so foo & Foo.A is always 0, which is equal to Foo.A.

As per the documentation you still need to do it yourself:

Define enumeration constants in powers of two, that is, 1, 2, 4, 8, and so on. This means the individual flags in combined enumeration constants do not overlap.

So we update our enum:

[FlagsAttribute]
enum Foo
{
    A = 1,
    B = 2,
    C = 4,
    D = 8
}

and then we test our code again:

Foo foo = Foo.B | Foo.C | Foo.D;
var hasA = (foo & Foo.A) == Foo.A;

Console.WriteLine("hasA: {0}", hasA);

And the result is:

hasA: False

Success!

Hope you have a good one,

-Kristof

PS: please note that I should have added a None enum member, as per the documentation:

Use None as the name of the flag enumerated constant whose value is zero. You cannot use the None enumerated constant in a bitwise AND operation to test for a flag because the result is always zero.
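
For completeness, here’s a sketch of how the enum could look with such a None member, together with the Enum.HasFlag helper available since .NET 4 (the None caveat from the quote applies to HasFlag as well):

using System;

[Flags]
enum Foo
{
    None = 0, // the documented convention: a named zero value
    A = 1,
    B = 2,
    C = 4,
    D = 8
}

class Program
{
    static void Main()
    {
        Foo foo = Foo.B | Foo.C | Foo.D;

        // HasFlag reads a bit nicer than the manual AND and behaves the same for non-zero flags
        Console.WriteLine("hasA: {0}", foo.HasFlag(Foo.A)); // hasA: False

        // You cannot test for None this way: x.HasFlag(Foo.None) is always true,
        // so compare against None directly instead.
        Console.WriteLine("isEmpty: {0}", foo == Foo.None); // isEmpty: False
    }
}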

Foreach now captures variables! (Access to modified closure)

Foreach has changed in C# 5.0!

Consider the following piece of code in C# < 5.0:

public class Test
{
    public static void Main()
    {
        var words = new[] { "foo", "bar", "baz", "beer" };
        var actions = new List<Action>();
        foreach (string word in words)
        {
            actions.Add(() => Console.WriteLine(word));
        }

        actions.ForEach(e => e());
    }
}

What will this print?

Some of you will have seen the warning that ReSharper shows on the actions.Add line:

Access to foreach variable in closure. May have different behaviour when compiled with different versions of compiler

Notice the second sentence, and remember this warning, we’ll get back to it!

Now go ahead, try and run this in Visual Studio 2010. This will be your result:

beer beer beer beer

While I do love beer, this is not what I expect.

So how do we fix it? Well, either let ReSharper fix it (Alt+Enter -> Enter), or manually capture the current word in a different variable:

public class Test
{
    public static void Main()
    {
        var words = new[] { "foo", "bar", "baz", "beer" };
        var actions = new List<Action>();
        foreach (string word in words)
        {
            string temp = word;
            actions.Add(() => Console.WriteLine(temp));
        }

        actions.ForEach(e => e());
    }
}

Problem solved. The code above has identical results in Visual Studio 2012.

However…

Using the first piece of code (without our temp variable) in Visual Studio 2012 the result is as follows:

foo bar baz beer

Wait what?

The compiler has changed (note that even when targeting .NET 3.5, 4, or 4.5, Visual Studio 2012 uses the new C# 5.0 compiler!).

Meaning that our variable word is now declared inside of the foreach loop, and not outside.
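
To make the difference concrete, here is a simplified, hand-written sketch of what the two compilers effectively do with that loop (not actual compiler output):

using System;
using System.Collections.Generic;

class ForeachCaptureSketch
{
    static void Main()
    {
        var words = new[] { "foo", "bar", "baz", "beer" };
        var actions = new List<Action>();

        // Pre-C# 5.0 (simplified): one 'word' variable lives outside the loop,
        // so every lambda captures the same variable and they all print "beer".
        string word = null;
        for (int i = 0; i < words.Length; i++)
        {
            word = words[i];
            actions.Add(() => Console.WriteLine(word));
        }

        actions.Clear();

        // C# 5.0 (simplified): the iteration variable is logically declared
        // inside the loop, so every lambda captures its own copy.
        for (int i = 0; i < words.Length; i++)
        {
            string current = words[i];
            actions.Add(() => Console.WriteLine(current));
        }

        actions.ForEach(a => a()); // foo bar baz beer
    }
}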

This change can be found in the C# 5.0 spec, pages 247-248, which is on your machine once you’ve installed VS2012 (not Express), in: C:\Program Files (x86)\Microsoft Visual Studio 11.0\VC#\Specifications\1033

If v was declared outside of the while loop, it would be shared among all iterations, and its value after the for loop would be the final value, 13, which is what the invocation of f would print. Instead, because each iteration has its own variable v, the one captured by f in the first iteration will continue to hold the value 7, which is what will be printed. (Note: earlier versions of C# declared v outside of the while loop.)

Note 1: read the file to get the meaning of ‘v’ and those values (like 13 and 7).
Note 2: I’ve tweeted to some guys to get the spec online.

While this is not necessarily a problem for projects coming from 2010 and upgrading to 2012, it can be an issue when you are round-tripping projects, for example in mixed teams: developers using 2012 then need to keep writing code that assumes the old behavior.

Worse, if your build system is not on 2012 yet, its results will differ from what you see locally!

Watch out for this!

Have a good one,

-Kristof

When using an enum in PowerShell, use the member’s name, not the member’s value

Consider the following enum in C#:

enum State
{
    Started,
    Stopped,
    Unknown
}

Note that I have not added an explicit value for the enum members. They will be generated by the compiler. As stated in the C# spec:

… its associated value is set implicitly, as follows:

  • If the enum member is the first enum member declared in the enum type, its associated value is zero.
  • Otherwise, the associated value of the enum member is obtained by increasing the associated value of the textually preceding enum member by one. This increased value must be within the range of values that can be represented by the underlying type, otherwise a compile-time error occurs.

Found at http://www.microsoft.com/en-us/download/details.aspx?id=7029, page 400-401 (I can’t find the version for 4.5 though…).

Now what are the consequences of this? Consider the following piece of PowerShell:

$result = $serviceController.GetServiceStatus()
if($result -eq 1)
{
    MyLib.StartService()
}

This will work, because PowerShell implicitly converts the int to the actual enum member.

However, since we are relying on the numeric value, this can go wrong. In the next version extra values are added, for example to represent a starting/stopping service:

enum State
{
    Starting,
    Started,
    Stopping,
    Stopped,
    Unknown
}

Since all the values have now shifted, when you run your PowerShell again you start the service when it’s already started 😉 .

Solution?

First of all (as a consumer), use the enum’s member name instead of its value:

$result = $serviceController.GetServiceStatus()
if($result -eq [MyLib.State]::Stopped)
{
    MyLib.StartService()
}

This ensures that you compare against Stopped, no matter which numeric value it happens to have.

As a developer of a library you should ensure that you never mess up the order of an enum, either by only adding new values at the end, or (preferred) by setting the values yourself:

enum State
{
    Started = 0,
    Stopped = 1,
    Unknown = 2,
}

Becomes:

enum State
{
    Starting = 3,
    Started = 0,
    Stopping = 4,
    Stopped = 1,
    Unknown = 2,
}

And now you can also safely reorder the declarations, for example so they follow the numeric order:

enum State
{
    Started = 0,
    Stopped = 1,
    Unknown = 2,
    Starting = 3,
    Stopping = 4,
}
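
A cheap way to guard against accidental renumbering is to pin the values in a test. This is just a sketch, assuming the State enum above is accessible and using a plain Debug.Assert instead of a real test framework:

using System.Diagnostics;

static class StateContractTests
{
    static void Main()
    {
        // If anyone reorders the enum without keeping the explicit values, these assertions fail.
        Debug.Assert((int)State.Started == 0);
        Debug.Assert((int)State.Stopped == 1);
        Debug.Assert((int)State.Unknown == 2);
        Debug.Assert((int)State.Starting == 3);
        Debug.Assert((int)State.Stopping == 4);
    }
}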

Hope you have a good one,

-Kristof

WebClient not sending credentials? Here’s why!

TL;DR version here.
This post applies to more than just GitHub, read the rest to see the behavior!

I was playing with the GitHub API (more specifically generating a new OAuth token).

So what you need to do, as per the documentation, is to post a certain JSON string to https://api.github.com/authorizations, with Basic Authentication. I’m going to use the WebClient class for this.

This is the JSON string that you need to post:

{
  "scopes": [
    "repo"
  ],
  "note": "API test"
}

Now this is the code I used:

var content = new
{
    scopes = new[] { "repo" },
    note = "API test",
};

var webClient = new WebClient
{
    Credentials = new NetworkCredential("*****", "*****"),
};

// JsonConvert is from Newtonsoft.Json, very handy!
string serializedObject = JsonConvert.SerializeObject(content);

string reply = webClient.UploadString(new Uri("https://api.github.com/authorizations"), "POST", serializedObject);

dynamic deserializedReply = JsonConvert.DeserializeObject(reply);

Console.WriteLine(deserializedReply.token);
Console.ReadLine();

However, when using this piece of code I always get a 404 Not Found.

Reading through the Github API documentation yields the following:

There are three ways to authenticate through GitHub API v3. Requests that require authentication will return 404, instead of 403, in some places. This is to prevent the accidental leakage of private repositories to unauthorized users.

(emphasis mine)

So there’s a good chance that we just hit GitHub’s security through obscurity, and that we would normally get a 403.

In fact, I tested it with GitHub Enterprise, and that one just returns a 403, so that’s how I figured out that the URL I was calling is correct (GitHub Enterprise doesn’t hide information like the regular GitHub does):

403 forbidden from a Github Enterprise instance

Let’s try the same code on a Simple IIS website with basic authentication:


I then simplified the code to just download the contents of the website, with a simple GET:

var webClient = new WebClient
{
    Credentials = new NetworkCredential("*****", "*****"),
};

string reply = webClient.DownloadString(new Uri("http://localhost/CredentialTest"));

Console.WriteLine(reply);
Console.ReadLine();

So that gives me the response (Default.aspx contains ‘Hi, it works’).

Now what’s going on? Is it the POST that conflicts?

Using the same code, but instead of DownloadString, I upload some arbitrary piece of text with UploadString which by default uses POST.

var webClient = new WebClient
{
    Credentials = new NetworkCredential("*****", "*****"),
};

string reply = webClient.UploadString(new Uri("http://localhost/CredentialTest/Default.aspx"), "somerandomstuff");

Console.WriteLine(reply);
Console.ReadLine();

Please note that I post directly to Default.aspx; IIS doesn’t allow POSTing to directories (I’m sure you can enable it).

Anyway, this also works.

Next step? I was thinking that maybe WebClient only sends the credentials when it detects that the machines are in the same domain / workgroup?

Let’s find out with Fiddler.

I first monitored the flow from the console app to IIS and I was surprised to see that there were actually two requests, and what’s even weirder is that the first request doesn’t send the credentials (notice I still use POST, to mimic our code that connects to GitHub):

First request to IIS

Instead of returning a 403 on the file, IIS nicely returns a 401 with the WWW-Authenticate header:

first response

The WebClient is then smart enough to resend the request WITH the credentials:

second request

And then the server nicely responds with a 200, and the contents are sent (note that the pictures show just the headers).

second response

This flow is always the same, whether it is a GET or a POST (what’s weird is that if you want to POST a 2GB file, you send it, the server replies 401, and you have to send the 2GB file all over again…).
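
If re-sending a large body after the 401 challenge bothers you, one option is to subclass WebClient and turn on PreAuthenticate on the underlying request; after the first challenged request, credentials are then sent proactively for that URI space. This is a sketch I haven’t profiled, and note that it doesn’t help against GitHub, which never sends the challenge in the first place:

using System;
using System.Net;

// A WebClient that asks the underlying request to pre-authenticate.
public class PreAuthenticatingWebClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);

        // After the first 401 round-trip, send the Authorization header
        // on subsequent requests without waiting for another challenge.
        request.PreAuthenticate = true;

        return request;
    }
}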

Now that we know that our WebClient is behaving correctly, I decided to go and look at the request and response from GitHub:

The first request (like with IIS), doesn’t contain the Authorization header:

First GitHub request

However, in contrast to IIS, GitHub doesn’t play nice. It doesn’t send a 401 with a WWW-Authenticate header, it just returns 404 (or 403 on Github Enterprise).

First and only response from GitHub

For GitHub it is perfectly valid to send a 404 if it doesn’t want to disclose information.

The only problem is that the WebClient then doesn’t know it should authenticate, and thus we need to do it ourselves!

We need to manually inject the Authorization header so it is sent with the very first request when calling GitHub (or any webserver that behaves like this):

var content = new
{
    scopes = new[] { "repo" },
    note = "API test",
};

var webClient = new WebClient();

// replace webClient.Credentials = new NetworkCredential("*****", "*****") with these two lines

// create the credentials: base64 encoding of username:password
string credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("*****" + ":" + "*****"));

// inject this string as the Authorization header
webClient.Headers[HttpRequestHeader.Authorization] = string.Format("Basic {0}", credentials);

// continue as you are used to!
string serializedObject = JsonConvert.SerializeObject(content);

string reply = webClient.UploadString(new Uri("https://api.github.com/authorizations"), "POST", serializedObject);

dynamic deserializedReply = JsonConvert.DeserializeObject(reply);

Console.WriteLine(deserializedReply.token);
Console.ReadLine();

And we have our token!

Have a good one!

-Kristof

Unless you have a VERY good reason, rethrow your exception.

I see it a lot: people write a function, they catch a possible exception and throw one of their own, an OmgSomethingHorribleHappenedException.

Check this simplified example:

private static void Main()
{
	try
	{
		var result = (new Api()).Find("foo");
 
		Process(result);
	}
	catch (Exception e)
	{
		Console.WriteLine(e.Message + Environment.NewLine + e.StackTrace);
		Console.Read();
	}
}

And the Api.Find(string toFind) method:

namespace ExceptionRethrow
{
	using System;
	using System.Linq;
 
	public class Api
	{
		public string Find(string toFind)
		{
			try
			{
				// force an InvalidOperationException
				return (new string[] { }).Single(e => e == toFind);
			}
			catch (InvalidOperationException e)
			{
				Logger.Log(string.Format("Tried to look for element {0}, not found", toFind));
 
				throw e;
			}
		}
	}
 
	public class Logger
	{
		public static void Log(string message)
		{
			Console.WriteLine(message);
		}
	}
}

As you can see in the Api.Find method we catch the exception from the Single call, log the error (to notify the developers or something, …) and throw the exception again.

However when we look at the call stack in the Main method we see this:

e.StackTrace

Odd, what’s line 19 in Api.cs? Right, throw e. Useful! NOT!

So you need to replace the throw e with a plain throw.

//...
catch (InvalidOperationException e)
{
	Logger.Log(string.Format("Tried to look for element {0}, not found", toFind));
 
	throw; // removed the 'e'
}
//...

Then the result will look like this:

e.StackTrace

Now we already know a little more about the cause. However, to improve further, we can catch the error and throw a new exception, passing in the current exception as the inner exception:

//...
catch (InvalidOperationException e)
{
	Logger.Log(string.Format("Tried to look for element {0}, not found", toFind));
 
	throw new InvalidOperationException(e.Message, e);
}
//...

So let’s debug again:

e.StackTrace

Again we only see the lines where the exception was last thrown; however, when we investigate the InnerException’s stack trace we get way more info:

e.InnerException.StackTrace

Now that’s an exception that’s useful for debugging 🙂

We can leverage this knowledge in multiple ways, one of which is exception shielding: making sure no internal details go through to the client. For example, when your service broker detects an error, it can clear the InnerException field, thus removing any business knowledge from the exception.

When in debug mode you may want to send those exceptions over anyway, to make finding bugs easier 🙂
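
To make that concrete, here’s a sketch of what shielding could look like on the Api class from above. FindShielded is a made-up name for the example, and the #if DEBUG twist is just one way to keep the detail in debug builds:

// inside the Api class from above
public string FindShielded(string toFind)
{
	try
	{
		return this.Find(toFind);
	}
	catch (InvalidOperationException e)
	{
		Logger.Log(e.ToString()); // the full detail stays on our side

#if DEBUG
		// in debug builds, keep the original as inner exception to ease debugging
		throw new InvalidOperationException("Lookup failed", e);
#else
		// in release builds, strip the inner exception so no internals leak out
		throw new InvalidOperationException("Lookup failed");
#endif
	}
}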

You can find the complete example on GitHub.

Have a good one,

-Kristof