- A private network (*.1, *.3, *.4, …)
- A public network (other range)
- One server in between, providing routing from the private network to the public one (a network card in both VLANs; *.2 on the private side)
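The post doesn't say how forwarding was enabled on that dual-homed server. As a hedged sketch: on Windows, a server with a NIC in each VLAN can be turned into a simple router by enabling IP forwarding via the `IPEnableRouter` registry value (this assumes plain Windows routing rather than the full RRAS role):

```shell
:: Enable IP forwarding on the dual-homed Windows server (requires a reboot).
:: This is an assumption about the setup; the original post doesn't state the method.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 1 /f
```

After the reboot, the server forwards packets between its two interfaces, so private-side hosts can use its private address (*.2) as their default gateway.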
In the private network there were multiple servers whose default gateway was set to the router (*.2). They could all reach the public network.
Except for one: the domain controller.
We compared the network configuration, and everything was the same. After further investigation we looked at the routes, and it turned out the domain controller had two routes with network destination 0.0.0.0 and netmask 0.0.0.0. On one the gateway was *.1, on the other it was *.2, even though it should only have been using *.2.
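This is the kind of duplicate you can spot by dumping only the default routes. A minimal sketch (the 192.168.1.x addresses are hypothetical stand-ins for the masked *.1 and *.2 in the post):

```shell
:: Show only routes matching destination 0.0.0.0 (the default routes).
:: On the broken DC this listed two 0.0.0.0/0 entries with different gateways.
route print 0.0.0.0

:: The same information via netsh:
netsh interface ipv4 show route
```

With two 0.0.0.0/0 routes at the same metric, Windows effectively picks one of them for all non-local traffic, so which gateway "wins" is not the one you configured on the NIC.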
The weird thing was that the server itself was *.1. Because the route via *.1 was higher on the list (with the same metric, though), the server couldn't route its requests: it thought it was the router itself. Weirder still, none of this showed up in the network configuration of its network card.
By deleting the rogue route manually (with netsh), traffic flowed to the outside as normal.
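The post doesn't give the exact command, so here is a hedged sketch of removing such a route. Again, 192.168.1.1 stands in for the masked *.1 gateway and "Ethernet" for the interface name; substitute your own values:

```shell
:: Delete the bogus default route whose gateway is the server's own address.
:: Interface name and IP are hypothetical placeholders, not from the original post.
netsh interface ipv4 delete route 0.0.0.0/0 "Ethernet" 192.168.1.1

:: The classic route command can do the same:
route delete 0.0.0.0 mask 0.0.0.0 192.168.1.1
```

Check with `route print 0.0.0.0` afterwards that only the route via the real router (*.2) remains.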
We think this had to do with the server initially having *.2 as its IP address, then being promoted to a domain controller, and only afterwards having its IP changed to *.1.
Hope this is helpful.