Saturday, August 30, 2008

Network Rebuild

Well, since I've told you about the WiFi Upgrade, More Network Woes, and then the Network Progress, I felt it would be appropriate to give you an update on where the network has gone and where it stands right now as a result of all of this turmoil. I also truly hope that this is where my network will remain for some time to come.


To start things off, I'm going to show you what my network looks like now. Then I will explain how and why it got to be this way. As I am pretty sure you've noticed, it looks considerably different from how things stood before the lightning struck my network.

The first couple of things that should jump out at you are that my WiFi router is now directly connected to the cable modem and my phone adapter is in a DMZ. Yes, DMZ stands for Demilitarized Zone; networking has one of these too. Anyway, these two devices swapped places because of an issue I encountered immediately after installing the new phone adapter.

As soon as I got my new adapter, I knew I had burned WAY too many peak-time cell phone minutes, so I was quite excited about getting it up and running ASAP. After all was said and done, I had gone more than 215 minutes OVER my calling plan for the month. But I digress. Once the new adapter was put into place at the head of my network, in the old location, I started to experience issues: my VPN connection to work would not remain stable. At first I thought I was having issues with my Internet connection, so I dealt with it for a while. However, when I was done working, I was still having the same issue on my other computer, without a VPN connection in the picture. Therefore, the troubleshooting started. I rebooted my entire network. Unfortunately, this didn't fix a damn thing.

Later, I decided that I needed to see if I could find log files on one of my network devices to determine what was happening. Was this issue in my network or in my connection? Well, this new phone adapter is drastically different, in a good way, from the Linksys device it replaced. One of the values readily displayed on the status screen within the device's admin module is its up-time, and this up-time value was very telling. Every time I experienced an issue with my Internet connection and looked at the up-time for this device, it was reporting it in seconds. In other words, the adapter was rebooting itself every time my connection dropped.

That pretty clearly identified the phone adapter as the issue. Having identified it, I called my service provider, Teleblend. I am not going to tell you all of the details of my experience with their tech support; that could be a rant all of its own. What I will tell you is that they were less than useful. Thankfully, I am not a typical customer, so I took matters into my own hands. I moved this device from being the first stop in my network to sitting within my network, and this move is where the DMZ came from. In order to ensure all of the SIP protocol communication still reached this device, a DMZ was the answer. With a DMZ, any inbound communication request will go to an assigned host if it is not already being forwarded to a different device. Therefore, there isn't a functional difference in how the adapter works whether it sits first or second on the network, and using the DMZ allowed me to maintain a stable Internet connection AND utilize my VoIP service.
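
To make that last point a little more concrete, here is roughly what a router's "DMZ host" setting amounts to if you spell it out as Linux iptables rules. This is only an illustration; the WiFi router does all of this internally through its web UI, and the interface name and addresses below are placeholders, not my actual ones.

    # Sketch of what a consumer router's DMZ option effectively does, written
    # as iptables NAT rules. eth0 = WAN side; all addresses are made up.

    # Specific port forwards are consulted first...
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
        -j DNAT --to-destination 172.16.2.10

    # ...and any other unsolicited inbound connection (SIP, RTP, whatever the
    # phone adapter happens to need) falls through to the DMZ host.
    iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination 172.16.2.20

So whether the adapter is the first device on the wire or the DMZ host behind the router, every unsolicited packet from the outside still lands on it.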

Once you get past the swap in location of the WiFi router and the phone adapter, the next thing you may notice is that there is a new device in the diagram. Well, you might think that, but not really. The network switch included in the new diagram was present in the old configuration; I just felt like documenting it this time. You might be asking, with two routers with 4-port switches and a combined total of 7 ports on the network already, what do I need another switch for? Well, the answer is simple. There is only one CAT-5 drop in my office and I currently have two computers in there. Although I haven't touched the second machine in a couple of weeks and it should be returned to USBank shortly, that depends on them. I have requested the shipping materials, labels, and so forth and have yet to receive them. Until then, I will keep it up while I still have it. Therefore, to keep both machines up in the office, I have added the switch.

One additional change you may notice is the lack of DHCP addresses on two of the workstations. In my last network update, I mentioned that I was able to get the routers to play nicely with one another while using the wired router's WAN port. Well, this introduced a bit of a problem: I couldn't get the DHCP configuration to play nicely. I'd rather have the physical configuration that I want and deal with the software configuration manually. I have yet to put major effort into figuring out this DHCP issue, and thus the two workstations attached to this router are currently static.
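
For anyone curious what "currently static" boils down to, here is a minimal sketch of assigning an address by hand. I'm writing it as Linux commands purely because that's easy to show; on a Windows box it's the same three pieces of information typed into the adapter's TCP/IP properties. The addresses are placeholders, not the real ones from my LAN.

    # Give the workstation a fixed address, netmask, and default gateway.
    # 172.16.3.0/24 and the .1 gateway are made-up example values.
    ifconfig eth0 172.16.3.10 netmask 255.255.255.0 up
    route add default gw 172.16.3.1 eth0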

In getting the two routers to play nicely, I had to set some static routes on the WiFi router so it could acknowledge the wired network. On the wired router, I had to disable NAT (Network Address Translation); Google it if you are that interested. However, even when I had the wired router's DHCP server enabled, the workstations would not pick it up. But once I assigned static IPs to the machines, they started working just peachy. I don't know what the deal is, but for the time being, I'm not too worried about it. I don't foresee adding any new devices to the wired network any time soon anyway.
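
For reference, the static route on the WiFi router is just the usual "to reach subnet X, send packets to router Y" entry. The router's web UI has a form for it, but written out in Linux terms it would look something like this; the subnet and the wired router's address here are placeholders, not my real values.

    # Teach the WiFi router that the wired subnet lives behind the wired router.
    # 172.16.3.0/24 = example wired subnet, 172.16.2.2 = example address of the
    # wired router's WAN port as seen from the WiFi router's LAN.
    route add -net 172.16.3.0 netmask 255.255.255.0 gw 172.16.2.2

With NAT disabled on the wired router, traffic from the wired workstations keeps its real source addresses, which is presumably why the route entry and the NAT change were both needed for the two segments to talk.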

The last change I completed, and it is likely pretty noticeable in the diagram, is the location of my server. I moved the server from being directly attached to the WiFi router to sitting behind the voice adapter. The voice adapter is the DMZ, after all, so I felt it only made sense that my externally facing server should be there too. Ironically, moving the server from one network segment to another was more difficult than it had any right to be. Just like when I had to replace the NIC after the power surge, I had to do a bit of fighting to get this to work.

Before changing the IP on the server, I removed the port forwarding entries from the WiFi router's configuration and added them to the phone adapter's configuration, leaving the server in its old location. The server reconfig was to be the last step so that, hopefully, if I did it all correctly, it would just plug in and work. In reconfiguring the server for the different subnet, I initially reassigned the IP address, broadcast address, and local network. Once I completed this, I restarted the network adapter and recabled the server into the LAN port of the phone adapter. I expected it to work.
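
Those three values all live in the interface config file I'll mention again in a minute, ifcfg-eth1. What follows is a rough reconstruction rather than a copy of my actual file: the host addresses are made up, and I'm assuming the usual Red Hat-style location for the file.

    # /etc/sysconfig/network-scripts/ifcfg-eth1 (sketch with placeholder addresses)
    DEVICE=eth1
    BOOTPROTO=static
    ONBOOT=yes
    IPADDR=172.16.4.10          # moved from the old 172.16.2.x address
    NETMASK=255.255.255.0
    NETWORK=172.16.4.0          # the "local network" value
    BROADCAST=172.16.4.255      # the broadcast address

    # Then bounce the interface to pick up the new settings:
    #   ifdown eth1 && ifup eth1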

Of course, this was asking too much. In an attempt to identify where the misconfiguration was hiding, I moved the switch to sit behind the phone adapter and cabled in my laptop. I have Apache installed on my laptop, so the standard tests to see whether the network was prepared properly would apply to port 80. Well, to my surprise, it worked on the first try. Additionally, my laptop was able to communicate with the server, both pulling up the website and reaching it over SSH for remote access. Therefore, my webserver hosting issues were not network device related. However, with all of the issues I've encountered along the way, I couldn't get the possibility of a network issue out of my head. I was so stumped as to what the issue could have been that I called Exile for assistance. He agreed it was a network-related issue, but in the server's network configuration, not the external network devices. He recommended that I research how IPTables was configured against this NIC.
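
For the curious, the "standard tests" were nothing fancy; something along these lines, with placeholder addresses and a placeholder username standing in for my real ones:

    # Does a request on port 80 make it through the adapter's forwarding/DMZ to
    # whatever is plugged in where the server will live (here, the laptop's Apache)?
    curl -I http://203.0.113.5/

    # From the laptop on that same segment, can I still reach the server itself?
    curl -I http://172.16.4.10/     # pull the website by its local IP
    ssh me@172.16.4.10              # and get a remote shell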

Once I pulled up a list of all of the firewall filters and took some time to review them seriously, I was unable to identify any rules that would cause my server to not respond to my requests. I then spent a ton of time searching for where the IPTables log information was being stored, but was unsuccessful. As it turns out, it wasn't relevant anyway. During my Google search, I came across a forum post that referenced the routes file in the same location as the ifcfg-eth1 file. Well, this was the key. Once I opened this file, I realized immediately what it contained: for all of the configuration settings stored in the ifcfg-eth1 file, the Default Gateway was not one of them; it was defined in this routes file. Since I was moving the server from 172.16.2.0 to 172.16.4.0, the Default Gateway needed to be changed as well. So, as you can guess, once I made this configuration modification, everything just popped into place. Isn't networking and system administration fun?
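
For reference, here is roughly what that step looked like. The iptables listing is the standard one; the routes file content is my best reconstruction, assuming the usual Red Hat-style network-scripts directory and a placeholder gateway address rather than my real one.

    # Dump the current firewall rules with counters (nothing suspicious showed up).
    iptables -L -n -v

    # /etc/sysconfig/network-scripts/route-eth1 (sketch)
    # The default gateway was still pointing at the old 172.16.2.x segment and
    # had to be updated for the new subnet:
    default via 172.16.4.1 dev eth1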

In conclusion, if I set up the DMZ and server correctly (which I'm not sure I did or can readily prove), this should isolate my server from the rest of my network. For example, I am unable to SSH to the server from my laptop by its local IP; in order to talk to my server over SSH, I must go through my network's public IP. However, I can pull the server's website up via its local IP. I think this is a good indicator that I am at least close. Therefore, if my DMZ is hacked, the rest of my machines should be safe. However, since I can't access the UI on my Linux server with any useful results, I guess I won't know until I rebuild it, replace it, or put my network switch back into the same network as the server and see what I can do. This is not an invitation for a hacker to try. However, if anyone has suggestions on how to better isolate my DMZ from my internal network, I am all ears.
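
If you want to run the same sanity checks yourself, this is all it amounts to, run from a machine on the internal (non-DMZ) side. The addresses and username are placeholders, not my real ones.

    ssh me@172.16.4.10             # SSH to the server's local IP: no response
    ssh me@203.0.113.5             # SSH to the public IP: gets through to the server
    curl -I http://172.16.4.10/    # the site appeared to load by local IP, although
                                   # the edit below explains what was really going on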

Later.

Edit: I am actually unable to access my server by its local IP address from within my network. I discovered today that the browser was caching the response. Therefore, the only way I am able to access the server is to traverse the external route that the general public uses.
