Barracuda ADC Load Balancer – How to show client IPs and not the proxy IP address – Part 2

In my first post on showing the client IP addresses through a Barracuda ADC load balancer, I showed how to get Direct Server Return to work for clients on the same network by adding loopback interfaces on the back-end servers.  In this post I will discuss a problem that arises when using a layer 7 proxy service with Client Impersonation enabled on a multi-homed ADC.

One of the requirements for client impersonation is that the back-end servers must use an ADC IP address as the default gateway.  In traditional two-armed deployments where the real servers sit behind the ADC, this is not a problem.  However, when the network is more complex and the ADC has interfaces in different networks, with both VIPs and real servers sitting on each of those networks, there can be some unexpected behavior in how packets are routed. For example, this diagram shows an ADC that sits on a management network, a DMZ and a LAN, with both VIPs and real servers on the DMZ and LAN.

The problem that occurs in this environment is that when the back-end servers are set up to use the ADC as the default gateway, they can no longer get to other networks.  For example, we discovered that packets that came from the back-end servers in the DMZ could not reach the LAN or get to the internet.  We found that the packets were coming in on the ADC’s DMZ interface, but then leaving the ADC’s management interface!  The Barracuda documentation states:

“If you have multiple networks, you must specify a default gateway on the NETWORK > Routes page for every interface that accepts incoming traffic.”

Even though default routes were added on the DMZ interface as shown below, the...
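For those who think in Linux terms, the per-interface default gateways Barracuda calls for amount to source-based policy routing. As an illustration only (made-up addresses, with 10.10.20.0/24 as the DMZ network and 10.10.20.1 as its upstream router; this is not how the ADC itself is configured), the equivalent on a Linux box would look like:

ip route add default via 10.10.20.1 dev eth1 table 100
ip rule add from 10.10.20.0/24 lookup 100

Traffic sourced from a DMZ address then consults its own routing table and leaves through the DMZ gateway, instead of following the main table’s default route out the management interface, which is exactly the misbehavior described above.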

Barracuda ADC Load Balancer – How to show client IPs and not the proxy IP address – Part 1

In order to see the client origination IP address on the real back-end servers when a Barracuda ADC load balancer is used, there are two options.  The first is to use layer 4 load balancing as the service type.  The other option is to use a layer 7 proxy service and enable client impersonation (while setting the back-end servers’ default gateway to the ADC, which causes other problems that I’ve discussed in part 2 of this post).  Both of these options work fine for clients outside of the back-end server network, but neither works if the clients are on the same layer 2 network.  (For example, if your virtual IP is 10.0.0.5, the back-end servers are 10.0.0.10 and 10.0.0.11, and your client is 10.0.0.100, then neither of these options will work.)  This is because when the back-end server sees that the client’s IP address is on the same network, it will not send the return packets back through the Barracuda ADC, but rather straight to the client, breaking the TCP stream.

The solution to this is to use a layer 4 TCP service with “Direct Server Return” (DSR) enabled, plus a loopback interface added to each of the back-end servers with the IP address of the service virtual IP (VIP) assigned to it.   DSR causes the ADC to rewrite the frame’s destination MAC address to the MAC of the real server before placing the packet back on the wire.  The server will accept the packet (addressed to the service VIP) because the loopback has been assigned the same IP.  The server will then respond directly to the client using the VIP as the source address.

To do this, first go to the device manager and right click...
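If you would rather script the loopback setup than click through Device Manager, on Windows Server 2008 and later something like the following should work once the Microsoft Loopback Adapter is installed. This is only a sketch using the example VIP above; the interface names “Loopback” and “Local Area Connection” are placeholders for whatever yours are actually called. The weak host settings keep Windows from dropping traffic that arrives on the physical NIC for an address bound to the loopback:

netsh interface ipv4 add address "Loopback" 10.0.0.5 255.255.255.255
netsh interface ipv4 set interface "Loopback" weakhostreceive=enabled weakhostsend=enabled
netsh interface ipv4 set interface "Local Area Connection" weakhostreceive=enabled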

Tuning VMware vSphere ESXi for an EqualLogic iSCSI SAN

Tips on tuning VMware ESXi for EqualLogic SANs.

Install Dell MEM

Download from here: https://eqlsupport.dell.com/support/download.aspx?id=1484

Important: Unzip the package; inside it is another zip file.  That inner zip is the file that should be uploaded to a datastore so that it’s accessible from the host.

SSH into the host and run the following (substituting your version of MEM).  I found that sometimes I needed to use the actual path name instead of the datastore friendly name in order for it to work:

esxcli software vib install --depot /vmfs/volumes/vmfsvol/dell/dell-eql-mem-esx5-1.2.292203.zip

(There are two dashes in front of “depot”; WordPress may format it differently.)

Before rebooting, complete the next section.

Disable large receive offload (LRO)

First check to see if it’s enabled:

esxcfg-advcfg -g /Net/TcpipDefLROEnabled

If it’s enabled, disable it:

esxcfg-advcfg -s 0 /Net/TcpipDefLROEnabled

Then reboot.

Tune the virtual and physical networks

- Change the MTU to 9000 on the virtual iSCSI switch and virtual iSCSI NICs.
- Change the MTU on the physical iSCSI switches.  On some Cisco switches this is a global config and on others an interface config (and possibly both); see the switch-side sketch below.

Tune the iSCSI initiator

Go to the host -> Configuration tab -> Storage Adapters -> iSCSI Initiator -> Properties -> Advanced, then:

- Change LoginTimeout from 5 to 60.
- DelayedAck should be unchecked.  (Update: for the delayed ACK setting to take effect, the static and dynamic iSCSI discoveries need to be deleted and the server needs to be rebooted.)

Tune the VM

- Put each VMDK on a separate virtual SCSI/SAS controller (i.e. nodes 1:0, 2:0, 3:0, NOT 1:0, 1:1, 1:2).
- Format the partitions with a 64K cluster (allocation unit) size.
- For VMs that need high IOPs, convert your virtual storage...
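For the physical switch bullet above, here is a sketch of the two Cisco variants mentioned. The exact commands vary by platform (the interface name and MTU sizes here are examples), so verify against your switch’s documentation. On switches where the jumbo MTU is a global setting, a reload is required before it takes effect:

conf t
system mtu jumbo 9000
end

On switches where it is set per interface:

conf t
interface GigabitEthernet0/1
mtu 9216
end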

Preventing hacking attempts on RDP servers

I’ve seen this quite a few times now, so I wanted to give you guys a quick overview of some fixes. Below is an obvious series of server attacks. This appears to be a dictionary attack, which is one of the most common attacks against RDP. Opening the individual log entries will show various usernames, however, these are coming from the same IP address. There are also several different attackers. Most likely these are scanners looking for easy access.

The simplest way to prevent this is to block ICMP echo replies on the firewall, so the scanners’ ping sweeps never discover the host in the first place. To do this on a Cisco IOS router:

1.  Run the command “show ip interface brief”. This will show you all of the physical (and some virtual) interfaces on the device. Look for the internet-facing interface. In the case below this is GigabitEthernet0/1.

2.  Run the command “show run interface <interface from step 1>”. Look for the line “ip access-group ### in”; this is the inbound access list. In this case it is 115.

3.  Next run the command “show ip access-list ###”. Look for the lines ending in any of the following: echo-reply, time-exceeded, or unreachable. These lines should be similar to the following format: “### permit icmp any host #.#.#.# echo-reply” or “### permit icmp any #.#.#.# #.#.#.# echo-reply”. Make a note of the line number (the first number on the line).

   a.  I also like to ensure that we have access to send ping requests to this IP address. To check for this, look for a line towards the top similar to “### permit ip host x.x.x.x any”.

4.  Next remove those...
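As a sketch of what the removal in step 4 typically looks like on IOS 12.4 and later, where numbered ACLs can be edited by sequence number from named-ACL configuration mode (the ACL number 115 is from the example above; the sequence number 110 is hypothetical, use the line numbers you noted in step 3):

conf t
ip access-list extended 115
no 110
end
show ip access-list 115

The final show command confirms the echo-reply (or time-exceeded/unreachable) entries are gone while the rest of the list stays intact.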

Enabling jumbo frames in VMware ESX

To get the best performance out of VMware’s iSCSI initiator, it’s a good idea to enable jumbo frames on the ESX hosts.  First configure a new vSwitch dedicated to the iSCSI network, and if you’re doing this in the GUI, delete the default port group that is created.  Then hop into the command line and type:

esxcfg-vswitch -m 9000 vSwitch1

(where vSwitch1 matches the name of your virtual switch). Now your virtual switch has jumbo frames enabled, but we need to add a port group with jumbo frames enabled, so enter the following to create the port group and assign an IP address:

esxcfg-vswitch -A iSCSI vSwitch1
esxcfg-vmknic -a -i 10.97.1.40 -n 255.255.255.0 -m 9000 iSCSI

Check to make sure the 9000 MTU is applied on the switch and port group by running the following:

esxcfg-vswitch -l
esxcfg-vmknic -l

And lastly test a large packet by running:

vmkping -d -s 8972 10.97.1.10...
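A note on the -d and -s values above, since the header math trips people up: -s sets the ICMP payload size, and the 20-byte IP header plus 8-byte ICMP header must also fit within the 9000-byte MTU, so 8972 is the largest payload that can cross in one frame, while -d sets the don’t-fragment bit so a fragmenting path can’t masquerade as a jumbo-clean one. A quick pass/fail pair using the example address above:

vmkping -d -s 8972 10.97.1.10
vmkping -d -s 8973 10.97.1.10

The first should succeed end to end; the second should fail, because the packet no longer fits in a single 9000-byte frame.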

Windows 7 is coming

With Windows 7 public availability around the corner, ADNS will be making the move this Friday.  We will report our experiences on this blog, covering everything from upgrades (Vista only) to new installations. The majority of our workstations are Dell Vostro 400s, so we should not have any issues meeting the recommended system requirements.

Here are the minimum requirements to help get you ready:

- 1 gigahertz (GHz) or faster 32-bit (x86) or 64-bit (x64) processor
- 1 gigabyte (GB) of RAM (32-bit) or 2 GB of RAM (64-bit)
- 16 GB of available hard disk space (32-bit) or 20 GB (64-bit)
- DirectX 9 graphics device with a WDDM 1.0 or higher driver

Check back Friday 8/7 for part 1:...