[cod] RHE4 new 2.6 Kernel

Jay Vasallo jayco1 at charter.net
Fri Mar 25 11:03:00 EST 2005


I always have to save your emails Doc and read them on the weekend. This is the time the encryption team comes over for breakfast! 

=o)

Jay


  ----- Original Message ----- 
  From: Mark J. DeFilippis 
  To: cod at icculus.org 
  Sent: Friday, March 25, 2005 9:50 AM
  Subject: Re: [cod] RHE4 new 2.6 Kernel



  ooooo...  To tie in what Steve said: it is true that they are likely
  classifying ICMP Echo requests (ping) down as well, in addition to the
  load balancing.

  You can easily check this yourself.

  Notice that when your pings are running at a "higher" rate, if you
  run the game client, enter a map, and then take a look at the
  response time, you will likely find it up to 20ms lower. This is
  game dependent, however, in the way each game measures that response.

  ICMP rides directly on top of IP, just as UDP and TCP do.  Some of the more
  advanced NIC cards today will respond to a ping even if the CPU on the box is
  DOA.  This is because at the lowest layer of the driver, the NIC
  card's filtering logic is simple: "Am I in promiscuous mode?  If so,
  accept every packet."  Promiscuous mode is not the norm, though; normally
  your NIC card is not in promiscuous mode.  In that case the card looks at the
  MAC address and will push the frame up into the card buffer only if one of
  three things is true: the destination MAC address matches
  the MAC of the card, the destination MAC is broadcast (all bits
  set to 1), or it is a multicast MAC. This is only at the Ethernet layer.
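
  The three-way acceptance test described above can be sketched as follows
  (a toy model of the Ethernet-layer filter, not real driver code):

```python
def nic_accepts(frame_dest_mac: bytes, nic_mac: bytes, promiscuous: bool) -> bool:
    """Sketch of the Ethernet-layer filter described above.

    A NIC passes a frame up to its buffer when it is in promiscuous
    mode, or when the destination MAC matches its own address, is the
    broadcast address (all 48 bits set), or is a multicast address
    (low-order bit of the first octet set)."""
    if promiscuous:
        return True                      # accept everything
    if frame_dest_mac == nic_mac:
        return True                      # unicast to this card
    if frame_dest_mac == b"\xff" * 6:
        return True                      # broadcast
    if frame_dest_mac[0] & 0x01:
        return True                      # multicast group address
    return False
```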

  Once in the NIC card's buffer, the DMA-based driver that interfaces
  your OS to the NIC card pulls packets from the card buffer and
  moves them into OS driver space. At this low-level point, it only
  knows whether it is an ARP, IP, or ICMP (ping) packet.  Some of the
  motherboards out there with integrated NICs, and some newer NICs,
  never even pass the ICMP echo up to the OS!  The lower-level
  driver will swap the 8 bytes of IP source and destination addresses,
  copy the MAC addresses, and respond to the ping itself. (Note that the
  entire OS could be crashed, and it will still respond to ping!)
  The NIC card already knows its IP address: when your OS initializes,
  the NIC gets its IP address, either statically or, most often, by DHCP,
  and one of the first things it then does is send an ARP
  (Address Resolution Protocol) request for its own IP address.
  (If someone else answers, that is bad; this is how
  it knows the address is unique and there is no duplicate
  on that network.)  So if your OS crashes completely,
  that NIC card will still respond to pings.
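
  The address-swap trick described above can be sketched in Python. This is a
  toy model of what such a NIC offload might do, not any vendor's actual
  firmware, and for brevity it assumes a 20-byte IPv4 header with no options:

```python
import struct

def icmp_checksum(data: bytes) -> int:
    # Internet checksum (RFC 1071): one's-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_reply(ip_packet: bytes) -> bytes:
    """Turn an ICMP Echo Request (20-byte IPv4 header, no options) into an
    Echo Reply the way the text describes: swap the source and destination
    IP addresses and flip the ICMP type from 8 (request) to 0 (reply)."""
    hdr, icmp = bytearray(ip_packet[:20]), bytearray(ip_packet[20:])
    hdr[12:16], hdr[16:20] = hdr[16:20], hdr[12:16]   # swap the 8 bytes of SA, DA
    icmp[0] = 0                                       # type 0 = Echo Reply
    icmp[2:4] = b"\x00\x00"                           # clear the old checksum
    icmp[2:4] = struct.pack("!H", icmp_checksum(bytes(icmp)))
    # (A real offload would also recompute the IP header checksum here.)
    return bytes(hdr) + bytes(icmp)
```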

  I went off on a bit of a tangent because I wanted to let you
  and others know how "unreliable" ping is at telling you "that
  host is OK, I can ping it".

  Back to the original point: ICMP is a protocol on top of IP, at the
  same level as TCP and UDP. Hence it is easily identified by
  routers and switches, especially since ping is often used by
  "script kiddies" for DoS attacks. With the newer NIC cards
  on your PC, if you run a local software firewall this is a good
  thing, as your PC won't get bogged down by a ping DoS attack.
  It will still chew up your bandwidth, but it is unlikely to
  have any noticeable impact on you. That is the good part about
  the smarter NIC cards.

  Back to Steve's point, many of the games use a UDP-based
  echo/response. When you check in your game, all of a sudden the
  response rates will be 20ms lower.  So Steve's explanation is
  also a matter of fact.  So in reality, it is good that theplanet
  prioritizes the ICMPs lower.  It allows your clients' important
  game packets to have priority over someone's ping packets.
  The company is actually protecting your server, and hence the
  clients' gaming experience, by giving this protocol a lower priority.

  Ask him what he sees when he is playing in the game. That is
  more reliable.  Many of today's games measure the average
  response time based on the TCP ACKs, rather than an ICMP
  echo ping, once the client is connected to the gaming server.

  Sorry for the length; Jay will tell ya I disappear for a while, then come
  back and they can't shut me up. ;-))  But the way some of this
  all works is pretty cool, and no one ever talks about what happens
  behind the scenes, from the NIC card's perspective, when you first
  power up your PC.

  So at least now if anyone ever asks you "What is the first thing
  a NIC card does when the PC is powered up that you can actually
  see?", the answer is "It sends an Address Resolution Protocol (ARP)
  request for its own IP address", to make sure it is the only one using
  that address; i.e. no other NIC card should respond to the packet.
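
  That power-up announcement is easy to picture as bytes on the wire.
  A sketch of building such a frame (a gratuitous ARP request, field
  layout per the classic ARP spec; whatever MAC and IP you pass in
  stand for the card's own addresses):

```python
import struct

def gratuitous_arp(mac: bytes, ip: bytes) -> bytes:
    """Build the broadcast ARP request described above: the card asks for
    its OWN IP address, so any answer means the address is already taken.
    `mac` is 6 bytes, `ip` is 4 bytes."""
    eth = struct.pack("!6s6sH", b"\xff" * 6, mac, 0x0806)  # broadcast, EtherType ARP
    arp = struct.pack("!HHBBH6s4s6s4s",
                      1,        # htype: Ethernet
                      0x0800,   # ptype: IPv4
                      6, 4,     # hlen, plen
                      1,        # op: request
                      mac, ip,          # sender hardware/protocol address
                      b"\x00" * 6, ip)  # target hw unknown, target = own IP
    return eth + arp
```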

  They installed my ES4.  Geez, I forgot about all the changes I made over time
  on the darn thing. ;-(

  Dr. D


  At 11:39 PM 3/24/2005, you wrote:

    All I have to say is WOW.  That is one awesome explanation, bro.
     
    I just don't like the fact that my customers' trace routes are being "blown off" because of packet priority or whatever.
     
    I'm probably gonna lose one of my customers over this and it really hax me off.  
     
    Anyways, thanx again for the explanation, man.  This explanation makes me wanna open a ticket and paste it, but it would probably blow their mind as much as it did mine, haha.
     
    You will like RHEL4 - it's nice and smooth.
     
     
    Have a nice day Dr. D.
     
    --
    NateDog

      ----- Original Message ----- 

      From: Mark J. DeFilippis 

      To: cod at icculus.org 

      Sent: Thursday, March 24, 2005 8:38 PM

      Subject: Re: [cod] RHE4 new 2.6 Kernel


      Thanks Nate, and Jay as always!  I have put the order in for the ES4 upgrade.
      I pray every time I put in a change ticket with support. Lately, it has been
      very poor.  I hope they can get it right 100%.  There is always something
      missing when they make a wholesale change...


      Nate, I noticed that for the period they were migrating in the new data center,
      they had major BGP convergence issues, and it would clear my servers.

      At first they blamed it on my location.  Yeah, sure.  Users everywhere around
      the USA (west coast, east, central) saw it, so it was the whole Internet path,
      not my location.


      Glad I have access to certain info at a carrier that happens to feed the majority
      of servermatrix and theplanet.  The peer showed loss of BGP adjacency.  I understand
      they had to make changes and re-groom fiber, but it was clear from the flapping that they
      don't have BGP dampening configured on their border routers.


      I have experience with CRisco 65xx, but on the carrier side, for the larger customers
      I design for, I use Juniper M-20's and M-40's, as well as Laurel Networks 120's. These
      are pretty big boys: the smallest line card on a Laurel is the 13-port DS3 card, and
      they have 8-port gig cards running at wire speed, etc.  In their design, they are
      likely using the Crisco 6509's as layer-two aggregation devices.  While the design
      is a simple classic design, it relies on the default load balancing provided by
      these switch/routers.


      I just have to wonder if they know what they are doing.  I have noticed that
      my pings run anywhere from 60ms (from NY direct via Level3) to 92ms.  I have the
      ability to test from many areas around the US from work.  From each location,
      the variation in delay is past the local loop, inside servermatrix/theplanet.
      This means they are very likely using load balancing on the multi-link trunks
      that are inter-switch and/or switch/router uplinks.


      I have recently been working with the above switches in building Int-Serv RSVP cores
      for service provider rfc2547bis MP-BGP MPLS networks, where LSPs maintain MPLS-based
      traffic engineering to provide QoS to IP-based QoS CE devices.  We found that MLT
      load balancing is not a strong point in these switches.  Add in the additional
      distribution layers of inexpensive Crisco 650x switches for aggregation, and you
      have the poor network design they have.


      If one of my guys designed something like this, with internal MLT load balancing
      in the core through the distribution layer, he would be unemployed.  Maybe they
      are all CCIE's there? ;-))


      You are not imagining it.  I have a server over at EV1 as well, and my ping times to that
      server are rock solid at 52-66ms. This is reasonable considering much of the long haul is
      DWDM photonic (plus local loops and ISP), but note you are going ISP, to loop, to IXC,
      to loop, to LEC, to loop, to datacenter.  The bulk of the loops are photonic SONET
      with little latency; if you are generous, add 10ms of latency.  NY to London on TAT14,
      which I peeked at today, is running steady at 74ms RTD (Round Trip Delay).  (That is from
      our Add/Drop Multiplexor directly off the light-wave mux driving the TAT cable.)  It is
      not speed of light in a vacuum; it is speed of light through glass, for which we estimate
      the refractive index (a liberal 1.32), hence the higher delay.  In a perfect vacuum we
      should get an RTD of about 53ms, so 74ms is pretty good.
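
      As a worked check of the arithmetic above (the ~7,945 km one-way path is my
      own back-derivation from the 53 ms vacuum figure, not a published cable length):

```python
C = 299_792.458  # speed of light in vacuum, km/s

def rtd_ms(path_km: float, refractive_index: float = 1.0) -> float:
    """Round-trip delay over a path: light travels at c/n through the medium."""
    one_way_s = path_km * refractive_index / C
    return 2 * one_way_s * 1000.0

path = 7_945                 # km one way, back-derived from the 53 ms vacuum RTD
vacuum = rtd_ms(path)        # about 53 ms
fiber = rtd_ms(path, 1.32)   # about 70 ms, in line with the 74 ms measured
```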


      Their core is built differently, hence why their servers are more expensive. ;-)  I have not made up my mind about Theplanet, but if my users keep getting dumped, saving $100/mo on a server that is nearly worthless is throwing $225/mo away, not saving $100/mo.  I consider their network vs EV1's as hamburger is to steak.


      Don't get me wrong. They are nice, and growing, but their CEO had a "less expensive" model
      in place for their core than EV1 does, and it shows in the ping times.  I think I would
      rather have a sustained 70ms ping time than a ping time that has 20+ms of jitter in it and
      jumps from 52ms to 70ms like a yo-yo.  I won't say I have never seen a network do this.
      I have been in the business for 18 years.  I have.  But we fixed it, as we consider
      that "badly broken".



      Dr D



      At 12:16 AM 3/21/2005, you wrote:

        Yeah, I've got RHEL 4 also.  BTW, I was the one that posted about the teamspeak / mysql issue.  You need the compat package for mysql - it has the old libraries and such so that you can use them instead of the new ones that come with the version of mysql on RHEL4:

        Snippet from the teamspeak server.ini:

        VendorLib=/usr/lib/libmysqlclient.so.10.0.0



        The package you need:

        MySQL-shared-compat-4.0.23-0.i386.rpm

        It may be a different set of numbers now but that's the name: MySQL-shared-compat



        After that teamspeak will work with mysql.



        I too ran into problems when I turned on journaling.  The main reason is that on a heavily loaded system, the 5-second journal update interval can cause some lag.  There is a setting you can pass that lowers this; it used to work great with the 2.4 kernel, but from what I've tested the 2.6 really doesn't need the journal setting - it hurts more than it helps, as Jay mentioned.



        But you can do one performance trick that seems to have helped some.  You can set this in your fstab.

        Example:

        ext3    defaults,noatime        1 2

        Add noatime after defaults on the hard drive you're running your game servers off of.  Basic explanation:

        "The noatime setting eliminates the need by the system to make writes to the file system for files which are simply being read....."


        Use at your own risk :) I just thought I'd mention it.



        I recently got upgraded to a dual 73gig SCSI server.  ThePlanet stuck me in the new datacenter at infomart in Dallas.  Has anyone been put in that datacenter yet?  I've been receiving complaints of ping spikes from one of my customers, who is in California.  I gathered trace routes from everyone, and it seems to be the link between datacenter 3 and 5 (the new one - infomart).



        Anyone else run into anything yet?  From what I can tell it seems to be only from the western side of the US, based on the trace routes I've received - the ones that had the problem were in Arizona and California, etc:



        13   246 ms   200 ms   199 ms  dist-vlan32.dsr3-2.dllstx3.theplanet.com [70.85.127.62]

        14    75 ms    57 ms    57 ms  po32.dsr1-2.dllstx5.theplanet.com [70.85.127.110]

        15    58 ms    61 ms    58 ms  po2.tp-car3.dllstx5.theplanet.com [70.84.160.165]

        16    57 ms    56 ms    56 ms  39.70-84-187.reverse.theplanet.com [70.84.187.39]



        And yes, I've opened a support ticket, but I'm not getting anywhere with that :(  I realize that trace route packets are low priority, blah blah, but when it's consistent across a whole bunch of different people and the ping is that high, I don't buy into that.
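
        One way to read a trace like the one above: a spike at an intermediate hop,
        with normal RTTs at every later hop, means that router is slow to generate
        its own ICMP replies (deprioritization), while traffic *through* it is
        clearly fine, since congestion would inflate the later hops too. A small
        sketch using the averages of the hop times quoted above:

```python
def suspicious_hops(hop_rtts_ms, factor=2.0):
    """Flag hops whose RTT exceeds the final hop's by `factor`.

    Such intermediate spikes, with a normal final-hop RTT, point at
    ICMP deprioritization on that router rather than real path
    congestion (congestion would also inflate every later hop)."""
    final = hop_rtts_ms[-1]
    return [i + 1 for i, rtt in enumerate(hop_rtts_ms) if rtt > factor * final]

# Averages of the three probes at each quoted hop (hops 13-16 above):
trace = [(246 + 200 + 199) / 3, (75 + 57 + 57) / 3,
         (58 + 61 + 58) / 3, (57 + 56 + 56) / 3]
print(suspicious_hops(trace))  # -> [1]: only the first hop (hop 13) is flagged
```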



        Anyways.......



        --

        NateDog

         
        ----- Original Message ----- 
        From: Jay Vasallo 
        To: cod at icculus.org 
        Sent: Sunday, March 20, 2005 9:41 PM 
        Subject: Re: [cod] RHE4 new 2.6 Kernel


        Hey Mark,

         
        I rented a few servers from theplanet last month and have the same setup you do, to a T.  Works great.  I also noticed that it deals with swap memory a little differently than RHE3, but other than that it runs fine.  I did some research on the new file journaling, but noticed a decrease in performance and an increase in ping when I set journaling to on, so that was a waste of time.  But other than that, if you use it exactly the way theplanet gives it to you, the server rocks.
        ----- Original Message ----- 
        From: Mark J. DeFilippis 
        To: cod at icculus.org 
        Sent: Sunday, March 20, 2005 9:23 PM 
        Subject: [cod] RHE4 new 2.6 Kernel



        Anyone had experience with running the COD binaries on the new RedHat Enterprise Server 4.0 with 2.6 SMP and threading enhancements?

        On theplanet.com and servermatrix.com, there are a few quotes here and there about a "nice performance increase".  (Actually, I would be happy if it is just better than the existing ES3 SMP kernel, which will often run one CPU up to 100% while the other sits idle at 0%; only after the major lag does the second one kick in.  Yea! Isn't that proactive!)

        I am hoping the 2.6 enhancements in RHE4 do the trick.

        Anyone?

        I did see some issues with Teamspeak and mysql.  At the time of posting, the admins' recommended solution was to rev back up2date for the mysql package to 4.0, and Teamspeak is a happy camper again.

        Any input from someone doing this already would be appreciated.

        Thanks

        Md


        ------------------------------------------------------------------------------- 
        Mark J. DeFilippis, Ph. D EE          defilm at acm.org 
                                              defilm at ieee.org




