[cod] RHE4 new 2.6 Kernel

Nathan P. natedog550 at hotmail.com
Thu Mar 24 23:39:27 EST 2005


All I have to say is WOW.  That is one awesome explanation, bro.  

I just don't like my customers' trace routes being "blown off" because of packet priority or whatever.

I'm probably gonna lose one of my customers over this, and it really hacks me off.  

Anyways, thanx again for the explanation, man.  It makes me wanna open a ticket and paste it in, but it would probably blow their minds as much as it did mine, haha.

You will like RHEL4 - it's nice and smooth.


Have a nice day Dr. D.

--
NateDog
  ----- Original Message ----- 
  From: Mark J. DeFilippis 
  To: cod at icculus.org 
  Sent: Thursday, March 24, 2005 8:38 PM
  Subject: Re: [cod] RHE4 new 2.6 Kernel


  Thanks Nate, and Jay as always!  I have put the order in for the ES4 upgrade.
  I pray every time I put in a change ticket with support. Lately, it has been
  very poor.  I hope they can get it right 100%.  There is always something
  missing when they make a wholesale change...

  Nate, I noticed that during the period when they were migrating into the new data center
  they had major BGP convergence issues, and it would clear out my servers.

  At first they blamed it on my location.  Yeah, sure.  Reports came in from all around the USA -
  west coast, east, central - yet somehow it is the whole Internet at fault and not them.

  Glad I have access to certain info at a carrier that happens to feed the majority
  of servermatrix and theplanet.  The peer showed loss of BGP adjacency. I understand
  they had to make changes and re-groom fiber, but it was clear from the flapping that they
  don't have BGP dampening configured on their border routers.
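
  For reference, turning dampening on is about a one-line change on Cisco gear.  A minimal
  sketch, with a made-up AS number and the stock IOS timers as I recall them:

  router bgp 64512
   bgp dampening 15 750 2000 60
   ! half-life 15 min, reuse 750, suppress 2000, max-suppress-time 60 min

  With that in place, a route that keeps flapping accumulates a penalty and gets suppressed
  until it calms down, instead of being churned through the table on every bounce.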

  I have experience with the CRisco 65xx, but on the carrier side, for the larger customers I
  design for, I use Juniper M-20's and M-40's as well as Laurel Networks 120's.  These are pretty
  big boys: the smallest line card on a Laurel is a 13-port DS3 card, and they go up to 8-port
  wire-speed gig cards.  In their design, they are likely using the Crisco 6509's as layer-two
  aggregation devices.  While it is a simple, classic design, it relies on the default load
  balancing provided by these switch/routers.

  I just have to wonder if they know what they are doing.  I have noticed that my pings run
  anywhere from 60ms (from NY direct over Level3) up to 92ms.  Through work I have the ability
  to test from many areas around the US.  From each location, the variation in delay shows up
  past the local loop, inside servermatrix/theplanet.  This means they are very likely using
  load balancing on the multi-link trunks that serve as inter-switch and/or switch-to-router uplinks.

  I have recently been working with the above switches building IntServ RSVP cores for service
  provider rfc2547bis MP-BGP MPLS networks, where LSP's carry MPLS-based traffic engineering
  to provide QoS to IP-based QoS CE devices.  We found that MLT load balancing is not a strong
  point of these switches.  Add in additional distribution layers of inexpensive Crisco 650x
  switches for aggregation, and you get the poor network design they have.

  If one of my guys designed something like this, with internal MLT load balancing from the core
  through the distribution layer, he would be unemployed.  Maybe they have all CCIE's there? ;-))

  You are not imagining it.  I have a server over at EV1 as well, and my ping times to that
  server are rock solid at 52-66ms.  This is reasonable considering much of the long haul is
  DWDM photonic (plus local loops and ISP), but note you are going ISP, to loop, to loop,
  to IXC, to loop, to LEC, to loop, to datacenter.  The bulk of the loops are photonic SONET
  with little latency; if you are generous, add 10ms.  NY to London on TAT14, which I peeked
  at today, is running steady at 74ms RTD (Round Trip Delay).  That measurement is our Add/Drop
  Multiplexer directly off the light-wave mux driving the TAT cable.  This is not the speed of
  light in a vacuum, it is the speed of light through glass, for which we estimate the refractive
  index liberally at 1.32, hence the higher delay.  In a perfect vacuum we would get an RTD of
  about 53ms, so 74ms is pretty good.
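
  Back of the envelope, if you want to check my math (the ~7,900 km one-way path below is just
  backed out from the 53ms vacuum figure, not a measured TAT14 length):

  awk 'BEGIN { c = 299792; km = 7900; n = 1.32;
               rtd = 2 * km / c * 1000;
               printf "vacuum RTD %.0f ms, in-glass RTD %.0f ms\n", rtd, rtd * n }'

  That prints about 53ms in vacuum and about 70ms through glass, so a measured 74ms RTD with
  the terminal gear and muxes in the path is right where you would expect it.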

  EV1's core is built differently, which is why their servers are more expensive. ;-)  I have not
  made up my mind about ThePlanet, but if my users keep getting dumped, then saving $100/mo on a
  server that is nearly worthless is throwing $225/mo away, not saving $100/mo.  I consider their
  network vs EV1's as hamburger is to steak.

  Don't get me wrong.  They are nice, and growing, but their CEO put a "less expensive" model
  in place for their core than EV1 has, and it shows in the ping times.  I would rather have
  a sustained 70ms ping time than one with 20+ms of jitter that jumps from 52ms to 70ms like
  a yo-yo.  I won't say I have never seen a network do this; in 18 years in the business, I have.
  But we fixed it, because we consider that "badly broken".
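
  If you want to put a number on that jitter instead of eyeballing it, a quiet ping run does it
  (the hostname here is a stand-in for your own box):

  ping -q -c 100 server.example.com

  The min/avg/max/mdev summary line tells the story: a solid path shows an mdev of a few ms,
  a yo-yo path shows 20+ms.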


  Dr D


  At 12:16 AM 3/21/2005, you wrote:

    Yeah, I've got RHEL 4 also.  BTW, I was the one that posted about the teamspeak / mysql issue.  You need the compat package for mysql - it has the old libraries and such, so you can use them instead of the new ones that come with the version of mysql on RHEL4:
    Snippet from the teamspeak server.ini:
    VendorLib=/usr/lib/libmysqlclient.so.10.0.0
     
    The package you need:
    MySQL-shared-compat-4.0.23-0.i386.rpm
    It may be a different set of numbers now but that's the name: MySQL-shared-compat
     
    After that teamspeak will work with mysql.
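
    If anyone else needs it, the install is just the usual rpm dance (the exact file name will
    change with whatever version is current), plus a quick check that the old client library is
    where the VendorLib line points:

    rpm -ivh MySQL-shared-compat-4.0.23-0.i386.rpm
    ls /usr/lib/libmysqlclient.so.10*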
     
    I too ran into problems when I turned on journaling.  The main reason is that on a heavily loaded system, the journal update interval of 5 seconds can cause some lag - there is a mount setting you can pass that lowers this.  That used to work great with the 2.4 kernel, but from what I've tested the 2.6 kernel really doesn't need the journal tweak - it hurts more than it helps, as Jay mentioned.
     
    But you can do one performance trick that seems to have helped some.  You can set this in your fstab:
    Example:
    ext3    defaults,noatime        1 2
    Add noatime after defaults for the partition you're running your game servers off of.  Basic explanation:
    "The noatime setting eliminates the need by the system to make writes to the file system for files which are simply being read....."

    Use at your own risk :) I just thought I'd mention it.
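
    For the curious, a complete line for a data partition would look something like the one below
    (the device and mount point are made up for the example; commit=<seconds> is the knob for that
    5-second journal flush interval I mentioned above, if you want to play with that too):

    # /etc/fstab
    /dev/sda3   /gameservers   ext3   defaults,noatime   1 2

    A "mount -o remount /gameservers" picks up the option change without a reboot.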
     
    I recently got upgraded to a dual 73gig SCSI server.  ThePlanet stuck me in the new datacenter at Infomart in Dallas.  Has anyone been put in that datacenter yet?  I've been receiving complaints of ping spikes from one of my customers in California.  I gathered trace routes from everyone, and it seems to be the link between datacenter 3 and 5 (the new one - Infomart).
     
    Anyone else run into anything yet?  From what I can tell from the trace routes I've received, it seems to affect only the western side of the US - the ones that had the problem were in Arizona, California, etc.:
     
    13   246 ms   200 ms   199 ms  dist-vlan32.dsr3-2.dllstx3.theplanet.com [70.85.127.62]
    14    75 ms    57 ms    57 ms  po32.dsr1-2.dllstx5.theplanet.com [70.85.127.110]
    15    58 ms    61 ms    58 ms  po2.tp-car3.dllstx5.theplanet.com [70.84.160.165]
    16    57 ms    56 ms    56 ms  39.70-84-187.reverse.theplanet.com [70.84.187.39]
     
    And yes, I've opened a support ticket, but I'm not getting anywhere with that :(  I realize that trace route packets are low priority, blah blah, but when it's consistent across a whole bunch of different people and the ping is that high, I don't buy into that.
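
    A longer mtr run toward the box makes it easier to argue the point with them (the hostname is
    a placeholder):

    mtr -r -c 100 server.example.com

    Loss or latency that shows up at one middle hop and then disappears downstream is just that
    router deprioritizing ICMP; loss or latency that carries through to the last hop is the real
    thing, which is what the customers' end-to-end pings are showing.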
     
    Anyways.......
     
    --
    NateDog
     

      ----- Original Message ----- 

      From: Jay Vasallo 

      To: cod at icculus.org 

      Sent: Sunday, March 20, 2005 9:41 PM

      Subject: Re: [cod] RHE4 new 2.6 Kernel


      Hey Mark,



      I rented a few servers from ThePlanet last month and have the same setup you do, to a T.  Works great.  I also noticed that it deals with swap memory a little differently than RHE3, but other than that, it runs fine.  I did some research on the new file journaling, but I noticed a decrease in performance and an increase in ping when I set journaling on, so that was a waste of time.  Bottom line, if you use it exactly the way ThePlanet gives it to you, the server rocks.

      ----- Original Message ----- 

      From: Mark J. DeFilippis 

      To: cod at icculus.org 

      Sent: Sunday, March 20, 2005 9:23 PM

      Subject: [cod] RHE4 new 2.6 Kernel




      Anyone had experience with running the COD binaries on the new RedHat Enterprise Server 4.0 with 2.6 SMP and threading enhancements?


      On theplanet.com and servermatrix.com there are a few quotes here and there about a "nice performance increase".  (Actually, I would be happy if it is just better than the existing ES3 SMP kernel, which will often run one CPU up to 100% while the other sits idle at 0%; only after the major lag does the second one kick in.  Yea, isn't that proactive!)
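
      For what it's worth, the lopsided CPU use is easy to confirm from a shell while the lag is
      happening (mpstat comes with the sysstat package):

      mpstat -P ALL 1

      That prints per-CPU utilization once a second; running top and pressing "1" shows the same
      per-CPU breakdown.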


      I am hoping the 2.6 enhancements in RHE4 do the trick.


      Anyone?


      I did see some issues with Teamspeak and with mysql.  At the time of posting, the admins' recommended solution was to rev the mysql package back to 4.0 via up2date, and Teamspeak is a happy camper again.


      Any input from someone doing this already would be appreciated.


      Thanks


      Md


      -------------------------------------------------------------------------------

      Mark J. DeFilippis, Ph.D. EE          defilm at acm.org

                                            defilm at ieee.org


  -------------------------------------------------------------------------------
  Mark J. DeFilippis, Ph.D. EE          defilm at acm.org
                                        defilm at ieee.org

