[cod] Multiple service server + Going off list due to "spamming"

Mark J. DeFilippis defilm at acm.org
Sat Dec 4 12:55:31 EST 2004


Dave,

This is most useful!  No apologies needed for the length. Thanks for taking the 
time!
The single install has another significant impact as well... I presume that since
every instance runs the same binary, the instances share code segments, which
saves memory big time, as well as load time for each instance.

The max rate is, as you pointed out, specific to the user's experience.  Your
per-game findings are most useful information for anyone out there running
multiple game servers.

The nice values... that is a bit of a quagmire. The information is very
useful but, as you note, very specific to the servers you are running and to the
load, indeed.

There is no point in running SOF2 at 15K if 8K will do!  What you have brought
out is either something others are already doing (and, judging by the absence of
any other comments, I am completely in the dark), or something very significant.
I believe I am not in the dark, and your hard work can help others greatly.

What Dave has done here is:
1. Determined the max_rate point of diminishing returns on a game-by-game
basis
     (i.e., don't run them at 25K if players see no difference above 15K, or 8K).
     These are changes you can make today in your configs; they will not
     impact your quality, but they will bring down your bandwidth use, plus a
     side effect which I will mention in a moment.
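To put rough numbers on the bandwidth side, here is a back-of-the-envelope sketch; the rate and slot counts below are illustrative placeholders, not anyone's actual config:

```shell
# Per-client rate caps (e.g. sv_maxRate) are in bytes/sec. Worst case is
# every slot full and every client pulling the full cap.
rate=15000   # bytes/sec per client (illustrative)
slots=24     # player slots (illustrative)
bytes=$((rate * slots))       # total bytes/sec upstream
kbits=$((bytes * 8 / 1000))   # total kilobits/sec
echo "${bytes} B/s = ${kbits} kb/s"
```

Dropping that same 24-slot server from 25K to 15K per client cuts the worst case from 4800 kb/s to 2880 kb/s, which is exactly the kind of headroom that keeps you under a monthly allotment.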

2. Assigned nice values based on a quantified set of "server types".  Nice being
     relative, it will need to be tweaked for each server, but the general
     solution above is sound, based on how nice works.
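As a sketch of what that tiering might look like at launch time (the binary names and config files below are placeholders, not the actual setup, and negative nice values require root):

```shell
#!/bin/sh
# Dave's reported tiers: match -15, public -10, custom maps -5, telnet 0.
# The cod_lnxded invocations are placeholders for whatever each server runs:
#   nice -n -15 ./cod_lnxded +exec match.cfg &
#   nice -n -10 ./cod_lnxded +exec public.cfg &
#   nice -n  -5 ./cod_lnxded +exec custom.cfg &

# Children inherit the parent's nice value, so one wrapper per server class
# is enough. A quick unprivileged check that inheritance works as expected:
nice -n 10 sh -c 'ps -o ni= -p $$' | tr -d ' '
```

Starting from a baseline of 0, the check prints the inherited nice value of the child shell.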

3. Another point, not as apparent or as often mentioned, but real given how
     the Linux kernel works: a process is always in one of a few states
     (assuming the simple non-multithreaded case for simplicity). A game
     server process is in one of 4 states:
     1. Idle (Ha! Not likely, but you may indeed find lagging servers showing
         40% idle time, notwithstanding the reporting tools' calculation
         deficiencies)
     2. Waiting in the run queue for CPU cycles to execute
     3. Executing (running)
     4. In I/O wait

     Nice will impact how much CPU time is allotted to a process, as well as
     its relative priority in attaining the CPU.  But since every useful
     program, when running, cycles between CPU wait, CPU run, I/O wait, I/O
     complete, and back to CPU wait (and the DFA-style scheduling cycle starts
     again), one can see a few things that are important:

     1. The amount of CPU a game requires depends on many of its attributes,
         but it all boils down to transmitting and processing the I/O it
         waited for or received.
     2. Hence, if you decrease max_rate, there are fewer I/O wait states,
         which makes the process available for CPU execution more frequently.
         This is where the nice value becomes relative to the game's CPU
         requirements AND its I/O behavior, which is not always clear.

     Ultimately, the best use of resources is had by finding the max_rate
     that preserves user quality. This determines the I/O wait, and has
     farther-reaching effects than bandwidth utilization alone. When the
     process is niced properly for its server type, the bandwidth per user is
     as efficient as it can get. As for the CPU run-time allocation: if you
     nice a server down too much relative to your other game servers, it will
     experience lag. Ironically, the lag is not computational, but time spent
     in kernel space setting up DMA transfers of the I/O data pending from
     its previous state in the cycle.

His work is significant. I don't know if he knows all the inner workings of
the kernel; I suspect so, as this does not happen by accident.  I will tell
you that other server companies that have worked out these relationships have
not shared them, for competitive advantage.

There is a lot of useful information provided here by Dave, and he wrapped up
some complexities very nicely into a general package for server admins to work
with. I just tried to put what he is really doing in the kernel into
perspective, to help anyone who wants to take the "fine tune" process to a
more granular level when troubleshooting two or more servers that may have
lag issues. With an idea of the basic DFA scheduler, and a peek with tools
like vmstat and top, you can get an idea of what is best to tweak to try to
resolve your issue.
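For that granular peek, the raw counters that vmstat and top summarize live in /proc/stat on Linux; a minimal look follows (note the iowait field was only added in the 2.6 kernels, so on 2.4 it is not broken out):

```shell
# First line of /proc/stat: cpu user nice system idle iowait irq softirq ...
# The values are cumulative jiffies since boot; sample twice and take the
# difference to turn them into rates, the way vmstat does.
read -r label user nice system idle iowait rest < /proc/stat
echo "user=${user} system=${system} idle=${idle} iowait=${iowait}"
```

A climbing iowait share with little run time is the signature of a server that is I/O bound rather than CPU bound, which is where the max_rate knob, not the nice knob, is the one to turn.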

At the same time, the info Dave provided should help beginning admins who have
these powerful servers but cannot seem to get all they wish out of them.

My thanks Dave for the information and your time.

I noticed your CPU is rarely above 25% on a 2.4GHz dual Pentium with 10Mb/s
FD.  Very nice!

My many thanks.

Dr. D

At 09:57 AM 12/3/2004, you wrote:
>You can see the overview at the CS client page 
><http://www.clanservers.net/clients.asp>. 
>We're in the second group "Game Servers by the Box at L33tgames." We have 
>a rather special relationship with CS because we've been with them since 
>the beginning and we do a lot of beta work for them. Except for initial 
>install we maintain all our game servers ourselves (which is rather unique 
>in the game server business) via telnet. Each game has its own user 
>account and the game itself is running in its own Screen. We use Apache to 
>serve up player stats and overall stats.
>
>You can see our specific server stats at 
><http://washington.clanservers.net/>.
>
>As for the tuning. Indeed we have spent a lot of time tuning specific apps 
>and the 15k vs. 25k max_rates per client are a tradeoff between individual 
>player performance and overall system performance. We were encountering 
>the same sort of cross-server interference (lag) problems that you 
>mentioned. It was solved through a combination of nice settings (match 
>servers -15, public servers -10, custom map servers -5, telnet sessions 0) 
>and the per seat throttling. The throttling is different for each game 
>type because WET for instance chokes at 15k but is quite responsive at 
>25k. SOF2 on the other hand is tolerable at 8k. Each game will use the 
>full per player BW limit. The upper limit is of course the total BW limit 
>set by the connect contract. When we blow our monthly allotment, which we 
>have done when all games were running full BW, the connect overcharge is a 
>killer. The decision to use one IP per game with consecutive port numbers 
>for instances is somewhat arbitrary, however, we do run from a single 
>master install per game and symlink all common files into the instances. 
>There is really only one COD:UO "install" with 3 symlinked instances. That 
>way we only patch in one place per game.
>
>In fact all servers are running simultaneously (plus a 16 man SOF2 server 
>we sublet) and we rarely encounter any cross-game lag anymore. Of course 
>the match servers are not normally populated but the others are. We have 
>had AA, BF1942 and BF:V servers in the mix at one time or another as well.
>
>Whew, this was rather long-winded, but I hope it helps.
>Dave
>
>-------- Original Message --------
>Subject: RE: [cod] Multiple service server + Going off list due to
>"spamming"
>From: "Mark J. DeFilippis" <defilm at acm.org>
>Date: Fri, December 03, 2004 4:46 am
>To: cod at icculus.org
>
>
>What nifty utility are you using to format that nice output list?
>Or is that a flat file, or do you use command line args with a
>ps -el | <perl script, sed, awk, etc.>?
>
>
>On a Dual Pent 4, 3.2Ghz HT, 2GB Ram, 100Mbs, (6 IP's) when my
>UO foy 24x7 fills up with 32 users (No limits on tanks or etc. It
>is Ladder compliant), ONE cpu runs at 60-80% (Combined Real
>and Virtual).
>
>I have 24 Man Rifles Only server running, and a BF1942 server
>32 user. The UO Rifles only will Lag if the BF1942 server and
>32 user FOY fill up. Of course they get really ticked off in the
>rifles only server, as the most skill is indeed required there...
>(You have a bolt action rifle, or you can bash the guy. No scopes,
>smg's, nades (except smoke)).  So if they get lag, it can be
>pretty tough to hit a target, and becomes VERY frustrating
>quickly..
>
>I presume all these are not actually running at the same time?
>Also, I noticed some of your Max_rates are at 25K, some 15K.
>I never noticed much of a difference between 15K and 25K? But
>then again, I have not run as many virtual servers as you have.
>I ask, because if you set them lower, I presume you did so for
>a reason, or computed them and hence 15K vs default 25K?
>
>If you don't mind, can you tell us about this server?
>
>Physical...
>Your normal utilization?   Peak utilization?
>Limiting Mods?
>
>I think the information could be insightful, certainly for me
>simply by the fact that when the UO server is full and the
>BF1942 are full, there is hardly any CPU left...
>
>Thanks
>
>Mark
>
>
>
>At 11:54 PM 12/2/2004, you wrote:
>>You can do a similar setup using virtual IPs and assigning each game its own
>>IP.  Of course you need to "own" the IP you are assigning.  If you only have
>>access to one IP then assigning the games different ports is the way to go.
>>Donovan
>>-----Original Message-----
>>From: David A. Fuess [mailto:david at fuess.net]
>>Sent: Thursday, December 02, 2004 2:27 PM
>>To: cod at icculus.org
>>Subject: Re: [cod] Multiple service server + Going off list due to
>>"spamming"
>>I run 3 UO servers on one machine along with several other game types
>>(1-UT2k4, 3-SOF2, 3-WET, 3 COD:UO, 1-CSS)
>>Team Judge UT2004 67.18.122.226 7777 10000 24 8 ut4s xCTFGame CTF-SMOTE
>>Team Judge Public CTF 67.18.122.226 20100 15000 30 0 sof2s ctf mp_kam4
>>Team Judge Public Custom 67.18.122.226 20200 15000 30 0 sof2s ctf mp_Italy3
>>Team Judge L337 Match Server 67.18.122.226 20300 10000 20 0 sof2s ctf
>>arioche/mp_small
>>Team Judge 9 Map Campaign ETPro 3.0 67.18.122.227 27960 25000 28 1 ets et
>>supplydepot
>>Team Judge Custom/Stock 67.18.122.227 27961 100000 28 0 ets et oasis
>>Team Judge Match Server 67.18.122.227 27962 15000 18 0 ets et battery
>>Team Judge L337 COD/UO Server 67.18.122.228 28960 25000 24 5 cods tdm
>>mp_saint-rush
>>Team Judge L337 COD/UO CTF-HQ 67.18.122.228 28961 25000 24 0 cods ctf
>>mp_cassino
>>Team Judge L337 COD/UO Match 67.18.122.228 28962 25000 24 0 cods dom
>>mp_arnhem
>>Team Judge L337 HL2 CS 67.18.122.229 27015 - 24 0 hl2s cs_italy
>>You see the three UO servers are on ports 28960, 28961, and 28962. Each
>>game is a separate install on Linux so the connection is defined uniquely
>>for each.
>>Dave
>>At 01:46 PM 12/2/2004, you wrote:
>> >         People are so sensitive.  Ever hear of filters?  This is nothing
>> > compared to many, many other lists.
>> >
>> >
>> >Anyway, to bring this back to on-topic.
>> >
>> >I stumbled across one message here that mentioned running multiple servers
>> >on one machine.  I assume you must set different port numbers for them.
>> >
>> >Can you run one as internet and the other as LAN?
>> >
>> >I would also assume that the bandwidth calculations apply to both
>> >servers combined as well, correct?
>> >
>> >So if I had a Pentium III 550 running Slackware Linux and it has 100 BaseT
>> >connection to the world, my limitation would be the processing power of
>> >that machine?
>> >
>> >It would also seem like CoD would do a lot of repetitive commands (on the
>> >programming level) that a processor with a large cache would be
>> >best.  Since AMD processors usually handle the repetitive commands better
>> >than Intel, would it not make more sense to run an AMD processor?
>> >
>> >Has anyone run a Quake II server alongside a CoD?
>> >
>> >Thanks
>> >
>------------------------------------------------------------------------------- 
>
>Mark J. DeFilippis                    defilm at acm.org
>                                       defilm at ieee.org
-------------------------------------------------------------------------------
Mark J. DeFilippis, Ph. D EE          defilm at acm.org
                                       defilm at ieee.org

