Tremulous Forum
General => General Discussion => Topic started by: gareth on January 31, 2007, 09:00:42 pm
-
I recorded some Tremulous gameplay as a timedemo, to use as a benchmark. To run it, follow these simple steps:
1. Download http://g00gul.net/shots/timedemo1.dm_69 and place it in your demo folder, underneath the base folder in your Tremulous installation. If the folder doesn't exist, create it.
2. Open Tremulous with the settings you usually play with, except be sure to set com_maxfps and r_swapinterval both to 0, then open the console (usually the key under [Esc]).
3. Type "timedemo 1" and hit Enter.
4. Type "demo timedemo1.dm_69" and hit Enter (the full console sequence is summarized below).
5. The demo will now run as fast as it can; wait for it to finish.
6. When it has finished, open the console and record the fps value it shows.
7. Post your fps along with your system specs and your Tremulous settings in this thread.
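For reference, the whole thing typed into the console looks roughly like this (same commands as the steps above; the cvars can also go in your autoexec.cfg):
com_maxfps 0
r_swapinterval 0
timedemo 1
demo timedemo1.dm_69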
-
ok i will go first:
system:
cpu: x2 3800+
video: 7600gt
ram: 2gb ddr2
settings:
1280*1024*32 fullscreen
all max in game
4*AA/2*AF from driver control panel
fps: 39.6
-
P4 3.0Ghz with HT
1GB RAM, 367MB load before demo
250 GB IDE-HDD
Ati x600 series 256MB RAM on 17" TFT
WinXP SP2
Resolution: 1280x1024x32
r_gamma: 1.4
All effects on
4x AA
Result:
3067 Frames, 84.4 seconds: 36.4 fps
-
XP3000+
TI4200
1.2GB ram
1280x1024x32
all fx
3067 frames, 95.0 seconds: 32.3 fps
-
AMD Athlon 64 X2 4200+
NVIDIA GeForce 7900GT
Ubuntu Feisty Fawn
(hasn't been powered down in about two weeks)
2GB DDR RAM
Resolution: 1280x1024x32
r_smp 0, so it wasn't in SMP mode.
also ran the 32-bit binary
3067 frames, 67.9 seconds: 45.2 fps
-
amd athlon xp 1800+
radeon 9500
1GB pc133
1024x768
default settings
3067 frames
179.4 seconds
17.1 fps
I'm awesome.
-
ok i will go first:
system:
cpu: x2 3800+
video: 7600gt
ram: 2gb ddr2
settings:
1280*1024*32 fullscreen
all max in game
4*AA/2*AF from driver control panel
fps: 39.6
Note to myself: game is COMPLETELY CPU bound :)
system:
cpu: x2 3800+ ( same )
video: GeForce 7800 GT ( better card )
ram: 1gb ddr?
settings:
1280*1024*32 fullscreen
all max in game
No AA and no AF ( lower quality settings )
OS: Linux
fps: 40.5
Gotta activate AA and AF someday I guess
-
AMD Athlon 64 X2 4600+ 2G RAM
NVidia GeForce 6600GT
1280x1024x32
4*AA (export __GL_FSAA_MODE=8 )
full gfx settings
3067 frames, 79.6 seconds: 38.5 fps
(tested with a homegrown 64-bit binary)
-
3067 frames, 181.6 seconds, 16.9 fps
mobile AMD Athlon(tm) XP2800+
485.5MB RAM
ATI Technologies Inc Radeon Mobility U1 (IGP 320M)
640x480 @ really ultra low settings
-
3067 frames, 60.3 seconds: 50.9 fps
Core 2 duo 2ghz
1gb ram
ati mobility radeon x1400
1440x900 res
everything on high/max
no aa/af
-
athlon 850mhz
384MB RAM at such a low clock I'm embarrassed to write it here
ati radeon 9250 256MB using the r200 driver
1280x1024 using mediumish settings.
10.5 fps
-
64bit 3200+
x600pro
2gb ddr 400
old slow ass hd
default settings 640 x 480 res :wink:
37.9 fps
-
I will make a graph displaying fps vs. system specs; then we will see when the CPU takes over from the graphics card.
I'll definitely make one, tomorrow, maybe...
-
Note to myself: game is COMPLETELY CPU bound :)
Definitely not true, compare mine to Ingar's.
-
AMD Athlon XP 2100+
C3D Radeon 9000pro 128
Kin 512 ddr333
800x600
High quality settings || Fullscreen
3067 frames
129.7 seconds
23.6 fps
:grenade: :grenade: :grenade:
-
http://user.uni-frankfurt.de/~adamkiew/trem_usage.xls
Stats so far; there is one factor that keeps reappearing: CPU and RAM.
The 2.8GHz with 1GB RAM is as fast as a 1.8GHz with 2GB RAM.
-
Note to myself: game is COMPLETELY CPU bound :)
Definitely not true, compare mine to Ingar's.
That's why it was a note to MYSELF. I was talking about my case and gareth's case. And I see it's the same for you too. As for Ingar, well, he doesn't need to push his GPU to the edge by activating 4x AA ;)
Edit : scratch what I said, game is probably CPU bound for Ingar too: he is running the 64bit binaries which have known performance problems compared to the 32bit binaries.
-
x2 2800 intel
512m ram
ati x1600 512m
640x480
3067 80.4 38.2
1280x1024
3067 80.3 38.2
eh?
-
Athlon 3200+ (2.01 Ghz)
1 GB ddr2
ATI Radeon x800 256 MB
1280x1024
FSAA disabled
Max settings
XP SP2
3067 Frames, 76.7 Seconds: 40.0 fps
:grenade:
-
Athlon64 3200 (Single core, 2GHz)
1GB DDR1 (2*512MB, dual channel)
GeForce 5900XT AGP
Mandrivel 2006 x86_64
1280*1024
Max settings
3067 frames, 76.2 seconds: 40.3 fps
-
640x480
3067 80.4 38.2
1280x1024
3067 80.3 38.2
eh?
You are CPU or memory bound.
-
system
AMD Athlon 2000+
768MB DDR1 (256 + 512)
GeForce4 MX 440 AGP
800x600 high quality
time 147.9
frames 3067
fps 20.7
-
And now something to laugh at:
3067 Frames, 247.0 seconds: 12.4 FPS
HW/SW:
AMD Mobile Athlon-XP M 2400+ @ 1800MHz
256MB DDR-266 shared RAM
S3 Unichrome / VIA KM/KN 400 Integrated Graphics 64MB from RAM
Windows XP Home
Settings:
Go to Options, use the lowest config, then raise the resolution to 800x600 and picmip to 1 (a rough cvar equivalent is sketched below), i.e.:
No AA/AF
Bilinear
16 Bit Color
16 Bit Z
No Texture-Compression avail.
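Roughly the same thing as console cvars, for anyone who prefers a config to the menu. This is just a sketch assuming the stock Quake 3 / ioquake3 cvar names, which Tremulous generally inherits:
r_mode 4 // 800x600
r_picmip 1
r_textureMode GL_LINEAR_MIPMAP_NEAREST // bilinear filtering
r_colorbits 16
r_depthbits 16
r_ext_compressed_textures 0 // no texture compression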
-
http://user.uni-frankfurt.de/~adamkiew/trem_usage.xls
Stats so far; there is one factor that keeps reappearing: CPU and RAM.
The 2.8GHz with 1GB RAM is as fast as a 1.8GHz with 2GB RAM.
That table is just wrong.
- you can't compare clock speed across architectures
- you can't just add up the clock speed of each core to get the "overall speed"
I agree that this timedemo is mainly CPU limited though; afaict catalyc's Core2Duo is the fastest CPU, but his gfx card is much weaker than some of the others, yet he easily gets the highest fps. Not surprising though, it is Quake 3.
-
Anyway, I guess I'll have to find a way to make smp mode work on Tremulous. And when I do that, I'll keep it to myself a few weeks just so that I can enjoy having the most FPS here :D
-
It's simple, make a better one.
-
Intel Core 1 Duo T2300 @ 1.66GHz
1GB RAM
ATi Mobility Radeon x1600
Gentoo Linux, kernel 2.6.19-gentoo-r4
X.org X11R7 v1.1.1-r4, ati-drivers-8.32.5
Video settings: 1280x800, highest in-game, no AA
SMP enabled (no SDL): 3067 frames, 56.8 seconds: 54.0 fps
SMP disabled (SDL): 3067 frames, 71.7 seconds: 42.7 fps
Note: SMP run might have some minor performance boost because Tremulous could not mmap /dev/dsp.
-
Some interesting thoughts were posted so I did some more testing on the 32-bit vs 64-bit issue:
64bit: 3067 frames, 79.0 seconds: 38.8 fps
32bit: 3067 frames, 72.0 seconds: 42.6 fps
There is a notable difference but nothing to really complain about
(38fps is not bad anyway :wink: )
Then I suddenly remembered these funny settings:
vm_cgame 2
vm_game 2
vm_ui 2
These force the game to use the .qvm's and ignore any .so version (that would be the .dll for Windows users).
When we set these settings to 0 (to use the dll-version) the picture changes quite a bit:
64bit so
3067 frames, 65.2 seconds: 47.0 fps
32bit so
3067 frames, 69.0 seconds: 44.4 fps
The native version is the clear winner here.
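For anyone who wants to repeat the comparison, the settings go straight into the console or autoexec.cfg. If I remember the stock ioquake3 meanings correctly, 0 = native library, 1 = interpreted QVM, 2 = compiled QVM:
// use the native .so / .dll modules
vm_cgame 0
vm_game 0
vm_ui 0
// or force the .qvm versions
vm_cgame 2
vm_game 2
vm_ui 2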
-
Mmmm, interesting. I tried the vm_ thing to use the .so version and got exactly the same performance with the 32-bit version. Maybe I should compile a 64-bit version again and test whether I get more performance that way.
-
Windows xp sp2
celeron 2.0ghz oc'ed 2.5ghz 250mhz fsb - need to get rid of this pos!!
512mb pc3200 ram
ati 9600 256mb 395/220
Fastest settings - 1024x768
avg fps- 24.6
-
640 * 480; fastest
pentium M, 1.86GHz , 768 mb ram, ati radeon x300; linux 2.6.18
avg fps: 47.9
-
Windows XP SP2
Pentium 4 2800 HT (Family 15, Model 4, Stepping 1)
1 GB 533MHz DDR2 RAM
GeForce 7300 LE/PCI/SSE2 128 MB
High Quality, 1280x1024x32: 3067 frames, 90.4 sec, 33.9 fps
Fast + change to 800x600: 3067 frames, 77.7 sec, 39.5 fps
-
MacBook
Mac OS X 10.4.8
2 GHz Intel Core Duo
1 GB 667 Mhz DDR2 SDRAM
Default settings except that I changed the resolution with
/r_customwidth 1280
/r_customheight 800
/r_mode -1
It got: 149.7 seconds, 20.5 FPS
-
I know this is an old topic but...
WinXP SP2
AMD Athlon64 3000+ (2.0 GHz)
1.5 GB RAM (PC3200 400 MHz)
80 GB 7200 RPM HDD
XFX nVidia 7300GT
1680x1050 resolution
~Average Settings
3067 Frames, 74.7 Seconds, 41.0 FPS.
I noticed the FPS drop to as low as 24 and as high as 73 during the demo.
-
1680x1050, GeForce 6800 GT, Athlon 64 3200+, 1.5 GB. Trem settings on "high quality".
42.5 FPS with R1Trem with 4x anisotropic filtering
40.4 FPS with stock tremulous.exe with no anisotropic filtering
Strange results, although I did have a lot of apps running in the background.
:human:
-
Macbook Pro
Mac OS X 10.4.10
2.33 GHz Intel Core Duo
2 GB 667 MHz DDR2 SDRAM
ATI Radeon X1600, 256 MB VRAM
1440x900 Resolution
All graphics settings maxed
83.2 seconds, 36.8 FPS [range approx. 23-85, damn chainsuits]
-
amd athlon xp 1800+
radeon 9500
1GB pc133
1024x768
default settings
3067 frames
179.4 seconds
17.1 fps
I'm awesome.
You keep tellin' yourself that mate...
-
Please don't get mad about my HUGE necro, but I thought this could be an educational bump. An entirely new generation of hardware is out and I wanted to gauge its performance.
Windows XP SP3
Intel E8400 overclocked to 3.6GHz
3GB DDR2-1066 RAM (4GB actually, but 32-bit OS)
EVGA GTX 260 @ 631/1360/1055
500GB Seagate Barracuda
1680x1050x32
Maximum in-game settings & 16xQ AA, 16x AF
3067 frames, 34.2 seconds: 89.7 fps
-
Hmm, seeing as you have one of the fastest GPUs, yet your fps is only about twice as high as the fastest old gfx cards, it seems CPUs evolved far less... AND the game is completely CPU bound. Which is a shame, considering the speed of today's cheap $50-100 graphics cards.
-
If there were greater support for multi-threading I'm sure my score would be much, much higher; probably over 100.
-
If there were greater support for multi-threading I'm sure my score would be much, much higher; probably over 100.
the biggest difference would probably come from rewriting the renderer to support VBOs.
-
The link to the demo file seems to be dead. Does somebody have a working link ?
-
The link to the demo file seems to be dead. Does somebody have a working link ?
I actually think I might have a copy of it, but if I do it's in a tarball on a FreeBSD partition that doesn't boot atm.
-
timedemo1 (http://overdose18.googlepages.com/timedemo1.dm_69)
Don't forget to follow the directions in the first post.
-
Intel Pentium 4 HT 2.8GHz (HT off)
768MB RAM (a 512mb pc2700 stick, and a pc3200 256mb stick)
nVidia GeForce 5200FX (GPU: 249.750mhz overclocked +20 mhz, RAM 405.000mhz, overclocked +30mhz)
Linux 2.6.27-luna-rc1
1024x768 fullscreen, shinytrem v12 and bloom enabled.
3067 frames, 199.6 seconds: 15.4 fps
-
3067 frames 55.6 seconds 55.2 fps
x86_64
Core2Duo E4400 OC'd at 2.3GHz
2048MB DDR2 800
Nvidia 7600GT
1680x1050 Fullscreen.
No AA/AF. Maxed out settings ( No extras such as bloom etc )
640x480
3067 frames 56.6 seconds 54.2 fps 8.0/18.5/38.0/5.2 ms
Eh?
-
I don't get this thread... does it have something to do with, if I download that file and use it on Tremulous, I'll be able to see things in slow motion? Or maybe go back in time? I don't get it. What's with all that frame rate stuff and the ZOogersmale?
-
When you start the demo it runs as fast as it can; the numbers you are given at the end are statistics.
3067 total frames in the demo (won't change for anyone)
56.6 seconds is the time it took to complete, lower is better
54.2 fps is your average frame rate throughout the demo, higher is better
This is like a makeshift benchmarking program for Tremulous.
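Put differently, the average fps is just total frames divided by total time: 3067 frames / 56.6 seconds ≈ 54.2 fps.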
-
Athlon XP 1800+
1 GB RAM DDR
Ati Radeon 9550 128 MB
All high settings, no AA, 1280x1024, tjw client.
With shiny trem and superpie's visual mod: 15.2 fps
Without: 16.7 fps
-
3067 frames 64.2 seconds 47.8 fps
PowerPC
PowerMac G5 Dual Core @ 2GHz
1.5 GB DDR2 533MHz
Nvidia 6600GT
1280x1024 Fullscreen.
Maxed out settings.
-
AMD Sempron LE-1300 @ 2.31ghz
2GB RAM @ 333mhz, cas latency 5
GeForce 9800gt 512mb @ 663mhz core and 1900mhz mem
All settings on "high", bloom on with samples set to 512, 1680x1050, antialiasing 16x, superpies vismod 2, tremfusion 0.0.1b3
3067 frames 61.9 seconds 49.5 fps 10.0/20.2/75.0/4.6 ms
-
I'm actually glad this has been revived, it's a cool project.
My results straight from my logfiles:
[10:21][47:59]3067 frames 42.8 seconds 71.7 fps 7.0/14.0/56.0/3.5 ms
on Ubuntu 8.10 64-bit
with 4 GB RAM, not sure of my "rated" frequency
Intel Core2 Duo T9300 rated at 2.50 GHz
running GNOME 2.24.1 on kernel 2.6.27-7-generic
With the exception of not capping my FPS, I ran this demo just as I normally do. I did not try to free up any CPU/memory/IO resources. Bloom is off, snaps is 40, everything is pretty normal. My system's resolution is 1440 x 900, 50 Hz, and Tremulous runs in a window (r_mode == 7).
I would love to get comprehensive information on all of these so I can find trends; this looks interesting!
Edit: by "normal" I mean highest graphics, net, etc. settings.
-
Intel Core 2 Duo E8500 3.16 ghz
4 GB DDR2(1066) RAM
Nvidia 9500GT 512 MB DDR2 video card
FPS: 71.4
on max settings
1024x768 fullscreen
-
openSUSE 11.1 factory, kernel 2.6.27.10-2-default
AMD Athlon64 X2 4000+ (2x2.1 GHz)
2.0 GB RAM (DDR2-800)
Bunch of disks
NVidia 7300GS 512MB PCI-Ex16
1680x1050 resolution
All settings maximum, fullscreen
3067 Frames, 79.3 Seconds, 38.7 FPS.
Too bad I can't test the fried GF4Ti4200 anymore. I had the feeling it was a bit faster in certain areas (128bit memory bus?) and a bit slower in other areas. A GF FX5200 is slow as ass though. At least at that resolution.
-
Hmm, I tried this with my new GF 9500GT with 512 MB RAM.
More or less same result. But I do get around 400 fps on startup at an empty server with uncapped fps...
This is seriously fucked.
-
Too bad I can't test the fried GF4Ti4200 anymore. I had the feeling it was a bit faster in certain areas (128bit memory bus?) and a bit slower in other areas. A GF FX5200 is slow as ass though. At least at that resolution.
If I remember correctly, the GeForce 4Ti cards were actually GF2 GPUs.
Arch Linux "Core Dump"
AMD Phenom X3 Tri-Core 2.4GHz (3x 2.95GHz at the moment)
4GB RAM (DDR2-800)
XFX GeForce 9800GTX+ XXX Black Edition 512MB (stock cooler at the moment.)
1600x1200 Res
Bloom, ShinyTrem, Global 16x AA
3067 frames 48.6 seconds 63.1 fps 7.0/15.9/70.0/3.9 ms
This is with Firefox and a lot of other shit running. So, maybe a bit better by itself.
-
Intel Core 2 Duo E7300 @2.66Ghz
2Gb RAM DDR2-800
XFX Geforce 8400GS 256MB
1680x1050 (DVI)
Risujin Client
Default Nvidia settings
Max Settings
3067 Frames 51.1 seconds 60.0 Fps
-
If there were greater support for multi-threading I'm sure my score would be much, much higher; probably over 100.
the biggest difference would probably come from rewriting the renderer to support VBOs.
Try (http://patches.mercenariesguild.net/index.php?getfile=994) it out.
-
If there were greater support for multi-threading I'm sure my score would be much, much higher; probably over 100.
the biggest difference would probably come from rewriting the renderer to support VBOs.
Try (http://patches.mercenariesguild.net/index.php?getfile=994) it out.
1) you shouldn't be mixing two patches together like that, and
2) you should be submitting this to ioq3
-
AMD Athlon 64 X2 4600+ 2G RAM
NVidia GeForce 6600GT
1280x1024x32
4*AA (export __GL_FSAA_MODE=8 )
full gfx settings
3067 frames, 79.6 seconds: 38.5 fps
Interesting difference:
Intel Core 2 Duo E6700 @ 2666Mhz + 8G RAM
NVidia GeForce 8800 GTS 640Mb
1680x1050
max settings
3067 frames 36.2 seconds 84.8 fps
Turning aniso/FSAA on or off doesn't make any difference.
Latest mg binary, 64bit linux
-
AMD Athlon 64 X2 4600+ 2G RAM
NVidia GeForce 6600GT
1280x1024x32
4*AA (export __GL_FSAA_MODE=8 )
full gfx settings
3067 frames, 79.6 seconds: 38.5 fps
Interesting difference:
Intel Core 2 Duo E6700 @ 2666Mhz + 8G RAM
NVidia GeForce 8800 GTS 640Mb
1680x1050
max settings
3067 frames 36.2 seconds 84.8 fps
Turning aniso/FSAA on or off doesn't make any difference.
Latest mg binary, 64bit linux
That is odd, my first guess would be RAM. I didn't know RAM could alter fps that much though. So the only thing to assume is AMD does make shitty CPUs :P
-
Look at the video cards. 6600GT versus 8800 GTS? The 8800 can smoke the 6600.
I tried out the VBO patch -- it doesn't apply cleanly for me. There were a decent number of errors (mostly SSE conflicts, but I dealt with those). The issue I have right now is:
src/renderer/tr_marks.c: In function ‘R_BoxSurfaces_r_sse’:
src/renderer/tr_marks.c:260: error: ‘v4fMZeroDotFive’ undeclared (first use in this function)
src/renderer/tr_marks.c:260: error: (Each undeclared identifier is reported only once
src/renderer/tr_marks.c:260: error: for each function it appears in.)
src/renderer/tr_marks.c: In function ‘R_MarkFragments_sse’:
src/renderer/tr_marks.c:507: error: ‘mixMask0001’ undeclared (first use in this function)
src/renderer/tr_marks.c:507: error: too few arguments to function ‘v4fMix’
src/renderer/tr_marks.c:512: error: too few arguments to function ‘v4fMix’
src/renderer/tr_marks.c:514: error: too few arguments to function ‘v4fMix’
src/renderer/tr_marks.c:565: error: ‘v4fMZeroDotOne’ undeclared (first use in this function)
src/renderer/tr_marks.c:609: error: ‘v4fMZeroDotFive’ undeclared (first use in this function)
v4fMZeroDotOne, v4fMZeroDotFive, and mixMask0001 aren't defined anywhere in the patch. Is the patch incomplete?
-
trem doesn't put any real load on the graphics card as long as it's not a geforce 2 or something.
-
trem doesn't put any real load on the graphics card as long as it's not a geforce 2 or something.
Maybe not in the timedemo, which would explain why I saw no change after upgrading my GPU. But in-game I now get 400 fps on some parts of some maps with the same settings at which my old GPU would max out at 70-90 fps, and I never drop below the 76 fps cap I set, except on UNCREATION looking down the hall from the human base. But at least the map Insanity gives me playable fps now ;-P
Are the timedemo code and the actual in-game code really working under the same engine restrictions?
To me it seems like the timedemo is more CPU dependent, whereas in-game it is more dependent on the GPU.
-
trem doesn't put any real load on the graphics card as long as it's not a geforce 2 or something.
Maybe not in the timedemo, which would explain why I saw no change after upgrading my GPU. But in-game I now get 400 fps on some parts of some maps with the same settings at which my old GPU would max out at 70-90 fps, and I never drop below the 76 fps cap I set, except on UNCREATION looking down the hall from the human base. But at least the map Insanity gives me playable fps now ;-P
Are the timedemo code and the actual in-game code really working under the same engine restrictions?
To me it seems like the timedemo is more CPU dependent, whereas in-game it is more dependent on the GPU.
Imo, in-game is still more dependent on the CPU.
If you have an extremely nice card and a P2, expect shit fps, but if you have a nice E8500 and just a 64 MB integrated GPU you'll still get nice fps (not 400, but a good 60+ consistently on ATCS at least).
-
Look at the video cards. 6600GT versus 8800 GTS? The 8800 can smoke the 6600.
I tried out the VBO patch -- it doesn't apply cleanly for me. There were a decent number of errors (mostly SSE conflicts, but I dealt with those). The issue I have right now is:
src/renderer/tr_marks.c: In function ‘R_BoxSurfaces_r_sse’:
src/renderer/tr_marks.c:260: error: ‘v4fMZeroDotFive’ undeclared (first use in this function)
src/renderer/tr_marks.c:260: error: (Each undeclared identifier is reported only once
src/renderer/tr_marks.c:260: error: for each function it appears in.)
src/renderer/tr_marks.c: In function ‘R_MarkFragments_sse’:
src/renderer/tr_marks.c:507: error: ‘mixMask0001’ undeclared (first use in this function)
src/renderer/tr_marks.c:507: error: too few arguments to function ‘v4fMix’
src/renderer/tr_marks.c:512: error: too few arguments to function ‘v4fMix’
src/renderer/tr_marks.c:514: error: too few arguments to function ‘v4fMix’
src/renderer/tr_marks.c:565: error: ‘v4fMZeroDotOne’ undeclared (first use in this function)
src/renderer/tr_marks.c:609: error: ‘v4fMZeroDotFive’ undeclared (first use in this function)
v4fMZeroDotOne, v4fMZeroDotFive, and mixMask0001 aren't defined anywhere in the patch. Is the patch incomplete?
Sure, forgot to include qsse.h and qsse.c again. The complete patch is on the patch tracker now.
If you want to test it on tremfusion, you should use this (http://www.tremfusion.net/forum/download/file.php?id=31) patch.