The current SVN uses 640x480 for the UI.
This makes the UI look bad. For example, if you are playing at 1024x768, all the menus are scaled by 1024.0/640.0 and converted to int.
This means a loss of precision (for the images too),
i.e. it ruins the UI if you are using a resolution higher than 640x480.
Should I bother to quote the UI code just to prove you wrong or will everybody believe me that you have no idea how the UI code works and you just make stupid posts?
A little hint: OpenGL doesn't care about current resolution.
Prove it, then. You map 640x480 points to 1024x768 points; this mapping is not onto, thus you get holes.
OK.
First, some theory about floating point numbers. The IEEE 754 standard 32-bit float guarantees about 6 valid decimal digits of precision. That means that vector coordinates scaled from 640x480 to 1024x768 have at least two valid digits after the decimal point. That's a precision of 1/100 of a pixel.
And now some code:
void UI_AdjustFrom640( float *x, float *y, float *w, float *h ) {
	// expect valid pointers
	*x *= uiInfo.uiDC.xscale;
	*y *= uiInfo.uiDC.yscale;
	*w *= uiInfo.uiDC.xscale;
	*h *= uiInfo.uiDC.yscale;
}
This function takes care of rescaling UI primitives. As you can see, x, y, w and h are all pointers to floats that need to be rescaled. xscale and yscale are also floats, set to the right scaling values, e.g. xscale = 1024/640.0; yscale = 768/480.0. No precision is lost here.
All calls of this function are immediately followed by a syscall to the rendering functions. Again, all syscall arguments are passed as floats. You can check this in src/ui/ui_syscalls.c.
The syscall handlers then take the arguments, still floats, and generate a rendering command. Again, the rendering command coordinates are floats (x, y, w, h):
typedef struct {
	int commandId;
	shader_t *shader;
	float x, y;
	float w, h;
	float s1, t1;
	float s2, t2;
} stretchPicCommand_t;
When the renderer backend decides it's time to render a new frame, it copies all rendering commands to the tesselator. The x, y, w and h coordinates are copied to the tess.xyz[][] array, which is again an array of floats.
typedef float vec_t;
typedef vec_t vec4_t[4];

typedef struct shaderCommands_s
{
	glIndex_t indexes[SHADER_MAX_INDEXES] ALIGN(16);
	vec4_t xyz[SHADER_MAX_VERTEXES] ALIGN(16);
	vec4_t normal[SHADER_MAX_VERTEXES] ALIGN(16);
	vec2_t texCoords[SHADER_MAX_VERTEXES][2] ALIGN(16);
	color4ub_t vertexColors[SHADER_MAX_VERTEXES] ALIGN(16);
	...
} shaderCommands_t;

extern shaderCommands_t tess;
Again, no precision loss.
The last step in the Q3 renderer core is dumping all the commands in the tesselator right into the GPU. Again, the OpenGL functions used for that take float arguments. The Q3 engine doesn't touch textures during rendering; they're all unchanged in memory since they were loaded. All texturing is done on the GPU; the Q3 renderer just says which textures to use.
So where's your int conversion precision loss?
Resizing the UI from virtual 640x480 to any other resolution would take weeks, and all you would get are bigger numbers in the menu definition files. The result would look exactly the same.