How can I determine the relationship of C integer type MIN / MAXes in the preprocessor?


I’m trying to determine the relationship of a given compiler’s integer types’ sizes using the preprocessor. My requirement is that I have two types: one unsigned, and one a signed type capable of storing every positive number that the unsigned type can store. That is, I have to ensure that my ll_ssize type can store at least as many positive and negative integers as ll_usize can store.

Unfortunately, the exact relationships among long long, long, and int aren’t defined by the C standards; on some machines (such as LP64 machines), a long occupies exactly the same storage as a long long.

Thus, I have to use the preprocessor to attempt to determine the largest possible type that also has a single larger type available; the unsigned version of that type becomes ll_usize, and the signed version of the larger type becomes ll_ssize.

Here is the code I’m using now:

#if defined(ULONG_MAX) && defined(LLONG_MIN) && defined(LLONG_MAX) && \
    LLONG_MIN <= -ULONG_MAX && ULONG_MAX <= LLONG_MAX
  typedef   unsigned    long int   ll_usize;
  typedef   signed long long int   ll_ssize;
#elif defined(UINT_MAX) && defined(LONG_MIN) && defined(LONG_MAX) && \
    LONG_MIN <= -UINT_MAX && UINT_MAX <= LONG_MAX
  typedef   unsigned    int   ll_usize;
  typedef   signed long int   ll_ssize;
#else
  typedef   unsigned int   ll_usize;
  typedef   signed   int   ll_ssize;
#endif

Now, on to my problem. I can’t perform casts in preprocessor expressions, but it seems that LLONG_MIN is being incorrectly converted to unsigned, as my compiler (Clang on Mac OS X 10.6 Snow Leopard) spits out the following warning:

Source/Paws.o/Core/ll.h:21:15: warning: left side of operator converted from
      negative value to unsigned: -9223372036854775808 to 9223372036854775808
    ~~~~~~~~~ ^  ~~~~~~~~~~~~

Does anybody know of a way for me to work around this conversion error? Or, preferably, a better solution to the overall problem, because I really dislike these ugly preprocessor expressions.

Edit: I should also point out why I’m doing this instead of just using the largest signed type available: I don’t want to waste the memory space for all of those negative integers when I’m never going to store negative numbers. Specifically, the unsigned type (ll_usize) is used for the stored indexes into a linked list. However, some functions that operate on the linked list can take negative index arguments and work from the opposite end of the list; those functions are declared to take ll_ssize instead. The waste is acceptable in the arguments to those functions; the waste in the indexes stored on the actual lists in the system is not.

How about -(LLONG_MIN + 1) > (ULONG_MAX - 1)?