I am presently working on converting a 32-bit C application into a 64-bit application. The application currently runs on the x86 architecture (Windows, OS X, Unix, Linux). Before starting to code, I wanted to know what I need to consider while converting the application.
I'm not positive what you mean. Do you have the source of the application you mean to port? Or do you propose to translate an ix86 binary into an x86_64 binary?
Since he specified C, I think it's safe to assume that he has the C code for the app. 🙂
Yes, I do have the source of the application, but I think that the author didn't want to change it to 64 bits.
If you used the correct types for your values – e.g. uintptr_t and the fixed-size int types from stdint.h where appropriate – and did not hardcode value sizes, your code should work out of the box.
The main problem you face when switching to 64 bit is that the size of a pointer is different (64 bits instead of 32 – duh). The size of an int and the size of a long might differ too, depending on platform.
Why is this a problem? Well, it’s not, unless your code assumes that sizeof(int) == sizeof(void*). This could lead to nasty pointer bugs.
This really depends on the application and how it has been coded. Some code can just be recompiled with a 64-bit compiler and it will just work, but usually this only happens if the code has been designed with portability in mind.
If the code makes a lot of assumptions about the size of native types and pointers, if it has a lot of bit-packing hacks, or if it talks to an external process using a byte-specified protocol while still assuming things about the size of native types, then it may require some, or a lot, of work to get a clean compile.
Pretty much every cast and compiler warning is a red flag that needs checking out. If the code wasn’t “warning clean” to start with then that is also a sign that a lot of work may be required.
Specific red flags include:

- (int)&x casts, and casting to char* to do pointer arithmetic with it.
- Hardcoded 4 == sizeof(void*) assumptions.
- #ifdef RUN64 or anything similar. You'll regret it if 128-bit platforms ever come into vogue.
- Missing uintptr_t where it belongs, as suggested in the comments.
Well, fundamentally, the number of changes is fairly small, but it’ll still be a major task if the application wasn’t carefully written to be somewhat portable to begin with.
The main difference is that pointers are 64 bit wide, but most other datatypes are unchanged. An int is still 32 bit, and a long is probably also still 32 bit. So if your code casts between ints and pointers, that’s going to break. Similarly, any struct or similar which depends on a specific offset to a member may break because other members may now be bigger, and so change the offset.
Of course, your code should never rely on these tricks in the first place, so in an ideal world, this wouldn’t be a problem at all, and you could simply recompile and everything would work. But you probably don’t live in an ideal world… 😉
One potential problem not already mentioned is that if your app reads or writes binary data on disk (e.g., reads an array of structs using fread), you are going to have to check very carefully, and you may wind up having two readers: one for legacy files and one for 64-bit files. Or, if you are careful to use types like uint32_t and so on from the <stdint.h> header file, you can redefine your structs to be bit-for-bit compatible. In any case, binary I/O is a thing to watch out for.
The two major differences between 32-bit and 64-bit programming in C are sizeof(void*) and sizeof(long). The major problem you will have is that most Unix systems use the I32LP64 model, which defines a long as 64 bits, while Win64 uses the IL32LLP64 model, which defines a long as 32 bits. If you need to support cross-platform compilation, you may want to use a set of architecture-based typedefs for 32-bit and 64-bit integers to ensure that all code behaves consistently. These are provided by stdint.h as part of the C99 standard. If you are not using a C99 compiler, you may need to roll your own equivalents.
As noted elsewhere, the primary concerns for conversion will be code that assumes sizeof(int) == sizeof(long) == sizeof(void*), code that handles data written to disk, and code for cross-platform IPC.
For a good review of the history behind this, take a look at this article from ACM Queue.
There are lots of good answers already.
Consider using Gimpel Lint. It can point out exactly the types of constructs that are problematic. If your experience is like mine, it will also show you lots of bugs in the system unrelated to the 32/64 bit port.