109: Data Types: Ints Part 2.
You’ll probably have one or more ints in almost every method and class you write. They’re everywhere, so you really should know how to use them. Knowing how to use the int data type means you need to understand the limits and consequences of using a fixed number of binary digits. The previous episode explained what types of ints exist and their lengths. This episode explains how negative numbers play a huge role when working with ints and describes an interesting security vulnerability that can result when you don’t use ints properly.

The math that computers perform with your variables is different from math in real life, because the computer can’t just add extra digits when it needs to. If I asked you to add 5 plus 5 on paper, you’d need to start using a new digit to represent 10. If you limited your view to just that single digit, then you might think that 5 plus 5 is equal to 0. If you’ve already reached your maximum number of bits, then this need for an extra digit is called an overflow and causes the result to wrap around back to the beginning.

Computers are also different because they work with the full word size even for small values. In real life, this would be like writing 0,000,000,005 every time you wanted to write the number 5. Adding leading zeros doesn’t change the value. At least in real life. And here’s another aspect where computers are different yet again. A computer doesn’t have a place to put a little negative sign, so instead it uses two’s complement to represent negative numbers. That means the most significant bit signals whether a number is negative or not. This causes small negative numbers to appear just like large unsigned numbers. You have to know ahead of time how you want to interpret the bits.

Listen to this episode for more or read the full transcript below.

Transcript

Knowing how to use the int data type means you need to understand the limits and consequences of using a fixed number of binary digits. The previous episode explained what types of ints exist and their lengths. The same rules apply whether we’re using 4 bits or some other amount. Every bit that you have to work with effectively doubles how high you can count. With a single bit, you can count from 0 to 1. With two bits, you can count from 0 to 3. With three bits, you can count from 0 to 7. And with four bits, you can count from 0 to 15. If we let the letter n be the number of bits, then you can count from 0 up to 2 to the power of n, minus 1. For example, if n equals 4, which means we have 4 bits, then 2 to the power of 4 is the same thing as 2 times 2 times 2 times 2, which is 16. Then just subtract 1 from 16 to get the highest number you can count up to with that many bits. Four bits means we can count from 0 up to 15.

With 32 bits, a typical unsigned int can hold any value from 0 up to 4,294,967,295. And a typical signed int can hold a value about half of that. As long as you can work with numeric values less than this, then a 32 bit int is a good choice. Even if you expect values up to just a thousand, you’ll still normally use an int. A short int would fit values up to a thousand better, but a lot of times you’ll be doing things with your values that need to interact with other variables. Unless you absolutely need the extra two bytes that you save with a short, go ahead and use an int.
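To make the 2 to the power of n minus 1 rule concrete, here’s a small C++ sketch. It’s my own illustration rather than code from the episode; it prints the counting range for a few bit counts and then asks the compiler for the actual limits of int and unsigned int.

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

int main()
{
    // 2 to the power of n, minus 1, is the highest value n bits can count to.
    for (unsigned int n : {1u, 2u, 3u, 4u, 32u})
    {
        std::uint64_t highest = (1ull << n) - 1;
        std::cout << n << " bits count from 0 up to " << highest << '\n';
    }

    // The compiler knows the exact limits of each built-in type.
    std::cout << "unsigned int max: "
              << std::numeric_limits<unsigned int>::max() << '\n'; // 4,294,967,295 when int is 32 bits
    std::cout << "signed int max:   "
              << std::numeric_limits<int>::max() << '\n';          // about half of that
    std::cout << "signed int min:   "
              << std::numeric_limits<int>::min() << '\n';
}
```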
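And here’s a second sketch, again just an illustration, that shows the wrap-around when an unsigned value overflows and how the same bit pattern reads as either a small negative number or a large unsigned number depending on the type you use to interpret it.

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    // Overflow wraps around to the beginning, just like the single-digit
    // 5 plus 5 example. An 8 bit unsigned value can only count up to 255.
    std::uint8_t small = 250;
    small = static_cast<std::uint8_t>(small + 10); // 260 does not fit in 8 bits
    std::cout << "250 + 10 in 8 bits wraps to " << static_cast<int>(small) << '\n'; // prints 4

    // Two's complement: the same 32 bits mean different things depending on
    // whether you interpret them as signed or unsigned.
    std::int32_t negative = -1;
    std::uint32_t sameBits = static_cast<std::uint32_t>(negative);
    std::cout << "As signed:   " << negative << '\n'; // -1
    std::cout << "As unsigned: " << sameBits << '\n'; // 4,294,967,295
}
```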
Now where you can save enough to make a difference is when you need a lot of variables at the same time in memory. Maybe you have a class with some data members and you need a million of them loaded into a collection in memory. Knowing when you can use a short vs. an int in this case is important and could save you two million bytes of memory. Just be aware that the compiler will usually add padding to your classes so that they align better with the processor’s natural word size. If your class has only a single numeric value, then making it a short probably won’t save you anything, because the padding will bring it right back up to the natural word size.
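To see the kind of difference the transcript is describing, here’s a rough sketch. The struct names and members are made up for illustration, and the exact sizes depend on your compiler and platform, but the pattern is typical: two shorts take half the space of two ints, while mixing a short with an int usually gains nothing because of padding.

```cpp
#include <cstddef>
#include <iostream>

// Hypothetical records, just to compare sizes.
struct AllInts
{
    int id;
    int count;
};                 // typically 8 bytes

struct AllShorts
{
    short id;
    short count;
};                 // typically 4 bytes

struct MixedWithPadding
{
    short id;      // 2 bytes...
    int count;     // ...but the compiler usually pads id out to 4 bytes
};                 // so this is often 8 bytes, not 6

int main()
{
    std::cout << "AllInts:          " << sizeof(AllInts) << " bytes each\n";
    std::cout << "AllShorts:        " << sizeof(AllShorts) << " bytes each\n";
    std::cout << "MixedWithPadding: " << sizeof(MixedWithPadding) << " bytes each\n";

    const std::size_t howMany = 1'000'000;
    std::cout << "A million AllInts:   " << howMany * sizeof(AllInts) << " bytes\n";
    std::cout << "A million AllShorts: " << howMany * sizeof(AllShorts) << " bytes\n";
}
```

On most platforms the AllShorts version saves a few bytes per object, which adds up across a million objects, while MixedWithPadding shows the padding the transcript warns about.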