Hex Calculator

Convert numbers between different bases.

What Is a Hex Calculator?

A hex calculator converts numbers between different base systems: hexadecimal (base 16), decimal (base 10), binary (base 2), and octal (base 8). You enter a number in any of these bases, and the calculator instantly shows the equivalent value in all four systems. This is an essential tool for programmers, network engineers, and anyone working with low-level computing concepts.

Number base conversions are fundamental to software development and computer science. Memory addresses are displayed in hexadecimal, network subnet masks use binary representation, file permissions in Unix use octal notation, and everyday arithmetic uses decimal. Being able to move fluidly between these systems is a core skill for technical professionals.

How Number Base Conversions Work

Every positional number system works on the same principle: each digit's value depends on its position. In decimal, the number 347 means 3 hundreds + 4 tens + 7 ones, or (3 x 10^2) + (4 x 10^1) + (7 x 10^0). The same logic applies to every base.

Decimal (Base 10) uses digits 0-9. This is the standard number system for everyday mathematics and human-readable values.

Hexadecimal (Base 16) uses digits 0-9 and letters A-F (where A=10 through F=15). Each hex digit represents exactly 4 binary bits, making it a compact way to represent binary data. The prefix "0x" is commonly used to indicate hexadecimal values.

Binary (Base 2) uses only digits 0 and 1. This is the native language of digital computers, where each digit represents a single bit that is either off (0) or on (1). Binary numbers grow long quickly, which is why hex is preferred for human readability.

Octal (Base 8) uses digits 0-7. Each octal digit represents exactly 3 binary bits. While less common than hex in modern computing, octal is still used in Unix file permissions and some specialized applications.
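In JavaScript, which this calculator is built on, the four bases map directly onto the built-in parseInt and Number.prototype.toString. A minimal sketch (the function name convertAll is illustrative, not the calculator's actual code):

```javascript
// Parse a string in the given base, then render the value in all four
// bases using the standard radix arguments to parseInt and toString.
function convertAll(input, base) {
  const n = parseInt(input, base);
  return {
    decimal: n.toString(10),
    hex: "0x" + n.toString(16).toUpperCase(),
    binary: "0b" + n.toString(2),
    octal: "0o" + n.toString(8),
  };
}
```

For example, convertAll("FF", 16) yields 255 in decimal, 0b11111111 in binary, and 0o377 in octal.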

The conversion process involves multiplying each digit by its base raised to the power of its position, then summing the results. To convert from decimal to another base, you repeatedly divide by the target base and collect the remainders.
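Both directions of that process can be written in a few lines. This sketch accumulates left to right (Horner's method), which is arithmetically the same as multiplying each digit by base^position and summing; the function names are ours:

```javascript
const DIGITS = "0123456789ABCDEF";

// Positional sum: fold the digits left to right; each step multiplies
// the running value by the base and adds the next digit's value.
function toDecimal(str, base) {
  let value = 0;
  for (const ch of str.toUpperCase()) {
    value = value * base + DIGITS.indexOf(ch);
  }
  return value;
}

// Repeated division: collect remainders, reading them last to first.
function fromDecimal(n, base) {
  if (n === 0) return "0";
  let out = "";
  while (n > 0) {
    out = DIGITS[n % base] + out;
    n = Math.floor(n / base);
  }
  return out;
}
```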

How to Use This Calculator

  1. Select the input base. Choose the number system of the value you want to convert: Decimal, Hexadecimal, Binary, or Octal. The hint text below the input field updates to show which digits are valid.

  2. Enter your number. Type the number using valid digits for the selected base. For hexadecimal, you can use uppercase or lowercase letters (A-F or a-f) and optionally include the "0x" prefix. For binary, use only 0s and 1s.

  3. View all conversions. The calculator instantly displays the number in all four bases: decimal, hexadecimal (with 0x prefix), binary (with grouping every 4 bits for readability), and octal (with 0o prefix). Each value has a copy button for easy clipboard access.

  4. Review bit information. For non-negative numbers, the calculator also shows how many bits and bytes are needed to represent the value, and the maximum value that fits in that many bits.
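The bit and byte figures in step 4 follow directly from the binary representation. One way to compute them (a sketch under our own naming, not the calculator's internals):

```javascript
// For a non-negative integer: the bit count is the length of its
// binary string, bytes are bits rounded up to a multiple of 8,
// and the maximum value in that many bits is 2^bits - 1.
function bitInfo(n) {
  const bits = n === 0 ? 1 : n.toString(2).length;
  return {
    bits,
    bytes: Math.ceil(bits / 8),
    maxInBits: 2 ** bits - 1,
  };
}
```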

Worked Examples

Example 1: Web Color Code

The CSS color #3B82F6 (a shade of blue) can be broken down by entering hex 3B82F6. Decimal: 3,900,150. Binary: 0011 1011 1000 0010 1111 0110. The individual RGB components are: R=3B (59), G=82 (130), B=F6 (246).
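Splitting a color into its components is a pair of shifts and masks, since each channel occupies exactly one byte (two hex digits). A sketch:

```javascript
// Extract the red, green, and blue bytes from a 24-bit color value:
// shift the wanted byte down to the low position, then mask to 8 bits.
function rgb(color) {
  return {
    r: (color >> 16) & 0xFF,
    g: (color >> 8) & 0xFF,
    b: color & 0xFF,
  };
}
```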

Example 2: Unix File Permissions

The permission value 755 in octal represents: owner can read, write, and execute (7 = 111 in binary); group can read and execute (5 = 101); others can read and execute (5 = 101). In decimal this is 493, and in binary it is 111 101 101.
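Decoding a permission string is just reading three bits per octal digit (4 = read, 2 = write, 1 = execute). A sketch with illustrative names:

```javascript
// Decode one octal digit into read/write/execute flags by testing
// each of its three bits.
function rwx(digit) {
  return (digit & 4 ? "r" : "-") +
         (digit & 2 ? "w" : "-") +
         (digit & 1 ? "x" : "-");
}

// Decode a whole permission value such as "755" into rwx notation.
function permissions(octalStr) {
  return [...octalStr].map(d => rwx(parseInt(d, 8))).join("");
}
```

Here permissions("755") produces "rwxr-xr-x", matching the breakdown above.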

Example 3: Byte Boundaries

Enter decimal 255 to see FF in hex and 1111 1111 in binary, which is the maximum value of a single unsigned byte. Enter 256 to see 100 in hex and 1 0000 0000 in binary, which requires 9 bits and overflows a single byte.

Example 4: Network Address

An IPv4 address like 192.168.1.1 can be understood by converting each octet. 192 = C0 hex = 1100 0000 binary. 168 = A8 hex = 1010 1000 binary. This is useful when calculating subnet masks and understanding network boundaries.
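Converting each octet as in this example can be automated; the sketch below pads each value to two hex digits and eight bits so the octets line up visually:

```javascript
// Show each octet of a dotted-quad IPv4 address in hex and binary,
// zero-padded so every octet is two hex digits / eight bits wide.
function octetViews(addr) {
  return addr.split(".").map(o => {
    const n = parseInt(o, 10);
    return {
      hex: n.toString(16).toUpperCase().padStart(2, "0"),
      binary: n.toString(2).padStart(8, "0"),
    };
  });
}
```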

Tips and Common Patterns

Memorize powers of 2. Knowing that 2^8 = 256, 2^16 = 65536, and 2^32 = 4294967296 helps you quickly understand value ranges and detect overflow conditions. One less than each of these (255, 65535, and 4294967295) corresponds to FF, FFFF, and FFFFFFFF in hex.

Use hex for readability. When examining binary data such as memory dumps, network packets, or file contents, hex is far more practical than binary. Two hex digits represent one byte, making it easy to identify individual bytes at a glance.

Group binary digits in fours. When writing binary numbers by hand, separate every 4 bits with a space for readability. Each group of 4 maps to exactly one hex digit, making mental conversion straightforward.
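This grouping is easy to do programmatically: pad the string on the left to a multiple of four, then split it into 4-character chunks. A sketch:

```javascript
// Insert a space every 4 bits, counting from the right, so that each
// group corresponds to exactly one hex digit.
function groupBits(binary) {
  const padded = binary.padStart(Math.ceil(binary.length / 4) * 4, "0");
  return padded.match(/.{4}/g).join(" ");
}
```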

Watch for signed vs. unsigned interpretation. The same bit pattern can represent different values depending on whether it is interpreted as signed or unsigned. For example, the byte FF is 255 unsigned but -1 in signed two's complement. This calculator displays negative values with a minus sign rather than as fixed-width two's-complement bit patterns.
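The reinterpretation itself is a single subtraction: in an 8-bit two's-complement byte, any value with the top bit set (128 or more) wraps around by 256. A sketch:

```javascript
// Reinterpret an unsigned byte (0..255) as a signed two's-complement
// value (-128..127): values of 128 and above wrap around by 256.
function toSignedByte(u) {
  return u >= 128 ? u - 256 : u;
}
```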

Remember common hex-decimal pairs. 0A=10, 10=16, 20=32, 40=64, 80=128, FF=255, 100=256, 400=1024, 3E8=1000. These appear frequently in programming and debugging.

Use prefixes to avoid ambiguity. The value 10 means ten in decimal, sixteen in hex, two in binary, and eight in octal. Always use standard prefixes (0x for hex, 0b for binary, 0o for octal) when communicating number values to prevent confusion.

Frequently Asked Questions

What is hexadecimal and why is it used in computing?

Hexadecimal (base 16) is a number system that uses 16 symbols: 0-9 and A-F. It is widely used in computing because it provides a compact representation of binary data. Each hex digit represents exactly 4 binary bits, so one byte (8 bits) can be represented by exactly two hex digits. This makes hexadecimal far more readable than binary for tasks like memory addresses, color codes, MAC addresses, and debugging.

How do I convert hexadecimal to decimal manually?

Multiply each hex digit by 16 raised to the power of its position (counting from right to left starting at 0), then sum the results. For example, hex 1A3 = (1 x 16^2) + (A x 16^1) + (3 x 16^0) = (1 x 256) + (10 x 16) + (3 x 1) = 256 + 160 + 3 = 419 in decimal. Remember that A=10, B=11, C=12, D=13, E=14, F=15.

How do I convert decimal to hexadecimal manually?

Repeatedly divide the decimal number by 16 and record the remainders. Read the remainders from bottom to top (last to first). For example, 255 / 16 = 15 remainder 15. Since 15 = F in hex, the result is FF. For a larger number like 1000: 1000 / 16 = 62 R 8, 62 / 16 = 3 R 14 (E), 3 / 16 = 0 R 3. Reading bottom to top: 3E8.

What is the relationship between hexadecimal and binary?

Each hexadecimal digit maps to exactly 4 binary digits (bits). This is because 16 = 2^4. The mapping is: 0=0000, 1=0001, 2=0010, 3=0011, 4=0100, 5=0101, 6=0110, 7=0111, 8=1000, 9=1001, A=1010, B=1011, C=1100, D=1101, E=1110, F=1111. To convert hex to binary, simply replace each hex digit with its 4-bit equivalent.
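Because the mapping is digit-by-digit, hex-to-binary conversion needs no arithmetic on the whole number: each hex digit is replaced independently by its 4-bit group. A sketch:

```javascript
// Convert hex to binary one digit at a time: each hex digit becomes
// exactly one zero-padded 4-bit group, then the groups are joined.
function hexToBinary(hex) {
  return [...hex.toUpperCase()]
    .map(d => parseInt(d, 16).toString(2).padStart(4, "0"))
    .join("");
}
```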

What is octal and where is it used?

Octal (base 8) uses digits 0-7 and each digit represents exactly 3 binary bits. Octal was historically important in computing when systems used word sizes that were multiples of 3 bits (6-bit, 12-bit, 24-bit, 36-bit). Today it is primarily used in Unix/Linux file permissions (e.g., chmod 755) and some legacy systems. Most modern applications prefer hexadecimal over octal.

What are common hex values I should memorize?

Useful hex values include: FF = 255 (maximum byte value), 100 = 256 (one more than max byte), FFFF = 65535 (max 16-bit unsigned), 7FFFFFFF = 2147483647 (max 32-bit signed integer), FFFFFFFF = 4294967295 (max 32-bit unsigned). For web colors, common hex codes include 000000 (black), FFFFFF (white), FF0000 (red), 00FF00 (green), 0000FF (blue).

Can this calculator handle negative numbers?

Yes, you can enter negative numbers by prefixing with a minus sign. The calculator will display the negative value in all bases. Note that in actual computer hardware, negative numbers are typically represented using two's complement notation, where the most significant bit indicates the sign. The negative representations shown here use a simple minus prefix for readability.

What is the largest number this calculator can handle?

This calculator uses JavaScript's native number handling, which safely represents integers up to 2^53 - 1 (9007199254740991, or about 9 quadrillion). For hexadecimal, this is 1FFFFFFFFFFFFF. Numbers beyond this limit may lose precision due to floating-point representation. For cryptographic or arbitrary-precision math, specialized big integer libraries are recommended.
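For values beyond that limit, JavaScript's built-in BigInt type represents integers exactly, and the same toString radix conversions work on it. A brief sketch:

```javascript
// BigInt arithmetic is exact beyond Number.MAX_SAFE_INTEGER; the `n`
// suffix marks BigInt literals, and toString accepts a radix as usual.
const big = 2n ** 64n - 1n; // maximum 64-bit unsigned value
const hex = big.toString(16).toUpperCase();
```

Here big is 18446744073709551615, well past 2^53 - 1, and hex is FFFFFFFFFFFFFFFF with no precision loss.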