Convert between binary, octal, decimal, and hexadecimal. Bit visualization, powers of 2 reference, even/odd detection, and grouped binary output.
Binary (base 2) is the fundamental language of computers, using only two digits — 0 and 1. Every piece of data stored or processed by a computer is ultimately represented in binary. Understanding binary conversion is essential for programmers, computer science students, network engineers, and anyone working with low-level computing concepts.
This converter handles all four major number bases: binary (base 2), octal (base 8), decimal (base 10), and hexadecimal (base 16). Enter a number in any base and instantly see the equivalent in all others. The tool provides grouped binary output (4-bit groups for readability), bit count, byte count, and even detects whether the number is a power of 2.
The interactive bit visualization shows each bit position with its corresponding power of 2, making it easy to understand how positional notation works. Whether you are debugging bitwise operations, reading memory addresses, interpreting network masks, or studying for a CS exam, this tool makes base conversion intuitive and visual.
Manual base conversion is error-prone, especially for large numbers. This calculator converts instantly between all four common bases, provides visual bit representation, groups binary for readability, and includes a powers-of-2 reference table — everything you need for computer science work in one place, whether you are studying, debugging, or preparing for an exam.
Decimal to Binary: Repeatedly divide by 2, read remainders bottom-up. Binary to Decimal: Sum of (bit × 2^position) for each bit. Binary to Hex: Group binary digits in fours, convert each group. Binary to Octal: Group binary digits in threes, convert each group.
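The four methods above can be sketched in plain Python (an illustrative choice; the converter itself may be implemented differently):

```python
def dec_to_bin(n: int) -> str:
    """Repeatedly divide by 2; the remainders read bottom-up are the bits."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))
        n //= 2
    return "".join(reversed(bits))

def bin_to_dec(b: str) -> int:
    """Sum of (bit x 2^position) for each bit."""
    return sum(int(bit) << i for i, bit in enumerate(reversed(b)))

def bin_to_hex(b: str) -> str:
    """Pad to a multiple of 4 bits, convert each 4-bit group to a hex digit."""
    b = b.zfill((len(b) + 3) // 4 * 4)
    return "".join(format(int(b[i:i + 4], 2), "X") for i in range(0, len(b), 4))

def bin_to_oct(b: str) -> str:
    """Pad to a multiple of 3 bits, convert each 3-bit group to an octal digit."""
    b = b.zfill((len(b) + 2) // 3 * 3)
    return "".join(str(int(b[i:i + 3], 2)) for i in range(0, len(b), 3))

print(dec_to_bin(42))        # 101010
print(bin_to_dec("101010"))  # 42
print(bin_to_hex("101010"))  # 2A
print(bin_to_oct("101010"))  # 52
```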
Result for decimal 42: 101010 (binary), 52 (octal), 2A (hex)
42 in decimal = 32 + 8 + 2 = 2⁵ + 2³ + 2¹ = 101010 in binary. Grouped as 0010 1010 = 0x2A in hex. In octal: 52, since 5×8 + 2 = 42.
A number base (or radix) determines how many unique digits are used. Binary uses 2 digits (0, 1), octal uses 8 (0-7), decimal uses 10 (0-9), and hexadecimal uses 16 (0-9, A-F). The value of each digit depends on its position: in decimal, 42 = 4×10¹ + 2×10⁰; in binary, 101010 = 1×2⁵ + 0×2⁴ + 1×2³ + 0×2² + 1×2¹ + 0×2⁰.
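Positional notation generalizes to any base: each digit is worth digit × base^position. A minimal Python sketch of that rule:

```python
def digits_to_value(digits: list[int], base: int) -> int:
    """Evaluate digits (most-significant first) in the given base."""
    value = 0
    for d in digits:
        value = value * base + d  # shift previous digits up one position
    return value

print(digits_to_value([4, 2], 10))             # 42 = 4x10^1 + 2x10^0
print(digits_to_value([1, 0, 1, 0, 1, 0], 2))  # 42 = 2^5 + 2^3 + 2^1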
Programmers use binary for bitwise operations (AND, OR, XOR, NOT, shifts), bit flags, permissions, and low-level hardware control. Most programming languages support binary literals (0b1010), hex (0xFF), and sometimes octal (0o77). Understanding binary is essential for systems programming, embedded development, and network engineering.
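The literals and bitwise operators mentioned above look like this in Python (other languages use near-identical syntax):

```python
# Bit flags built from binary literals; the names are illustrative.
READ  = 0b100   # 4
WRITE = 0b010   # 2
EXEC  = 0b001   # 1

perms = READ | WRITE           # OR sets flags -> 0b110 (6)
print((perms & READ) != 0)     # AND tests a flag -> True
print((perms & EXEC) != 0)     # False: EXEC not set
print(perms ^ WRITE)           # XOR toggles a flag -> 0b100 (4)
print(0xFF, 0o77)              # hex and octal literals -> 255 63
```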
Hexadecimal is popular because each hex digit represents exactly 4 bits (a nibble). This makes it compact while maintaining easy binary conversion. Color codes (#FF0000), memory addresses (0x7FFF0000), and MAC addresses (00:1A:2B:3C:4D:5E) all use hexadecimal.
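Because each hex digit is one nibble, two hex digits are exactly one byte — which is why a color code splits cleanly into its channels. A short sketch (Python used illustratively):

```python
color = 0xFF0000            # the color code #FF0000 as an integer
red   = (color >> 16) & 0xFF  # top byte
green = (color >> 8) & 0xFF   # middle byte
blue  = color & 0xFF          # bottom byte
print(red, green, blue)       # 255 0 0
```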
Repeatedly divide by 2 and record the remainder. Read the remainders from bottom to top. Example: 13 ÷ 2 = 6 R 1, 6 ÷ 2 = 3 R 0, 3 ÷ 2 = 1 R 1, 1 ÷ 2 = 0 R 1 → 1101.
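The division steps above can be traced programmatically (a small Python sketch):

```python
def to_binary_with_trace(n: int) -> str:
    """Print each division step, then return the bits read bottom-up."""
    remainders = []
    while n > 0:
        print(f"{n} ÷ 2 = {n // 2} R {n % 2}")
        remainders.append(n % 2)
        n //= 2
    return "".join(str(r) for r in reversed(remainders)) or "0"

print(to_binary_with_trace(13))  # 1101
```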
Electronic circuits have two stable states: on (1) and off (0). Binary maps directly to these states, making it robust against electrical noise. Higher bases would require distinguishing more voltage levels, increasing error rates.
An unsigned byte holds at most 255 (1111 1111 in binary). For signed 8-bit values (two's complement), the range is -128 to 127.
Group binary digits into groups of 4 from right to left, then convert each group to its hex equivalent. Example: 1010 1100 = A C = 0xAC.
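In practice, language built-ins do the grouping for you; for instance, in Python:

```python
b = "10101100"
value = int(b, 2)            # parse binary string -> 172
print(format(value, "X"))    # AC (uppercase hex, no prefix)
print(hex(value))            # 0xac
```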
Octal is commonly used in Unix/Linux file permissions (e.g., chmod 755) and some legacy computing systems. Each octal digit represents exactly 3 binary bits.
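The octal-to-permissions mapping is easy to verify with an octal literal (Python shown; most languages accept a similar form):

```python
mode = 0o755                 # chmod 755: rwx r-x r-x
print(format(mode, "o"))     # 755
print(format(mode, "b"))     # 111101101 -> 111 101 101
# Each octal digit is exactly 3 bits: 7 = 111 (rwx), 5 = 101 (r-x).
```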
A byte is 8 bits, representing values 0-255 in unsigned notation. A nibble is 4 bits (half a byte), representing values 0-15 (one hex digit).