Zig's @bitCast

Jan 03, 2025

In an older post, we explored Zig's @ptrCast and then looked at a concrete usage in the shape of std.heap.MemoryPool.

As a brief recap, when we use @ptrCast, we're telling the compiler to treat a pointer as a given type. For example, this code runs fine:

const std = @import("std");

pub fn main() !void {
    var user = User{.power = 9001, .name = "Goku", .active = true};

    const cat: *Cat = @ptrCast(&user);
    std.debug.print("{any}\n", .{cat});
}

const User = struct {
    power: i64,
    name: []const u8,
    active: bool,
};

const Cat = struct {
    id: i64,
    breed: []const u8,
};

The compiler knows that user is a User, but we're saying "trust me and treat that memory as though it's a Cat". The reason this "works" is that User and Cat have a similar shape. If we swapped the order of Cat's two fields, the code would almost certainly crash. Why? Because user.power would become cat.breed.len, causing std.debug.print to try to print a 9001-byte-long string, reaching into memory it does not have access to.
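As a sketch of that failure mode, here's a hypothetical CatSwapped variant with the field order reversed (don't expect this to run cleanly; the point is that it reads memory it shouldn't, and since struct layout isn't guaranteed, exactly which bytes overlay which field can vary):

const std = @import("std");

pub fn main() !void {
    var user = User{.power = 9001, .name = "Goku", .active = true};

    // Same cast as before, but CatSwapped's fields no longer line up
    // with User's, so breed's pointer and length come from unrelated
    // bytes of user.
    const cat: *CatSwapped = @ptrCast(&user);

    // Trying to print breed as a string will read from memory that
    // was never a string - a likely crash.
    std.debug.print("{s}\n", .{cat.breed});
}

const User = struct {
    power: i64,
    name: []const u8,
    active: bool,
};

// Hypothetical variant of Cat with its two fields swapped.
const CatSwapped = struct {
    breed: []const u8,
    id: i64,
};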

It's possible that you'll never use @ptrCast, but I find it a fascinating example of how the compiler works.

As the name implies, @ptrCast works with pointers, but what if we want to do something similar for non-pointer values? Unfortunately, if we modify the above, removing the pointers and using @bitCast instead of @ptrCast, we get a compile-time error:

pub fn main() !void {
    const user = User{.power = 9001, .name = "Goku", .active = true};
    const cat: Cat = @bitCast(user);
    std.debug.print("{any}\n", .{cat});
}

cannot @bitCast to 'Cat'; struct does not have a guaranteed in-memory layout

I don't think there's a technical reason the above doesn't work. It's just that, given that Zig structures don't have a guaranteed memory layout, it isn't a good idea and is thus forbidden (although I can't quite figure out why @ptrCast allows it but @bitCast doesn't!).

This does work though:

pub fn main() !void {
    const n: i64 = 1234567890;
    const f: f64 = @bitCast(n);
    std.debug.print("{}\n", .{f});
}

And prints 6.09957582e-315. If we try this between a boolean and an integer, we'll get an error:

pub fn main() !void {
    const b = true;
    const n: i64 = @bitCast(b);
    std.debug.print("{d}\n", .{n});
}

@bitCast size mismatch: destination type 'i64' has 64 bits but source type 'bool' has 1 bits

If we change the type of n from i64 to u1, it works:

pub fn main() !void {
    const b = true;
    const n: u1 = @bitCast(b);
    std.debug.print("{d}\n", .{n});
}

This prints 1. If we changed true to false, we'd get 0.

@bitCast tells the compiler to take a value and treat it as the given type. For example, say that we have the value 1000 as u16. The binary representation for that is 1111101000. If we @bitCast this to an f16, it's the same data, the same binary 1111101000, but interpreted as a float (which is, apparently, 5.96e-5).
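We can check this directly. A minimal sketch:

const std = @import("std");

pub fn main() !void {
    const n: u16 = 1000; // binary: 1111101000
    const f: f16 = @bitCast(n);
    // Same 16 bits, now interpreted as an IEEE-754 half-precision
    // float - a subnormal value around 5.96e-5.
    std.debug.print("{}\n", .{f});
}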

Obviously, you generally won't @bitCast from ints to floats or booleans. But it is useful when dealing with binary data. For example, the 4-byte little-endian representation of the number 1000 is [4]u8{ 232, 3, 0, 0 }. We can use @bitCast to take that binary representation and treat it as an integer:

pub fn main() !void {
    const data = [_]u8{232, 3, 0, 0};
    const n: i32 = @bitCast(data);
    std.debug.print("{}\n", .{n});
}

There are two important things to keep in mind. The first is that this isn't a conversion. There isn't runtime code executing that takes data and turns it into an integer. Rather, this is us telling the compiler to treat the data as an integer.

Secondly, there are various ways to represent any type of data. Above, we said that 1000 is represented as [_]u8{232, 3, 0, 0}, but it could also be represented as [_]u8{0, 0, 3, 232} (i.e. using big-endian) - and there are more than just these two ways. So @bitCast only makes sense if you're sure about the bit-representation of the data. Generally, you only want to @bitCast data that comes from your own running program. If you're parsing an external protocol, you should use std.mem.readInt instead. readInt lets you specify the endianness of the data and, interestingly, if the endianness of the data happens to be the same as the endianness of the platform, it ends up just being a call to @bitCast.
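A sketch of that readInt approach, where the endianness is passed as an enum literal (.little or .big):

const std = @import("std");

pub fn main() !void {
    const data = [_]u8{232, 3, 0, 0};
    // Explicitly state that the bytes are little-endian. On a
    // little-endian platform this compiles down to a @bitCast.
    const n = std.mem.readInt(i32, &data, .little);
    std.debug.print("{d}\n", .{n}); // 1000
}

Because the endianness is stated at the call site, this code gives the same answer on any platform, which is exactly what you want when parsing an external format.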