Hacker News

You could probably do that relatively safely with bit operations, but I expect to be proved wrong.

Something like this should work:

    /* Swap char for uint<register_size> for speed */
    #define CMP_TYPE char

    /* Assuming char* sha256(char* input); */
    CMP_TYPE* input_hash=(CMP_TYPE*) sha256(input);
    CMP_TYPE* token_hash=(CMP_TYPE*) sha256(token); /* Precomputed? */
    /* max_pos=32 for char on most systems */
    int max_pos=256 / (8 * sizeof(CMP_TYPE)); /* 256 bits / bits per element */

    CMP_TYPE cmp=0; /* must be as wide as CMP_TYPE, or high bytes are dropped */
    for (int i=0;i<max_pos;i++) {
        cmp = cmp | (token_hash[i] ^ input_hash[i]);
    }
    return (cmp != 0);
The reason for the CMP_TYPE #define is so that you can optimise the comparison by replacing char with uint32_t or uint64_t, cutting the loop from 32 iterations to 8 or 4.
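A minimal self-contained sketch of the same idea, widened to uint64_t. The function name `ct_hash_neq` is hypothetical, and it assumes 32-byte SHA-256 digests; `memcpy` is used instead of the pointer cast above, since casting a char* to uint64_t* risks alignment and strict-aliasing problems:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical constant-time inequality test for two 32-byte SHA-256
 * digests. ORs together the XOR of each 64-bit word, so every byte is
 * examined regardless of where the first difference occurs. */
static int ct_hash_neq(const unsigned char a[32], const unsigned char b[32]) {
    uint64_t cmp = 0;
    for (size_t i = 0; i < 32; i += sizeof(uint64_t)) {
        uint64_t wa, wb;
        memcpy(&wa, a + i, sizeof wa); /* avoids unaligned uint64_t* loads */
        memcpy(&wb, b + i, sizeof wb);
        cmp |= wa ^ wb;
    }
    return cmp != 0; /* nonzero iff the digests differ, like strcmp */
}
```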


Again, the compiler is free to optimize that to data-dependent time. It's relatively unlikely to do so on current machines (the time saved is unlikely to be worth the potential of a pipeline stall), but it's free to do so.

For instance, I'm pretty sure it's legal for the compiler to insert an early-exit shortcut at the start (`if input == token: return false`, or rather the memcmp equivalent of that), since the observable result is identical.
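One common mitigation (a deterrent, not a guarantee) is to force every load through a volatile-qualified pointer. The compiler must then perform each read, so it cannot legally short-circuit the loop; this constrains the compiler only, not the hardware. The name `ct_neq_volatile` is illustrative:

```c
#include <stddef.h>

/* Sketch: volatile reads oblige the compiler to touch every byte of both
 * buffers, ruling out an early-exit rewrite of the loop. */
static int ct_neq_volatile(const unsigned char *a, const unsigned char *b,
                           size_t n) {
    const volatile unsigned char *va = a;
    const volatile unsigned char *vb = b;
    unsigned char cmp = 0;
    for (size_t i = 0; i < n; i++)
        cmp |= va[i] ^ vb[i];
    return cmp != 0; /* nonzero iff the buffers differ */
}
```

Libraries that need a hard guarantee (e.g. OpenSSL's CRYPTO_memcmp) typically combine tricks like this with per-platform review of the generated code.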



