I need to swap two variables without using XOR or arithmetic operations. All I can use are bitwise operations like ~, &, |, <<, >>, etc. I understand the XOR approach, but can't figure out any other way around this. EDIT: Temporary variables…
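One answer that fits the constraints: XOR itself can be synthesized from AND, OR, and NOT, after which the usual XOR-swap works unchanged. A sketch in Python (whose ints behave like infinite-width two's complement, so ~ works for negatives too):

```python
def bxor(a, b):
    # x ^ y == (x | y) & ~(x & y): the bits set in exactly one operand
    return (a | b) & ~(a & b)

def swap(a, b):
    # The classic XOR swap, with XOR built from AND/OR/NOT only
    a = bxor(a, b)
    b = bxor(a, b)
    a = bxor(a, b)
    return a, b
```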

I'm writing C++ code in an environment in which I don't have access to the C++ standard library, specifically not to std::numeric_limits. Suppose I want to implement template <typename T> constexpr T all_ones( /* ... */ ). Focusing on unsigned integral types…
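For unsigned integral types the usual C++ trick is static_cast<T>(~T(0)): complement of zero, with wraparound defined for unsigned types. A width-based Python sketch of the same idea (the helper name is hypothetical):

```python
def all_ones(bits):
    # An all-ones value for an unsigned type of the given bit width,
    # equivalent in effect to C++'s static_cast<T>(~T(0)).
    return (1 << bits) - 1
```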

I'm curious to know what actually happens on a bitwise comparison using binary literals. I just came across the following thing: byte b1 = (new Byte("1")).byteValue(); // check the bit representation System.out.println(String.format("%8s"…

I am writing a simple BigInteger type in Delphi. This type consists of an array of unsigned 32 bit integers (I call them limbs), a count (or size) and a sign bit. The value in the array is interpreted as an absolute value, so this is a sign-magnitude representation…

I didn't have an education in programming; I learned on my own. But what I couldn't find on the internet is the difference between a flag and a mask. I understand the logic of bitwise operators, I just don't understand the terminology, i.e.,…
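The distinction is conventional rather than technical: a flag is a single named bit carrying one boolean of state, while a mask is any bit pattern used to select, test, or clear bits. A small sketch (names are illustrative):

```python
# Each flag is a single named bit.
READ, WRITE, EXECUTE = 0x1, 0x2, 0x4

# A mask is any bit pattern used to select bits -- here "read or write".
RW_MASK = READ | WRITE

perms = READ | EXECUTE
print(bool(perms & READ))     # test one flag
print(bool(perms & RW_MASK))  # test against a mask (READ is set)
perms &= ~WRITE               # use a mask (~WRITE) to clear a flag
```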

var reverseBits = function(n) { var re = 0; for( var i = 0; i < 32; i++ ) { re = (re << 1) | (n & 1); n >>>= 1; } return re; }; This is my code to reverse bits in JavaScript, but when n = 1, it gives -2147483648 (binary 1 followed by 31 zeros, read as a negative signed 32-bit value)…
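The bits are correct; the issue is that JavaScript's << produces a signed 32-bit result, so the top bit makes it negative. Finishing with `return re >>> 0;` reinterprets it as unsigned. A Python sketch of the same loop, where the result naturally stays unsigned:

```python
def reverse_bits(n):
    # Reverse the 32 bits of n. In JS you would end with `re >>> 0`
    # to reinterpret the signed 32-bit result as unsigned.
    re = 0
    for _ in range(32):
        re = (re << 1) | (n & 1)
        n >>= 1
    return re
```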

Here is some code I have been looking at: public static long getUnsignedInt(ByteBuffer buff) { return (long) (buff.getInt() & 0xffffffffL); } Is there any reason to do buff.getInt() & 0xffffffffL (0xffffffffL has 32 bits of 1's in the 32 least significant positions)…
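Yes: getInt() returns a signed int, and widening a negative int to long sign-extends, filling the upper 32 bits with ones. The & 0xffffffffL keeps only the low 32 bits, yielding the unsigned value. A Python sketch of the same reinterpretation:

```python
def to_unsigned32(signed):
    # Widening a negative 32-bit int sign-extends (Java: (long) -1 == -1L);
    # masking with 0xFFFFFFFF keeps just the low 32 bits.
    return signed & 0xFFFFFFFF

print(to_unsigned32(-1))   # 4294967295
print(to_unsigned32(123))  # 123
```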

How would one rotate a 4 bit binary number 4 places using only AND, OR, XOR gates? The inputs could be called x_0, x_1, x_2, x_3 where x_3 is MSB and x_0 is LSB. For example 1010 rotated right 4 places would be 0101. I can't seem to find any sources
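One thing worth noting first: rotating a 4-bit value by exactly 4 places is the identity (every bit returns to its starting position), so 1010 rotated right 4 places is 1010; 1010 rotated right 1 place is 0101. In gate terms, shifts are free rewiring, and a rotate by k combines two shifted copies with OR under an AND mask. A Python sketch:

```python
def rotr4(x, k):
    # Rotate a 4-bit value right by k positions using only shifts
    # (free rewiring in hardware), OR, and an AND mask.
    k %= 4
    return ((x >> k) | (x << (4 - k))) & 0b1111
```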

So in my table I have id and bitwise columns like so:

+----+---------+
| id | bitwise |
+----+---------+
|  1 |       1 |
|  2 |       6 |
|  4 |      60 |
+----+---------+

From my C# code I'm setting these names to these binary values: Name1 = 0x0001, Name2 = 0x0002, Nam…

For some reason, I am simply not understanding (or seeing) why this works: UInt32 a = 0x000000FF; a &= ~(UInt32)0x00000001; but this does not: UInt16 a = 0x00FF; a &= ~(UInt16)0x0001; it gives the error 'constant value -(some number) cannot be converted…'
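The cause is integer promotion: in C#, applying ~ to a UInt16 first promotes it to int, so ~(UInt16)0x0001 is the negative int -2, which cannot be implicitly converted back to UInt16 (the UInt32 case stays a 32-bit unsigned operation, so it compiles). Masking the complement back to 16 bits is the fix; a Python sketch of that truncation (Python's ~ is likewise conceptually wider than 16 bits):

```python
def not16(x):
    # Python's ~x is infinite-width (like C#'s promotion to int);
    # AND with 0xFFFF truncates back to an unsigned 16-bit complement.
    return ~x & 0xFFFF

a = 0x00FF
a &= not16(0x0001)
print(hex(a))  # 0xfe
```

The C# equivalent is to perform the complement and cast in one go, e.g. with an unchecked cast back to ushort.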

I've read the other posts on BitArray conversions and tried several myself, but none seem to deliver the results I want. My situation is as such: I have some C# code that controls an LED strip. To issue a single command to the strip I need at most 28…

I'm a little confused on how to normalize numbers in C. I know that if you have something like the floating-point binary value 1101.101, it is normalized as 1.101101 x 2^3 by moving the decimal point 3 positions to the left. However, I am not sure how…
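In C the standard helper for this is frexp() from <math.h>, which splits a double into a mantissa in [0.5, 1) and a power-of-two exponent; rescaling gives the IEEE-style 1.xxx form. A Python sketch using the same-named function (1101.101b is 13.625):

```python
import math

# 1101.101b == 13.625; normalized form is 1.101101b x 2^3.
m, e = math.frexp(13.625)          # m in [0.5, 1): 0.8515625 * 2**4
mantissa, exponent = m * 2, e - 1  # rescale to IEEE-style 1.xxx * 2**e
print(mantissa, exponent)          # 1.703125 3 (1.101101b == 1.703125)
```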

floatToIntBits and intBitsToFloat are methods in Java's Float class. Does Scala have those functionalities? Since Scala is a JVM language, you can access any and all features of whichever Java runtime you're using. This is a trait of all languages tha…
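So in Scala you can simply call java.lang.Float.floatToIntBits(x) and java.lang.Float.intBitsToFloat(i) directly. For comparison, a Python sketch of the same bit reinterpretation using struct:

```python
import struct

def float_to_int_bits(f):
    # Reinterpret the 4 bytes of an IEEE-754 float as an unsigned int,
    # like java.lang.Float.floatToIntBits.
    return struct.unpack('>I', struct.pack('>f', f))[0]

def int_bits_to_float(i):
    return struct.unpack('>f', struct.pack('>I', i))[0]

print(hex(float_to_int_bits(1.0)))  # 0x3f800000
```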

Can anybody help me to understand how the following code works? I know it will return 1 for an odd number and 0 for an even number. echo (7 & 1); // result 1 echo (6 & 1); // result 0 I think the numbers are converted to binary. Please correct me if I'm wrong…
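That's right: & compares the numbers bit by bit, and masking with 1 keeps only the least significant bit, which is 1 exactly for odd numbers. The same in Python:

```python
# n & 1 keeps only the least significant bit, which is 1 exactly
# for odd numbers: 7 = 0b111 -> 1, 6 = 0b110 -> 0.
for n in (7, 6):
    print(n & 1)
```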

I am making my own simple drawing engine. I am trying to determine if a variable has been set to a specific value using what I think is called a bitwise comparison, but I may be wrong. I've always been a bit confused about what the following is and how I…

Given a matrix of size n x m filled with 0's and 1's, e.g.:

1 1 0 1 0
0 0 0 0 0
0 1 0 0 0
1 0 1 1 0

if the matrix has 1 at (i,j), fill column j and row i with 1's, i.e., we get:

1 1 1 1 1
1 1 1 1 0
1 1 1 1 1
1 1 1 1 1

Required complexity: O(n*m) ti…
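The standard O(1)-extra-space approach (assuming that's what the truncated complexity requirement asks for) reuses the first row and first column as marker storage, after remembering whether they originally contained a 1 themselves. A sketch:

```python
def fill_cross(m):
    n, cols = len(m), len(m[0])
    # Remember whether the first row/column themselves need filling,
    # since we are about to overwrite them with markers.
    row0 = any(m[0][j] for j in range(cols))
    col0 = any(m[i][0] for i in range(n))
    # Pass 1: for each 1 in the interior, mark its row and column
    # in the first column / first row.
    for i in range(1, n):
        for j in range(1, cols):
            if m[i][j]:
                m[i][0] = 1
                m[0][j] = 1
    # Pass 2: fill interior cells whose row or column was marked.
    for i in range(1, n):
        for j in range(1, cols):
            if m[i][0] or m[0][j]:
                m[i][j] = 1
    # Finally handle the first row and column themselves.
    if row0:
        for j in range(cols):
            m[0][j] = 1
    if col0:
        for i in range(n):
            m[i][0] = 1
    return m
```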

I've been trying to parse them for a couple days, and I can't quite grok it. Here they are:

int left = S->buflen >> 3;
int fill = 64 - left;
if(left && (((datalen >> 3) & 0x3F) >= (unsigned)fill)){ some code here }

If it helps…
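Assuming this is hash-function buffering code that tracks lengths in bits (the 64 and 0x3F suggest a 64-byte block): >> 3 divides by 8, converting bits to bytes, and & 0x3F is length mod 64, since 0x3F == 64 - 1. A Python sketch of the arithmetic with illustrative values:

```python
BLOCK = 64  # block size in bytes, so 0x3F == BLOCK - 1

buflen_bits = 200    # illustrative value: 25 bytes already buffered
datalen_bits = 1234  # illustrative value: incoming data length in bits

left = buflen_bits >> 3                # bits -> bytes: 200 / 8 = 25
fill = BLOCK - left                    # bytes needed to complete the block
in_block = (datalen_bits >> 3) & 0x3F  # data length in bytes, mod 64
print(left, fill, in_block)
```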

I can't seem to find logical negation of integers as an operator anywhere in Python. Currently I'm using this: def not_(x): assert x in (0, 1) return abs(1-x) But I feel a little stupid. Isn't there a built-in operator for this? The logical negation
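There is no dedicated operator, but for values restricted to 0 and 1, XOR with 1 flips the bit; `1 - x` and `int(not x)` are equivalent alternatives, and all avoid the abs() detour:

```python
def not_(x):
    # For x in {0, 1}, XOR with 1 flips it: 0 -> 1, 1 -> 0.
    return x ^ 1
```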

I'm working on a program that generates a lot of instances of a class (millions). The class is really simple; it just holds 3 floats. Now the range these floats can live in exists between 0 and 1 (color values) and most often they are pretty simple values…
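If 8 bits per channel is enough precision (a big "if", stated here as an assumption), the three [0,1] floats can be quantized and packed into a single 24-bit integer, 3 bytes of payload instead of 12. A sketch:

```python
def pack_rgb(r, g, b):
    # Quantize each [0,1] float to 8 bits and pack into one 24-bit int.
    to8 = lambda f: int(round(f * 255)) & 0xFF
    return (to8(r) << 16) | (to8(g) << 8) | to8(b)

def unpack_rgb(p):
    # Recover approximate channel values (quantized to 1/255 steps).
    return tuple(((p >> s) & 0xFF) / 255 for s in (16, 8, 0))

print(hex(pack_rgb(1.0, 0.5, 0.0)))
```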

Sought is an efficient algorithm that finds the unique integer in an interval [a, b] which has the maximum number of trailing zeros in its binary representation (a and b are integers > 0): def bruteForce(a: Int, b: Int): Int = (a to b).maxBy(Integer.numberOfTrailingZeros)
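One O(log b) approach: starting from a, the expression (x | (x - 1)) + 1 replaces the lowest set bit and everything below it with a single higher bit, strictly increasing the trailing-zero count; greedily repeat while the result stays within [a, b]. A sketch:

```python
def max_trailing_zeros(a, b):
    # (x | (x - 1)) sets all bits below the lowest set bit; adding 1
    # then carries into the next position, so each step strictly
    # increases the number of trailing zeros.
    x = a
    while (x | (x - 1)) + 1 <= b:
        x = (x | (x - 1)) + 1
    return x
```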

Suppose I have x & (num-1), where x is an unsigned long long and num a regular int, and & is the bitwise AND operator. I'm getting a significant speed reduction as the value of num increases. Is that normal behavior? These are the other parts of the…

I'm trying to set bits in a Java byte variable. It doesn't provide proper methods like .setBit(i). Does anybody know how I can realize this? I can iterate bit-wise through a given byte: if( (my_byte & (1 << i)) == 0 ){ } However I cannot set this position…

I have found one example in the Data Communications and Networking book written by Behrouz A. Forouzan regarding upper- and lowercase letters which differ by only one bit in the 7 bit code. For example, character A is 1000001 (0x41) and character a is 1100001 (0x61)…
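The differing bit is bit 5 (value 0x20), which is why XOR with 0x20 toggles the case of an ASCII letter, and OR/AND NOT with it force lower/upper case. A quick demonstration:

```python
# ASCII upper- and lowercase letters differ only in bit 5 (0x20):
# 'A' = 0x41 = 1000001b, 'a' = 0x61 = 1100001b.
def toggle_case(ch):
    return chr(ord(ch) ^ 0x20)

print(toggle_case('A'), toggle_case('a'))  # a A
```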

Ex.

typedef struct {
    bool streamValid;
    dword dateTime;
    dword timeStamp;
    stream_data[800];
} RadioDataA;

Where stream_data[800] contains:

Variable      Length (in bits)
packetID      8
packetL       8
versionMajor  4
versionMinor  4
radioID       8
etc..

I need t…

Assuming I have char 'C', whose ASCII code is 0100 0011 (0x43). How can I iterate over its bits? I would like to build a vector from these 1's and 0's.... You can easily iterate over them using bitwise operators: char c = 'C'; for (int i = 0; i < 8; ++i) { bits.push_back((c >> i) & 1); }
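The same shift-and-mask loop in Python, collecting least-significant bit first (reverse the range for MSB-first order):

```python
c = ord('C')  # 0x43 = 0100 0011
# Shift bit i down to position 0, then mask it out; LSB first.
bits = [(c >> i) & 1 for i in range(8)]
print(bits)  # [1, 1, 0, 0, 0, 0, 1, 0]
```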

Given input of 0 to 32, representing the number of one-bits in an IPv4 network mask (corresponding to a CIDR block size, as in /19), what's (a) an elegant way and (b) a fast way to turn that into a four-byte netmask?
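One expression covers both goals: shift 32 ones left so that only the prefix length remains in the top bits, then split into octets. The prefix-0 case needs special handling, since shifting a 32-bit value by 32 is undefined in C-family languages. A sketch:

```python
def cidr_to_netmask(prefix):
    # Shift 32 ones left so only `prefix` of them remain in the top bits;
    # the final & confines the result to 32 bits. prefix == 0 must be
    # special-cased (a 32-bit shift by 32 is undefined in C/C#/Java).
    mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF if prefix else 0
    return tuple((mask >> s) & 0xFF for s in (24, 16, 8, 0))

print(cidr_to_netmask(19))  # (255, 255, 224, 0)
```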

What is the best solution for getting the base 2 logarithm of a number that I know is a power of two (2^k)? (Of course I know only the value 2^k, not k itself.) One way I thought of doing it is by subtracting 1 and then doing a bitcount: lg2(n) = bitcount(n - 1)…
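That works: for n == 2^k, n - 1 has exactly k one-bits. In Python the bitcount variant looks like this (n.bit_length() - 1 is an equivalent built-in shortcut; in C, compilers typically offer a count-trailing-zeros intrinsic that does the job in one instruction):

```python
def lg2(n):
    # For n == 2**k, n - 1 is k consecutive one-bits, so popcount(n-1) == k.
    return bin(n - 1).count('1')

print(lg2(1), lg2(8), lg2(1024))  # 0 3 10
```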

How do I bitwise shift right/left in VB.NET? Does it even have operators for this, or do I have to use some utility method?VB.NET has had bit shift operators (<< and >>) since 2003.

I have an existing data set that utilizes an integer to store multiple values; the legacy front end did a simple bitwise check (e.g., in C#: (iValues & 16) == 16) to see if a particular value was set. Is it possible to do bitwise operations in XSL, and…
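XSLT 1.0 has no bitwise operators, but a power-of-two flag test can be emulated with integer division and modulo: in XPath, floor(value div flag) mod 2 = 1. A Python sketch of the same arithmetic (the function name is illustrative):

```python
def flag_set(value, flag):
    # flag must be a power of two. Integer-dividing by it shifts the
    # flag's bit down to position 0; mod 2 then reads that bit.
    # XPath 1.0 equivalent: floor(value div flag) mod 2 = 1
    return (value // flag) % 2 == 1

print(flag_set(21, 16), flag_set(21, 2))  # 21 = 10101b
```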

I saw the following posted by a fellow Stack Overflow user, and it sort of dumbfounds me. Would someone explain the shifting operations in the following code snippet: std::vector<bool> a; a.push_back(true); a.push_back(false); //... for (auto it…