Swapping two variables without XOR

I need to swap two variables without using XOR or arithmetic operations. All I can use are bitwise operations like ~, &, |, <<, >>, etc. I understand the XOR approach, but can't figure out another way around this. EDIT: Temporary variab
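
One way to approach this (a sketch, not necessarily the asker's eventual solution): XOR itself can be rebuilt from AND, OR and NOT, since x ^ y == (x & ~y) | (~x & y), after which the classic three-step XOR swap applies unchanged. A minimal Java illustration:

    // x ^ y expressed using only &, | and ~
    static int xor(int x, int y) {
        return (x & ~y) | (~x & y);
    }

    // the usual XOR swap, with the synthesized XOR
    a = xor(a, b);
    b = xor(a, b);
    a = xor(a, b);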

Bitwise xor behavior using binary literals

I'm curious to know what actually happens in a bitwise comparison using binary literals. I just came across the following thing: byte b1 = (new Byte("1")).byteValue(); // check the bit representation System.out.println(String.format("%8s"
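
What the excerpt is presumably printing: a Java byte is promoted to a sign-extended int before any bitwise operation, so masking with 0xFF is the usual way to look at just the 8-bit pattern. A small sketch of that behaviour (the format call mirrors the one in the question):

    byte b1 = (new Byte("1")).byteValue();     // 0000_0001
    byte b2 = (byte) -1;                       // 1111_1111 as a byte

    // the byte is promoted to int with sign extension; & 0xFF keeps only the low 8 bits
    System.out.println(String.format("%8s", Integer.toBinaryString(b1 & 0xFF)).replace(' ', '0'));  // 00000001
    System.out.println(String.format("%8s", Integer.toBinaryString(b2 & 0xFF)).replace(' ', '0'));  // 11111111
    System.out.println(Integer.toBinaryString(b2));  // 32 ones: the sign-extended int view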

Bit operations on big integers: handling the sign

I am writing a simple BigInteger type in Delphi. This type consists of an array of unsigned 32-bit integers (I call them limbs), a count (or size) and a sign bit. The value in the array is interpreted as the absolute value, so this is a sign-magnitude rep

Terminology: What is a mask and what is a flag?

I don't have a formal education in programming; I learned on my own. But what I couldn't find on the internet is the difference between a flag and a mask. I understand the logic of bitwise operators, I just don't understand the terminology. i.e.: in
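
A common way the two terms are distinguished (illustrative Java, the names below are made up): a flag is a single named bit that encodes one yes/no property, while a mask is any bit pattern you AND/OR against a value to select, clear or test a group of bits.

    static final int FLAG_READ  = 1 << 0;   // 0b001
    static final int FLAG_WRITE = 1 << 1;   // 0b010
    static final int FLAG_EXEC  = 1 << 2;   // 0b100
    static final int PERMISSION_MASK = FLAG_READ | FLAG_WRITE | FLAG_EXEC;  // 0b111

    int state = FLAG_READ | FLAG_EXEC;
    boolean canWrite = (state & FLAG_WRITE) != 0;   // test a single flag
    int onlyPerms    = state & PERMISSION_MASK;     // mask out everything but the permission bits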

Reverse Bits: Why does it give me a negative value?

var reverseBits = function(n) { var re = 0; for( var i = 0; i < 32; i++ ) { re = (re << 1) | (n & 1); n >>>= 1; } return re; }; This is my code to reverse bits in JavaScript, but when n = 1, it gives -2147483648 (-10000000000000000000
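
The negative value comes from JavaScript's bitwise operators working on 32-bit signed integers, so the pattern 1000...0 reads as -2147483648; the usual fix is an unsigned shift at the end (return re >>> 0). The same signed/unsigned view of that bit pattern can be seen in Java, sketched here:

    int reversed = Integer.reverse(1);           // bit pattern 1000...0000
    System.out.println(reversed);                 // -2147483648 (signed 32-bit view)
    System.out.println(reversed & 0xFFFFFFFFL);   // 2147483648  (unsigned view, widened to long)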

buff.getInt() & 0xffffffffL is an identity?

Here is some code I have been looking at: public static long getUnsignedInt(ByteBuffer buff) { return (long) (buff.getInt() & 0xffffffffL); } Is there any reason to do buff.getInt() & 0xffffffffL (0xffffffffL has 32 bits of 1's in the 32 least sig
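
It is not an identity for negative ints: when the int meets the long literal it is first widened to long with sign extension, and the mask then clears the 32 copied sign bits, which is exactly what turns a negative int into the corresponding unsigned value. A small check of that difference (a sketch; the (long) cast in the question itself is the redundant part):

    int negative = 0xFFFFFFFE;                   // -2 as a signed int
    System.out.println((long) negative);         // -2          (plain widening sign-extends)
    System.out.println(negative & 0xffffffffL);  // 4294967294  (mask clears the copied sign bits)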

Bitwise rotation with AND, OR, XOR gates

How would one rotate a 4 bit binary number 4 places using only AND, OR, XOR gates? The inputs could be called x_0, x_1, x_2, x_3 where x_3 is MSB and x_0 is LSB. For example 1010 rotated right 4 places would be 0101. I can't seem to find any sources
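
Two observations that usually come up here (my reading, not from the excerpt): rotating a 4-bit value by 4 positions brings every bit back to where it started, so that case is the identity, and a rotation by k positions can be written purely with shifts and OR before being mapped to gates. A Java sketch of the shift/OR form:

    // rotate the low 4 bits of x right by k positions
    static int rotr4(int x, int k) {
        k &= 3;                                       // a rotation by 4 behaves like a rotation by 0
        return ((x >>> k) | (x << (4 - k))) & 0xF;
    }
    // rotr4(0b1010, 1) == 0b0101, rotr4(0b1010, 4) == 0b1010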

Bitwise SQL operation (MySQL)

So in my table I have id and bitwise columns like so +----+---------+ | id | bitwise | +----+---------+ | 1 | 1 | | 2 | 6 | | 4 | 60 | +----+---------+ From my C# code I'm setting these names to these binary values Name1 = 0x0001, Name2 = 0x0002, Nam

Error when applying a bitwise NOT (~) to a UInt16 in C#

For some reason, I am simply not understanding (or seeing) why this works: UInt32 a = 0x000000FF; a &= ~(UInt32)0x00000001; but this does not: UInt16 a = 0x00FF; a &= ~(UInt16)0x0001; it gives the error 'constant value -(some number) cannot be con
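
The underlying rule is integer promotion: ~ on a 16-bit operand yields a full 32-bit int, so ~(UInt16)0x0001 is the negative constant -2, which cannot be implicitly converted back to UInt16; the usual fix is to do the work in int and cast the whole result, e.g. a = (UInt16)(a & ~0x0001);. Java's 16-bit short shows the same promotion, sketched here for illustration:

    short a = 0x00FF;
    System.out.println(~(short) 0x0001);   // -2: the ~ result is a 32-bit int, not a 16-bit value
    a &= ~0x0001;                          // compound assignment narrows back implicitly
    a = (short) (a & ~0x0001);             // the expanded form needs an explicit cast
    System.out.println(a);                 // 254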

Convert BitArray to a small byte array

I've read the other posts on BitArray conversions and tried several myself but none seem to deliver the results I want. My situation is as such: I have some C# code that controls an LED strip. To issue a single command to the strip I need at most 28

Normalizing Binary Float Values

I'm a little confused about how to normalize numbers in C. I know that if you have something like the floating-point binary value 1101.101, it is normalized as 1.101101 x 2^3 by moving the binary point 3 positions to the left. However, I am not sure ho
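
As a sanity check on the example: 1101.101 in binary is 13.625, and its normalized form 1.101101 x 2^3 has mantissa 1.703125 and exponent 3. The two pieces can be pulled apart like this (a Java sketch; the question itself is about C, where frexp gives a similar decomposition with its mantissa normalized to [0.5, 1) instead):

    double v = 13.625;                            // 1101.101 in binary
    int exponent = Math.getExponent(v);           // 3
    double mantissa = v / Math.pow(2, exponent);  // 1.703125 == 1.101101 in binary
    System.out.println(mantissa + " * 2^" + exponent);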

Does Scala have floatToIntBits and intBitsToFloat methods?

floatToIntBits and intBitsToFloat are methods in Java's Float class. Does Scala have those functionalities? Since Scala is a JVM language, you can access any and all features of whichever Java runtime you're using. This is a trait of all languages tha
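
For reference (an illustration, not part of the answer above), these are the Java methods in question; Scala code can call them directly as java.lang.Float.floatToIntBits / intBitsToFloat:

    int bits = Float.floatToIntBits(1.5f);    // 0x3FC00000
    float back = Float.intBitsToFloat(bits);  // 1.5f again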

How does & work in 7 & 1 in PHP?

Can anybody help me understand how the following code works? I know it will return 1 for an odd number and 0 for an even number. echo (7 & 1); // result 1 echo (6 & 1); // result 0 I think the numbers are converted to their binary representation. Please correct
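
The intuition (shown here in Java, which mirrors the PHP behaviour for this operator): & works on the binary representations, and the lowest bit of a number is 1 exactly when the number is odd.

    // 7 = 0b111, 6 = 0b110, 1 = 0b001
    System.out.println(7 & 1);   // 1 -> 7 is odd
    System.out.println(6 & 1);   // 0 -> 6 is even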

Check if a flag is set in an integer variable

I am making my own simple drawing engine. I am trying to determine if a variable has been set to a specific value using what I think is called bitwise comparison, but I may be wrong. I've always been a bit confused about what the following is and how I

Microsoft Interview: transforming a matrix

Given a matrix of size n x m filled with 0's and 1's e.g.: 1 1 0 1 0 0 0 0 0 0 0 1 0 0 0 1 0 1 1 0 if the matrix has 1 at (i,j), fill the column j and row i with 1's i.e., we get: 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 Required complexity: O(n*m) ti
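
One commonly cited O(n*m)-time, O(1)-extra-space approach (a sketch, not necessarily the interviewer's intended answer) reuses row 0 and column 0 as marker storage:

    static void fillRowsAndColumns(int[][] m) {
        int rows = m.length, cols = m[0].length;
        boolean fillRow0 = false, fillCol0 = false;
        for (int j = 0; j < cols; j++) if (m[0][j] == 1) fillRow0 = true;
        for (int i = 0; i < rows; i++) if (m[i][0] == 1) fillCol0 = true;
        // step 1: record 1s found in the inner cells as markers in row 0 / column 0
        for (int i = 1; i < rows; i++)
            for (int j = 1; j < cols; j++)
                if (m[i][j] == 1) { m[i][0] = 1; m[0][j] = 1; }
        // step 2: fill every inner cell whose row or column was marked
        for (int i = 1; i < rows; i++)
            for (int j = 1; j < cols; j++)
                if (m[i][0] == 1 || m[0][j] == 1) m[i][j] = 1;
        // step 3: handle row 0 and column 0 from the flags saved up front
        if (fillRow0) for (int j = 0; j < cols; j++) m[0][j] = 1;
        if (fillCol0) for (int i = 0; i < rows; i++) m[i][0] = 1;
    }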

What exactly do these 3 lines of C code do?

I've been trying to parse them for a couple days, and I can't quite grok it. Here they are: int left = S->buflen >> 3; int fill = 64 - left; if(left && (((datalen >> 3) & 0x3F) >= (unsigned)fill)){ some code here } If it help
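
One reading of those expressions, with the caveat that it depends on how buflen and datalen are defined in the surrounding code: a right shift by 3 divides by 8 (a bit count becomes a byte count), and & 0x3F is the same as % 64 (the offset inside a 64-byte block). Illustrated in Java:

    int datalenBits = 300;
    System.out.println(datalenBits >> 3);            // 37 == 300 / 8        (bits -> whole bytes)
    System.out.println((datalenBits >> 3) & 0x3F);   // 37 == (300 / 8) % 64 (offset within a 64-byte block)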

Binary negation in Python

I can't seem to find logical negation of integers as an operator anywhere in Python. Currently I'm using this: def not_(x): assert x in (0, 1) return abs(1-x) But I feel a little stupid. Isn't there a built-in operator for this? The logical negation

Condense 3 floats to a uint64_t

I'm working on a program that generates a lot of instances of a class (millions). The class is really simple; it just holds 3 floats. The range these floats live in is between 0 and 1 (color values), and most often they are pretty simple v
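
One plausible direction (an assumption about what "condense" means here, and a lossy one): quantize each [0, 1] float to a fixed number of bits and pack the three fields into a single 64-bit value. A Java sketch with 21 bits per channel:

    static final int QBITS = 21;
    static final long QMAX = (1L << QBITS) - 1;       // 2_097_151 steps per channel

    static long pack(float r, float g, float b) {
        long qr = Math.round(r * QMAX);
        long qg = Math.round(g * QMAX);
        long qb = Math.round(b * QMAX);
        return (qr << (2 * QBITS)) | (qg << QBITS) | qb;
    }

    static float unpack(long packed, int channel) {    // channel: 0 = r, 1 = g, 2 = b
        long q = (packed >>> ((2 - channel) * QBITS)) & QMAX;
        return (float) ((double) q / QMAX);
    }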

Integer in an interval with the maximum number of trailing zero bits

I'm looking for an efficient algorithm that finds the unique integer in an interval [a, b] which has the maximum number of trailing zeros in its binary representation (a and b are integers > 0): def bruteForce(a: Int, b: Int): Int = (a to b).maxBy(Integer.
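
A greedy bit-trick that is often suggested for this (sketched in Java, assuming 0 < a <= b as in the question): keep clearing the lowest set bit of b while the result still lies in the interval; every clear strictly increases the trailing-zero count, so the loop ends on the answer.

    static int mostTrailingZerosIn(int a, int b) {
        int x = b;
        while ((x & (x - 1)) >= a) {
            x &= x - 1;          // clear the lowest set bit
        }
        return x;
    }
    // mostTrailingZerosIn(5, 7) == 6, mostTrailingZerosIn(4, 7) == 4, mostTrailingZerosIn(5, 8) == 8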

Speed of bit operations with bitwise operators

Suppose I have x & (num-1) where x is an unsigned long long and num a regular int, and & is the bitwise AND operator. I'm getting a significant speed reduction as the value of num increases. Is that normal behavior? These are the other parts of the

Set a specific bit in bytes

I'm trying to set bits in a Java byte variable. It doesn't provide proper methods like .setBit(i). Does anybody know how I can achieve this? I can iterate bit-wise through a given byte: if( (my_byte & (1 << i)) == 0 ){ } However I cannot set this po
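
The usual idiom, sketched for a Java byte (the compound assignments take care of the implicit narrowing back from int):

    byte myByte = 0b0000_0101;
    int i = 3;
    myByte |= (1 << i);     // set bit i    -> 0000_1101
    myByte &= ~(1 << i);    // clear bit i  -> 0000_0101
    myByte ^= (1 << i);     // toggle bit i -> 0000_1101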

How do uppercase and lowercase letters differ by one bit?

I have found one example in the Data Communications and Networking book written by Behrouz A. Forouzan regarding upper- and lowercase letters which differ by only one bit in the 7-bit code. For example, character A is 1000001 (0x41) and character a is 11000
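
The bit in question is bit 5 (value 0x20 = 32): setting it lowercases an ASCII letter and clearing it uppercases it, which is why the two codes differ in exactly one position. A small illustration in Java:

    char upper = 'A';                     // 0x41 = 100 0001
    char lower = (char) (upper | 0x20);   // 0x61 = 110 0001 -> 'a'
    char back  = (char) (lower & ~0x20);  // 'A' again
    System.out.println(lower + " " + back);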

Iterate bits of a character

Assuming I have char 'C' whose ASCII code is 0100 0011. How can I iterate over its bits? I would like to build a vector from these 1's and 0's. You can easily iterate over them using bitwise operators: char c = 'C'; for (int i = 0; i < 8; +
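
Filling in that answer's idea (a sketch; the original snippet above is cut off and appears to be C++): shift and mask each position, most significant bit first, and collect the results.

    char c = 'C';                              // ASCII 0x43 = 0100 0011
    int[] bits = new int[8];
    for (int i = 0; i < 8; i++) {
        bits[i] = (c >> (7 - i)) & 1;          // most significant bit first
    }
    System.out.println(java.util.Arrays.toString(bits));   // [0, 1, 0, 0, 0, 0, 1, 1]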

VB6 IPv4 - Calculate net mask (Long) from the number of bits

Given input of 0 to 32, representing the number of one-bits in an IPv4 network mask (corresponding to a CIDR block size as in /19), what's an elegant way to turn that into a four-byte Long net mask, and a fast way to turn that into a four-byte Long net
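
The question asks for VB6, but the underlying bit manipulation is language-independent: start from 32 one-bits and shift the host part out. A Java sketch of the idea:

    static long netmask(int prefixLen) {          // prefixLen in [0, 32]
        if (prefixLen == 0) return 0L;            // guard matters if ported to a 32-bit type, where a shift by 32 wraps
        return (0xFFFFFFFFL << (32 - prefixLen)) & 0xFFFFFFFFL;
    }
    // netmask(19) == 0xFFFFE000L -> 255.255.224.0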

How to get log2 of a number that is 2^k

What is the best solution for getting the base-2 logarithm of a number that I know is a power of two (2^k)? (Of course I only know the value 2^k, not k itself.) One way I thought of doing it is by subtracting 1 and then doing a bit count: lg2(n) = bitcoun
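
Both the subtract-one-then-popcount idea from the question and a trailing-zero count give k directly; in Java terms, for illustration:

    int n = 1 << 13;                                       // known to be a power of two
    System.out.println(Integer.bitCount(n - 1));           // 13: the lg2(n) = bitcount(n - 1) idea
    System.out.println(Integer.numberOfTrailingZeros(n));  // 13: typically maps to a single instruction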

How to bit-shift in VB.NET?

How do I bitwise shift right/left in VB.NET? Does it even have operators for this, or do I have to use some utility method?VB.NET has had bit shift operators (<< and >>) since 2003.

XSLT bitwise logic

I have an existing data set that utilizes an integer to store multiple values; the legacy front end did a simple bitwise check (e.g. in C#: iValues & 16 == 16) to see if a particular value was set. Is it possible to do bitwise operations in XSL, and

Shift operations

I saw the following posted by a fellow Stack Overflow user, and it sort of dumbfounds me. Would someone explain the shifting operations in the following code snippet: std::vector<bool> a; a.push_back(true); a.push_back(false); //... for (auto it