I'm doing bitwise operations on two bytes in a buffer, then storing the result in a variable. However, I sometimes get a non-zero value for the variable even though I'm expecting a zero value.
The relevant portion of the code is as follows.
unsigned int result = 0;
long j = 0, length;
unsigned char *data;
data = (unsigned char *)malloc(sizeof(unsigned char)*800000);
[Code] ......
I'm expecting result to be zero when my data[j] and data[j+1] are 0xb6 and 0xab respectively, which is the case most of the time. However, for certain values of j, my result is strangely not zero.
j = 62910, result = 64
j = 78670, result = 64
j = 100594, result = 64
j = 165658, result = 512
j = 247990, result = 128
j = 268330, result = 512
j = 326754, result = 1
j = 415874, result = 256
j = 456654, result = 1024
j = 477366, result = 512
It appears that these strange result values are all powers of 2, i.e. a single 1 bit set somewhere in the unsigned int.
I'm not changing the value of result anywhere else in the code, and when I print out (unsigned int)(((data[j]^0xb6)<<8)|(data[j+1]^0xab)), I get 0, but somehow when it gets stored in result, it's no longer zero.
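For reference, here is a minimal sketch of what the elided loop presumably looks like; the stride of 2 and the way data gets filled are my assumptions, since that part of the code is cut off.

Code:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    unsigned int result = 0;
    long j, length = 800000;
    unsigned char *data = (unsigned char *)malloc(length);
    if (!data) return 1;

    /* assumed fill: every even byte 0xb6, every odd byte 0xab */
    for (j = 0; j + 1 < length; j += 2) {
        data[j] = 0xb6;
        data[j + 1] = 0xab;
    }

    /* XOR each byte against the expected pattern; both operands are
       promoted to int, so the 8-bit shift cannot overflow here */
    for (j = 0; j + 1 < length; j += 2) {
        result = (unsigned int)(((data[j] ^ 0xb6) << 8) | (data[j + 1] ^ 0xab));
        if (result != 0)
            printf("j = %ld, result = %u\n", j, result);
    }
    free(data);
    return 0;
}

Built this way, the expression can only be non-zero if a byte differs from the pattern at the moment of the read, so stray single-bit values suggest the bytes (or result) are being read at a different time or place than the printout, rather than a flaw in the bitwise logic itself.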
Trying to write 4-byte ints to a binary file and extract them afterwards... I'm using exclusive or (^) to isolate single bytes to write to and extract from the file, since the write() function accepts only chars, but the beginning and end results are not the same...
So, I think that the above expression converts to 0x49 | 0x00 ... and the complete expression should evaluate to 0x49 for me.
But the compiler gives me the result of 0x4949 as two bytes. How does the compiler calculate this expression as two bytes? Can someone show me the steps involved in evaluating this expression?
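Without the exact expression it's hard to trace the 0x4949 step by step, but integer promotion is the usual suspect: char operands of |, ^, and << are promoted to int before the operator is applied, so intermediate results are wider than one byte. A minimal sketch of splitting a 4-byte int into chars and reassembling it with shifts and masks (the variable names are mine):

Code:
#include <stdio.h>

int main(void) {
    unsigned int value = 0x12345678u;
    unsigned char bytes[4];

    /* split: shift the wanted byte down to the bottom, then mask */
    bytes[0] = (value >> 24) & 0xFF;
    bytes[1] = (value >> 16) & 0xFF;
    bytes[2] = (value >> 8)  & 0xFF;
    bytes[3] = value & 0xFF;

    /* reassemble: each bytes[i] is promoted to int before the shift,
       so the shifted value is wider than a char -- the same promotion
       is what lets a "one byte" expression come out as 0x4949 */
    unsigned int back = ((unsigned int)bytes[0] << 24) |
                        ((unsigned int)bytes[1] << 16) |
                        ((unsigned int)bytes[2] << 8)  |
                         (unsigned int)bytes[3];

    printf("0x%08X -> 0x%08X\n", value, back);
    return 0;
}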
How do I do this program? I could easily do it with a simple for loop, but I have to follow these directions (a sketch follows the list):
1. Write a function called bitN() that returns the value of bit N in number, where number is the first parameter, and N is the second. Assume N of the least significant bit is zero and that both parameters are unsigned int's. (A simple one-liner will suffice)
2. Write a main() function that uses bitN() to convert a decimal integer into its binary equivalent. Obtain the integer to convert from the first command-line argument.
3. Use the expression unsigned int numBits = sizeof(unsigned int)*CHAR_BIT; to get the number of bits in an unsigned int. (Include limits.h to get the definition for CHAR_BIT.)
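Here is a minimal sketch of how the three directions could fit together; the error handling is an addition of mine.

Code:
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

/* value of bit n in number; bit 0 is the least significant bit */
unsigned int bitN(unsigned int number, unsigned int n) {
    return (number >> n) & 1u;
}

int main(int argc, char *argv[]) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <decimal integer>\n", argv[0]);
        return 1;
    }
    unsigned int value = (unsigned int)strtoul(argv[1], NULL, 10);
    unsigned int numBits = sizeof(unsigned int) * CHAR_BIT;

    /* print from the most significant bit down to bit 0 */
    for (unsigned int i = numBits; i-- > 0; )
        printf("%u", bitN(value, i));
    printf("\n");
    return 0;
}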
In what order would a CPU process the following arithmetic problem: 5 - (-9) = 14? Would the CPU recognize that the 'minus a minus' combination simply represents 5 + 9 and proceed with that addition, or would it first have to calculate the 2's complement of -9, and then take the 2's complement of that result, to complete the addition of the 'double negative'?
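Roughly the latter, though it happens in the hardware rather than as separate steps: the compiler emits a subtract instruction, and the ALU implements subtraction as addition of the two's complement of the subtrahend, so 5 - (-9) becomes 5 + 9 at the bit level without anything "recognizing" the double negative. A small demonstration (it assumes two's-complement hardware, which virtually every CPU uses):

Code:
#include <stdio.h>

int main(void) {
    int a = 5, b = -9;

    /* two's-complement negation: flip all the bits, then add one */
    int negB = ~b + 1;                            /* 9 */

    printf("5 - (-9)        = %d\n", a - b);      /* 14 */
    printf("5 + (~(-9) + 1) = %d\n", a + negB);   /* 14, same thing */
    return 0;
}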
1. The operands of << and >> may be of any integer type (including char). The integer promotions are performed on both operands; the result has the type of the left operand after promotion.
Does that mean that if we have z = x >> y, then sizeof(z) == sizeof(x)?
2. The ~ operator is unary; the integer promotions are performed on its operand.
So if I have short int y; and int x = 1;, what does y = ~x mean here?
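A small demonstration of both rules; the printed values assume a 32-bit int.

Code:
#include <stdio.h>

int main(void) {
    int x = 1;
    short int y;

    /* x is already an int, so ~x is computed at int width: ~1 == -2;
       the assignment then converts that int back down to short */
    y = ~x;
    printf("~x = %d, y = %d\n", ~x, y);   /* -2 and -2 */

    /* promotion matters more when the operand is narrower than int:
       an unsigned char is promoted to int before ~ is applied */
    unsigned char c = 1;
    printf("~c = %d\n", ~c);              /* -2, not 254 */

    /* for shifts: the type of (x >> y) is the promoted type of x,
       regardless of y; sizeof(z) is whatever z was declared as */
    return 0;
}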
I have a project assignment for school to write a program that does number conversions using bitwise operators. The premise is that the user enters a number with one of three letter prefixes -- Q1232, O6322, H762FA, etc. -- and the program will take that number and convert it to the other two number bases. Q is for quaternary, O is for octal, and H is for hexadecimal. The conversions should be done using bitwise operators and bit shifting. I am guessing I need to scan the number, convert it to binary, then convert the binary to the other two bases.
However, I am completely new to bitwise operators and bit shifting, so I don't know how to convert numbers of different bases to binary, and binary to other bases, using these bitwise operations. I don't have much code done yet, since I am still unsure of how to approach it, but I'll post what little I have.
Here it is:
#include <stdio.h>
#include <string.h>

int main() {
    char numType;
    printf(" The user will enter a number up to 32 digits in quaternary ");
    printf("(base 4), octal (base 8), or hexadecimal (base 16). If in ");
[Code] ....
I figure in each case I can write a function that converts the entered number to binary, then maybe two more functions that convert that binary number to the other bases. For the default case in the switch I will tell the user they entered an invalid number. I don't have the program looping until the user types 'EXIT' yet, but I will once I figure out these bitwise operators.
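Since 4, 8, and 16 are all powers of two, each digit maps to a fixed number of bits (2, 3, or 4), so the whole conversion can be done with shifts and masks and never needs multiplication or division. A minimal sketch with a hard-coded hex input; the prefix parsing and the EXIT loop are left out:

Code:
#include <stdio.h>

/* digit character -> value, for bases up to 16 (uppercase only here) */
static int digitValue(char c) {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

int main() {
    /* hypothetical input, already split into prefix and digits: H762FA */
    const char *digits = "762FA";
    int bitsPerDigit = 4;              /* Q -> 2, O -> 3, H -> 4 */

    /* to binary: shift left by the digit width, OR in each digit */
    unsigned long long value = 0;
    for (const char *p = digits; *p; ++p)
        value = (value << bitsPerDigit) | (unsigned)digitValue(*p);

    /* to another power-of-two base: mask off groups of bits from the
       bottom, collect the digits, then print them in reverse */
    int outBits = 3;                   /* octal */
    char out[65];
    int i = 0;
    do {
        out[i++] = "0123456789ABCDEF"[value & ((1u << outBits) - 1)];
        value >>= outBits;
    } while (value != 0);
    while (i-- > 0)
        putchar(out[i]);
    putchar('\n');
    return 0;
}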
I have a 32-bit integer variable with some value (e.g. 4545) in it. Now I want to read the first 8 bits into a uint8_t, the second 8 bits into another uint8_t, and so on until the last 8 bits.
And while shifting the bits up through the size of the integer, it fills the LSBs with 0s, so once the 1 crosses the limit of the integer I was expecting the output to be 0.
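That last expectation is the catch: shifting a 32-bit type by 32 or more is undefined behaviour in C and C++, not a guaranteed 0 (on x86 the shift count is taken mod 32, so 1u << 32 usually comes out as 1, not 0). A sketch of the byte extraction, with the shift kept inside the legal range:

Code:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t value = 4545;

    /* extract the four bytes, least significant first: shift the
       wanted byte down, and let the cast to uint8_t do the masking */
    for (int i = 0; i < 4; ++i) {
        uint8_t byte = (uint8_t)(value >> (8 * i));
        printf("byte %d = 0x%02X\n", i, byte);
    }
    return 0;
}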
Currently I am trying to convert RGB to HSL. Everything is working but the saturation value. It is always close to the correct value (usually less than 10 off).
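Being consistently a little off usually points at integer truncation somewhere, or at using only one branch of the saturation formula. For comparison, a sketch of the standard definition, assuming 0-255 inputs and saturation reported as 0-100 (both assumptions on my part):

Code:
#include <stdio.h>

/* r, g, b in [0,1]; returns saturation in [0,1] */
static double hslSaturation(double r, double g, double b) {
    double mx = r > g ? (r > b ? r : b) : (g > b ? g : b);
    double mn = r < g ? (r < b ? r : b) : (g < b ? g : b);
    if (mx == mn) return 0.0;          /* grey has no saturation */
    double L = (mx + mn) / 2.0;
    /* note the two branches: the divisor changes at L = 0.5 */
    return (L < 0.5) ? (mx - mn) / (mx + mn)
                     : (mx - mn) / (2.0 - mx - mn);
}

int main(void) {
    /* normalise 0-255 inputs in floating point, not integer math */
    int R = 200, G = 80, B = 80;
    double s = hslSaturation(R / 255.0, G / 255.0, B / 255.0);
    printf("S = %.1f%%\n", s * 100.0);
    return 0;
}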
I am writing a program where I read data from a file into an array and a 2D array. However, when I cout that data to ensure it was all read in correctly, I get only the first full line of the input file (where there are actually 25 rows and 12 columns).
What am I doing wrong, or what should I be doing differently?
ifstream fin;
fin.open("store_data.txt");    // open the input file
// if the input file was opened, read its data into the array and 2D array
if (fin) {
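Seeing only the first line usually means the read loop only ran once or the stream hit a failure state early. A minimal sketch of a nested read loop for a 25x12 grid; the element type and file layout are assumptions:

Code:
#include <iostream>
#include <fstream>
using namespace std;

int main() {
    const int ROWS = 25, COLS = 12;
    double table[ROWS][COLS];

    ifstream fin("store_data.txt");
    if (!fin) {
        cerr << "could not open store_data.txt" << endl;
        return 1;
    }

    // >> skips whitespace and newlines, so one pair of nested loops
    // reads the whole grid no matter how the lines wrap
    for (int r = 0; r < ROWS; ++r)
        for (int c = 0; c < COLS; ++c)
            fin >> table[r][c];

    // echo it back, one row per line
    for (int r = 0; r < ROWS; ++r) {
        for (int c = 0; c < COLS; ++c)
            cout << table[r][c] << ' ';
        cout << '\n';
    }
    return 0;
}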
In a .h file there is a function that takes in this parameter:
void (^callback)(float * arg)=NULL
as in a function definition:
void func(void (^callback)(float * arg)=NULL);
What I am able to read is that it takes a function pointer and, if none is given, defaults to NULL. The part I do not get is the ^ in (^callback); I only know ^ as the bitwise XOR operator. It also generates issues in my VS2012 compiler (something with CLR), so I would really like to rewrite this part to something else, without the bitwise operator...
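The ^ there is not XOR: it is the 'blocks' extension (a Clang/Apple closure syntax), which MSVC does not support. If the callers do not need to capture local state, a plain function pointer is the direct rewrite:

Code:
#include <cstdio>

// function-pointer version: portable, no blocks extension needed
typedef void (*FloatCallback)(float *arg);

void func(FloatCallback callback = NULL);   // default still means "no callback"

void func(FloatCallback callback) {
    float value = 1.0f;
    if (callback)
        callback(&value);
}

void printIt(float *arg) {
    printf("got %f\n", *arg);
}

int main() {
    func();          // no callback
    func(printIt);   // with callback
    return 0;
}

If the original code passes blocks that capture variables, the closer C++ replacement is std::function<void(float*)> together with lambdas, since a raw function pointer cannot carry captured state.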
I'm writing a program to read in a Master.txt file and then update it through a Transaction.txt file that contains various transaction types [Adds (A), Deletes (D), and Edits (E1-E4)]. The records in both files are in ascending order based on Item#. Ultimately, the original Master.txt and updated Master file (Master2.txt) will be merged to reflect all valid transactions, and an errorLog.txt file will be created to indicate all invalid transactions. I feel I have all of the code written correctly, but I am still getting errors on my operands and identifiers.
PROGRAM:
#include <fstream.h>   // for reading and writing files
#include <conio.h>     // for clrscr()
#include <string.h>    // for string characters
#include <stdio.h>     // for gets and puts functions
#include <process.h>   // for exit function
#include <iomanip.h>   // for setw function
#include <dos.h>       // for delay and sleep functions
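For the merge logic itself, here is a minimal sketch of the classic sequential-update loop; the record layout (item# first on each line) and the position of the transaction code are assumptions, and the E1-E4 edits are elided:

Code:
#include <fstream>
#include <sstream>
#include <string>
using namespace std;

struct Rec { long item; string line; bool ok; };

static Rec next(istream &in) {
    Rec r; r.item = -1; r.ok = false;
    if (getline(in, r.line)) { istringstream(r.line) >> r.item; r.ok = true; }
    return r;
}

int main() {
    ifstream master("Master.txt"), trans("Transaction.txt");
    ofstream out("Master2.txt"), err("errorLog.txt");
    Rec m = next(master), t = next(trans);

    while (m.ok || t.ok) {
        if (t.ok && (!m.ok || t.item < m.item)) {
            // transaction with no matching master: only an Add is valid
            if (t.line.find(" A") != string::npos) out << t.line << '\n';
            else err << "no match: " << t.line << '\n';
            t = next(trans);
        } else if (m.ok && (!t.ok || m.item < t.item)) {
            out << m.line << '\n';           // unchanged master record
            m = next(master);
        } else {
            // matching keys: 'D' drops the record; an E1-E4 edit would
            // rewrite fields of m.line here before writing it out
            if (t.line.find(" D") == string::npos) out << m.line << '\n';
            m = next(master);
            t = next(trans);
        }
    }
    return 0;
}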
We are to implement a method countValue() that counts the number of times an item occurs in a linked list. Remember to use the STL <list>.
int countValue(list<int> front ,const int item);
Generate 20 random numbers in the range of 0 to 4, and insert each number into the linked list. Output the list using a method called writeLinkedList, which you would add to ListP.cpp.
In a loop, call the method countValue() and display the number of occurrences of each value from 0 to 4 in the list.
Remember that all of the above is to be included in the file ListP.cpp.
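A minimal sketch matching the given signature (the list is passed by value because the prototype says so, though a const reference would normally be preferred):

Code:
#include <iostream>
#include <list>
#include <cstdlib>
#include <ctime>
using namespace std;

// counts how many times item occurs in the list
int countValue(list<int> front, const int item) {
    int count = 0;
    for (list<int>::iterator it = front.begin(); it != front.end(); ++it)
        if (*it == item)
            ++count;
    return count;
}

void writeLinkedList(const list<int> &front) {
    for (list<int>::const_iterator it = front.begin(); it != front.end(); ++it)
        cout << *it << ' ';
    cout << '\n';
}

int main() {
    srand((unsigned)time(0));
    list<int> numbers;

    // 20 random values in the range 0 to 4
    for (int i = 0; i < 20; ++i)
        numbers.push_back(rand() % 5);

    writeLinkedList(numbers);

    for (int v = 0; v <= 4; ++v)
        cout << v << " occurs " << countValue(numbers, v) << " time(s)\n";
    return 0;
}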
Program: I have 2 arrays: one for the correct answers to a quiz, one for the user's answers. I then have a vector to hold the incorrect answers. It keeps outputting what look like ALT characters. Why?
Here is the code:
#include <iostream>
#include <vector>
using namespace std;

int main()
My loop is outputting data incorrectly. I have inbound web data that comes in sets of 7. I am trying to insert the 7 records into a vector and then display the vector contents followed by a newline.
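A sketch of the buffer-then-flush pattern for fixed-size sets; reading whitespace-separated fields from stdin is an assumption:

Code:
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main() {
    vector<string> record;
    string field;

    // collect fields until a full set of 7, then flush with a newline;
    // the usual bugs are printing inside the collection step, or never
    // clearing the vector between sets
    while (cin >> field) {
        record.push_back(field);
        if (record.size() == 7) {
            for (size_t i = 0; i < record.size(); ++i)
                cout << record[i] << ' ';
            cout << '\n';
            record.clear();       // start the next set fresh
        }
    }
    return 0;
}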
We have to write a program to count the frequency of printable characters. But when I run the code, sometimes it can't detect the uppercase letters, though sometimes it can. It's a bit buggy, and it really can't get the frequency of the space character.
Does textcolor affect the outcome?
Code:
#include <stdio.h>
#include <conio.h>

int main()
{
    clrscr();
    char name[40], sentence[1000], ch;
    int c = 0, count[95] = {0};
    textcolor(LIGHTCYAN);
[Code] ....
I also noticed that it stops counting the frequency as soon as there is a space between characters.
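Stopping at spaces is the signature of reading with scanf("%s") or a similar whitespace-delimited call, which never delivers the space characters to the counting loop at all; textcolor() only changes the display colour and cannot affect the counts. A sketch that reads character by character instead:

Code:
#include <stdio.h>

int main()
{
    int count[95] = {0};    /* one slot per printable char, ' ' to '~' */
    int ch;

    /* getchar() does not skip whitespace, so spaces get counted too */
    while ((ch = getchar()) != EOF && ch != '\n')
        if (ch >= ' ' && ch <= '~')
            ++count[ch - ' '];

    for (int i = 0; i < 95; ++i)
        if (count[i])
            printf("'%c' : %d\n", i + ' ', count[i]);
    return 0;
}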
I wrote a program to encrypt a message within a bmp file, using my own structs for everything (yes, call me a ........head). The program works, but for some weird ........ing reason I was forced to subtract 2 bytes from the header size to get the correct value. I've narrowed the issue down to my BmpFileHeader struct.
Here's a short program that demonstrates the issue:
Code:
#include <stdio.h>
#include <stdlib.h>

#define BYTE unsigned char
#define WORD unsigned short
#define DWORD unsigned long
#define LONG signed int
[Code] .....
I tried with both gcc and TinyCC and got the same result, so it doesn't seem to be a compiler bug. Microsoft's structures, though, give the correct size, even though they have the exact same definition.
Microsoft's defines:
Code:
// windef.h
typedef unsigned long DWORD;
typedef unsigned char BYTE;
typedef unsigned short WORD;
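Those 2 stray bytes are almost certainly alignment padding, not a compiler bug: the compiler pads after the leading WORD field so the following DWORD lands on a 4-byte boundary, making sizeof come out 2 larger than the on-disk header. Microsoft's headers avoid this by wrapping BITMAPFILEHEADER in packing pragmas (pshpack2.h), not through the typedefs. A sketch of both layouts, with field names following the usual BMP header:

Code:
#include <stdio.h>

typedef unsigned char  BYTE;
typedef unsigned short WORD;
typedef unsigned int   DWORD;   /* beware: unsigned long is 8 bytes on
                                   64-bit Linux, a second size trap */

/* default alignment: 2 padding bytes appear after bfType */
struct Unpacked {
    WORD  bfType;
    DWORD bfSize;
    WORD  bfReserved1;
    WORD  bfReserved2;
    DWORD bfOffBits;
};

#pragma pack(push, 1)
struct Packed {                 /* packed: matches the 14-byte on-disk header */
    WORD  bfType;
    DWORD bfSize;
    WORD  bfReserved1;
    WORD  bfReserved2;
    DWORD bfOffBits;
};
#pragma pack(pop)

int main(void) {
    printf("unpacked: %zu bytes\n", sizeof(struct Unpacked));  /* typically 16 */
    printf("packed:   %zu bytes\n", sizeof(struct Packed));    /* 14 */
    return 0;
}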