I am trying to store each char of a string (string a = "1100") into a byte array (byte[] byteArray = new byte[4]). It's not showing any error, but it's storing the values incorrectly:
I don't know why, but it's replacing 1 with 49 and 0 with 48. What am I doing wrong, and how should I do this?
My code is as below:
byte[] byteArray = new byte[4];
int binArrayAdd = 0;
string a = "1100";
foreach (char Character in a)
{
    byteArray[binArrayAdd] = Convert.ToByte(Character);
    binArrayAdd++;
}
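The values aren't wrong, exactly: Convert.ToByte(char) returns the character's code, and in ASCII '1' is 49 and '0' is 48. To store the digit values, subtract the character '0' first (in C# that is byteArray[binArrayAdd] = (byte)(Character - '0');). A minimal sketch of the idea, written in C++ here:

Code:

#include <cstdint>
#include <cstdio>
#include <string>

int main() {
    std::string a = "1100";
    std::uint8_t byteArray[4];

    // '1' is stored as its character code 49 and '0' as 48;
    // subtracting '0' turns the code into the digit value 1 or 0.
    for (int i = 0; i < 4; ++i)
        byteArray[i] = static_cast<std::uint8_t>(a[i] - '0');

    for (int i = 0; i < 4; ++i)
        std::printf("%u ", static_cast<unsigned>(byteArray[i]));  // prints: 1 1 0 0
    std::printf("\n");
    return 0;
}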
I am writing a program where I need to read a byte of char data and convert it into a text string of the binary digits that represent its value...
i.e. the char byte is 0x42, so I need a string that has 01000010 in it. I've written the following subroutine...
------------- My Subroutine -------------

void charbytetostring(char input, char *output) {
    int i, remainder;
    char BASE = 0x2;
    int DIGITS = 8;
    char digitsArray[3] = "01";
[Code] ....
When I submitted the byte 0x42 to the subroutine, it returned 01000010 in the output variable... Life is good.
The next byte that came in was 0x91. When I submit this to the subroutine I get garbage out.
I am using a debugger and stepped through the subroutine a line at a time. When I feed it 0x42 I get what I expect for all variables at all points in the execution.
When I submit 0x91, however, and the line remainder = input % BASE; gets executed, the remainder variable gets set to 0xFFFF (I expected 1). Also, when the next line gets executed,
input = input / BASE; I get C9 where I expected to get 48.
My question is: are there data limits on what can be used with the mod (%) operator, or am I doing something more fundamentally incorrect?
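There is no special limit in %; the culprit is that plain char is signed on this compiler, so the byte 0x91 becomes the negative value -111. A negative left operand makes input % BASE come out negative (the -1 shows up as 0xFFFF in the debugger), and integer division truncates toward zero, so 0x91 / 2 yields -55 (0xC9) rather than 72 (0x48). 0x42 worked only because it is still positive as a signed char. Declaring the working value unsigned fixes both lines. A minimal sketch, assuming the original's 8-digit base-2 output format:

Code:

#include <stdio.h>

/* Same algorithm, but with unsigned arithmetic so bytes >= 0x80
   behave just like 0x42 does. */
void charbytetostring(unsigned char input, char *output) {
    const unsigned char BASE = 2;
    const int DIGITS = 8;
    const char digitsArray[] = "01";

    for (int i = DIGITS - 1; i >= 0; i--) {
        output[i] = digitsArray[input % BASE]; /* always 0 or 1 now */
        input = input / BASE;                  /* stays non-negative */
    }
    output[DIGITS] = '\0';
}

int main(void) {
    char buf[9];
    charbytetostring(0x91, buf);
    printf("%s\n", buf);   /* prints 10010001 */
    return 0;
}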
I have done some programming, or rather have written and implemented some algorithms in code (is there a difference? :-)), but I am having some teething problems with a project I am currently working on.
To start with, I am simply reading the data from a .txt file into a vector. This is the code that I currently have:
This compiles fine but currently outputs a different integer every time I run the program. I remember having this problem when I started coding in a different language, and if I remember correctly it was quite easy to resolve. I thought then it was because I had missed off the 'return 0', and I think it is something similar here.
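Since the code isn't shown above, one common cause fits the symptom: a different number on every run is the classic sign of printing a variable that was never assigned, often because the file silently failed to open and every subsequent read did nothing. A minimal sketch of the read-into-vector step with the open check that catches this (the file name is hypothetical):

Code:

#include <fstream>
#include <iostream>
#include <vector>

int main() {
    std::ifstream in("data.txt");               // hypothetical file name
    if (!in) {                                  // without this check, a failed
        std::cerr << "cannot open data.txt\n";  // open leaves later variables
        return 1;                               // unset and prints garbage
    }

    std::vector<int> values;
    int x;
    while (in >> x)
        values.push_back(x);

    std::cout << "read " << values.size() << " values\n";
    return 0;
}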
// Creating and joining string objects
#include <iostream>
#include <string>
using std::cin;
using std::cout;
using std::endl;
using std::string;
using std::getline;

// List names and ages
void listnames(string names[], string ages[], size_t count) {
[Code] ....
I may be wrong, but the problem seems to be in the function "listnames", specifically the output statement inside the while loop. I don't understand how the ++ operator behaves in this statement. The output produced does not match what's printed in the book. I usually just type in all the examples, but with this one I also downloaded the source code from the book's website to make sure the error wasn't due to mistyping.
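Without seeing the exact statement it is hard to be certain, but if the output line both reads an index and increments it, for example cout << names[i] << ages[i++], then the order in which those operands are evaluated is unspecified, and before C++17 modifying i and reading it elsewhere in the same expression is undefined behavior, so the output can legitimately differ from the book. The fix is to keep the increment in its own statement. A sketch (the loop body is a hypothetical reconstruction, not the book's code):

Code:

#include <iostream>
#include <string>
using std::cout;
using std::endl;
using std::string;

void listnames(string names[], string ages[], size_t count) {
    size_t i = 0;
    while (i < count) {
        // Avoid mixing the read and the ++ in one statement:
        // cout << names[i] << " aged " << ages[i++];  // evaluation-order trap
        cout << names[i] << " aged " << ages[i] << endl;
        ++i;                      // increment separately instead
    }
}

int main() {
    string names[] = {"Ann", "Bob"};
    string ages[]  = {"21", "34"};
    listnames(names, ages, 2);
    return 0;
}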
I'm currently writing a chunk of code that will take inputs from the user and push them into a vector until 0 is entered, at which point it will break the loop and continue on with the rest of the program. This is nothing I haven't done before, but I have never encountered this error.
The code chunk looks like this:
typedef vector<int> ivec;

int main() {
    ivec nums;
    int input;
    while (true) {
        cout << "Enter a positive integer, or 0 to quit" << endl;
[Code] ....
My standard testing input has been 3 5 6 3 8 (then 0 to quit), so one would expect my sequence to be 3 5 6 3 8... but instead, after the 8, I get a random value that is usually quite large, and I cannot figure out where it comes from (e.g. 3 5 6 3 8 201338847).
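A large stray value after the expected sequence usually means the printing loop walks one element past the end of the vector, typically a <= where a < belongs, or the terminating 0 being pushed before it is tested. Since the rest of the program isn't shown, here is a minimal sketch with both trouble spots marked:

Code:

#include <iostream>
#include <vector>
using std::cin;
using std::cout;
using std::endl;

typedef std::vector<int> ivec;

int main() {
    ivec nums;
    int input;
    while (true) {
        cout << "Enter a positive integer, or 0 to quit" << endl;
        cin >> input;
        if (input == 0)           // test BEFORE pushing, so 0 isn't stored
            break;
        nums.push_back(input);
    }

    // i < nums.size(), not i <= nums.size(): the <= form reads one
    // past the end, which is where a value like 201338847 comes from.
    for (ivec::size_type i = 0; i < nums.size(); ++i)
        cout << nums[i] << " ";
    cout << endl;
    return 0;
}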
I'm new to working with random-access binary files. I have a class with a char* pointer stored inside of it, and a constructor that takes in a string (of any size) from the user. I then simply store this string into the char*. Once the string is stored in the char*, I reinterpret the instance and store the information into the random-access binary file. Everything works.
Question: random-access files must know the size of the object being stored in them, so why, when I enter strings of different sizes into the file, does it appear to still work? For example, this is the code:
class info {
private:
    char *phrase;
public:
    info(string n = "unknown") {
        phrase = new char[n.size() + 1];
[Code] ....
My point is, let's say the object's string was some long string; this would still work for me. Yet I don't believe each object is the same size, because I allocate memory for the char pointer in the constructor.
Should I avoid this just to be safe and use a char array instead of a pointer (even though I would then have to set a pre-defined size for the string)? Or is something happening behind the scenes to keep this from failing?
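The records actually are all the same size: reinterpreting the object writes sizeof(info) bytes, and a char* member contributes only the pointer itself (4 or 8 bytes), never the characters it points at. What lands in the file is a memory address, which happens to still be valid while the same run of the program reads it back, but is dangling garbage to any later run. So for fixed-size record I/O, yes, a char array inside the object is the safe choice. A minimal sketch, with a hypothetical capacity of 64:

Code:

#include <cstring>
#include <fstream>
#include <string>

// A fixed-size buffer makes the object self-contained, so writing
// sizeof(info) bytes stores the actual characters, not a pointer.
class info {
private:
    char phrase[64];                            // hypothetical fixed capacity
public:
    info(const std::string &n = "unknown") {
        std::strncpy(phrase, n.c_str(), sizeof(phrase) - 1);
        phrase[sizeof(phrase) - 1] = '\0';      // strncpy may not terminate
    }
    const char *text() const { return phrase; }
};

int main() {
    info rec("some long string");
    std::ofstream out("records.dat", std::ios::binary);
    out.write(reinterpret_cast<const char *>(&rec), sizeof rec);
    return 0;
}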
Right now I'm creating a program that takes Xbox controller input and then sends out keyboard events using SendInput();
It works fine, but now I'm creating a system which lets the user of the program change the settings in a text file, which will then change what the controller bindings are.
For example, if settings.txt says: Y=button(0x1E)
I want the program to know that when Xbox button Y is pressed, it should execute a SendInput call for the key A (scan code 0x1E).
The problem is that the key code (0x1E) I take from settings.txt is stored as a string (let's say it is stored in string Y_Event), and input.ki.wScan has to be numeric (a WORD). I made a function which changes a string into an int (because input.ki.wScan seems to be fine being given an int?):
int stringToInt(string insert) {
    char back[20];
    for (unsigned int e = 0; e < insert.length(); e++) {
        back[e] = insert[e];
    }
    return atoi(back);
}
But when I run the code, nothing happens...
In the code I have a function which executes the keypress: void pressButton(int key, int time)
When I send in the converted string it doesn't work, but when I send in the literal 0x1E it works:
pressButton(stringToInt(Y_Event), 50)  // doesn't work
pressButton(0x1E, 50)                  // works
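Two things break stringToInt. First, back is never null-terminated, so atoi can read past the copied characters into garbage. Second, atoi only understands decimal, so "0x1E" stops at the 'x' and yields 0, which is exactly why the literal works and the converted string does not. strtol with base 0 honours the 0x prefix and needs no manual copy. A minimal sketch:

Code:

#include <cstdlib>
#include <string>

// Parse "0x1E" (or plain "30") into an int. Base 0 lets strtol
// auto-detect the 0x prefix coming from settings.txt.
int stringToInt(const std::string &insert) {
    return static_cast<int>(std::strtol(insert.c_str(), NULL, 0));
}

// usage, with Y_Event read from settings.txt:
//   pressButton(stringToInt(Y_Event), 50);  // now passes 0x1E (30)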
I'm trying to get the current time for a game and print it to the game's chat window. I'm already injected into the process, so I don't think I need ReadProcessMemory.
The value I'm trying to read is the game hour, 1-12.
//doesn't work
byte time_hour_get(void) {
    return *(byte *)0x00B70153; // the address of the memory containing the hour
}
addMessageToChatWindow((char*)time_hour_get); -> when I call this function it prints garbage characters. I want it to show the integer value like Cheat Engine does. I used a byte scan when searching for this address.
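Two problems here. (char*)time_hour_get never calls the function; it takes the function's address and tells the chat window to treat the machine code sitting there as a string, hence the garbage characters. And even the properly returned byte is a number, not text, so it needs formatting first. A minimal sketch (addMessageToChatWindow's real signature isn't shown above, so a printf stub stands in; the memory read only makes sense inside the game process):

Code:

#include <cstdio>

typedef unsigned char byte;

byte time_hour_get(void) {
    return *(byte *)0x00B70153;   // address from the post above
}

// Stub standing in for the game's chat function; assumed to take a C string.
void addMessageToChatWindow(const char *msg) {
    std::printf("%s\n", msg);
}

void show_hour(void) {
    char buf[16];
    // Call the function (note the parentheses) and format the
    // numeric byte as text before handing it over.
    std::sprintf(buf, "Hour: %u", (unsigned)time_hour_get());
    addMessageToChatWindow(buf);
}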
I am writing a program to hide files behind other files using Alternate Data Streams in Windows NTFS file systems.
The program is as follows:
Code:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char hostfile[75], hiddenfile[75], hiddenFileName[15];
    printf("Enter the name (with extension) and path of the file behind which you want to hide another file: ");
    scanf("%74s", hostfile);  /* %74s, not %75s: leave room for the terminator */
[Code]...
The compiler is showing the error "Extra parameter in call to system", but I cannot see where the problem is.
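"Extra parameter in call to system" is the Turbo C-style diagnostic for handing system() more than its single char* argument, which usually happens when a format string and its arguments are passed straight to system() as if it were printf(). Build the command with sprintf first, then pass the one finished string. A minimal sketch using the usual ADS redirection (the file names are hypothetical):

Code:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char hostfile[75]       = "host.txt";    /* hypothetical values */
    char hiddenfile[75]     = "secret.txt";
    char hiddenFileName[15] = "secret";
    char command[200];

    /* system() takes ONE string, so format the command first... */
    sprintf(command, "type %s > %s:%s",
            hiddenfile, hostfile, hiddenFileName);

    /* ...then pass the single finished string. Passing the format and
       its arguments directly to system() is what triggers
       "Extra parameter in call to system". */
    return system(command);
}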
I am writing a piece of code that requires me to display the last 1000 lines from multiple text files (log files). FYI, I am running on Linux and using g++.
I have a log file from which, if it contains more than 1000 lines, I need to display the last 1000. However, the log file can get rotated, so in the case where the current log file contains fewer than 1000 lines, I have to go to the older log file and display the remainder. For example, if the log got rotated and the new log file contains 20 lines, I have to display 980 lines from the old log file plus the 20 from the current one.
What is the best way to do this? Even an outline algorithm will work.
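One simple approach: stream each file once and keep only a bounded window of the last N lines in a deque; if the current log yields fewer than 1000, take the shortfall from the tail of the previous rotated file and print it first. A minimal sketch (the log file names are hypothetical):

Code:

#include <deque>
#include <fstream>
#include <iostream>
#include <string>

// Stream a file once, keeping only the last n lines. For huge logs you
// could instead seek backwards from the end, but the bounded deque is
// simpler and usually fast enough.
std::deque<std::string> tail_lines(const std::string &path, size_t n) {
    std::deque<std::string> lines;
    std::ifstream in(path.c_str());
    std::string line;
    while (std::getline(in, line)) {
        lines.push_back(line);
        if (lines.size() > n)
            lines.pop_front();        // keep the window at n lines
    }
    return lines;
}

int main() {
    const size_t N = 1000;
    std::deque<std::string> cur = tail_lines("app.log", N);
    if (cur.size() < N) {
        // Rotated case: e.g. 20 lines in the new file means the
        // remaining 980 come from the tail of the old file.
        std::deque<std::string> old = tail_lines("app.log.1", N - cur.size());
        for (size_t i = 0; i < old.size(); ++i)
            std::cout << old[i] << "\n";
    }
    for (size_t i = 0; i < cur.size(); ++i)
        std::cout << cur[i] << "\n";
    return 0;
}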
Basically it has to do with the byte ordering in a binary buffer versus the typing of the variable used to hold it.
To give you an example: I have a buffer (say of indefinite length) and a pointer "ptr" pointing to a byte in the buffer (say, C0), such that if I open the buffer in a binary viewer it reads like this: Code: C0 DD FE 1F and the following is true:
Code:
/* ptr is uint8_t* */
*ptr == 0xC0
Then I do this:
Code:
uint16_t var;
var = *(uint16_t *)(ptr + 1);  /* two-byte read starting at the DD */
I would expect the result to be:
Code: DD FE /*56830*/
Though if I print that out with:
Code: printf("%u ", var);
It'll print:
Code: 65245 /*(FE DD)*/
Now obviously it's byte-swapped, but what is causing that? I'm assuming that if I just stream it out to a file byte by byte it'll be fine, so it's something to do with the 16-bit data type (I have also seen this with a 32-bit data type, where all 4 bytes are in reverse order). Is there any way to 'fix' it except bit shifts and masks?
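Nothing is actually broken: x86 is little-endian, meaning the least significant byte of a multi-byte integer sits at the lowest address. Reading the bytes DD FE through a uint16_t therefore yields 0xFEDD, and streaming the value back out byte by byte reproduces DD FE exactly, so files round-trip fine. When you genuinely need the big-endian interpretation, assembling the value explicitly with shifts is the portable answer (the htons/ntohs family does the same job for 16-bit network values), and it also sidesteps the unaligned-pointer problems that the cast can cause on some CPUs. A minimal sketch of both reads:

Code:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t buf[4] = {0xC0, 0xDD, 0xFE, 0x1F};
    uint8_t *ptr = buf;

    /* Native little-endian read: lowest address is the least
       significant byte, so DD FE becomes 0xFEDD (65245). */
    uint16_t le = (uint16_t)(ptr[1] | (ptr[2] << 8));

    /* Big-endian interpretation, built explicitly with shifts:
       DD FE becomes 0xDDFE (56830). */
    uint16_t be = (uint16_t)((ptr[1] << 8) | ptr[2]);

    printf("native/LE: %u   big-endian: %u\n", le, be);
    return 0;
}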
I basically want to create a save editor application that will enable people to alter various values in the save by clicking on the relevant buttons, and also have the editor automatically update the checksum when changes are made.
The save file is binary (I work with it in a hex editor), so from what I can gather I would need to create a button that opens the file using an 'open file dialogue', reads it into a byte array so that the values can be accessed whenever a particular button is pressed, and then have the application seek to the right point in the file to make the required changes.
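The dialog and buttons depend on the UI framework, but the core loop is exactly as described: read the whole file into a byte array, patch bytes at known offsets, recompute the checksum, and write the file back. A minimal C++ sketch of that core; the patch offset and the additive checksum are hypothetical stand-ins for whatever the real save format specifies:

Code:

#include <cstdint>
#include <fstream>
#include <iterator>
#include <vector>

int main() {
    // Read the whole save into memory.
    std::ifstream in("save.dat", std::ios::binary);
    std::vector<std::uint8_t> data((std::istreambuf_iterator<char>(in)),
                                   std::istreambuf_iterator<char>());
    in.close();
    if (data.size() < 0x20) return 1;        // sanity check

    data[0x10] = 99;                         // patch a value (hypothetical offset)

    std::uint32_t sum = 0;                   // hypothetical checksum: sum of
    for (std::size_t i = 0; i + 4 < data.size(); ++i)
        sum += data[i];                      // every byte except the last 4

    for (int b = 0; b < 4; ++b)              // store it little-endian at the end
        data[data.size() - 4 + b] = std::uint8_t(sum >> (8 * b));

    std::ofstream out("save.dat", std::ios::binary);
    out.write(reinterpret_cast<const char *>(&data[0]),
              std::streamsize(data.size()));
    return 0;
}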
I have an application that has its own embedded web server. I am trying to add jQuery/Ajax file upload capabilities to the application; however, I am running into issues getting the posted file. The jQuery/Ajax portion is similar to this method here. Due to the way the web server was written (it's in a DLL and I do not have access to the source), the posted file comes in as a byte[]. If I try to save the byte array directly to file using:
File.WriteAllBytes("path", ByteArray)
I end up with a corrupt file that I cannot open. I believe this is because the byte array also contains the posted file header info (Content-Disposition, name, filename, etc.). If I view the contents of the byte array using:
System.Text.Encoding.Default.GetString(ByteArray)

the header info can be viewed as:

------WebKitFormBoundaryQfPjgEVpjoWgA5JL
Content-Disposition: form-data; name="0"; filename="someimage.png"
Content-Type: image/png

‰PNG
Based on the selected file size and the size of the byte array, the entire file is in the byte array. How can I go about extracting and saving the posted file from the byte array?
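The payload sits between the blank line (CRLF CRLF) that ends the part headers and the trailing CRLF plus boundary line, so the job is to locate those two positions and save only the bytes in between. The application code above is .NET, but the slicing logic is language-neutral; here is a minimal C++ sketch assuming a single part whose first line is the boundary:

Code:

#include <string>
#include <vector>

// Extract the file payload from a one-part multipart/form-data body.
// Returns an empty vector if the markers aren't found.
std::vector<unsigned char> extract_payload(const std::vector<unsigned char> &raw) {
    std::string s(raw.begin(), raw.end());   // preserves every byte

    // Part headers end at the first blank line.
    std::string::size_type body = s.find("\r\n\r\n");
    if (body == std::string::npos)
        return std::vector<unsigned char>();
    body += 4;

    // The first line is the boundary; the payload ends where the
    // trailing "\r\n--boundary..." line begins.
    std::string boundary = s.substr(0, s.find("\r\n"));
    std::string::size_type end = s.rfind("\r\n" + boundary);
    if (end == std::string::npos || end < body)
        return std::vector<unsigned char>();

    return std::vector<unsigned char>(raw.begin() + body, raw.begin() + end);
}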
I am trying to read a byte from the serial port. The read function is part of the DLL that came with the chip (FT2232D) I am using on my board. I want to use the function to read a byte from the serial port and then send the value to the graphical user interface. Unfortunately I was unable to get the expected value on my GUI. If I send, for instance, 40, what I get on the GUI are letters instead of the number 40, or at times the GUI will not even respond. Below are the lines of code I used to read the byte from the serial port:
The following instructions are executed whenever the checkbox is checked:
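Seeing letters instead of 40 suggests the raw byte is being handed to the display as a character: the character whose code is 40 is '(', not the digits "40". Convert the value to its text form first. Since the GUI code isn't shown, a minimal sketch of just that step:

Code:

#include <stdio.h>

int main(void) {
    unsigned char received = 40;   /* stand-in for the byte read via the DLL */
    char text[8];

    /* Format the numeric value as a string before displaying it;
       showing the raw byte as a char would print '(' instead. */
    sprintf(text, "%u", (unsigned)received);
    printf("display: %s\n", text); /* prints: display: 40 */
    return 0;
}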
Trying to write very simple code to read a 4-byte int from a file.

Code:

int tester;
FILE *fp;
fp = fopen("/home/bdluser/skeleton.blx", "rb");
fread(&tester, sizeof(int), 1, fp);
printf("tested 4 byte read, should be 1: %i\n", tester);
I have tried editing the binary file... it still outputs similarly large numbers.
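fread copies four raw bytes into the int in the machine's native order, so on a little-endian x86 box the file must begin with the bytes 01 00 00 00 for the program to print 1; a file holding 00 00 00 01 instead prints 16777216. A minimal sketch that prints the raw bytes next to the assembled value, which makes such mismatches obvious:

Code:

#include <stdio.h>

int main(void) {
    FILE *fp = fopen("/home/bdluser/skeleton.blx", "rb");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }

    unsigned char bytes[4];
    if (fread(bytes, 1, 4, fp) != 4) {
        fprintf(stderr, "short read\n");
        fclose(fp);
        return 1;
    }
    fclose(fp);

    /* Show the raw bytes and the little-endian value they form:
       01 00 00 00 -> 1, but 00 00 00 01 -> 16777216 on x86. */
    printf("raw: %02X %02X %02X %02X\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);
    unsigned int value = (unsigned)bytes[0]
                       | ((unsigned)bytes[1] << 8)
                       | ((unsigned)bytes[2] << 16)
                       | ((unsigned)bytes[3] << 24);
    printf("value: %u\n", value);
    return 0;
}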