C++ :: Data Type Size - Performance With References
Aug 16, 2012
Here's what I'm trying to do: a simple readout that shows the input/feedback values for 10 different sensors (e.g. a motor, a thermocouple, a light sensor, etc.).
What I have so far:
The data is stored in 2 different arrays:
One array is a 2D string array that stores descriptions, and won't be changed:
The second array is another 2D int array that stores all the data values:
Input Signal, Feedback Signal
[0][0] // for Sensor A, Input is 0 PWM, 0 RPM read from sensor
[0][25] // for Sensor B, Input is 0, 25C read from sensor
etc
My question: I'd like to rewrite the code to incorporate the new things I learned in C++. Right now, the descriptions for all 10 sensors are in one array and the sensor values are in another array. If I use pointers to access the values, is there a performance difference between:
1. Keeping it as is, with two 2D arrays
2. 1 big structure that has descriptions and sensor values for all 10 sensors (i.e. combining everything into 1; a sketch of this option follows the list)
3. 1 parent class, and 10 different objects, one for each sensor (i.e. splitting into 10)
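For illustration, here is a minimal sketch of option 2, with hypothetical field and function names; for only ten sensors, any performance difference between the three layouts is likely negligible, so the choice is mostly about organisation.

#include <iostream>
#include <string>

// One struct per sensor, one fixed array of those structs,
// accessed through a pointer just as the original arrays were.
struct Sensor
{
    std::string name;        // e.g. "Motor"
    std::string units;       // e.g. "RPM"
    int input;               // commanded value, e.g. PWM
    int feedback;            // value read back from the sensor
};

const int NUM_SENSORS = 10;

void printReadout(const Sensor* sensors, int count)
{
    for (int i = 0; i < count; i++)
        std::cout << sensors[i].name << ": input " << sensors[i].input
                  << ", feedback " << sensors[i].feedback << " "
                  << sensors[i].units << "\n";
}

int main()
{
    Sensor sensors[NUM_SENSORS] = {};
    sensors[0].name = "Motor";        sensors[0].units = "RPM";
    sensors[1].name = "Thermocouple"; sensors[1].units = "C";
    sensors[1].feedback = 25;

    printReadout(sensors, NUM_SENSORS);
    return 0;
}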
I have this piece of code from the book "Modern C++ Design" that checks for errors at compile time. When I tried to compile it, I get the error "invalid application of sizeof to function type". How do I make this compile-time checker work?
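For reference, a sketch of one common workaround, assuming the code in question is the book's CompileTimeChecker/STATIC_CHECK idea: wrapping the temporary in an extra pair of parentheses stops the operand of sizeof from being parsed as a function type, which is exactly what that error message complains about. (On a C++11 compiler, static_assert(expr, "msg") does the same job directly.)

// Compile-time checker in the spirit of Loki's STATIC_CHECK.
// The true specialization has an ellipsis constructor, so any argument
// is accepted; the false specialization is empty, so construction fails
// to compile and the error message names the local ERROR_... class.
template <bool> struct CompileTimeChecker
{
    CompileTimeChecker(...) {}
};
template <> struct CompileTimeChecker<false> {};

// The extra parentheses around the argument force the operand of sizeof
// to be parsed as an expression rather than as a function type.
#define STATIC_CHECK(expr, msg)                                            \
    {                                                                      \
        class ERROR_##msg {};                                              \
        (void)sizeof(CompileTimeChecker<(expr) != 0>((ERROR_##msg())));    \
    }

int main()
{
    STATIC_CHECK(sizeof(int) >= 2, int_is_too_small);      // compiles
    // STATIC_CHECK(sizeof(char) > 1, char_is_too_small);  // would fail to compile
    return 0;
}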
I have started to move over to using Unicode, wide-character null-terminated strings in my Windows programs. Accordingly I set the Use Unicode Character Set Visual C++ compiler option. It is my understanding that once you do that, the many macros which determine whether you transparently call ...A() or ...W() API functions automatically shift over to calling the wide-character variants. As this is a compiler directive, all the choices are made and hardcoded into the resultant executable at compile/link time BEFORE it is ever run. Therefore using, for example, the GetOpenFileName() macro in the source code instead of specifically calling GetOpenFileNameW() has no impact on run-time performance.
The next logical step, instead of explicitly using wchar_t, is to declare null-terminated string character arrays as TCHAR*. Then, so long as I also employ the _tcs... variants of the CRT string functions and use the TEXT() or _T() macros to create string literals, the preprocessor will choose, again transparently, whether to create an executable using standard multibyte or Unicode wide characters - and their associated functions - all determined by the Use Unicode Character Set switch. That way I can cover both eventualities with the same source code.
So, with all that - I THINK!!! - properly under my belt, I am fairly sure that using TCHAR and its friends will not affect run-time performance at all. However, in his otherwise excellent article the author makes it sound as if using Unicode EXPLICITLY through wchar_t, ...W() API functions and wcs... CRT calls is faster than the TCHAR alternative.
At the end of the day my question is - have I got the right end of the stick; TCHAR makes no difference to executable performance?
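That understanding matches how the headers work: in a Unicode build the substitutions are made by the preprocessor, so the compiled code is the same as if the W forms had been written out by hand. A minimal sketch, assuming a Windows build with <windows.h> and <tchar.h>:

// With "Use Unicode Character Set" set, UNICODE/_UNICODE are defined and the
// preprocessor expands TCHAR to wchar_t, _T("...") to L"...", _tcslen to wcslen
// and SetWindowText to SetWindowTextW before the compiler ever runs, so the
// generated code is identical to writing the wide forms explicitly.
#include <windows.h>
#include <tchar.h>

int main()
{
    const TCHAR* caption = _T("Sensor readout");   // L"Sensor readout" in a Unicode build
    size_t len = _tcslen(caption);                 // expands to wcslen(caption)
    (void)len;
    // SetWindowText(hwnd, caption);               // would expand to SetWindowTextW(hwnd, caption)
    return 0;
}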
class Element {
public:
    ...
    virtual unsigned NumberOfNodes() = 0;
[Code] ....
Is it possible to implement this better? All the element stuff can be static, but that is not possible with an abstract class. I want to have Mesh independent of a specific element. With the code above, if I have multiple meshes I get one instance of an element type, e.g. Triangle, for each mesh, although they are all exactly the same.
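One way to avoid the duplication, sketched with hypothetical class and member names: let each concrete element type expose a single shared, immutable instance, and have every Mesh hold a non-owning pointer to it.

#include <vector>

class Element {
public:
    virtual ~Element() {}
    virtual unsigned NumberOfNodes() const = 0;
};

class Triangle : public Element {
public:
    unsigned NumberOfNodes() const { return 3; }
    static const Triangle& Instance()            // one instance for the whole program
    {
        static const Triangle instance;          // constructed on first use
        return instance;
    }
};

class Mesh {
public:
    explicit Mesh(const Element& element) : element_(&element) {}
private:
    const Element* element_;                     // non-owning; shared between meshes
};

int main()
{
    Mesh a(Triangle::Instance());
    Mesh b(Triangle::Instance());                // both meshes refer to the same Triangle
    return 0;
}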
I am trying to modify a PerformanceCounter I have created in C#, but it doesn't seem to be changing. This counter actually needs to be a flag: 0 or 1.
I took the following code from the web. It created the counter category along with the counters fine, but the RawValue always shows 0!
I am working on Win7/64.
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

namespace PerformanceCounterSample
I have a school assignment that asks me to measure the most famous sorting algorithms for performance in terms of number of steps and CPU running time. (Here I'm testing for running time.)
I decided to test for bubble sort first:
#include <iostream>
#include <ctime>
using namespace std;

void bubbleSort(int ar[], int size)
{
    int temp;
[Code] ....
So basically what I want to know is:
1. Is this clock function giving the correct CPU running time?
2. Is there any way to write code that would measure the number of steps for each algorithm?
3. I need to test it for number of integers = 100, then 200, then 300... well, you get my point, and I don't want to have to actually input 200 numbers with my keyboard. Is there any way to generate as many entries as I want? (See the sketch after this list.)
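A minimal sketch covering all three points (the array size and value range are arbitrary): clock() measures CPU time used by the process, not wall-clock time, and its resolution is limited by CLOCKS_PER_SEC, so very small inputs may report 0; steps are counted by incrementing a counter on every comparison; and rand() fills the array so nothing has to be typed in.

#include <iostream>
#include <cstdlib>
#include <ctime>
using namespace std;

void bubbleSort(int ar[], int size, long long& steps)
{
    for (int i = 0; i < size - 1; i++)
        for (int j = 0; j < size - 1 - i; j++)
        {
            ++steps;                          // one comparison counts as one step
            if (ar[j] > ar[j + 1])
            {
                int temp = ar[j];             // swap the out-of-order pair
                ar[j] = ar[j + 1];
                ar[j + 1] = temp;
            }
        }
}

int main()
{
    const int size = 300;                     // 100, 200, 300, ... as required
    int* ar = new int[size];

    srand(static_cast<unsigned>(time(0)));
    for (int i = 0; i < size; i++)
        ar[i] = rand() % 1000;                // generate the entries instead of typing them

    long long steps = 0;
    clock_t start = clock();
    bubbleSort(ar, size, steps);
    clock_t stop = clock();

    cout << "steps: " << steps << "\n";
    cout << "CPU time: " << double(stop - start) / CLOCKS_PER_SEC << " s\n";

    delete[] ar;
    return 0;
}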
Trying to do a homework assignment for a class and can't work out how to read a file into an array. I've looked in our book and on several other forums and can't seem to find any examples of this. Below is the assignment I'm working on. I have a shell of the program that I can get to run, but getting a .txt file to read into an array is something I can't seem to figure out how to do.
Write a program to read N data items into two arrays, X and Y, of size 20. Store the product of the corresponding pairs of elements of X and Y in a third array Z, also of size 20. Print a three column table that displays the arrays X, Y, and Z. Then compute and print the square root of the sum of the items in array Z. Compute and print the average of the values in array Z and print all values above the average of array Z. Determine the smallest value in each array using only one function.
Use the two data files named DATAX.TXT and DATAY.TXT.
You must use functions for the reading of the data, computing the average, printing the three column table and printing the values above average.
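A minimal sketch of the file-reading part, assuming DATAX.TXT and DATAY.TXT each hold whitespace-separated numbers: readData() loads at most 20 values into an array and returns how many were actually read, which becomes N for the rest of the program.

#include <iostream>
#include <fstream>
#include <cstdlib>
using namespace std;

const int SIZE = 20;

// Reads up to maxCount numbers from fileName into data[]
// and returns the number of items actually read.
int readData(const char* fileName, double data[], int maxCount)
{
    ifstream in(fileName);
    if (!in)
    {
        cout << "Could not open " << fileName << "\n";
        exit(1);
    }
    int count = 0;
    while (count < maxCount && in >> data[count])
        ++count;
    return count;
}

int main()
{
    double x[SIZE], y[SIZE], z[SIZE];
    int n = readData("DATAX.TXT", x, SIZE);
    readData("DATAY.TXT", y, SIZE);

    for (int i = 0; i < n; i++)                // product of corresponding pairs
        z[i] = x[i] * y[i];

    for (int i = 0; i < n; i++)                // three-column table
        cout << x[i] << "\t" << y[i] << "\t" << z[i] << "\n";
    return 0;
}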
Is it generally better to initialize string data members as nullptr or as a zero-size array?
I can understand the former is superior from a memory-use perspective and also avoids the extra allocation step. However, many string management functions - wcslen for instance - will fall over if you pass them a null pointer. Therefore I am finding any performance gained is somewhat wiped out by the extra if (pstString == nullptr) guards I have to use wherever a wchar_t* may still be null when the function is called.
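A minimal sketch of the trade-off (the member names are hypothetical): a member initialized to an empty string literal points at static storage, costs no allocation, and can be passed straight to wcslen and friends, while a nullptr member needs a guard at every call site.

#include <cwchar>

class Record {
public:
    Record() : m_label(L""), m_comment(nullptr) {}
    size_t LabelLength() const   { return wcslen(m_label); }                    // always safe
    size_t CommentLength() const { return m_comment ? wcslen(m_comment) : 0; }  // guard required
private:
    const wchar_t* m_label;      // points at a static empty literal, no allocation
    const wchar_t* m_comment;    // stays null until something is assigned
};

int main()
{
    Record r;
    return static_cast<int>(r.LabelLength() + r.CommentLength());
}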
I want to create a new data type called an inf_t. It's basically infinity (the largest finite double in C++ is about 1.7e+308; anything beyond that is infinity). The only reason I want this is because I want to overload the cout << operation to print out INF/inf. Should I do this in a struct?
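A minimal sketch of the struct-based approach the post asks about: inf_t is an empty tag type whose operator<< prints "INF", and it converts to double as std::numeric_limits<double>::infinity() so it can still be used in arithmetic.

#include <iostream>
#include <limits>

struct inf_t
{
    operator double() const                     // usable wherever a double is expected
    {
        return std::numeric_limits<double>::infinity();
    }
};

std::ostream& operator<<(std::ostream& os, inf_t)
{
    return os << "INF";
}

int main()
{
    inf_t inf;
    std::cout << inf << "\n";                   // exact match for the overload: prints INF
    std::cout << inf + 1.0 << "\n";             // converts to a double infinity
    return 0;
}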
If I am asked to declare a data type for Date, which should be in the format DD/MM/YY, which data type should I use for it? Is there any data type known as Date in C?
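There is no built-in Date type in C; a small struct is the usual approach. A minimal sketch in standard C (the field names are just one possible choice):

#include <stdio.h>

struct Date {
    int day;     /* DD */
    int month;   /* MM */
    int year;    /* YY */
};

int main(void)
{
    struct Date d = { 16, 8, 12 };
    printf("%02d/%02d/%02d\n", d.day, d.month, d.year);   /* prints 16/08/12 */
    return 0;
}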
I have two char arrays, m_GPSOffset[13] and m_FileName[100]. When m_GPSOffset has a value assigned to it, say for instance +11:25:30, the first character of the value, in this case +, always ends up in m_FileName as well. I am clueless as to why this is occurring.
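One possible explanation, sketched under the assumption that the two arrays sit next to each other in the object and m_FileName is not null-terminated: printing m_FileName then runs past its end and picks up the '+' that starts m_GPSOffset. The member names mirror the post; the layout is purely illustrative.

#include <cstdio>
#include <cstring>

struct Recorder
{
    char m_FileName[100];
    char m_GPSOffset[13];
};

int main()
{
    Recorder r;
    memset(r.m_FileName, 'x', sizeof(r.m_FileName));   // 100 chars, NO terminating '\0'
    strcpy(r.m_GPSOffset, "+11:25:30");

    // Undefined behaviour: with no terminator, printing may continue into the
    // adjacent array and show "+11:25:30" appended to the file name.
    printf("%s\n", r.m_FileName);
    return 0;
}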
I know that an int is usually 4 bytes, ranging from -2^31 to 2^31-1 for a signed int and 0 to 2^32-1 for an unsigned int. My question is simply, bit-wise (I know they are labelled in the code), how does it determine whether to show -2^31 or 2^32-1 if it was 11111111 11111111 11111111 11111111 in bits? Is there a 5th byte to tell the compiler what data type to treat the input as?
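A short illustration: the bit pattern is the same; only the declared type decides how the bits are interpreted and printed. There is no extra byte stored alongside the value at run time; the type exists only in the compiler.

#include <iostream>

int main()
{
    unsigned int u = 0xFFFFFFFF;                 // all 32 bits set
    int s = static_cast<int>(u);                 // same bits, signed interpretation

    std::cout << u << "\n";                      // 4294967295  (2^32 - 1)
    std::cout << s << "\n";                      // -1 on a two's-complement machine
    return 0;
}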
Working on a Project Euler problem and the question asks for the largest prime number that is a factor of 600851475143. As you can see, this is significantly larger than the maximum of a long data type, which maxes out at 2147483647.
I'm running on 32-bit Windows, so int64 is not a valid option for me. It seems like I'll likely have to use a different language to solve this problem.
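For what it's worth, a 64-bit integer type is available even when targeting 32-bit Windows: long long (or __int64 in older MSVC) is 64 bits regardless of pointer size. A minimal sketch of the factorization using trial division:

#include <iostream>

int main()
{
    long long n = 600851475143LL;
    long long largest = 1;

    for (long long factor = 2; factor * factor <= n; ++factor)
        while (n % factor == 0)                 // divide out each prime factor completely
        {
            largest = factor;
            n /= factor;
        }

    if (n > 1)                                  // whatever remains is itself prime
        largest = n;

    std::cout << largest << "\n";               // 6857 for this input
    return 0;
}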
I'm currently trying to solve a programming assignment and I've got the logic of it; however, I find it hard to implement.
What I need to do, basically, is fill an array with objects. Each object is a class that contains only one type of data. This means I can place an int, a double and a string, for example, in one simple array.
However, I can't figure out how to read data and then decide what it is. Even if I use templates, once I call the function I have to give it a type, so getType<int>, for example, will not work with double or string.
I know about typeid and how to use it; I just can't figure out where to use it.
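A minimal sketch of one common approach (the class names are hypothetical): wrap each value in a templated Holder derived from a common base, store base pointers in one array, and use typeid (or a virtual function) when you need to know which concrete type a slot holds.

#include <iostream>
#include <string>
#include <typeinfo>
#include <vector>

struct Value
{
    virtual ~Value() {}
    virtual void print(std::ostream& os) const = 0;
};

template <typename T>
struct Holder : Value
{
    explicit Holder(const T& v) : value(v) {}
    void print(std::ostream& os) const { os << value; }
    T value;
};

int main()
{
    std::vector<Value*> items;                       // one "simple array" of mixed types
    items.push_back(new Holder<int>(42));
    items.push_back(new Holder<double>(3.14));
    items.push_back(new Holder<std::string>("hello"));

    for (size_t i = 0; i < items.size(); ++i)
    {
        items[i]->print(std::cout);                  // no need to know the type to print
        std::cout << "  (" << typeid(*items[i]).name() << ")\n";   // typeid reveals it if needed
        delete items[i];
    }
    return 0;
}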
I know how to store numeric data using the keywords int, long, float, and so on. I'm making my own program called "Who is your soul-mate". The only question I want to ask is: what's the keyword for storing alphabetic data? As you can see in my source file below, I want to replace the int keyword with another keyword that can store alphabetic data. It's all in standard C.
#include <stdio.h>

int soulm01, soulm02, soulm03;
int year_of_birth;

int main(void)
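In standard C, text is stored in arrays of char; there is no separate "string" keyword. A minimal sketch based on the variable names in the post (the array sizes are arbitrary):

#include <stdio.h>

char soulm01[50], soulm02[50], soulm03[50];   /* char arrays hold alphabetic data */
int year_of_birth;

int main(void)
{
    printf("Enter a name: ");
    if (fgets(soulm01, sizeof soulm01, stdin) != NULL)   /* reads a whole line, spaces included */
        printf("You entered: %s", soulm01);

    printf("Enter your year of birth: ");
    if (scanf("%d", &year_of_birth) == 1)
        printf("Year: %d\n", year_of_birth);
    return 0;
}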
OK, so I have this simple program that gets input from a user. I just want to put in a line of code to make sure that the user can't type in something like "pizza". I want to make it say that if the user puts in something that is NOT a number, they will get an error back saying "Wrong! try again!". Here is my code:
#include <iostream>
using namespace std;

//Summation Program
//Function Prototypes
int get_num();
void compute_sum(int num, int &sum);
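A minimal sketch of the usual validation pattern (the function name matches the prototype above): if the extraction fails, clear the error state, throw away the bad line, and ask again.

#include <iostream>
#include <limits>
using namespace std;

int get_num()
{
    int num;
    cout << "Enter a number: ";
    while (!(cin >> num))                                     // input like "pizza" makes the stream fail
    {
        cout << "Wrong! try again!" << endl;
        cin.clear();                                          // reset the fail state
        cin.ignore(numeric_limits<streamsize>::max(), '\n');  // discard the bad line
        cout << "Enter a number: ";
    }
    return num;
}

int main()
{
    int n = get_num();
    cout << "You entered " << n << endl;
    return 0;
}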
Assume the user has already put in the number of students (hence my variable numStuds, which will most likely be irrelevant to my problem).
So suppose I have this:
void inputStudentInfo(string *names, int *movies, const int numStuds)
{
    for (int i = 0; i < numStuds; i++)
    {
        cout << "Enter student name: ";
        getline(cin, names[i]);
        read_string(names[i]);
[Code] ....
Then I have my data type checking function:
//Data-Type Checking for strings
string read_string(string Sname)
{
    while (!cin.good())
[Code] ....
I am getting errors. I think the problem is that I am trying to data-type check an element of an array declared as string* by passing it as a plain string, but I don't know how I am supposed to check this.
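A minimal sketch under the same signatures (numStuds, names and movies as in the post): indexing a string* gives a plain string, so names[i] can be passed to a string function directly; the stream-state check really belongs with the numeric input, where cin can fail on non-numeric text.

#include <iostream>
#include <limits>
#include <string>
using namespace std;

// Keeps asking until a valid integer is entered.
int read_int(const string& prompt)
{
    int value;
    cout << prompt;
    while (!(cin >> value))                                   // fails on input like "pizza"
    {
        cout << "Invalid number, try again: ";
        cin.clear();
        cin.ignore(numeric_limits<streamsize>::max(), '\n');
    }
    cin.ignore(numeric_limits<streamsize>::max(), '\n');      // eat the rest of the line so getline works
    return value;
}

void inputStudentInfo(string* names, int* movies, const int numStuds)
{
    for (int i = 0; i < numStuds; i++)
    {
        cout << "Enter student name: ";
        getline(cin, names[i]);                               // names[i] is just a string
        movies[i] = read_int("Enter number of movies seen: ");
    }
}

int main()
{
    const int numStuds = 2;
    string names[numStuds];
    int movies[numStuds];
    inputStudentInfo(names, movies, numStuds);
    return 0;
}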