I have a spec for fetching files from a server and predicting the unused files in a directory. In this situation I am going to fetch files from the server and it will return huge files; the problem is that CPU usage climbs while I am fetching the large files, and I would like to eliminate this scenario.
I am trying to understand what techniques can be used to sort really huge files (larger than available memory). I did some googling and came across one technique; a sketch of it appears after question 3 below.
1. Are there any better ways to get this done?
2. Is there some tweaking that can be done to make this itself better?
The technique suggests choosing a chunk size that is "large enough so that you get a lot of records, but small enough that it will comfortably fit into memory."
3. How do you decide on this value? Consider: memory is 4 GB, about 2 GB is currently consumed, and the file to sort is 10 GB in size. (Consumed memory could of course change dynamically during execution, as other apps consume more or less.)
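For reference, here is a minimal sketch of the external merge sort technique described above, sorting a text file line by line: split the input into chunks that fit in memory, sort each chunk and write it out as a run, then k-way merge the runs through a min-heap. The file names and kChunkRecords are placeholders; kChunkRecords is exactly the value question 3 asks about, and a cautious starting point is to size chunks to a fraction of the memory you can actually count on (say a quarter of the 2 GB of headroom in the example) rather than all of it.

Code:
#include <algorithm>
#include <fstream>
#include <functional>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// Sort one in-memory chunk and write it out as a numbered run file.
static void flushRun(std::vector<std::string>& chunk,
                     std::vector<std::string>& runs) {
    std::sort(chunk.begin(), chunk.end());
    std::string name = "run" + std::to_string(runs.size()) + ".tmp";
    std::ofstream out(name);
    for (const auto& rec : chunk)
        out << rec << '\n';
    runs.push_back(name);
    chunk.clear();
}

int main() {
    const std::size_t kChunkRecords = 1000000;  // the tuning knob from question 3

    // Phase 1: split the huge file into sorted runs that each fit in memory.
    std::vector<std::string> runs;
    {
        std::ifstream in("big_input.txt");  // placeholder input name
        std::vector<std::string> chunk;
        std::string line;
        while (std::getline(in, line)) {
            chunk.push_back(line);
            if (chunk.size() >= kChunkRecords)
                flushRun(chunk, runs);
        }
        if (!chunk.empty())
            flushRun(chunk, runs);
    }

    // Phase 2: k-way merge; the heap always exposes the smallest front
    // record across all runs, so the output comes out fully sorted.
    std::vector<std::ifstream> ins;
    for (const auto& name : runs)
        ins.emplace_back(name);

    using Item = std::pair<std::string, std::size_t>;  // (record, run index)
    std::priority_queue<Item, std::vector<Item>, std::greater<Item>> heap;
    std::string rec;
    for (std::size_t i = 0; i < ins.size(); ++i)
        if (std::getline(ins[i], rec))
            heap.push({rec, i});

    std::ofstream out("sorted_output.txt");  // placeholder output name
    while (!heap.empty()) {
        auto [record, idx] = heap.top();
        heap.pop();
        out << record << '\n';
        if (std::getline(ins[idx], rec))
            heap.push({rec, idx});
    }
}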
I fetch the max-ID row to view in a DataGridView using the following code:
public static DataTable GetMaximpID()
{
    string strconn = AlShehabi.Properties.Settings.Default.NewSalariesDBConnectionString;
    SqlConnection conn = new SqlConnection(strconn);
    if (conn.State == ConnectionState.Closed)
[Code] ....
But I want to fetch all the inserted rows and view them in the DataGridView. For example, if the user inserts 4 rows, I want to view all of them in the DataGridView after inserting them in insertbutton_click.
I have a TCP client-server implementation running in the same program, on different background worker threads. There will be instances of this program on multiple computers so they can send and receive files between each other. I can send files sequentially between computers using a network stream, but how would I send multiple files at the same time from computer A to B?
Sending multiple files over one connection (socket) is fine, but with multiple network streams sending data to a client, the client doesn't know which chunk of data is part of which file.
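One common answer is to multiplex the transfers yourself: prefix every chunk with a small header that names the file it belongs to and gives the chunk's length, so the receiver can demultiplex chunks back into the right files over a single socket. A minimal sketch of such a framing scheme (the header layout and names are my own, not from the original post):

Code:
#include <cstdint>
#include <cstring>
#include <vector>

// One frame carries one chunk of one file. fileId lets the receiver
// route the payload to the right output file, length delimits the
// payload, and length == 0 can signal end-of-file for that fileId.
struct FrameHeader {
    uint32_t fileId;
    uint32_t length;
};

// Serialize header + payload into one buffer ready for a socket send().
// For cross-machine use the fields should be converted to a fixed
// (network) byte order before sending.
std::vector<uint8_t> packFrame(uint32_t fileId,
                               const uint8_t* data, uint32_t len) {
    std::vector<uint8_t> frame(sizeof(FrameHeader) + len);
    FrameHeader h{fileId, len};
    std::memcpy(frame.data(), &h, sizeof h);
    std::memcpy(frame.data() + sizeof h, data, len);
    return frame;
}

The receiver reverses the process: read exactly sizeof(FrameHeader) bytes, then read header.length bytes and append them to the output file keyed by fileId, repeating until every file has seen its end-of-file frame.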
Now I would like to extract the data inside these comment tags and insert it into an Excel sheet. I need to extract the comments from the XML tags shown in the table format. For example, I want the value inside the <Autor> tag.
We are using the DownloadFile method of the WebClient class to download artwork via URLs for purchase orders. This works well for the most part, but it hangs with certain file types. We never have problems with .pdf files, but .ai files (Illustrator) just hang with 0 bytes downloaded and then eventually time out after the default 60 seconds. Has anyone seen similar behavior while downloading files with the WebClient class? The files being downloaded are hosted on the same server, in the same directory.
if (validURL == true)
{
    WebClient client = new WebClient();
    try
    {
        client.Headers.Add("Accept: text/html, application/xhtml+xml, */*");
I cannot understand huge pointers and how they work.
#include <stdio.h> /* how does this work, and what does the declaration mean? */
int main() {
    int huge *a = (int huge *)0x59990005;
    int huge *b = (int huge *)0x59980015;
    if (a == b)
        printf("power of pointer");
    else
        printf("power of c");
    return 0;
}
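For context (assuming this is a 16-bit real-mode compiler such as Turbo C, the only place the huge keyword exists): huge pointers are normalized before comparison, so the two segment:offset pairs are reduced to linear addresses first. Segment 0x5999 with offset 0x0005 gives 0x5999 * 0x10 + 0x0005 = 0x59995, and segment 0x5998 with offset 0x0015 gives 0x5998 * 0x10 + 0x0015 = 0x59995; the addresses are equal, so the program prints "power of pointer". With plain far pointers the raw bit patterns would be compared instead and the test would be false. The arithmetic can be checked on any modern compiler:

Code:
#include <cstdio>

// Real-mode x86 linear address = segment * 16 + offset. Huge pointers
// are normalized to this canonical form before comparison.
int main() {
    unsigned long a = 0x5999UL * 0x10 + 0x0005;  // 0x59995
    unsigned long b = 0x5998UL * 0x10 + 0x0015;  // 0x59995
    std::printf(a == b ? "same linear address\n" : "different addresses\n");
    return 0;
}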
I have the following code, which lets me add a PDF file if I give the exact name and path of the file. But I have multiple PDF files in a folder, and I want to add each of them to a cell in the first column. I am having trouble working out the logic of a for loop over a folder containing the PDF files.
But this is a very rough draft. Say I have 20 files with different names (titles) in the pdf folder. How can I add all 20 PDF files to the Excel sheet?
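The missing piece is directory enumeration: loop over the folder's contents, filter to .pdf, and let each iteration fill the next cell. The original post is C#, where Directory.GetFiles(folder, "*.pdf") does this directly; purely to show the shape of the loop, here is the same idea with C++17's std::filesystem and a placeholder path:

Code:
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

int main() {
    // Visit every .pdf in the folder; each iteration corresponds to
    // writing one file name into the next cell of the first column.
    for (const auto& entry : fs::directory_iterator("C:/pdfs")) {  // placeholder path
        if (entry.path().extension() == ".pdf")
            std::cout << entry.path().filename() << '\n';
    }
    return 0;
}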
I'm writing an application for raw image processing, but I cannot allocate the necessary block of memory; the following simple code gives me an allocation error.
double (*test)[4];
int block = 32747520;
test = new double[block][4];
Of course, with a smaller block size (e.g. int block = 327475;) it works fine. Is there an allocation limit? How is it possible to deal with big blocks of memory?
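That allocation is 32,747,520 rows * 4 doubles * 8 bytes, roughly 1 GB in one contiguous block. A plausible explanation for the failure is a 32-bit build: a 32-bit process rarely has 1 GB of contiguous address space free, while a 64-bit build usually satisfies the request. Here is a sketch of the same allocation through std::vector, which also turns the failure into a catchable exception:

Code:
#include <array>
#include <cstdio>
#include <new>
#include <vector>

int main() {
    const std::size_t block = 32747520;
    // 32,747,520 * 4 doubles * 8 bytes is roughly 1 GB, contiguous.
    try {
        std::vector<std::array<double, 4>> test(block);
        std::printf("allocated %zu rows\n", test.size());
    } catch (const std::bad_alloc&) {
        std::printf("allocation failed: no contiguous block that large\n");
    }
    return 0;
}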
I have been coding for a while on a 2D random-terrain game. How would I go about saving the maps? I have an array each for blocks, lighting, background, and background lighting. The lighting is done in real time, so exclude that.
But with two 32000x3200 arrays, I still need to store 204,800,000 separate numbers in a file. How can I possibly do that? I could write the numbers out one by one, but that seems hopeless.
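Two things make this tractable: writing binary instead of text collapses every number to a fixed couple of bytes, and run-length encoding exploits the long runs of identical tiles (air, dirt, stone) that random terrain is full of. A minimal sketch, assuming a tile ID fits a uint16_t and with a placeholder file name:

Code:
#include <cstdint>
#include <fstream>
#include <vector>

// Run-length encode a flattened tile layer: each (count, value) pair
// replaces a run of identical tiles, so large uniform regions cost
// six bytes instead of thousands.
void saveLayer(const std::vector<uint16_t>& tiles, const char* path) {
    std::ofstream out(path, std::ios::binary);
    for (std::size_t i = 0; i < tiles.size();) {
        uint32_t run = 1;
        while (i + run < tiles.size() && tiles[i + run] == tiles[i])
            ++run;
        out.write(reinterpret_cast<const char*>(&run), sizeof run);
        out.write(reinterpret_cast<const char*>(&tiles[i]), sizeof tiles[i]);
        i += run;
    }
}

int main() {
    // One 32000x3200 layer, flattened row by row.
    std::vector<uint16_t> tiles(32000u * 3200u, 0);
    saveLayer(tiles, "world.layer0");  // placeholder file name
    return 0;
}

Loading just reverses the (count, value) pairs back into the array; even without RLE, plain binary writes already cut the file to a few hundred megabytes instead of a gigabyte-scale text dump.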
I'm trying to add a function now that lets the user open one or more files and import them into the database and dataGridView. The way it is now it should work, but when it has finished with FILE1 it won't add FILE2; it then gives me an error that the column Date already exists.
I have a desktop application in which I want to copy files from my local computer to an online server. I have the user name and the password for the server. Is there any way, like
file.copy(sourcePath,destinationPath)
to copy the files where the destinationPath will be something like
I want to build a server which holds hundreds of thousands of active users in memory. To keep all the users organized I would like to store them in a Vector.
The problem is how I can quickly and easily find an object whenever I need it. All users will have a unique ID. Would it be possible to keep some form of index into the Vector on the unique ID number?
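A hash map keyed on the unique ID is the usual answer: it gives average O(1) lookup, and the Vector can stay if ordered traversal is still needed. Sketched in C++ (if the original Vector is Java's, a HashMap<Long, User> plays the identical role):

Code:
#include <cstdint>
#include <cstdio>
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

struct User {
    uint64_t id;  // the unique ID
    std::string name;
};

int main() {
    std::vector<std::shared_ptr<User>> users;                  // ordered storage
    std::unordered_map<uint64_t, std::shared_ptr<User>> byId;  // the index

    auto u = std::make_shared<User>(User{42, "alice"});
    users.push_back(u);
    byId[u->id] = u;  // register in the index whenever a user is added

    // Average O(1) lookup by unique ID, no linear scan of the vector.
    if (auto it = byId.find(42); it != byId.end())
        std::printf("found %s\n", it->second->name.c_str());
    return 0;
}

The shared_ptr keeps both containers pointing at the same User object, so removing a user only means erasing it from both places.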
I am using a fingerprint scanner for attendance posting.
The fingerprint reader vendor is essl, the model is [URL]. The device is connected over LAN, and it has an option to download the users list and the attendance list to a pen drive in .dat format.
I am looking to fetch these data from the device through any of the networked computers, instead of manually copying them from the device to a pen drive each time.
Is it possible to download data from the device through any of the network machines using C# code?
I am getting an Excel sheet into a DataTable with the columns "Program, Response, T, Timein". There are 3 conditions for filtering the DataTable:
1. Fetch unique programs. I got this.
2. T not equal to "B". I also got this.
3. Response should be > 5000. I also got this data in the DataTable.
Now in the DataTable I have the same program name present 3 times with Response > 5000 and T <> 'B'. I want to fetch only the maximum response among those 3, so each time the program changes I need to pick up the max response for that program. How can I do this? For this I put two loops:
for (int i = 0; i < DataFilter.Rows.Count; i++)
{
    DataRow dr = DataFilter.Rows[i];
    DataView dv2 = new DataView();
I have a column, say the "Response" column, in my DataTable. Now I want to fetch this particular column's value and compare it with the maximum response. How do I fetch and compare it in C#.NET? This is my code:
for (int i = 0; i < DataFilter.Rows.Count; i++)
{
    DataRow dr = DataFilter.Rows[i];
    DataView dv2 = new DataView();
    dv2 = DataFilter.DefaultView;
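Both of these posts boil down to a group-by-maximum, which needs only one pass and a map keyed on the program name rather than two nested loops (in .NET, DataTable.Compute("MAX(Response)", filter) can also compute it per program). The underlying pass, sketched generically in C++ with made-up sample rows:

Code:
#include <cstdio>
#include <string>
#include <unordered_map>
#include <vector>

struct Row {
    std::string program;
    double response;
};

int main() {
    // Stand-in for the already-filtered table (T <> 'B', Response > 5000).
    std::vector<Row> rows = {
        {"prog1", 6000}, {"prog1", 9000}, {"prog1", 7500}, {"prog2", 5200},
    };

    // One pass: remember the largest response seen for each program.
    std::unordered_map<std::string, double> maxByProgram;
    for (const auto& r : rows) {
        auto [it, inserted] = maxByProgram.try_emplace(r.program, r.response);
        if (!inserted && r.response > it->second)
            it->second = r.response;
    }

    for (const auto& [program, response] : maxByProgram)
        std::printf("%s -> %.0f\n", program.c_str(), response);
    return 0;
}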
I have a thread that fetches elements from a std::queue, which are pushed from other threads. My question is how I can fetch elements from the queue in blocking mode, meaning that if there are no elements in the queue, the fetch call blocks until at least one element is pushed. I want it this way to avoid polling.
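The standard pattern is to pair the queue with a mutex and a std::condition_variable: the consumer waits on the condition variable, which releases the lock and sleeps until a producer notifies it, so nothing ever polls. A minimal sketch:

Code:
#include <condition_variable>
#include <mutex>
#include <queue>
#include <utility>

// A thin blocking wrapper around std::queue. pop() sleeps until a
// producer pushes, so the consumer thread never spins or polls.
template <typename T>
class BlockingQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(value));
        }
        cv_.notify_one();  // wake one waiting consumer
    }

    T pop() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });  // blocks while empty
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }

private:
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

The predicate passed to wait() re-checks the queue on every wakeup, which also guards against spurious wakeups.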
I want to write my own application which would fetch some data from a web site. I need to parse the HTML code of the web site and get the data.
About the application: a form (main form) that will contain data arranged in rows. Some of the data comes from a web site and some of it comes from a database. The data that comes from the web site needs to be updated every few seconds, so I need to keep fetching the data from the web site. The main form will contain an "add" button which, when clicked, will add a new row. New data can be added to this row by the user.
I am not sure what to use for this. I have been writing the application as a Windows Forms application (Visual C#), but I do not know whether this is the best choice. Should it be a Windows Forms application or a web application? Should I use something else?
I have an SSD and I am trying to use it to measure my program's I/O performance; however, the IOPS calculated by my program is much, much higher than IOMeter's.
My SSD is a PLEXTOR PX-128M3S. According to IOMeter, its maximum 512 B random-read IOPS is around 94k (at queue depth 32). However, my program (32 Windows threads) can reach around 500k 512 B IOPS, around 5 times IOMeter! I did data validation and didn't find any errors in the data fetched. Is it because my data fetching is in order?
I paste my code below (it mainly fetches 512 B from a file and releases it; I used 4 bytes (an int) to validate the program logic and didn't find a problem).
#include <stdio.h>
#include <Windows.h>
/*
** Purpose: Verify file random-read IOPS in comparison with IOMeter
*/

// Global variables
long completeIOs = 0;
long completeBytes = 0;
int threadCount = 32;
unsigned long long length = 1073741824;  // 1 GB test file (2^30 bytes)
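A likely explanation for beating the hardware by 5x is the Windows file cache: a 1 GB test file fits in RAM, so after a warm-up most ReadFile calls are served from memory and the "IOPS" figure measures the cache, not the SSD; IOMeter opens its test file unbuffered. A hedged sketch of reading the same way; with FILE_FLAG_NO_BUFFERING the buffer address, transfer size, and file offset must all be sector-aligned (512 bytes here), and the file name is a placeholder:

Code:
#include <Windows.h>
#include <stdio.h>

int main() {
    // Bypass the OS file cache so every read really hits the SSD.
    HANDLE h = CreateFileA("testfile.bin", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    // VirtualAlloc returns page-aligned memory, which satisfies the
    // sector-alignment requirement of unbuffered I/O.
    void* buf = VirtualAlloc(NULL, 512, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);

    OVERLAPPED ov = {0};
    ov.Offset = 0;  // must stay a multiple of 512; randomize in 512 B steps
    DWORD got = 0;
    if (ReadFile(h, buf, 512, &got, &ov))
        printf("read %lu bytes, uncached\n", got);

    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(h);
    return 0;
}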
I am writing a program to hide files behind other files using Alternate Data Streams in Windows NTFS file systems.
The program is as follows:
Code:
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    char hostfile[75], hiddenfile[75], hiddenFileName[15];
    printf("Enter the name (with extension) and path of the file behind which you want to hide another file: ");
    scanf("%74s", hostfile);  /* %74s leaves room for the terminating NUL */
[Code]...
The compiler is showing the error "Extra parameter in call to system", but I am not seeing where.
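As an aside, on NTFS an alternate data stream can be opened directly with fopen by appending ":streamname" to the host file's name, which avoids building a system() command string at all. A minimal sketch with placeholder file names:

Code:
#include <stdio.h>

int main(void) {
    /* "host.txt:secret" names an alternate data stream attached to
       host.txt on NTFS; it is invisible to a normal directory listing. */
    FILE* ads = fopen("host.txt:secret", "w");
    if (ads == NULL)
        return 1;
    fputs("hidden payload\n", ads);
    fclose(ads);

    /* Read it back through the same stream name. */
    char line[64];
    ads = fopen("host.txt:secret", "r");
    if (ads != NULL) {
        if (fgets(line, sizeof line, ads) != NULL)
            printf("%s", line);
        fclose(ads);
    }
    return 0;
}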
I am writing a piece of code that requires me to display the last 1000 lines from multiple text files (log files). FYI, I am running on Linux and using g++.
I have a log file from which, if it contains more than 1000 lines, I need to display the last 1000 lines. However, the log file can get rotated. So, in the case where the current log file contains fewer than 1000 lines, I have to go to the older log file and display the remainder. For example, if the log got rotated and the new log file contains 20 lines, I have to display the last 980 lines from the old log file plus the 20 from the current log file.
What is the best way to do this? Even an outline algorithm will work.
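One simple outline: treat the rotated file followed by the current file as a single stream, and keep only the most recent 1000 lines in a bounded deque; if the current file alone has 1000 or more lines, the old file's contribution naturally falls out of the front. A sketch with placeholder file names:

Code:
#include <deque>
#include <fstream>
#include <iostream>
#include <string>

// Append a file's lines to the tail buffer, dropping from the front so
// that at most `keep` lines survive.
static void appendTail(const std::string& path, std::size_t keep,
                       std::deque<std::string>& tail) {
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        tail.push_back(line);
        if (tail.size() > keep)
            tail.pop_front();
    }
}

int main() {
    const std::size_t keep = 1000;
    std::deque<std::string> tail;
    // Oldest rotated file first, current file last, so the deque ends up
    // holding the last 1000 lines across the rotation boundary.
    appendTail("app.log.1", keep, tail);  // placeholder names
    appendTail("app.log", keep, tail);
    for (const auto& l : tail)
        std::cout << l << '\n';
    return 0;
}

This reads each file once from front to back; for very large logs you could instead seek backwards from the end, but the bounded deque matches the outline-algorithm level of the question.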