C++ :: Is UNICODE Enabled By Default In VC++ Express
Feb 25, 2014
For example, if using FindFirstFile(...) it assumes you're passing an LPCWSTR and not an LPCSTR.
I know I can use FindFirstFileA or FindFirstFileW explicitly, so what is the point of the default if it is always UNICODE?
Which brings to my second question. If I say
FindFirstFile("C:", &fdat);
I get error cannot convert parameter 1 from 'const char [7]' to 'LPCWSTR'
I could say WCHAR fName[] = L"C:"; and pass this variable instead. However, is there a way to cast "C:" on-the-fly to LPCWSTR? I tried:
FindFirstFile((LPCWSTR)"C:", &fdat);
But it outputs a stream of LONGs to the console instead of filenames.
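For what it's worth, the cast only relabels the pointer; it performs no conversion, so FindFirstFileW reads the narrow bytes as garbage wide characters. A minimal sketch of the two usual fixes (the C:\*.* pattern is purely illustrative):
Code:
#include <windows.h>
#include <cstdio>

int main() {
    // Fix 1: give the W version a genuine wide string (or TEXT("...") to match the build).
    WIN32_FIND_DATAW fdat;
    HANDLE h = FindFirstFileW(L"C:\\*.*", &fdat);
    if (h != INVALID_HANDLE_VALUE) {
        do {
            wprintf(L"%ls\n", fdat.cFileName);
        } while (FindNextFileW(h, &fdat));
        FindClose(h);
    }

    // Fix 2: keep narrow strings and call the A version explicitly.
    WIN32_FIND_DATAA fdatA;
    HANDLE hA = FindFirstFileA("C:\\*.*", &fdatA);
    if (hA != INVALID_HANDLE_VALUE) FindClose(hA);
    return 0;
}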
Aug 29, 2013
I have some code that was compiled without Unicode turned on in the Preprocessor Definitions. I need to access an API that had Unicode turned on in the Preprocessor Definitions (I believe it is on by default for DLLs).
I need to call a function in the DLL that requires a structure like:
struct READERINFO {
TCHAR serial[32];
TCHAR altSerial[32];
TCHAR name[32];
TCHAR fccId[48];
TCHAR hwVersion[16];
int swVerMajor;
int swVerMinor;
char devBuild;
};
It returns some information in the structure, some of it Unicode-based, but the program that is calling it is not Unicode. The preprocessor definitions are not turned on because, if they were, there would be a lot of things to change in this code. It is old code that I inherited, and now I must interface to some new devices.
I declare my structure as:
READERINFO info;
Then I call the function in the DLL, which looks like:
ApiGetReaderInfo(hAPI, &info, sizeof(info));
Which is defined as:
ApiGetReaderInfo(HANDLE hApi,
struct READERINFO * ri,
DWORD riSize);
Parameters:
hApi: Handle to a valid API object instance.
ri: Pointer to the READERINFO structure.
riSize: Size of the ri structure in bytes. Usually sizeof(struct READERINFO).
When I call it from my program that does not have UNICODE defined in the preprocessor definitions, I get characters like ÌÌÌÌÌ in the TCHAR fields and invalid numbers in the integer fields.
int ModuleVersion(HANDLE hApi) {
struct READERINFO info;
ApiGetReaderInfo(hApi, &info, sizeof(info));
[Code] ....
When I call it from a sample program written just for this, which has UNICODE defined in the preprocessor definitions, it works just fine. How can I call this from my old code and get the correct information? I have already tried the following without success:
int ModuleVersion(HANDLE hApi) {
#define UNICODE
struct READERINFO info;
#undef UNICODE
ApiGetReaderInfo(hApi, &info, sizeof(info));
[Code] .....
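For the record, a #define UNICODE inside a function cannot change anything: the DLL's struct layout was fixed when the DLL itself was compiled. Assuming the DLL really was built with UNICODE (so its TCHAR is wchar_t), one workaround is to declare a matching wide-character struct in the old non-Unicode code and convert the strings after the call. A sketch; WREADERINFO is my name, not the API's:
Code:
#include <windows.h>
// plus the vendor header that declares ApiGetReaderInfo

// Mirror of READERINFO exactly as the UNICODE-built DLL lays it out.
struct WREADERINFO {
    wchar_t serial[32];
    wchar_t altSerial[32];
    wchar_t name[32];
    wchar_t fccId[48];
    wchar_t hwVersion[16];
    int swVerMajor;
    int swVerMinor;
    char devBuild;
};

int ModuleVersion(HANDLE hApi) {
    WREADERINFO winfo;
    ApiGetReaderInfo(hApi, (struct READERINFO*)&winfo, sizeof(winfo));

    // Bring one wide field back to the ANSI side for the old code.
    char serial[32];
    WideCharToMultiByte(CP_ACP, 0, winfo.serial, -1,
                        serial, sizeof(serial), NULL, NULL);
    // ... repeat for the other TCHAR fields; the int fields need no conversion.
    return winfo.swVerMajor;
}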
Jul 13, 2014
I'm using Visual Studio Express 2013 to create a Windows program that will upgrade my micro-controller firmware. I have a .exe program to upgrade it. What I normally do is drag and drop a .txt file onto the .exe program and it's done. I want to write a program that does the exact same thing: when I click a button, it runs the .exe program with the .txt file.
What I have so far just runs the .exe program when I press the button. I do not know how to write the code to start the .exe with the .txt file. Here's what I have so far:
Process.Start(@"C:UsersJayDocumentsVisual Studio 2013ProjectsWindowsFormsApplication1BSL_FilesBSL ScripterBSL_Scripter.exe");
That line only manages to open my .exe file. How do I make it run with the .txt at this location?
C:\Users\Jay\Desktop\BSL_Files\BSLSCRIPT.txt
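Process.Start has an overload taking a second string of command-line arguments; passing the quoted .txt path should reproduce the drag-and-drop behavior, assuming BSL_Scripter accepts the script path as an argument (a sketch):
Code:
Process.Start(
    @"C:\Users\Jay\Documents\Visual Studio 2013\Projects\WindowsFormsApplication1\BSL_Files\BSL Scripter\BSL_Scripter.exe",
    @"""C:\Users\Jay\Desktop\BSL_Files\BSLSCRIPT.txt""");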
Jan 28, 2012
Is there a way to check whether a compiler has C++11 enabled?
I have a library with converters between std strings and the internal string type. I currently have preprocessor guards surrounding the converters for u8string, u16string, and u32string, but they require the end user to flip the switch manually. It would be nice to know at compile time, without being told, whether or not those types exist.
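The compiler advertises this itself: the __cplusplus macro is defined as 201103L or greater when C++11 is enabled, so the guard can be automatic instead of a user-flipped switch (caveat: some compilers of that era reported stale values, and MSVC does so unless /Zc:__cplusplus is set). A sketch; the typedef and macro names are mine:
Code:
#include <string>

#if __cplusplus >= 201103L
// C++11 mode: char16_t/char32_t and their string typedefs exist.
typedef std::u16string mylib_u16string;
typedef std::u32string mylib_u32string;
#define MYLIB_HAS_UNICODE_STRINGS 1
#else
// Pre-C++11: compile only the narrow/wide converters.
#define MYLIB_HAS_UNICODE_STRINGS 0
#endif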
Jan 25, 2015
I'm trying to have a button labeled with the sqrt sign, '√'.
I wrote the code below and typed that sign by holding down Alt and typing 251 on the numpad. But the result is a question mark instead of the sqrt mark!
My machine is Windows 7 x86 and my IDE is Visual Studio 2012.
#include <GUI.h>
using namespace Graph_lib;
//---------------------------------
class Test : public Window {
public:
Test(Point, int, int, const string&);
[Code] .....
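If this is the Graph_lib wrapper from Stroustrup's PPP book, the label text ends up in FLTK, which (from version 1.3 on) treats labels as UTF-8; Alt+251 inserts a single OEM code-page byte, which is not valid UTF-8, hence the question mark. A sketch of spelling the character out as UTF-8 bytes instead (the Button placement is illustrative):
Code:
// "\xE2\x88\x9A" is U+221A (square root sign) encoded as UTF-8.
const string sqrt_label = "\xE2\x88\x9A";
// e.g. inside the Test constructor:
// Button sqrt_btn(Point(100, 100), 70, 20, sqrt_label, cb_sqrt);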
Sep 13, 2013
I have a problem when I try to save Unicode to a .txt file.
I need to store names in the file that will have letters like "ăĂâÂșȘțȚîÎ".
wchar_t name []=L"ăĂâÂșȘțȚîÎ";
FILE* fang;
fang= _wfopen( L"test.txt",L"wt+,ccs=UNICODE");
fwprintf (fang, L"%ls ",name);
When I open my text file I get this: ??âÂ????îÎ
If I use
fang = fopen("test.txt", "a");
I get the same result, and for
fang = fopen("ang.txt", "a,css=UNICODE");
I get a runtime error: "invalid file open mode".
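Two observations. First, the narrow fopen attempt spells the flag css rather than ccs, which by itself explains the "invalid file open mode" error. Second, ccs=UNICODE selects UTF-16LE, which can store ă and ș, so the ?? may come from whatever program displays the file; writing UTF-8 is often the simplest round-trip. A minimal sketch using the MSVC-specific ccs=UTF-8 flag:
Code:
#include <stdio.h>
#include <wchar.h>

int main(void) {
    wchar_t name[] = L"ăĂâÂșȘțȚîÎ";
    /* ccs=UTF-8 makes the wide-oriented stream write UTF-8 (with a BOM). */
    FILE* fang = _wfopen(L"test.txt", L"wt+, ccs=UTF-8");
    if (fang) {
        fwprintf(fang, L"%ls ", name);
        fclose(fang);
    }
    return 0;
}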
Jan 6, 2013
I'm having some problems receiving file names from a server in a C++ client on Mac OS X. I send a serialized object which has a char pointer with the file name, or sometimes a string object; when I receive it in the client, it contains sequences like %F6 or %E9. This issue doesn't arise on Windows, even though it's the same code. Is there any way to decode these '%' characters back to their original form on Mac OS and Linux?
A few characters I ran into problems with: ǡ ȅ ȉ
It would be difficult to change the code on the server, so decoding the characters back to their original form on the client would be easier. I'm using the Boost serialization library, and I'm just looking for a way to decode %F6 back to ȅ in C++, if some library is available.
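Those sequences look like URL-style percent-encoding of the raw file-name bytes. A minimal hand-rolled decoder (a sketch: it assumes %XX hex escapes and leaves everything else untouched; you still need to know which encoding the decoded bytes are in, e.g. Latin-1 vs UTF-8):
Code:
#include <string>
#include <cstdlib>

// Decode %XX escapes back into raw bytes ("%F6" becomes the single byte 0xF6).
std::string percent_decode(const std::string& in) {
    std::string out;
    for (std::size_t i = 0; i < in.size(); ++i) {
        if (in[i] == '%' && i + 2 < in.size()) {
            char hex[3] = { in[i + 1], in[i + 2], 0 };
            out += static_cast<char>(std::strtol(hex, 0, 16));
            i += 2;
        } else {
            out += in[i];
        }
    }
    return out;
}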
Nov 7, 2013
I have a file whose name, SIRÃO.wav, contains a special Unicode character, so all the narrow file APIs fail on it.
I would like to rename this file using the Windows API. How can I achieve this?
A std::string variable, filename, holds the value SIRÃO.wav.
I try to read the file using the file API after performing a conversion:
Code:
const int utf16_length = MultiByteToWideChar(CP_UTF8,0,filename.data(),filename.length(),NULL,0);
std::wstring utf16;
utf16.resize(utf16_length);
MultiByteToWideChar(CP_UTF8,0,filename.data(),filename.length(),&utf16[0],utf16.length());
const wchar_t *name = utf16.c_str();
How can I rename the file when its name contains a Unicode character?
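Once the name is in UTF-16, the wide APIs take it directly; _wrename or MoveFileExW will handle names the narrow functions choke on. A sketch, assuming filename really holds UTF-8 bytes (if it actually holds ANSI code-page bytes, use CP_ACP instead of CP_UTF8):
Code:
#include <windows.h>
#include <string>

// Convert a UTF-8 std::string to UTF-16, then rename via the wide API.
std::wstring widen_utf8(const std::string& s) {
    int n = MultiByteToWideChar(CP_UTF8, 0, s.data(), (int)s.size(), NULL, 0);
    std::wstring w(n, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, s.data(), (int)s.size(), &w[0], n);
    return w;
}

bool rename_file(const std::string& from, const std::string& to) {
    return MoveFileExW(widen_utf8(from).c_str(), widen_utf8(to).c_str(),
                       MOVEFILE_REPLACE_EXISTING) != 0;
}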
Jul 17, 2013
I am trying to write data in Russian to the serial (RS-232) port. My display device is already set to that character code page.
But the output on the device is not exactly what I require.
My code snippet is below:
CString pBuffer = L"английский"; //Russian Language
LPBYTE pByte = new BYTE[pBuffer.GetLength() + 1];
memcpy(pByte, (VOID*)LPCTSTR(pBuffer), pBuffer.GetLength());
long nBuffer=pBuffer.GetLength()+1;
DWORD dwWritten=0;
WriteFile(pHandle , pByte, nBuffer ,&dwWritten , NULL);
pHandle is a valid handle.
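One likely culprit in the snippet: pBuffer.GetLength() counts characters, not bytes, so the memcpy copies only half of the UTF-16 CString, and the device presumably expects single-byte text in its code page anyway. A sketch that converts to a Cyrillic code page before writing (1251, Windows Cyrillic, is my assumption; substitute whatever page the device is actually set to):
Code:
#include <windows.h>
#include <atlstr.h>

void write_russian(HANDLE pHandle) {
    CString pBuffer = L"английский";
    char bytes[256];
    // Convert UTF-16 to the device's single-byte code page (assumed 1251).
    int n = WideCharToMultiByte(1251, 0, pBuffer, pBuffer.GetLength(),
                                bytes, sizeof(bytes), NULL, NULL);
    DWORD dwWritten = 0;
    WriteFile(pHandle, bytes, n, &dwWritten, NULL);
}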
Dec 22, 2013
I just wonder whether 0xFFFF is a valid Unicode character.
When I using the following code:
CStringW strTempW;
CString strTemp1;
INT_PTR nLen;
strTempW.Format(L"%c", 0xFFFF);
nLen = strTempW.GetLength();
strTemp1 += strTempW;
nLen = strTemp1.GetLength();
After executing the first line, strTempW.Format(L"%c", 0xFFFF), I get strTempW of length 1, but cannot see its first character in the Visual Studio watch window.
After executing the line strTemp1 += strTempW, I get strTemp1 of length 0.
So is 0xFFFF taken as a valid Unicode character or not?
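For reference: U+FFFF is a valid code point but a Unicode "noncharacter", permanently reserved and not intended for interchange, so string classes and conversions are entitled to drop or reject it (the same holds for U+FFFE, U+FDD0..U+FDEF, and the last two code points of every plane). A small guard sketch:
Code:
// True for code points Unicode defines as noncharacters.
bool is_noncharacter(unsigned int cp) {
    if (cp >= 0xFDD0 && cp <= 0xFDEF) return true;
    return (cp & 0xFFFE) == 0xFFFE;  // ..FFFE / ..FFFF in every plane
}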
Feb 18, 2014
I am using VC++ 2005 with the Multibyte character set. I am getting hex values from a stream, and I have to show them in the respective language.
In the example below:
char mt[] = { 0x0C, 0x85, 0x0C, 0x86, 0x0C, 0x87,0x0C, 0x88,0x0C, 0x89, 0x0C, 0x8A,0x0C, 0x8B,
0x0c, 0x85 , 0x0c , 0x86 , 0x0c , 0x87 , 0x0c , 0x88 , 0x0c , 0x89 , 0x0c , 0x8a , 0x0c , 0x8b , 0x00 , 0x20,
0x0c, 0x8e , 0x0c , 0x8f , 0x0c , 0x90 , 0x0c , 0x92 , 0x0c , 0x93 , 0x0c , 0x94 , 0x00 , 0x20 , 0x0c , 0x95,
0x0c, 0x96 , 0x0c , 0x97 , 0x0c , 0x98 , 0x0c , 0x99 , 0x00 , 0x20 , 0x0c , 0x9a , 0x0c , 0x9b , 0x0c , 0x9c,
0x0c, 0x9d , 0x0c , 0x9e , 0x00 , 0x20 , 0x0c , 0x9f , 0x0c , 0xa0 , 0x0c , 0xa1 , 0x0c , 0xa2 , 0x0c , 0xa3,
0x00, 0x20 , 0x0c , 0xa4 , 0x0c , 0xa5 , 0x0c , 0xa6 , 0x0c , 0xa7 , 0x0c , 0xa8 , 0x00 , 0x20 , 0x0c , 0xaa,
0x0c, 0xab , 0x0c , 0xac , 0x0c , 0xad , 0x0c , 0xae , 0x00 , 0x20 , 0x0c , 0xaf , 0x0c , 0xb0 , 0x0c , 0xb2,
0x0c, 0xb5 , 0x0c , 0xb6};
How can I convert the mt array to the string below?
"ಅಆಇಈಉಊಋ ಎಏಐಒಓಔ ಕಖಗಘಙ ಚಛಜಝಞ ಟಠಡಢಣ ತಥದಧನ ಪಫಬಭಮ ಯರಲವಶ"
To cross-check the mt array: if you paste the string above into the link below, you get the mt array. [URL] ....
Can't I do it with the "Multibyte char set" setting, or should I use Unicode settings?
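Those byte pairs are big-endian UTF-16 code units (0x0C, 0x85 is U+0C85, the Kannada letter ಅ), so this is a byte-order problem rather than a character-set-setting problem: swap each pair into a wchar_t string and you have ordinary Windows UTF-16, which the wide APIs can display even in a multibyte-configured project. A sketch:
Code:
#include <string>

// Interpret a big-endian UTF-16 byte stream as a Windows wide string
// (wchar_t on Windows is UTF-16LE, so each byte pair just needs swapping).
std::wstring from_utf16be(const unsigned char* p, size_t nbytes) {
    std::wstring out;
    for (size_t i = 0; i + 1 < nbytes; i += 2)
        out += static_cast<wchar_t>((p[i] << 8) | p[i + 1]);
    return out;
}

// Usage: std::wstring s = from_utf16be((const unsigned char*)mt, sizeof(mt));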
Sep 22, 2012
How do I open a file whose name contains Unicode letters? Usually:
Code:
basic_ifstream<wchar_t> src("source.txt");
works well for reading a file with Unicode content, but not for a Unicode filename, so this example doesn't work:
Code:
basic_ifstream<wchar_t> src(L"source.txt");
Also, I have seen alternatives using the open function, but that doesn't work either:
Code:
basic_ifstream<wchar_t> src;
src.open(L"source.txt");
I use the g++ compiler.
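Pre-C++17, the standard simply has no wide-filename overload; the wchar_t* constructor is a Microsoft extension, which is why it compiles under VC++ but not g++. With g++ on Linux the usual route is to encode the name as UTF-8 in a narrow string; since C++17 there is also std::filesystem::path, which accepts wide strings portably. A sketch of the C++17 route (compile with -std=c++17):
Code:
#include <fstream>
#include <filesystem>

int main() {
    // path converts the wide name to the platform's native encoding.
    std::filesystem::path name(L"source.txt");
    std::wifstream src(name);
    // ... read the wide-character content as before ...
    return 0;
}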
Nov 20, 2012
Working in a Win32 console app (VS 2010), I have been trying to convert several Unicode (UTF-16) C++ functions to ANSI C (UTF-8). The test app includes two tokenizer classes, CTokA and CTokW (UTF-8 and UTF-16), each of which works perfectly well in its respective environment.
A problem arises when I attempt to run the UTF-8 functions while the Character Set property is set to 'Use Unicode Character Set', in that std::string manipulations do not perform as expected, e.g.,
printf("start\n");
gets reproduced as
printf("start\n");══════════ ²²²²
Attempting to null-terminate the string where it is supposed to end simply results in a space in that position, and the garbage end persists, e.g.,
printf("sta t\n");══════════ ²²²²
Code:
sline[11] = 0x0000;
If I attempt to change the Character Set property to 'Use Multibyte Character Set' or 'Not Set', the app will not compile and hundreds of errors occur. Of course, I can eliminate all of the UTF-16 code, but it strikes me that it should not be necessary. Perhaps if M$ made everything UTF-16 without all of the necessary decorations like 'L' and '_T(', life would be much simpler. Unfortunately, I have a very extensive UTF-8 app, under development for 10 years, that works quite well, but my UTF-16 (Unicode) conversion doesn't work as well because of the mixing of pointers (I think), so I have had to revert much of the code back to UTF-8. (All of which has nothing to do with my question but is simply psychotherapeutic for me to ventilate on.)
My question is this: Can UTF-8 and UTF-16 code coexist in a single Win32 console app?
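They can coexist: UNICODE/_UNICODE only choose which -A or -W alias the TCHAR-based names expand to, so code that spells out std::string/std::wstring and the explicit -A/-W API functions compiles identically under either Character Set setting. A sketch (as for the ═══ and ²² tails: those are the MSVC debug-heap fill patterns 0xCD and 0xFD, which typically means a buffer's length was computed in the wrong unit, e.g. wide data measured in characters instead of bytes, at one of the UTF-8/UTF-16 seams):
Code:
#include <windows.h>
#include <string>

// Explicit A/W calls work regardless of the project's Character Set setting.
void show_both() {
    std::string  a = "utf-8 side";
    std::wstring w = L"utf-16 side";
    MessageBoxA(NULL, a.c_str(), "narrow", MB_OK);
    MessageBoxW(NULL, w.c_str(), L"wide", MB_OK);
}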
Mar 20, 2015
I'm transferring a Unicode string from one program to another with UTF-8 encoding.
Program that is sending:
Code:
// Convert path
std::wstring_convert<std::codecvt_utf8<wchar_t>> utf8_converter;
CString arg = L" /PATH="" + CString(utf8_converter.to_bytes(path).c_str()) + L""";
Program that is retrieving:
Code:
// Restore original path
std::wstring_convert<std::codecvt_utf8<wchar_t>> utf8_converter;
std::wstring path = utf8_converter.from_bytes( argument );
Everything has worked fine, until running on a Japanese edition of Windows.
The "byte path" then looks something like "C:¥Users¥d✝?✝a ,?¥AppData¥Local¥Temp¥file.txt".
"from_bytes()" will throw an std::range_error exception "bad conversion".
The program works fine when working with Japanese writing inside paths on the English edition of Windows.
What could be causing the "bad conversion"?
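A plausible cause, reading the sending snippet: CString(const char*) converts the narrow bytes through the ANSI code page, which on a Japanese edition of Windows is Shift-JIS, so the UTF-8 bytes are mangled before from_bytes() ever sees them; from_bytes() then throws on the invalid sequence. Keeping the UTF-8 bytes in a narrow string end to end avoids the lossy round trip. A sketch (the helper name is mine):
Code:
#include <string>
#include <locale>
#include <codecvt>
#include <atlstr.h>

// Build the /PATH argument without pushing UTF-8 bytes through a wide CString.
CStringA make_path_arg(const std::wstring& path) {
    std::wstring_convert<std::codecvt_utf8<wchar_t>> utf8_converter;
    std::string utf8 = utf8_converter.to_bytes(path);
    return CStringA(" /PATH=\"") + CStringA(utf8.c_str()) + CStringA("\"");
}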
Nov 12, 2013
I intend to use this mechanism to rename the file, because the file name contains Unicode characters. I would like to know why the return value of "MoveFileExW" is FALSE for file names containing a space, hyphen, etc. (sometimes even without a Unicode character). What type of conversion should I use so that spaces and hyphens are accepted? (I.e., is the root cause of the failure the use of CP_UTF8?)
Code:
//! inputPath & final_inputPath hold the source and destination file names and are std::string
//! Both are in the same directory, F:\test_files
std::wstring unicode_input_original;
int unicode_input_length_original = 0;
[Code] ....
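Spaces and hyphens are perfectly legal in Windows file names, so MoveFileExW is not rejecting them as such; the usual cause is that the std::string bytes are not actually UTF-8 (so the CP_UTF8 conversion yields a wrong or empty wide string) or that the file is not at the resulting path. GetLastError() right after the failure will say which. A sketch:
Code:
#include <windows.h>
#include <string>
#include <cstdio>

bool rename_checked(const std::wstring& from, const std::wstring& to) {
    if (!MoveFileExW(from.c_str(), to.c_str(), MOVEFILE_REPLACE_EXISTING)) {
        // ERROR_FILE_NOT_FOUND (2) usually means the converted name is wrong.
        printf("MoveFileExW failed, GetLastError() = %lu\n", GetLastError());
        return false;
    }
    return true;
}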