The --gpuid # (or --gpu #) option has been around since the KeyHunt-Cuda and VanitySearch days.
Understood that GPU timing and collecting all the results in sequence is a pain for threading control, since each GPU runs slightly faster or slower than the others.
The alternative is one GPU per instance, easily selected by the user by passing it on the command line (argc/argv) on Linux or Windows:
define a uint and pass the selected GPU id to the CUDA engine calls and to the monitoring code.
Simple, right?
Line numbers:
481 uint32_t gpuid = 0; // Added gpuid as a value passed on the command line via argc/argv. -dev_nullish
509
auto usage = [&] {
    std::cerr
        << "Usage: " << argv[0]
        << " --gpuId G --grid A,B --slices N --ip <SERVER_IP> --port P [--userid <ALNUM_6_9>] [--worker <ALNUM_ID>]\n" // Added gpuId -dev_nullish
        << "Example: " << argv[0] << " --gpuId 0 --grid 512,256 --slices 8 --ip 127.0.0.1 --port 15935 --worker DooKoo2\n";
};
522
// if (strcmp(argv[a], "-gpuId") == 0) { // From VanityBitCrack; shows how the args are read in main.cpp for gpuId -dev_nullish
// The question is: how does one launch on a different GPU (gpuid) and monitor the results on that GPU?
// Also, does this program even check whether there are any NVIDIA GPUs to begin with?
// Obviously, to run the compiled CUDA code across a given set of cards, they would all have to be identical to match the grid parameters, number of cores, etc.
// If deploying separate instances on a rig with mixed cards, each instance would have to be compiled (make) for the gpuid card on the PCIe bus.
// So there's another parameter to pass to the makefile to compile for a specific card in the set on a given rig. -dev_nullish
// a++;
// gpuParsed = string(argv[a]);
// a++;
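A minimal sketch of how the commented-out parsing could look. The function name and loop shape are illustrative, not the project's actual code; it assumes the usual argv walk with an index `a`:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>
#include <cstring>

// Hypothetical parser mirroring the commented-out block above:
// scan argv for --gpuId and return the selected device, defaulting to 0.
uint32_t parseGpuId(int argc, char** argv) {
    uint32_t gpuid = 0; // default: first device
    for (int a = 1; a < argc; ++a) {
        if (strcmp(argv[a], "--gpuId") == 0 && a + 1 < argc) {
            gpuid = static_cast<uint32_t>(strtoul(argv[++a], nullptr, 10));
        }
    }
    return gpuid;
}
```

The returned value can then be handed to the CUDA engine once at startup, so each instance stays pinned to one card.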
603
// Some information about the GPU
{
    int device = 0; // Would gpuid go here instead of 0, so you can select from n GPUs? -dev_nullish
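On the question of whether the program checks for NVIDIA GPUs at all: the CUDA runtime can answer that directly. cudaGetDeviceCount reports how many CUDA devices exist, and the parsed gpuid can be range-checked against it before use. A sketch, with selectGpu as a hypothetical helper:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical helper: validate the user-supplied gpuid, then bind to it.
bool selectGpu(int gpuid) {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA-capable GPU found: %s\n", cudaGetErrorString(err));
        return false;
    }
    if (gpuid < 0 || gpuid >= count) {
        fprintf(stderr, "--gpuId %d out of range (0..%d)\n", gpuid, count - 1);
        return false;
    }
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, gpuid);
    printf("Using GPU %d: %s (SM %d.%d)\n", gpuid, prop.name, prop.major, prop.minor);
    return cudaSetDevice(gpuid) == cudaSuccess;
}
```

So yes: replacing the hard-coded 0 with the parsed gpuid at this spot is exactly how you select from n GPUs.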
731
// Full CUDA reset between tasks
cudaDeviceReset();
cudaSetDevice(0); // Is this where the GPU is selected from multiple? Should G (gpuid) go here instead of 0? -dev_nullish
cudaDeviceSetLimit(cudaLimitStackSize, 64 * 1024);
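To answer the question in that comment: yes, cudaSetDevice is where the device is chosen, and passing the parsed gpuid instead of the literal 0 would bind this instance to the selected card. Note that cudaSetDevice must be called again after cudaDeviceReset, because the reset destroys the context, and per-context limits must be reapplied too. A sketch, assuming gpuid is in scope from the command-line parsing:

```cuda
#include <cuda_runtime.h>

// Sketch: full CUDA reset between tasks, re-binding to the user-selected
// device rather than device 0. Assumes `gpuid` was parsed from --gpuId.
void resetBetweenTasks(int gpuid) {
    cudaDeviceReset();                                  // destroy the current context
    cudaSetDevice(gpuid);                               // re-select the chosen GPU, not 0
    cudaDeviceSetLimit(cudaLimitStackSize, 64 * 1024);  // limits are per-context: reapply
}
```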
Suggestions?
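One hedged note on the mixed-card concern above: a per-card rebuild may not be necessary, because nvcc can embed code for several architectures in a single fat binary via repeated -gencode flags; the runtime then picks the matching SASS (or JIT-compiles the PTX) per device. The SM versions below are illustrative; adjust to the actual cards on the rig:

```shell
# Illustrative nvcc invocation: embed SASS for several SM versions in one
# binary, plus PTX for forward compatibility, so one build serves a mixed rig.
nvcc -O3 main.cu -o worker \
     -gencode arch=compute_75,code=sm_75 \
     -gencode arch=compute_86,code=sm_86 \
     -gencode arch=compute_86,code=compute_86
```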