MPI version installation error #23

Open
shaman-narayanasamy opened this issue Sep 29, 2017 · 1 comment
@shaman-narayanasamy

Dear authors,

It is me again :)

The regular version of nonpareil is now installed. I am now attempting to install nonpareil-mpi; however, below is the error that I receive:

$ make nonpareil-mpi
cd enveomics/ && make nonpareil-mpi
make[1]: Entering directory '/mnt/gaiagpfs/projects/ecosystem_biology/local_tools/nonpareil/enveomics'
mpic++ -DENVEOMICS_MULTI_NODE universal.cpp -c -Wall -std=c++11
mpic++ -DENVEOMICS_MULTI_NODE multinode.cpp -c -Wall -std=c++11
multinode.cpp: In function ‘void init_multinode(int&, char**&, int&, int&)’:
multinode.cpp:16:4: error: ‘MPI’ has not been declared
    MPI::Init(argc, argv);
    ^~~
multinode.cpp:17:16: error: ‘MPI’ has not been declared
    processes = MPI::COMM_WORLD.Get_size();
                ^~~
multinode.cpp:18:16: error: ‘MPI’ has not been declared
    processID = MPI::COMM_WORLD.Get_rank();
                ^~~
multinode.cpp: In function ‘void finalize_multinode()’:
multinode.cpp:21:4: error: ‘MPI’ has not been declared
    MPI::Finalize();
    ^~~
multinode.cpp: In function ‘void barrier_multinode()’:
multinode.cpp:25:4: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Barrier();
    ^~~
multinode.cpp: In function ‘size_t broadcast_int(size_t)’:
multinode.cpp:31:4: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Bcast(buffer, 1, MPI::INT, 0);
    ^~~
multinode.cpp:31:37: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Bcast(buffer, 1, MPI::INT, 0);
                                     ^~~
multinode.cpp: In function ‘double broadcast_double(double)’:
multinode.cpp:40:4: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Bcast(buffer, 1, MPI::DOUBLE, 0);
    ^~~
multinode.cpp:40:37: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Bcast(buffer, 1, MPI::DOUBLE, 0);
                                     ^~~
multinode.cpp: In function ‘char* broadcast_char(char*, size_t)’:
multinode.cpp:47:4: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Bcast(value, size, MPI::CHAR, 0);
    ^~~
multinode.cpp:47:39: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Bcast(value, size, MPI::CHAR, 0);
                                       ^~~
multinode.cpp: In function ‘char broadcast_char(char)’:
multinode.cpp:53:4: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Bcast(buffer, 1, MPI::CHAR, 0);
    ^~~
multinode.cpp:53:37: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Bcast(buffer, 1, MPI::CHAR, 0);
                                     ^~~
multinode.cpp: In function ‘void reduce_sum_int(int*, int*, int)’:
multinode.cpp:60:4: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Reduce(send, receive, size, MPI::INT, MPI::SUM, 0);
    ^~~
multinode.cpp:60:48: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Reduce(send, receive, size, MPI::INT, MPI::SUM, 0);
                                                ^~~
multinode.cpp:60:58: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Reduce(send, receive, size, MPI::INT, MPI::SUM, 0);
                                                          ^~~
multinode.cpp: In function ‘void reduce_sum_int(int, int)’:
multinode.cpp:65:4: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Reduce(send_ar, receive_ar, 1, MPI::INT, MPI::SUM, 0);
    ^~~
multinode.cpp:65:51: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Reduce(send_ar, receive_ar, 1, MPI::INT, MPI::SUM, 0);
                                                   ^~~
multinode.cpp:65:61: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Reduce(send_ar, receive_ar, 1, MPI::INT, MPI::SUM, 0);
                                                             ^~~
multinode.cpp: In function ‘void reduce_sum_double(double*, double*, int)’:
multinode.cpp:70:4: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Reduce(send, receive, size, MPI::DOUBLE, MPI::SUM, 0);
    ^~~
multinode.cpp:70:48: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Reduce(send, receive, size, MPI::DOUBLE, MPI::SUM, 0);
                                                ^~~
multinode.cpp:70:61: error: ‘MPI’ has not been declared
    MPI::COMM_WORLD.Reduce(send, receive, size, MPI::DOUBLE, MPI::SUM, 0);
                                                             ^~~
Makefile:32: recipe for target 'nonpareil-mpi' failed
make[1]: *** [nonpareil-mpi] Error 1
make[1]: Leaving directory '/mnt/gaiagpfs/projects/ecosystem_biology/local_tools/nonpareil/enveomics'
Makefile:29: recipe for target 'nonpareil-mpi' failed
make: *** [nonpareil-mpi] Error 2
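
In case it helps with reproducing, here is a minimal standalone file (my own sketch, mirroring the MPI:: calls from multinode.cpp; the file name is made up) that could be compiled with the same mpic++ to check whether the C++ MPI bindings are visible at all:

// mpi_cxx_check.cpp -- minimal sketch mirroring the MPI:: calls used in multinode.cpp
// compile with: mpic++ -std=c++11 mpi_cxx_check.cpp -o mpi_cxx_check
#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[]) {
    MPI::Init(argc, argv);                        // same call that fails in multinode.cpp:16
    int processes = MPI::COMM_WORLD.Get_size();   // total number of MPI ranks
    int processID = MPI::COMM_WORLD.Get_rank();   // rank of this process
    std::printf("rank %d of %d\n", processID, processes);
    MPI::COMM_WORLD.Barrier();                    // synchronize before shutting down
    MPI::Finalize();
    return 0;
}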

Below is the version information reported by the mpic++ compiler wrapper available on my cluster.

$ mpic++ --version
g++ (GCC) 6.3.0
Copyright (C) 2016 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
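
(mpic++ --version only reports the underlying compiler, g++ 6.3.0. If the cluster provides OpenMPI, the commands below should report the MPI library itself; this is a guess at the setup, and other MPI implementations name these differently.)

$ mpirun --version      # e.g. "mpirun (Open MPI) x.y.z"
$ ompi_info | head      # Open MPI only: build and version details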

Let me know if you need more information. It would be great to get the MPI version of the tool working.

Best regards,
Shaman

@lmrodriguezr
Owner

I believe the problem is missing libraries on your machine. Which OS are you using? If you have apt-get, you can run:

sudo apt-get install libopenmpi-dev

Note that since Nonpareil v3.0 there is a new kmer-based kernel available (-T kmer) that makes computations much faster, but does not support MPI. MPI can only be used with the traditional alignment kernel (-T alignment).
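
For reference, once the MPI build succeeds, a run would look roughly like the sketch below. Only -T alignment is taken from the note above; the remaining options and file names are placeholders and should be checked against nonpareil-mpi -h.

# Install the OpenMPI development headers (Debian/Ubuntu) and rebuild:
sudo apt-get install libopenmpi-dev
make nonpareil-mpi

# Launch across 4 MPI processes with the alignment kernel (the kmer kernel has no MPI support):
mpirun -np 4 nonpareil-mpi -T alignment -s reads.fasta -f fasta -b output_prefix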
