std::vector<std::pair<ncnn::Layer*, ncnn::Option>> net; // manually built net; the layers run sequentially
std::vector<ncnn::Mat> blob_mats;                        // input/output blobs for the net; must be resized to net.size() + 1 before infer() is called
ncnn::Option opt;

bool infer(const ncnn::Mat& input, ncnn::Mat& output) {
    blob_mats[0] = input;
    for (size_t i = 0; i < net.size(); i++) {
        net[i].first->forward(blob_mats[i], blob_mats[i + 1], net[i].second);
    }
    output = blob_mats[net.size()];
    return true;
}
With the same model, why is running it this way about three times slower than using the model converted from ONNX to ncnn?
@maxint @cook
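For comparison, the usual ncnn path is to load the converted model into an ncnn::Net and run it through an Extractor, which lets ncnn apply its graph-level optimizations (blob memory reuse in light mode, packing layout, layer fusion) that a hand-rolled per-layer loop with default Options may miss. A minimal sketch of that path, where the file names model.param/model.bin and the blob names "input"/"output" are placeholders:

```cpp
#include "net.h"  // ncnn

// Sketch of the standard ncnn inference path, for comparison with the
// manual per-layer loop above. File and blob names are placeholders.
bool infer_with_net(const ncnn::Mat& in, ncnn::Mat& out) {
    ncnn::Net net;
    if (net.load_param("model.param") != 0) return false;  // load the converted graph
    if (net.load_model("model.bin") != 0) return false;    // load the weights

    ncnn::Extractor ex = net.create_extractor();
    ex.input("input", in);                   // feed the input blob
    return ex.extract("output", out) == 0;   // run the graph and fetch the result
}
```

One likely source of the slowdown: when layers are driven one by one with per-layer Options, the activations may be converted in and out of ncnn's packed layout between every pair of layers, whereas the Net path keeps a consistent layout across the whole graph.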