Installing libTorch
On the official site https://pytorch.org/, pick the libTorch build that matches your CUDA version, download it, and simply extract the archive.
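For example, on Linux the whole installation amounts to downloading and unzipping one archive. A minimal sketch (the URL below is only an illustration for one CUDA 10.2 build; copy the actual link shown by the selector on the site, and unpack to whatever directory you later pass as CMAKE_PREFIX_PATH):

wget https://download.pytorch.org/libtorch/cu102/libtorch-cxx11-abi-shared-with-deps-1.8.1%2Bcu102.zip
unzip libtorch-cxx11-abi-shared-with-deps-1.8.1%2Bcu102.zip -d ~/github/
# ~/github/libtorch now contains include/, lib/ and share/cmake/;
# this is the path referenced by CMAKE_PREFIX_PATH in the steps below.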
Building TorchVision
Upgrading CMake
wget https://cmake.org/files/v3.20/cmake-3.20.0-linux-x86_64.tar.gz
tar -xzvf cmake-3.20.0-linux-x86_64.tar.gz
# Move the extracted package to /opt (any directory works; just pick one you won't delete by accident)
sudo mv cmake-3.20.0-linux-x86_64 /opt/cmake-3.20.0
# Create symlinks
sudo ln -sf /opt/cmake-3.20.0/bin/* /usr/bin/
# Check the cmake version
cmake --version
Compiling and installing TorchVision
Download: https://github.com/pytorch/vision
Note: since this Ubuntu install defaults to python3.5, it is best to use torchvision 0.6.1 or earlier. If the default python is wrong (i.e. it is python2.7), make python3.5 the default by setting the alternatives priorities as follows:
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2.7 1
sudo update-alternatives --install /usr/bin/python python /usr/bin/python3.5 2
Build and install
mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH=/home/alex/github/libtorch -DWITH_CUDA=ON -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_STANDARD=14 ..
make -j4
sudo make install
Errors
During cmake:
links to target "Python3::Python" but the target was not found. Perhaps a find_package() call is missing for an IMPORTED
Fix: this one is odd; python3 is already the default, yet CMake still reports it cannot be found. Installing the python3 development package resolves it:
sudo apt install python3-dev
During make:
shufflenetv2.cpp:83:33: error: call of overloaded ‘channel_shuffle(at::Tensor&, int)’ is ambiguous
Fix: the call is ambiguous because the unqualified name matches more than one function (TorchVision's own helper and an overload pulled in from libTorch). Qualify the call at the reported location as vision::models::channel_shuffle(), as sketched below.
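A rough sketch of the edit in shufflenetv2.cpp (the exact variable name and arguments on line 83 may differ between torchvision versions; the point is only the namespace qualification):

// Before: unqualified call, ambiguous with another channel_shuffle overload
out = channel_shuffle(out, 2);
// After: explicitly select TorchVision's implementation
out = vision::models::channel_shuffle(out, 2);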
Code example
CMakeLists.txt
cmake_minimum_required(VERSION 3.19)
project(LibTorchHelloWorld)

set(CMAKE_CXX_STANDARD 14)
set(CMAKE_PREFIX_PATH "/usr/local/lib;/home/alex/github/libtorch")

find_package(TorchVision REQUIRED)
find_package(Torch REQUIRED)
include_directories(${TORCH_INCLUDE_DIRS})

add_executable(LibTorchHelloWorld main.cpp)
target_compile_features(LibTorchHelloWorld PUBLIC cxx_range_for)
target_link_libraries(LibTorchHelloWorld ${TORCH_LIBRARIES} TorchVision::TorchVision)
set_property(TARGET LibTorchHelloWorld PROPERTY CXX_STANDARD 14)
main.cpp
#include <iostream>
#include <torch/torch.h>
#include <torchvision/vision.h>
#include <torchvision/models/resnet.h>

// Define a new Module.
struct Net : torch::nn::Module {
  Net() {
    // Construct and register three Linear submodules.
    fc1 = register_module("fc1", torch::nn::Linear(784, 64));
    fc2 = register_module("fc2", torch::nn::Linear(64, 32));
    fc3 = register_module("fc3", torch::nn::Linear(32, 10));
  }

  // Implement the Net's algorithm.
  torch::Tensor forward(torch::Tensor x) {
    // Use one of many tensor manipulation functions.
    x = torch::relu(fc1->forward(x.reshape({x.size(0), 784})));
    x = torch::dropout(x, /*p=*/0.5, /*train=*/is_training());
    x = torch::relu(fc2->forward(x));
    x = torch::log_softmax(fc3->forward(x), /*dim=*/1);
    return x;
  }

  // Use one of many "standard library" modules.
  torch::nn::Linear fc1{nullptr}, fc2{nullptr}, fc3{nullptr};
};

int main() {
  auto model = vision::models::ResNet18();
  model->eval();

  // Create a random input tensor and run it through the model.
  auto in = torch::rand({1, 3, 10, 10});
  auto out = model->forward(in);
  std::cout << out.sizes();

  if (torch::cuda::is_available()) {
    // Move model and inputs to GPU
    model->to(torch::kCUDA);
    auto gpu_in = in.to(torch::kCUDA);
    auto gpu_out = model->forward(gpu_in);
    std::cout << gpu_out.sizes();
  }
}
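Building and running the example follows the same pattern as the TorchVision build above; a sketch, assuming CMakeLists.txt and main.cpp sit in the current directory and the CMAKE_PREFIX_PATH entries point at your actual libTorch and TorchVision install locations:

mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j4
./LibTorchHelloWorld
# Should print the ResNet18 output size, [1, 1000], once on the CPU
# and a second time if a CUDA device is available.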