
Series

Guide

requirements:

  • windows: 10
  • caffe: caffe-windows
  • nvidia driver: gtx 1060 382.05 (gtx 970m)
  • GPU arch(s): sm_61 (sm_52)
  • cuda: 8.0
  • cudnn: 5.0.5
  • opencv: 3.1.0 WITH_CUDA (compiled from source)
  • other libs: libraries_v140_x64_py27_1.1.0.tar.bz2

cuda+cudnn

  1. download and install the standalone driver for the GTX 970 or GTX 1060 from here.
  2. download and install cuda_8.0.61_win10.exe; skip the NVIDIA driver and install the toolkit only.
  3. download and install cudnn-8.0-windows10-x64-v5.0-ga.zip.

nvidia driver

the driver can be installed standalone or from cuda_xxx_win10.exe.
we choose to install it standalone.

download the proper driver for the GTX 970 or GTX 1060, e.g. 398.36-notebook-win10-64bit-international-whql.exe, from here

download driver

cuda toolkit

ref: cuda install guides for windows

download cuda_8.0.61_win10.exe from here

The CUDA Toolkit installs the CUDA driver and tools needed to create, build and run a CUDA application as well as libraries, header files, CUDA samples source code, and other resources

cuda_8.0.61_win10.exe includes: Nvidia driver + toolkit.

install to

  • the driver installs to C:/Program Files/NVIDIA Corporation and C:/ProgramData/NVIDIA Corporation
  • the toolkit installs to C:/Program Files/NVIDIA GPU Computing Toolkit, which contains the headers, libs and tools for compiling CUDA applications. C:/ProgramData/NVIDIA GPU Computing Toolkit contains the CUDA plugins for Visual Studio.

cuda driver

cuda toolkit

cuda driver data

cuda toolkit data

verify

cd C:\ProgramData\NVIDIA Corporation\CUDA Samples\v8.0\bin\win64\Release
./deviceQuery.exe
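
if deviceQuery is not available, a minimal CUDA runtime check can be compiled with nvcc and run instead (a sketch; the file name check_cuda.cu is arbitrary):

#include <cuda_runtime.h>
#include <cstdio>

// build: nvcc check_cuda.cu -o check_cuda
int main() {
    int n = 0;
    cudaGetDeviceCount(&n); // number of CUDA-capable devices
    for (int i = 0; i < n; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i); // name and compute capability
        printf("GPU %d: %s (sm_%d%d)\n", i, prop.name, prop.major, prop.minor);
    }
    return 0;
}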

cudnn

extract cudnn-8.0-windows10-x64-v5.0-ga.zip and copy include, lib and bin to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0
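
to double-check that the copied headers and libs are the ones picked up at build time, a small sketch against the cuDNN API (link against cudnn.lib) prints the header and library versions:

#include <cudnn.h>
#include <cstdio>

// CUDNN_MAJOR/MINOR/PATCHLEVEL come from the copied cudnn.h,
// cudnnGetVersion() from the copied cudnn library
int main() {
    printf("cudnn header: %d.%d.%d, cudnn library: %zu\n",
           CUDNN_MAJOR, CUDNN_MINOR, CUDNN_PATCHLEVEL, cudnnGetVersion());
    return 0;
}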

cudnn

check cuda

nvidia driver and cuda software installation

compile

download

  1. place caffe-windows at C:/compile/caffe-windows
  2. extract libraries_v140_x64_py27_1.1.0.tar.bz2 to C:\Users\zunli\.caffe\dependencies\libraries_v140_x64_py27_1.1.0\libraries

config

edit C:\Users\zunli\.caffe\dependencies\libraries_v140_x64_py27_1.1.0\libraries\caffe-builder-config.cmake

# BOOST config
set(BOOST_ROOT "C:/Boost/")
set(BOOST_INCLUDEDIR ${BOOST_ROOT}/include/boost-1_64 CACHE PATH "")
set(BOOST_LIBRARYDIR ${BOOST_ROOT}/lib CACHE PATH "")
set(Boost_USE_MULTITHREADED ON CACHE BOOL "")
set(Boost_USE_STATIC_LIBS ON CACHE BOOL "")
set(Boost_USE_STATIC_RUNTIME OFF CACHE BOOL "")

edit caffe-windows/cmake/Dependencies.cmake

set(Boost_USE_STATIC_LIBS ON)
find_package(Boost 1.64 REQUIRED COMPONENTS system thread filesystem)

Tips:
(1) we use Boost 1.64 from C:\Boost\ instead of the Boost 1.61 shipped in the caffe dependencies (C:\Users\zunli\.caffe\dependencies\libraries_v140_x64_py27_1.1.0\libraries\), because we have compiled PCL 1.8.1 against static Boost 1.64.
(2) we use the OpenCV 3.1 shipped in the caffe dependencies (C:\Users\zunli\.caffe\dependencies\libraries_v140_x64_py27_1.1.0\libraries\x64\vc14\lib) instead of the OpenCV 3.4 in C:/Program Files/opencv. (opencv3.1 <==== opencv3.4)

configure caffe with

cd caffe
mkdir build && cd build && cmake-gui ..

with options

BLAS                 Open # Atlas, Open, MKL
BUILD_SHARED_LIBS        OFF # build static library
CMAKE_CONFIGURATION_TYPES   Release
CMAKE_CXX_FLAGS_RELEASE    /MD /O2 /Ob2 /DNDEBUG /MP

CUDA_ARCH_BIN  3.0 3.5 5.0 5.2 6.0 6.1 # very time-consuming
CUDA_ARCH_NAME Manual
CUDA_ARCH_PTX 3.0

Use Boost 1.64

caffe cuda arch

cudnn

opencv with cuda

configure and output

Selecting Windows SDK version 10.0.14393.0 to target Windows 10.0.15063.
Boost version: 1.64.0
Found the following Boost libraries:
system
thread
filesystem
chrono
date_time
atomic
Found gflags (include: C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/include, library: gflags_shared)
Found glog (include: C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/include, library: glog)
Found PROTOBUF Compiler: C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/bin/protoc.exe
Found lmdb (include: C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/include, library: lmdb)
Found LevelDB (include: C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/include, library: leveldb)
Found Snappy (include: C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/include, library: snappy_static;optimized;C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/lib/caffezlib.lib;debug;C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/lib/caffezlibd.lib)
CUDA detected: 8.0
Found cuDNN: ver. 5.0.5 found (include: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/include, library: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/lib/x64/cudnn.lib)
Added CUDA NVCC flags for: sm_61
OpenCV found (C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries)
Found OpenBLAS libraries: C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/lib/libopenblas.dll.a
Found OpenBLAS include: C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/include
NumPy ver. 1.11.3 found (include: C:/Python27/lib/site-packages/numpy/core/include)
Boost version: 1.64.0
Found the following Boost libraries:
python

******************* Caffe Configuration Summary *******************
General:
Version : 1.0.0
Git : unknown
System : Windows
C++ compiler : C:/Program Files (x86)/Microsoft Visual Studio 14.0/VC/bin/x86_amd64/cl.exe
Release CXX flags : /MD /O2 /Ob2 /DNDEBUG /MP /DWIN32 /D_WINDOWS /W3 /GR /EHsc
Debug CXX flags : /MDd /Zi /Ob0 /Od /RTC1 /DWIN32 /D_WINDOWS /W3 /GR /EHsc
Build type : Release

BUILD_SHARED_LIBS : OFF
BUILD_python : ON
BUILD_matlab : OFF
BUILD_docs :
CPU_ONLY : OFF
USE_OPENCV : ON
USE_LEVELDB : ON
USE_LMDB : ON
USE_NCCL : OFF
ALLOW_LMDB_NOLOCK : OFF

Dependencies:
BLAS : Yes (Open)
Boost : Yes (ver. 1.64)
glog : Yes
gflags : Yes
protobuf : Yes (ver. 3.1.0)
lmdb : Yes (ver. 0.9.70)
LevelDB : Yes (ver. 1.18)
Snappy : Yes (ver. 1.1.1)
OpenCV : Yes (ver. 3.1.0)
CUDA : Yes (ver. 8.0)

NVIDIA CUDA:
Target GPU(s) : Auto
GPU arch(s) : sm_61
cuDNN : Yes (ver. 5.0.5)

Python:
Interpreter : C:/Python27/python.exe (ver. 2.7.13)
Libraries : C:/Python27/libs/python27.lib (ver 2.7.13)
NumPy : C:/Python27/lib/site-packages/numpy/core/include (ver 1.11.3)

Install:
Install path : C:/car_libs/caffe

Configuring done

build and install

tips: the Visual Studio generator cannot build caffe as a shared library (see the CMake error below), so we build the static caffe library.

CMake Error at CMakeLists.txt:66 (message):
  The Visual Studio generator cannot build a shared library.  Use the Ninja
  generator instead.
  

Build the Release x64 configuration with Visual Studio 2015; 38 modules will be generated, and we install to C:/car_libs/caffe/.
build with vs

build result

install to C:/car_libs/caffe.

caffe usage

CMakeLists.txt

# Boost
if(MSVC)
    # use static boost on windows
    set(Boost_USE_STATIC_LIBS ON)
else()
    # use shared (release) boost on linux
    set(Boost_USE_STATIC_LIBS OFF)
endif(MSVC)

set(Boost_USE_MULTITHREADED ON)
# Find Boost 1.64 (caffe also uses Boost 1.64)
find_package(Boost 1.64 REQUIRED COMPONENTS serialization date_time system filesystem thread timer math_tr1)

# opencv
SET(OpenCV_DIR "C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/")
find_package(OpenCV REQUIRED COMPONENTS core highgui imgproc features2d calib3d) # nofree for 2.4

# caffe
set(Caffe_DIR "C:/car_libs/caffe/share/Caffe/")
find_package(Caffe)
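
for reference, a minimal C++ sketch of using the installed library; the prototxt/caffemodel paths are placeholders, not files shipped with this build:

#include <caffe/caffe.hpp>
#include <vector>

int main() {
    caffe::Caffe::set_mode(caffe::Caffe::GPU);

    // load the network definition and the trained weights (placeholder paths)
    caffe::Net<float> net("deploy.prototxt", caffe::TEST);
    net.CopyTrainedLayersFrom("weights.caffemodel");

    // one forward pass; the returned blobs are the network outputs
    const std::vector<caffe::Blob<float>*>& outputs = net.Forward();
    return outputs.empty() ? 1 : 0;
}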

When we use the caffe lib in our program, link errors occur, and we need to fix the CaffeTargets-release.cmake file.

usage error fix

(1) error with shared.lib

LNK1181	unable to open "gflags_shared.lib"

solution:

vim C:/car_libs/caffe/share/Caffe/CaffeTargets-release.cmake

# remove _shared -shared
:1,$s/_shared//g
:1,$s/-shared//g

(2) error with hdf5

hdf5.lib===>libcaffehdf5.lib
hdf5_hl.lib===>libcaffehdf5_hl.lib

 :1,$s/hdf5/libcaffehdf5/g

(3) error with libopenblas

LNK1181	unable to open "libopenblas.dll.a.lib"

solution:

cd C:\Users\zunli\.caffe\dependencies\libraries_v140_x64_py27_1.1.0\libraries\lib and

  • copy libopenblas.a ===> libopenblas.a.lib
  • copy libopenblas.dll.a ===> libopenblas.dll.a.lib

(4) error NtClose

error LNK2019: unresolved external symbol NtClose referenced in function mdb_env_map

solution:

copy `C:/Program Files (x86)/Windows Kits/10/Lib/10.0.14393.0/um/x64/ntdll.lib` to `C:\Users\zunli\.caffe\dependencies\libraries_v140_x64_py27_1.1.0\libraries\lib`
copy `C:\Windows\SysWOW64\ntdll.dll` to `C:\Users\zunli\.caffe\dependencies\libraries_v140_x64_py27_1.1.0\libraries\bin`

CaffeTargets-release.cmake

edit C:\car_libs\caffe\share\Caffe\CaffeTargets-release.cmake

#----------------------------------------------------------------
# Generated CMake target import file for configuration "Release".
#----------------------------------------------------------------

# Commands may need to know the format version.
set(CMAKE_IMPORT_FILE_VERSION 1)

# Import target "caffe" for configuration "Release"
set_property(TARGET caffe APPEND PROPERTY IMPORTED_CONFIGURATIONS RELEASE)
set_target_properties(caffe PROPERTIES
IMPORTED_LINK_INTERFACE_LANGUAGES_RELEASE "CXX"
IMPORTED_LINK_INTERFACE_LIBRARIES_RELEASE
"caffeproto;C:/Boost/lib/libboost_system-vc140-mt-1_64.lib;C:/Boost/lib/libboost_thread-vc140-mt-1_64.lib;C:/Boost/lib/libboost_filesystem-vc140-mt-1_64.lib;C:/Boost/lib/libboost_chrono-vc140-mt-1_64.lib;C:/Boost/lib/libboost_date_time-vc140-mt-1_64.lib;C:/Boost/lib/libboost_atomic-vc140-mt-1_64.lib;C:/Boost/lib/libboost_python-vc140-mt-1_64.lib;caffehdf5.lib;caffehdf5_cpp.lib;caffehdf5_hl.lib;caffehdf5_hl_cpp.lib;caffezlib.lib;caffezlibstatic.lib;gflags;glog;leveldb.lib;libcaffehdf5.lib;libcaffehdf5_cpp.lib;libcaffehdf5_hl.lib;libcaffehdf5_hl_cpp.lib;libprotobuf.lib;libprotoc.lib;lmdb.lib;snappy.lib;snappy_static.lib;libopenblas.dll.a.lib;ntdll.lib;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/lib/x64/cudart.lib;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/lib/x64/curand.lib;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/lib/x64/cublas.lib;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/lib/x64/cublas_device.lib;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v8.0/lib/x64/cudnn.lib;opencv_core;opencv_highgui;opencv_imgproc;opencv_imgcodecs;C:/Python27/libs/python27.lib;"
IMPORTED_LOCATION_RELEASE "${_IMPORT_PREFIX}/lib/caffe.lib"
)

list(APPEND _IMPORT_CHECK_TARGETS caffe )
list(APPEND _IMPORT_CHECK_FILES_FOR_caffe "${_IMPORT_PREFIX}/lib/caffe.lib" )

# Import target "caffeproto" for configuration "Release"
set_property(TARGET caffeproto APPEND PROPERTY IMPORTED_CONFIGURATIONS RELEASE)
set_target_properties(caffeproto PROPERTIES
IMPORTED_LINK_INTERFACE_LANGUAGES_RELEASE "CXX"
IMPORTED_LINK_INTERFACE_LIBRARIES_RELEASE "C:/Users/zunli/.caffe/dependencies/libraries_v140_x64_py27_1.1.0/libraries/lib/libprotobuf.lib"
IMPORTED_LOCATION_RELEASE "${_IMPORT_PREFIX}/lib/caffeproto.lib"
)

list(APPEND _IMPORT_CHECK_TARGETS caffeproto )
list(APPEND _IMPORT_CHECK_FILES_FOR_caffeproto "${_IMPORT_PREFIX}/lib/caffeproto.lib" )

# Commands beyond this point should not need to know the version.
set(CMAKE_IMPORT_FILE_VERSION)

compile errors with caffe.pb.h

tips: sometimes we do not need to do this.

CMakeLists.txt

add_definitions( -DGLOG_NO_ABBREVIATED_SEVERITIES ) 
add_definitions( -DNOMINMAX ) # for pcl min,max
add_definitions( -DWIN32_LEAN_AND_MEAN )
#add_definitions( -DNO_STRICT ) # no use for caffe.pb.h

vim C:\car_libs\caffe\include\caffe\proto\caffe.pb.h

typedef ParamSpec_DimCheckMode DimCheckMode;
static const DimCheckMode STRICT = ParamSpec_DimCheckMode_STRICT;
static const DimCheckMode PERMISSIVE = ParamSpec_DimCheckMode_PERMISSIVE;

typedef V1LayerParameter_DimCheckMode DimCheckMode;
static const DimCheckMode STRICT = V1LayerParameter_DimCheckMode_STRICT;
static const DimCheckMode PERMISSIVE = V1LayerParameter_DimCheckMode_PERMISSIVE;

replace STRICT and PERMISSIVE with _STRICT and _PERMISSIVE.

typedef ParamSpec_DimCheckMode DimCheckMode;
static const DimCheckMode _STRICT = ParamSpec_DimCheckMode_STRICT;
static const DimCheckMode _PERMISSIVE = ParamSpec_DimCheckMode_PERMISSIVE;

typedef V1LayerParameter_DimCheckMode DimCheckMode;
static const DimCheckMode _STRICT = V1LayerParameter_DimCheckMode_STRICT;
static const DimCheckMode _PERMISSIVE = V1LayerParameter_DimCheckMode_PERMISSIVE;

caffe.pb.h compile errors

run exe

  • copy the dlls in C:/car_libs/caffe/bin/*.dll to the bin/release folder.
  • copy the OpenCV dlls to the bin/release folder.

Errors and Solutions

nvidia driver not compatible with windows 10

problem: installing the nvidia driver fails on windows 10
nvidia driver not compatible with windows 10

solution

  1. download Windows10Upgrade
  2. run Windows10Upgrade.exe to upgrade Windows 10 to the latest version.
  3. install nvidia driver again.
  4. OK.

Reference

History

  • 20180413 created.

Guide

install software

DB creation & initial data registration

  1. add the environment variable PGPASSWORD with value postgres to the system environment variables, to avoid entering the password for psql.

  2. start pgAdmin and create the database mago3d with the following options.

    Name: mago3d,
    Encoding: UTF-8,
    Template: template0,
    Collation: C,
    Character type: C,
    Connection Limit: -1

    or with sql

    CREATE DATABASE mago3d
    WITH
    OWNER = postgres
    TEMPLATE = template0
    ENCODING = 'UTF8'
    LC_COLLATE = 'C'
    LC_CTYPE = 'C'
    TABLESPACE = pg_default
    CONNECTION LIMIT = -1;

    tips: how to change postgres password?

    psql -h 192.168.1.100 -p 5432 -U postgres -d mydb
    psql> alter user postgres with password 'new password';
    psql> \h \q

  3. create the postgis extension for the mago3d database.

mago3d with postgis extension

  1. load data into mago3d
    edit mago3d-core/src/doc/en/database/dbinit.bat

    :: cd C:\PostgreSQL\9.6\bin\
    cd C:\Program Files\PostgreSQL\10\bin

    and run dbinit.bat from the command line.

    C:\git\repository\mago3d\mago3d-core\src\doc\en\database> .\dbinit.bat
  2. db symbolic link

Execute Command Line Prompt (cmd.exe) with administrative privileges

C:\git\repository\mago3d\mago3d-user\src\main\webapp > mklink /d "C:\git\repository\mago3d\mago3d-user\src\main\webapp\f4d" "C:\f4d"

lombok plugin with eclipse

wget http://projectlombok.googlecode.com/files/lombok.jar 
java -jar ./lombok.jar

select path to eclipse and click install/update
install lombok plugin for eclipse

check the result:
(1) -javaagent:C:\Users\zunli\eclipse\jee-oxygen\eclipse\lombok.jar in C:\Users\zunli\eclipse\jee-oxygen\eclipse\eclipse.ini
(2) lombok.jar has been copied to eclipse plugins folder C:\Users\zunli\eclipse\jee-oxygen\eclipse\plugins.

restart eclipse and import settings.gradle.

install lombok plugin for IDEA

Install lombok plugin by File->Settings->Plugins->Browse repositories... and search for lombok online and install.

install lombok plugin

enable annotation processing

File->Settings->Compiler->Annotation Processors->Enable Annotation Processing (Check ON)

enable annotation processing

import lombok libraries

File->Project Structure->Libraries->...

import lombok as library

run adminApplication

config

By default, the program uses postgres as the username and postgres as the password for the mago3d database.

spring.datasource.username=nvdm6E6o5Fr3x2a877fl/w==
spring.datasource.password=nvdm6E6o5Fr3x2a877fl/w==

run admin

start /mago3D-admin/src/main/java/com/gaia3d/mago3DAdminApplication.java

and access http://localhost:9090/login/login.do.
Log in with admin as the username and admin as the password.

Reference

History

  • 20180331: created.

Series

Guide

compile

git clone https://github.com/davisking/dlib.git
cd dlib && mkdir build && cd build
cmake-gui ..

with options

CMAKE_INSTALL_PREFIX  C:/Program Files/dlib

configure and compile with Visual Studio 2015 and install to C:/Program Files/dlib.

By default, dlib19.10.0_release_64bit_msvc1900.lib will be generated.

CMakeLists.txt

cmake_minimum_required(VERSION 2.8.12)
# Every project needs a name. We call this the "examples" project.
project(examples)


# Tell cmake we will need dlib. This command will pull in dlib and compile it
# into your project. Note that you don't need to compile or install dlib. All
# cmake needs is the dlib source code folder and it will take care of everything.
add_subdirectory(../dlib dlib_build)

add_executable(demo demo.cpp)
target_link_libraries(demo dlib::dlib)

or

find_package(dlib REQUIRED)

if(MSVC)
set(dlib_LIBRARIES "C:/Program Files/dlib/lib/dlib.lib") # replace dlib::dlib
else()
endif(MSVC)
# ${dlib_INCLUDE_DIRS} and ${dlib_LIBRARIES} are deprecated, simply use target_link_libraries(your_app dlib::dlib)
MESSAGE( [Main] " dlib_INCLUDE_DIRS = ${dlib_INCLUDE_DIRS}")
MESSAGE( [Main] " dlib_LIBRARIES = ${dlib_LIBRARIES}")


add_executable(demo demo.cpp)
#target_link_libraries(demo ${dlib_LIBRARIES})
target_link_libraries(demo dlib::dlib)
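
the demo.cpp referenced above can be anything that exercises dlib; a minimal sketch:

#include <dlib/matrix.h>
#include <iostream>

int main() {
    // 2x2 matrix filled with dlib's comma initializer
    dlib::matrix<double, 2, 2> m;
    m = 1, 2,
        3, 4;
    std::cout << "m =\n" << m << "trace(m) = " << dlib::trace(m) << std::endl;
    return 0;
}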

dlib for python api

cd tools/python
mkdir build && cd build
cmake-gui ..

compile dlib_python with Visual Studio 2015 and dlib.pyd will be generated.

copy dlib.pyd to C:\Python27\Lib\site-packages.

test dlib for python

import dlib
dir(dlib)

Reference

History

  • 20180330: created.

Guide

header files

#include <pcl/octree/octree_search.h>
#include <pcl/octree/octree_pointcloud_changedetector.h>
#include <pcl/compression/octree_pointcloud_compression.h>
void octree_search()
{
    //srand((unsigned int)time(NULL));
    srand(1234);

    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);

    // Generate pointcloud data
    cloud->width = 1000;
    cloud->height = 1;
    cloud->points.resize(cloud->width * cloud->height);

    for (size_t i = 0; i < cloud->points.size(); ++i)
    {
        cloud->points[i].x = 1024.0f * rand() / (RAND_MAX + 1.0f);
        cloud->points[i].y = 1024.0f * rand() / (RAND_MAX + 1.0f);
        cloud->points[i].z = 1024.0f * rand() / (RAND_MAX + 1.0f);
    }

    float resolution = 128.0f;

    pcl::octree::OctreePointCloudSearch<pcl::PointXYZ> octree(resolution);

    octree.setInputCloud(cloud);
    octree.addPointsFromInputCloud();

    pcl::PointXYZ searchPoint;

    searchPoint.x = 1024.0f * rand() / (RAND_MAX + 1.0f);
    searchPoint.y = 1024.0f * rand() / (RAND_MAX + 1.0f);
    searchPoint.z = 1024.0f * rand() / (RAND_MAX + 1.0f);

    // Neighbors within voxel search

    // These indices relate to points which fall within the same voxel
    std::vector<int> pointIdxVec;

    if (octree.voxelSearch(searchPoint, pointIdxVec))
    {
        std::cout << "Neighbors within voxel search at (" << searchPoint.x
                  << " " << searchPoint.y
                  << " " << searchPoint.z << ")"
                  << std::endl;

        for (size_t i = 0; i < pointIdxVec.size(); ++i)
            std::cout << " " << cloud->points[pointIdxVec[i]].x
                      << " " << cloud->points[pointIdxVec[i]].y
                      << " " << cloud->points[pointIdxVec[i]].z << std::endl;
    }

    // K nearest neighbor search

    int K = 10;

    std::vector<int> pointIdxNKNSearch;
    std::vector<float> pointNKNSquaredDistance;

    std::cout << "K nearest neighbor search at (" << searchPoint.x
              << " " << searchPoint.y
              << " " << searchPoint.z
              << ") with K=" << K << std::endl;

    if (octree.nearestKSearch(searchPoint, K, pointIdxNKNSearch, pointNKNSquaredDistance) > 0)
    {
        for (size_t i = 0; i < pointIdxNKNSearch.size(); ++i)
            std::cout << " " << cloud->points[pointIdxNKNSearch[i]].x
                      << " " << cloud->points[pointIdxNKNSearch[i]].y
                      << " " << cloud->points[pointIdxNKNSearch[i]].z
                      << " (squared distance: " << pointNKNSquaredDistance[i] << ")" << std::endl;
    }

    // Neighbors within radius search

    std::vector<int> pointIdxRadiusSearch;
    std::vector<float> pointRadiusSquaredDistance;

    float radius = 256.0f * rand() / (RAND_MAX + 1.0f);

    std::cout << "Neighbors within radius search at (" << searchPoint.x
              << " " << searchPoint.y
              << " " << searchPoint.z
              << ") with radius=" << radius << std::endl;

    if (octree.radiusSearch(searchPoint, radius, pointIdxRadiusSearch, pointRadiusSquaredDistance) > 0)
    {
        for (size_t i = 0; i < pointIdxRadiusSearch.size(); ++i)
            std::cout << " " << cloud->points[pointIdxRadiusSearch[i]].x
                      << " " << cloud->points[pointIdxRadiusSearch[i]].y
                      << " " << cloud->points[pointIdxRadiusSearch[i]].z
                      << " (squared distance: " << pointRadiusSquaredDistance[i] << ")" << std::endl;
    }
}

change detection

void octree_change_detection()
{
    //srand((unsigned int)time(NULL));
    srand(1234);

    // Octree resolution - side length of octree voxels
    float resolution = 32.0f;

    // Instantiate octree-based point cloud change detection class
    pcl::octree::OctreePointCloudChangeDetector<pcl::PointXYZ> octree(resolution);

    pcl::PointCloud<pcl::PointXYZ>::Ptr cloudA(new pcl::PointCloud<pcl::PointXYZ>);

    // Generate pointcloud data for cloudA
    cloudA->width = 128;
    cloudA->height = 1;
    cloudA->points.resize(cloudA->width * cloudA->height);

    for (size_t i = 0; i < cloudA->points.size(); ++i)
    {
        cloudA->points[i].x = 64.0f * rand() / (RAND_MAX + 1.0f);
        cloudA->points[i].y = 64.0f * rand() / (RAND_MAX + 1.0f);
        cloudA->points[i].z = 64.0f * rand() / (RAND_MAX + 1.0f);
    }

    // Add points from cloudA to octree
    octree.setInputCloud(cloudA);
    octree.addPointsFromInputCloud();

    // Switch octree buffers: This resets octree but keeps previous tree structure in memory.
    octree.switchBuffers();

    pcl::PointCloud<pcl::PointXYZ>::Ptr cloudB(new pcl::PointCloud<pcl::PointXYZ>);

    // Generate pointcloud data for cloudB
    cloudB->width = 128;
    cloudB->height = 1;
    cloudB->points.resize(cloudB->width * cloudB->height);

    for (size_t i = 0; i < cloudB->points.size(); ++i)
    {
        cloudB->points[i].x = 64.0f * rand() / (RAND_MAX + 1.0f);
        cloudB->points[i].y = 64.0f * rand() / (RAND_MAX + 1.0f);
        cloudB->points[i].z = 64.0f * rand() / (RAND_MAX + 1.0f);
    }

    // Add points from cloudB to octree
    octree.setInputCloud(cloudB);
    octree.addPointsFromInputCloud();

    std::vector<int> newPointIdxVector;

    // Get vector of point indices from octree voxels which did not exist in previous buffer
    octree.getPointIndicesFromNewVoxels(newPointIdxVector);

    // Output points
    std::cout << "Output from getPointIndicesFromNewVoxels:" << std::endl;
    for (size_t i = 0; i < newPointIdxVector.size(); ++i)
        std::cout << i << "# Index:" << newPointIdxVector[i]
                  << " Point:" << cloudB->points[newPointIdxVector[i]].x << " "
                  << cloudB->points[newPointIdxVector[i]].y << " "
                  << cloudB->points[newPointIdxVector[i]].z << std::endl;
}
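
the compression header listed at the top is not exercised by the two functions above; a minimal sketch of octree point cloud compression (the function and variable names are mine, and it follows the usual encode/decode round trip on an XYZRGBA cloud):

#include <pcl/point_types.h>
#include <pcl/compression/octree_pointcloud_compression.h>
#include <iostream>
#include <sstream>

void octree_compression(pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr cloud)
{
    // octree-based codec with a medium-resolution online profile
    pcl::io::OctreePointCloudCompression<pcl::PointXYZRGBA> codec(
        pcl::io::MED_RES_ONLINE_COMPRESSION_WITH_COLOR, true);

    std::stringstream compressed; // compressed byte stream
    codec.encodePointCloud(cloud, compressed);

    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr restored(new pcl::PointCloud<pcl::PointXYZRGBA>);
    codec.decodePointCloud(compressed, restored);

    std::cout << "restored " << restored->points.size() << " points" << std::endl;
}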

Reference

History

  • 20180328: created.

Series

Guide

download

wget https://github.com/google/googletest/archive/release-1.8.0.zip 

compile

mkdir build 
cd build
cmake-gui ..

with options

BUILD_SHARED_LIBS ON
CMAKE_CONFIGURATION_TYPES Release

compile and install gtest to C:\Program Files\gtest.

CMakeLists.txt

if(MSVC)
    SET(GTEST_ROOT "C:/Program Files/gtest")
else()
    # BOOST_THREAD_LIBRARY /usr/lib/x86_64-linux-gnu/libpthread.so
    MESSAGE( [Main] " BOOST_THREAD_LIBRARY = ${BOOST_THREAD_LIBRARY}")
endif(MSVC)

find_package(GTest REQUIRED) # GTest 1.8.0
include_directories(${GTEST_INCLUDE_DIRS})

add_executable(demo demo.cpp)
target_link_libraries(demo ${GTEST_LIBRARIES} ${BOOST_THREAD_LIBRARY})
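
a minimal demo.cpp for the target above (the test names are arbitrary):

#include <gtest/gtest.h>

// a trivial test case
TEST(MathTest, Add) {
    EXPECT_EQ(4, 2 + 2);
    EXPECT_NE(5, 2 + 2);
}

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}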

Reference

History

  • 20180301: created.

Series

Guide

requirements:

  • pybind11 v2.3.dev0
  • python 2.7

install pytest

pip install pytest 

compile

git clone https://github.com/pybind/pybind11.git
cd pybind11
mkdir build
cd build
cmake-gui ..

with options

PYBIND11_CPP_STANDARD /std:c++11 # default c++14
PYTHON_EXECUTABLE C:/Python27/python.exe
CMAKE_INSTALL_PREFIX C:/Program Files/pybind11

compile with VS 2015 with x64 Release

install to C:\Program Files\pybind11 (only include and share are installed):

$ tree .

.
├── include
│   └── pybind11
│   ├── attr.h
│   ├── buffer_info.h
│   ├── cast.h
│   ├── chrono.h
│   ├── common.h
│   ├── complex.h
│   ├── detail
│   │   ├── class.h
│   │   ├── common.h
│   │   ├── descr.h
│   │   ├── init.h
│   │   ├── internals.h
│   │   └── typeid.h
│   ├── eigen.h
│   ├── embed.h
│   ├── eval.h
│   ├── functional.h
│   ├── iostream.h
│   ├── numpy.h
│   ├── operators.h
│   ├── options.h
│   ├── pybind11.h
│   ├── pytypes.h
│   ├── stl.h
│   └── stl_bind.h
└── share
└── cmake
└── pybind11
├── FindPythonLibsNew.cmake
├── pybind11Config.cmake
├── pybind11ConfigVersion.cmake
├── pybind11Targets.cmake
└── pybind11Tools.cmake

6 directories, 29 files

Usage

pybind11 (cpp -> python)

  • module: examplelib
  • target: examplelib
  • cpp: example.cpp

example.cpp

#include <pybind11/pybind11.h>

#include <string>

namespace py = pybind11;

int add(int i, int j) {
    return i + j;
}

struct Pet {
    Pet(const std::string &name) : name(name) { }
    void setName(const std::string &name_) { name = name_; }
    const std::string &getName() const { return name; }

    std::string name;
};

/*
module: examplelib
target: examplelib

cpp: example.cpp
*/
PYBIND11_MODULE(examplelib, m)
{
    // optional module docstring
    m.doc() = "pybind11 example plugin";

    // FUNCTIONS
    // expose add function, and add keyword arguments and default arguments
    m.def("add", &add, "A function which adds two numbers", py::arg("i") = 1, py::arg("j") = 2);

    // DATA
    // exporting variables
    m.attr("the_answer") = 42;
    py::object world = py::cast("World");
    m.attr("what") = world;

    // CLASSES
    py::class_<Pet>(m, "Pet")
        .def(py::init<const std::string &>())
        .def("setName", &Pet::setName)
        .def("getName", &Pet::getName);

    /*
    python3
    > help(examplelib)
    */
}

CMakeLists.txt

cmake_minimum_required (VERSION 2.6)

project (pybind)
enable_language(C)
enable_language(CXX)

find_package(pybind11 CONFIG REQUIRED)
include_directories(${pybind11_INCLUDE_DIRS})
message([MAIN] "Found pybind11 v${pybind11_VERSION}: ${pybind11_INCLUDE_DIRS}")

MESSAGE( [Main] " pybind11_INCLUDE_DIRS = ${pybind11_INCLUDE_DIRS}")
MESSAGE( [Main] " pybind11_LIBRARIES = ${pybind11_LIBRARIES}")

#
# # Create an extension module
# add_library(mylib MODULE main.cpp)
# target_link_libraries(mylib pybind11::module)
#
# # Or embed the Python interpreter into an executable
# add_executable(myexe main.cpp)
# target_link_libraries(myexe pybind11::embed)

# method (1): generate `examplelib.pyd`
pybind11_add_module(examplelib example.cpp)

# method (2): generate `examplelib.dll` rename to `examplelib.pyd`
#add_library(examplelib MODULE example.cpp)
#target_link_libraries(examplelib pybind11::module)

MESSAGE( [Main] " pybind11_INCLUDE_DIRS = ${pybind11_INCLUDE_DIRS}")
MESSAGE( [Main] " pybind11_LIBRARIES = ${pybind11_LIBRARIES}")

#add_executable(cpp_use_python cpp_use_python.cpp)
#target_link_libraries(cpp_use_python PRIVATE pybind11::embed)

cmake and config
cmake

build with vs and we get 3 files:

examplelib.lib 
examplelib.exp
examplelib.cp35-win_amd64.pyd

python import examplelib

 python3
Python 3.5.3 (v3.5.3:1880cb95a742, Jan 16 2017, 16:02:32) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import examplelib
>>> help(examplelib)
Help on module examplelib:

NAME
examplelib - pybind11 example plugin

CLASSES
pybind11_builtins.pybind11_object(builtins.object)
Pet

class Pet(pybind11_builtins.pybind11_object)
| Method resolution order:
| Pet
| pybind11_builtins.pybind11_object
| builtins.object
|
| Methods defined here:
|
| __init__(...)
| __init__(self: examplelib.Pet, arg0: str) -> None
|
| getName(...)
| getName(self: examplelib.Pet) -> str
|
| setName(...)
| setName(self: examplelib.Pet, arg0: str) -> None
|
| ----------------------------------------------------------------------
| Methods inherited from pybind11_builtins.pybind11_object:
|
| __new__(*args, **kwargs) from pybind11_builtins.pybind11_type
| Create and return a new object. See help(type) for accurate signature.

FUNCTIONS
add(...) method of builtins.PyCapsule instance
add(i: int = 1, j: int = 2) -> int

A function which adds two numbers

DATA
the_answer = 42
what = 'World'

FILE
e:\git\car\extra\pybind11\build\release\examplelib.cp35-win_amd64.pyd


>>> p = examplelib.Pet('kzl')
>>> print(p)
<examplelib.Pet object at 0x0000025EED9E3D18>
>>> p.getName()
'kzl'
>>> examplelib.add(1,2)
3
>>> examplelib.the_answer
42
>>> examplelib.what
'World'
>>>

embed

example.py


def add(i, j):
    print("hello, pybind11")
    return i + j


class MyMath:

    def __init__(self, name):
        self.name = name

    def my_add(self, i, j):
        return i + j

    def my_strcon(self, a, b):
        return a + b

cpp_use_python.cpp

#include <pybind11/embed.h>
#include <cassert>
#include <iostream>
#include <string>

namespace py = pybind11;

int main() {
    py::scoped_interpreter python;

    /*
    import sys
    print sys.path
    print "Hello,World!"
    */
    py::module sys = py::module::import("sys");
    py::print(sys.attr("path"));
    py::print("Hello, World!"); // use the Python API

    /*
    import example
    n = example.add(1,2)
    */
    py::module example = py::module::import("example");
    py::object result = example.attr("add")(1, 2);
    int n = result.cast<int>();
    assert(n == 3);
    std::cout << "result from example.add(1,2) = " << n << std::endl;

    /*
    from example import MyMath
    obj = MyMath("v0")
    obj.my_add(1,2)
    */
    py::object MyMath = py::module::import("example").attr("MyMath"); // class
    py::object obj = MyMath("v0"); // class object
    py::object my_add = obj.attr("my_add"); // object method
    py::object result2 = my_add(1, 2); // result
    int n2 = result2.cast<int>(); // cast from python type to c++ type
    assert(n2 == 3);
    std::cout << "result from obj.my_add(1,2) = " << n2 << std::endl;

    /*
    from example import MyMath
    obj = MyMath("v0")
    obj.my_strcon("abc","123");
    */
    py::object my_strcon = obj.attr("my_strcon"); // object method
    py::object result3 = my_strcon("abc", "123");
    std::string str3 = result3.cast<std::string>();
    std::cout << "result from obj.my_strcon(abc,123) = " << str3 << std::endl;

    return 0;
}

CMakeLists.txt

cmake_minimum_required (VERSION 2.6)

project (pybind)
enable_language(C)
enable_language(CXX)

find_package(pybind11 CONFIG REQUIRED)
include_directories(${pybind11_INCLUDE_DIRS})

MESSAGE( [Main] " pybind11_INCLUDE_DIRS = ${pybind11_INCLUDE_DIRS}")
MESSAGE( [Main] " pybind11_LIBRARIES = ${pybind11_LIBRARIES}")

add_executable(cpp_use_python cpp_use_python.cpp)
target_link_libraries(cpp_use_python PRIVATE pybind11::embed)

Reference

History

  • 20180301: created.

Series

Guide

download

wget https://dl.bintray.com/boostorg/release/1.66.0/source/boost_1_66_0.zip
unzip boost_1_66_0.zip

compile

cd boost_1_66_0

./bootstrap.bat

./b2 --help

./b2 --clean

./b2 -j8 toolset=msvc-14.0 address-model=64 architecture=x86 link=static threading=multi runtime-link=shared --build-type=minimal stage --stagedir=stage/x64 debug release

#./b2 -j8 toolset=msvc-14.0 address-model=32 architecture=x86 link=static threading=multi runtime-link=shared --build-type=minimal stage --stagedir=stage/win32 debug release
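
to check that the staged static libraries actually link, a small sketch against boost::filesystem (point the include path at boost_1_66_0 and the library path at stage/x64/lib; MSVC auto-linking picks the matching .lib):

#include <boost/version.hpp>
#include <boost/filesystem.hpp>
#include <iostream>

int main() {
    std::cout << "Boost " << BOOST_LIB_VERSION << std::endl; // e.g. 1_66
    // current_path() forces a link against libboost_filesystem / libboost_system
    std::cout << "cwd: " << boost::filesystem::current_path() << std::endl;
    return 0;
}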

Reference

History

  • 20180301: created.

Series

Guide

build requirements

  • autoconf 2.56 or later
  • automake 1.7 or later
  • libtool 1.4 or later
  • NASM 2.13 x86-64
  • libjpeg-turbo latest

install tools

sudo apt-get install autoconf automake libtool

compile nasm

wget http://www.nasm.us/pub/nasm/releasebuilds/2.13.03/nasm-2.13.03.tar.gz
tar xzf nasm-2.13.03.tar.gz
cd nasm-2.13.03
./configure
make -j8
sudo make install

this will install nasm to /usr/local/bin/nasm

compile libjpeg

We cannot use cmake to build libjpeg-turbo on linux; cmake reports:
Platform not supported by this build system. Use autotools instead.

git clone https://github.com/libjpeg-turbo/libjpeg-turbo.git

cd libjpeg-turbo

# generate configure
autoreconf -fiv

# exec-prefix=/usr/local/ for /bin and /lib
# prefix=/usr/local/include/libjpegturbo for /include
./configure --exec-prefix=/usr/local --prefix=/usr/local/include/libjpegturbo --with-jpeg8 --disable-static

make -j8
make test
sudo make install

libjpegturbo-config.cmake

# - Try to find LIBJPEGTURBO
#
# The following variables are optionally searched for defaults
# LIBJPEGTURBO_ROOT_DIR: Base directory where all LIBJPEGTURBO components are found
#
# The following are set after configuration is done:
# LIBJPEGTURBO_FOUND
# LIBJPEGTURBO_INCLUDE_DIRS
# LIBJPEGTURBO_LIBRARIES
# LIBJPEGTURBO_LIBRARY_DIRS

include(FindPackageHandleStandardArgs)

set(LIBJPEGTURBO_ROOT_DIR "" CACHE PATH "Folder containing libjpeg-turbo")

if(WIN32)
find_path(LIBJPEGTURBO_INCLUDE_DIR turbojpeg.h
PATHS ${LIBJPEGTURBO_ROOT_DIR})
else()
find_path(LIBJPEGTURBO_INCLUDE_DIR turbojpeg.h
PATHS ${LIBJPEGTURBO_ROOT_DIR})
endif()

if(MSVC)
find_library(LIBJPEGTURBO_LIBRARY_RELEASE turbojpeg
PATHS ${LIBJPEGTURBO_ROOT_DIR}
PATH_SUFFIXES Release)

find_library(LIBJPEGTURBO_LIBRARY_DEBUG turbojpeg
PATHS ${LIBJPEGTURBO_ROOT_DIR}
PATH_SUFFIXES Debug)

set(LIBJPEGTURBO_LIBRARY optimized ${LIBJPEGTURBO_LIBRARY_RELEASE} debug ${LIBJPEGTURBO_LIBRARY_DEBUG})
else()
find_library(LIBJPEGTURBO_LIBRARY turbojpeg
PATHS ${LIBJPEGTURBO_ROOT_DIR}
PATH_SUFFIXES lib lib64)
endif()

find_package_handle_standard_args(LIBJPEGTURBO DEFAULT_MSG LIBJPEGTURBO_INCLUDE_DIR LIBJPEGTURBO_LIBRARY)

if(LIBJPEGTURBO_FOUND)
set(LIBJPEGTURBO_INCLUDE_DIRS ${LIBJPEGTURBO_INCLUDE_DIR})
set(LIBJPEGTURBO_LIBRARIES ${LIBJPEGTURBO_LIBRARY})
message(STATUS "Found libjpeg-turbo (include: ${LIBJPEGTURBO_INCLUDE_DIR}, library: ${LIBJPEGTURBO_LIBRARY})")
mark_as_advanced(LIBJPEGTURBO_ROOT_DIR LIBJPEGTURBO_LIBRARY_RELEASE LIBJPEGTURBO_LIBRARY_DEBUG
LIBJPEGTURBO_LIBRARY LIBJPEGTURBO_INCLUDE_DIR)
endif()

copy libjpegturbo-config.cmake to /usr/local/lib/cmake/libjpegturbo/

sudo mkdir -p /usr/local/lib/cmake/libjpegturbo/
sudo cp libjpegturbo-config.cmake /usr/local/lib/cmake/libjpegturbo/

CMakeLists.txt

find_package(LIBJPEGTURBO REQUIRED)
include_directories(${LIBJPEGTURBO_INCLUDE_DIRS})
target_link_libraries (example_jpeg ${LIBJPEGTURBO_LIBRARIES})
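
a minimal example_jpeg source using the TurboJPEG C API (the image content is synthetic, just to exercise the library):

#include <turbojpeg.h>
#include <vector>
#include <cstdio>

int main() {
    const int width = 64, height = 64;
    std::vector<unsigned char> rgb(width * height * 3, 128); // solid gray RGB image

    unsigned char* jpegBuf = NULL; // allocated by TurboJPEG
    unsigned long jpegSize = 0;

    tjhandle handle = tjInitCompress();
    if (tjCompress2(handle, rgb.data(), width, 0, height, TJPF_RGB,
                    &jpegBuf, &jpegSize, TJSAMP_444, 90, TJFLAG_FASTDCT) == 0)
        printf("compressed %dx%d RGB to %lu bytes\n", width, height, jpegSize);
    else
        printf("compression failed: %s\n", tjGetErrorStr());

    tjFree(jpegBuf);
    tjDestroy(handle);
    return 0;
}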

Reference

History

  • 20180223: created.

Series

Guide

prerequisites

  • Visual Studio 2015
  • LLVM 5.0.1
  • eigen3
  • cmake
  • opengv latest

install llvm+clang

wget http://releases.llvm.org/5.0.1/LLVM-5.0.1-win64.exe

and install llvm to system.

compile

git clone https://github.com/laurentkneip/opengv

cd opengv 
mkdir build && cd build && cmake-gui ..

Configure, choose the generator Visual Studio 14 2015 Win64, and set the toolset to LLVM-vs2014 (the default is Visual Studio 2015 (v140)).

llvm toolset for vs2015
OK.

with options

CMAKE_CONFIGURATION_TYPES Release
EIGEN_INCLUDE_DIR C:/Program Files/PCL 1.8.1/3rdParty/Eigen/eigen3
BUILD_PYTHON OFF
BUILD_TESTS OFF

generate opengv.sln.

we can see the toolset has been changed from Visual Studio 2015 (v140) to LLVM-vs2014.
toolset

compile the project; opengv.lib and random_generators.lib will be generated.

install to C:/Program Files/opengv.

Reference

History

  • 20180124: created.
