Sunday 31 January 2016

ROS multi sensor fusion

ETH has a very good ROS package called multi sensor fusion here. To run its tutorial, we need to follow the steps below:

Create a catkin workspace
Create a package called sensor_fusion_comm, then put all the files from the msf source code into this package, and run catkin_make
Do the same to create packages ssf_core and ssf_updates

From the catkin workspace folder, launch the node with roslaunch ssf_updates viconpos_sensor.launch. We also need to put viconpos_sensor_fix.yaml in this folder.

To open the GUI, use rosrun rqt_reconfigure rqt_reconfigure.

In the plotting script, i.e. plot_relevant, rxplot is no longer supported and we have to use rqt_plot. The legend seemingly can no longer be changed, and we should delete -b $T -t "position & velocity" -l px,py,pz,vx,vy,vz.

To use your own dataset, first run roslaunch, then open the reconfigure GUI, and then play the rosbag. After some time, we need to hit the init filter multiple times, and then open the plot to see the output.

We need to run plot_relevant every time we start ssf; otherwise no data will be displayed.

Friday 29 January 2016

Convert roll pitch yaw to quaternion

In ROS, we can use tf::createQuaternionFromRPY to convert roll, pitch, and yaw to a quaternion. We need to add tf to find_package in CMakeLists.txt, and add <build_depend>tf</build_depend> and <run_depend>tf</run_depend> to package.xml.

In the source file, add #include "tf/transform_datatypes.h", then use tf::Quaternion q = tf::createQuaternionFromRPY(roll, pitch, yaw); for the conversion.

To publish the quaternion in an IMU message, use

imu_msg.orientation.x = q.x();
imu_msg.orientation.y = q.y();
imu_msg.orientation.z = q.z();
imu_msg.orientation.w = q.w();
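
Putting it together, here is a minimal sketch of a node that does the conversion and publishes the message. The node, topic, and frame names are my own placeholders, and it assumes sensor_msgs is also a dependency of the package.

#include <ros/ros.h>
#include <sensor_msgs/Imu.h>
#include "tf/transform_datatypes.h"

int main(int argc, char** argv)
{
    ros::init(argc, argv, "rpy_to_quaternion_example");
    ros::NodeHandle nh;
    // "imu" is just an assumed topic name for this sketch
    ros::Publisher imu_pub = nh.advertise<sensor_msgs::Imu>("imu", 10);

    double roll = 0.1, pitch = 0.0, yaw = 1.5;  // example angles in radians

    // convert roll, pitch, yaw to a quaternion
    tf::Quaternion q = tf::createQuaternionFromRPY(roll, pitch, yaw);

    sensor_msgs::Imu imu_msg;
    imu_msg.header.stamp = ros::Time::now();
    imu_msg.header.frame_id = "imu_link";  // assumed frame name
    imu_msg.orientation.x = q.x();
    imu_msg.orientation.y = q.y();
    imu_msg.orientation.z = q.z();
    imu_msg.orientation.w = q.w();

    imu_pub.publish(imu_msg);
    ros::spinOnce();
    return 0;
}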

Thursday 21 January 2016

Deep learning and SLAM

Today I encountered several good sources for SLAM. This publishes notes on papers, while this introduces deep learning and SLAM. It is mentioned that the maps built by SLAM could be used to fuel the ConvNets in deep learning. Meanwhile, computer vision research can be classified into two schools, namely geometry and recognition. So far, SLAM has mostly focused on geometry.

In fact, Tombone's blog is really fascinating. It is also mentioned here that a great place to start learning deep learning is this.

Wednesday 20 January 2016

Notes on using ROVIO

Besides SVO, ROVIO is a recently released VO package. To build it, first git clone the source code and kindr, then download lightweight_filtering from Bitbucket and replace the empty folder in the ROVIO source code.

Next, install kindr and create a new catkin workspace. In the src folder, create a new package called rovio and replace it with the ROVIO source code. Then run catkin_make rovio -DCMAKE_BUILD_TYPE=Release; note that we do not use --cmake-args. Then run catkin_make again to build the nodes.

To run the code, use roslaunch rovio_rosbag_node.launch; the path to the rosbag can be changed.

Weird issue with git checkout and catkin_make

When I tried to go back to a previous version of the code, what worked before did not work anymore. It turns out that catkin_make did not recompile one of the .cpp files. I had to add a cout to that .cpp to change the file and force catkin_make to include it in the compilation.

Sunday 17 January 2016

Levenberg-Marquardt algorithm

I came across one implementation of Levenberg-Marquardt that solves Ax = b using least squares, which does not seem to make sense. To learn more about the Levenberg-Marquardt algorithm, I found a fascinating blog here.

Note that LM is very sensitive to the initial guess, as stated on Wikipedia, and it cannot deal with outliers. If the data contains a lot of outliers, LM is not able to converge.

In cases with only one minimum, an uninformed standard guess like $\beta^{T} = (1, 1, \dots, 1)$ will work fine; in cases with multiple minima, the algorithm converges to the global minimum only if the initial guess is already somewhat close to the final solution.

A simpler version of LM, without using $\lambda$, is $(J^{T}J)\,\boldsymbol{\delta} = J^{T}\,[\mathbf{y} - f(\boldsymbol{\beta})]$.
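
For comparison, the full Levenberg-Marquardt step adds the damping term $\lambda$ to these normal equations; in Marquardt's variant the damping scales the diagonal of $J^{T}J$:

$(J^{T}J + \lambda\,\operatorname{diag}(J^{T}J))\,\boldsymbol{\delta} = J^{T}\,[\mathbf{y} - f(\boldsymbol{\beta})]$

A small $\lambda$ makes the step behave like Gauss-Newton, while a large $\lambda$ turns it into a short gradient-descent-like step.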

Wednesday 13 January 2016

5 point algorithm

I have a hard time using the supposedly easy-to-use findEssentialMat, since I could not understand the basics of the 5-point algorithm. There is a chronology of the algorithm here, which mentions that the 8-point algorithm is better for a front-looking camera, but it cannot be applied to planar scenes. Also, check out the fundamental matrix song here!

In this post, the difference between E and F is explained. It mentions that the points used to construct E are normalized, in the sense that they are centered at the principal point and divided by the focal length.
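
To write this down explicitly: if $x$ is a point in pixel coordinates and $K$ is the camera matrix, the normalized point is $\hat{x} = K^{-1}x$, and for a single calibrated camera the two matrices are related by $E = K^{\top} F K$.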


Tuesday 12 January 2016

findEssentialMat threshold issue

I found a peculiar issue: to use findEssentialMat, one has to normalize the points, based on the discussion here, by multiplying by the inverse camera matrix. However, in practice we can undistort the points first and then still use focal = 1 and pp = (0, 0) as if the points were already normalized, even though they are not, as reported here and here.

Then I came across the code for the monocular VO used in the paper Autonomous Visual Mapping and Exploration With a Micro Aerial Vehicle here.

It includes both solve5PointRansac and solveP3PRansac. The solve5PointRansac uses findEssentialMat and recoverPose. The solveP3PRansac seems to use the RANSAC implemented by the authors.

Note that in motion_estimation/npoint/fivepoint/src/main.cpp, the focal passed to findEssentialMat is 300, while the threshold is 1.0 - 1e-10. Inside findEssentialMat, the threshold is scaled by the focal length as threshold /= focal; therefore it seems we cannot simply normalize the points first and then pass focal = 1, as this would leave the threshold unchanged, unless we also scale the threshold down ourselves, as in the sketch below.
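
Here is a minimal sketch of that workflow; the function name, variable names, and the one-pixel inlier threshold are my own assumptions.

#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat estimateRelativePose(const std::vector<cv::Point2f>& pts1,
                             const std::vector<cv::Point2f>& pts2,
                             const cv::Mat& K, const cv::Mat& distCoeffs,
                             cv::Mat& R, cv::Mat& t)
{
    // undistortPoints without a projection matrix P returns normalized
    // coordinates, i.e. centered at the principal point and divided by
    // the focal length.
    std::vector<cv::Point2f> n1, n2;
    cv::undistortPoints(pts1, n1, K, distCoeffs);
    cv::undistortPoints(pts2, n2, K, distCoeffs);

    // Since the points are already normalized, pass focal = 1 and
    // pp = (0, 0). The RANSAC threshold must then also be expressed in
    // normalized units, e.g. one pixel divided by the focal length,
    // because findEssentialMat only divides the threshold by the focal
    // value we pass in (here 1).
    double fx = K.at<double>(0, 0);  // assumes K is CV_64F
    cv::Mat mask;
    cv::Mat E = cv::findEssentialMat(n1, n2, 1.0, cv::Point2d(0, 0),
                                     cv::RANSAC, 0.999, 1.0 / fx, mask);
    cv::recoverPose(E, n1, n2, R, t, 1.0, cv::Point2d(0, 0), mask);
    return E;
}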

Saturday 2 January 2016

Kalman filter for dummies

Here are some good resources for learning KF, such as Kalman Filter For Dummies, the video by student dave, and the book.

Several other sources include this, which talks about KF in terms of UAV and sensor fusion, and this, this, and this.
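
To go with these resources, here is a minimal one-dimensional Kalman filter sketch in C++; it estimates a constant from noisy measurements, and all the numbers are made up for illustration.

#include <iostream>
#include <vector>

int main()
{
    double x = 0.0;         // state estimate
    double P = 1.0;         // estimate variance
    const double Q = 1e-5;  // process noise variance (assumed)
    const double R = 0.1;   // measurement noise variance (assumed)

    // noisy measurements of a constant near 1.0
    std::vector<double> z = {0.9, 1.1, 1.05, 0.95, 1.0};

    for (double meas : z) {
        // predict: with a constant model the state carries over,
        // and the variance grows by the process noise
        P += Q;

        // update: blend the prediction and the measurement
        double K = P / (P + R);   // Kalman gain
        x += K * (meas - x);      // correct the estimate with the innovation
        P *= (1.0 - K);           // shrink the variance

        std::cout << "estimate: " << x << ", variance: " << P << std::endl;
    }
    return 0;
}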