I've seen that the package has changed a lot since I last used it.
I have a question:
Is the structure shown in the picture (with two map frames) correct?

↧
robot_localization
↧
rtabmap odometry source
I have been experimenting with rtabmap_ros lately, and really like the results I am getting. Awesome work Mathieu et al.! First, let me describe my current setup:
Setup
-----
- ROS Indigo/Ubuntu 14.04
- rtabmap from apt-binary (ros-indigo-rtab 0.8.0-0)
- Custom robot with two tracks (i.e. non-holonomic)
- Custom base-controller node which provides odometry from wheel encoders (tf as /odom-->/base_frame as well as nav_msgs/Odometry messages)
- Kinect2 providing registered rgb+depth images
- XSens-IMU providing sensor_msgs/Imu messages (not used at the moment)
- Hokuyo laser scanner providing sensor_msgs/LaserScan messages
Problem Description
-------------------
The problem I am having is the quality of the odometry from wheel encoders: while translation precision is good, precision of rotation (depending on the ground surface) is pretty bad.
So far, I have been using gmapping for SLAM/localization. This has been working well: gmapping subscribes to the /odom-->/base_frame tf from the base_controller as well as the laser scan messages. In my experiments, gmapping has no problems getting the yaw estimate right in indoor environments.
Using rtabmap's SLAM instead of gmapping works well as long as I don't perform fast rotations or drive on surfaces where track slippage is high (i.e. where odometry quality from the wheel encoders is poor). In those cases rtabmap gets lost. To improve rtabmap's performance, I would like to provide it with better odometry information. My ideas are:
1. Use [laser_scan_matcher](http://wiki.ros.org/laser_scan_matcher) subscribing to laser scan + imu/data + wheel_odom OR
2. Use [robot_pose_ekf](http://wiki.ros.org/robot_pose_ekf) subscribing to imu/data + wheel_odom OR
3. Use [robot_localization](http://wiki.ros.org/robot_localization) subscribing to imu/data + wheel_odom OR
4. Use [gmapping](http://wiki.ros.org/gmapping) subscribing to tf + laser scans OR
5. Use [hector_mapping](http://wiki.ros.org/hector_mapping) subscribing to laser scans
Solution (5) only uses laser scans and does not work reliably enough according to my experiments. (4) works great, but is overkill for this purpose and uses too many resources. (3) and (2) only use relative sensor sources (i.e. do not do their own ICP matching using laser scans), but might be worth a try. (1) would be my preferred solution as it uses laser scans as absolute reference. However, laser_scan_matcher only provides geometry_msgs/Pose2D and no nav_msgs/Odometry which is required by rtabmap.
Question
--------
@matlabbe Can you advise what your recommendation would be in my case? I have been looking at the ideas from [this thread](http://answers.ros.org/question/185548/convert-tf-to-nav_msgsodometry/) as well as the laser_scan_matcher mod listed [here](http://answers.ros.org/question/12489/obtaining-nav_msgsodometry-from-a-laser_scan-eg-with-laser_scan_matcher/), but I am unsure whether rtabmap uses both the pose and the twist information contained in nav_msgs/Odometry, or if providing the pose only would suffice. Please advise.
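For what it's worth, here is a rough sketch of the kind of relay discussed in those threads: wrap the geometry_msgs/Pose2D output into a nav_msgs/Odometry with only the pose filled in. The topic and frame names below are assumptions, not laser_scan_matcher's actual defaults:

#!/usr/bin/env python
# Sketch: republish a geometry_msgs/Pose2D as nav_msgs/Odometry so it can be
# fed to rtabmap. Topic and frame names are assumptions.
import rospy
import tf
from geometry_msgs.msg import Pose2D
from nav_msgs.msg import Odometry

odom_pub = None

def pose2d_cb(msg):
    odom = Odometry()
    odom.header.stamp = rospy.Time.now()
    odom.header.frame_id = "odom"         # assumed fixed frame
    odom.child_frame_id = "base_frame"    # assumed robot frame
    odom.pose.pose.position.x = msg.x
    odom.pose.pose.position.y = msg.y
    # Convert the planar heading into a quaternion.
    q = tf.transformations.quaternion_from_euler(0.0, 0.0, msg.theta)
    odom.pose.pose.orientation.x = q[0]
    odom.pose.pose.orientation.y = q[1]
    odom.pose.pose.orientation.z = q[2]
    odom.pose.pose.orientation.w = q[3]
    # Twist is left at zero; only the pose is populated here.
    odom_pub.publish(odom)

if __name__ == '__main__':
    rospy.init_node('pose2d_to_odometry')
    odom_pub = rospy.Publisher('scanmatch_odom', Odometry, queue_size=10)
    rospy.Subscriber('pose2D', Pose2D, pose2d_cb)
    rospy.spin()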
Thank you!
↧
↧
Simulating robot_localization on pioneer3at with xsens 300 AHRS
I am testing `robot_localization` using the `receive_xsens` stack listed here: [receive_xsens](http://wiki.ros.org/receive_xsens), and the `ros-pioneer3at` stack listed here: [ros-pioneer3at](https://github.com/dawonn/ros-pioneer3at).
I set up the frame id's to match those in the pioneer3at simulation stack as follows:
map_frame = Pioneer3AT/map
odom_frame = Pioneer3AT/odom
base_link_frame = Pioneer3AT/base_link
world_frame = Pioneer3AT/odom
I configured xsens imu to produce orientation, angular velocity, and acceleration. The `imu0` configuration looks like this:
I will incorporate the p3at hardware with wheel encoders later. For now I am trying to fuse simulated odometry from the `ros-pioneer3at` stack. Specifically, I have configured `odom0` as follows:
After configuring, I launch the `ros-pioneer3at` simulation, then launch the `receive_xsens` hardware, then launch the `ekf_localization` to fuse the sensor and simulation data.
There is no data being published to `odometry/filtered`. I am not sure where to start the debugging.
*-----edit 1-----*
Here is the node graph:
[rosgraph.png](/upfiles/14222909183992352.png)
*-----edit 2-----*
Thanks for the insight!
Here is the script for /Pioneer3AT/pose:
import rospy
import tf
from nav_msgs.msg import Odometry

# Re-broadcast the simulated Pioneer3AT pose as a world -> base_link transform.
br = None

def subCB(msg):
    P = (msg.pose.pose.position.x, msg.pose.pose.position.y, msg.pose.pose.position.z)
    Q = (msg.pose.pose.orientation.x, msg.pose.pose.orientation.y, msg.pose.pose.orientation.z, msg.pose.pose.orientation.w)
    br.sendTransform(P, Q, rospy.Time.now(), "/Pioneer3AT/base_link", "world")

if __name__ == '__main__':
    rospy.init_node('tf_from_pose')
    # Create the broadcaster once instead of on every callback.
    br = tf.TransformBroadcaster()
    rospy.Subscriber("/Pioneer3AT/pose", Odometry, subCB)
    rospy.spin()
Also, here is the `ekf_localization_node` launch file I used to try and fuse the xsens imu hardware with the `ros-pioneer3at` simulation:
[true, true, true,
true, true, true,
false, false, false,
false, false, false,
false, false, false]

[false, false, false,
true, true, true,
false, false, false,
true, true, true,
true, true, true]

[0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.015]
Here is a sample message from `/imu/data`:
header:
  seq: 53
  stamp:
    secs: 1422541047
    nsecs: 981105890
  frame_id: /imu
orientation:
  x: -0.0338176414371
  y: -0.0273063704371
  z: -0.260511487722
  w: 0.964491903782
orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
angular_velocity:
  x: -0.00170594989322
  y: 0.000133183944854
  z: 0.0014496203512
angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
linear_acceleration:
  x: 0.690724253654
  y: -0.452333807945
  z: 9.75730419159
linear_acceleration_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
---
Here is a sample sensor message for `/Pioneer3AT/pose`:
header:
  seq: 3203
  stamp:
    secs: 1422541097
    nsecs: 747374621
  frame_id: /Pioneer3AT/odom
child_frame_id: /Pioneer3AT/base_link
pose:
  pose:
    position:
      x: 0.000293139455607
      y: 0.00299759814516
      z: 0.17000131309
    orientation:
      x: 2.56361946104e-06
      y: 7.77009058467e-06
      z: 0.0016544693635
      w: 0.999998631331
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
twist:
  twist:
    linear:
      x: 0.0
      y: 0.0
      z: 0.0
    angular:
      x: 0.0
      y: 0.0
      z: 0.0
  covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
---
---*edit 3*---
Still trying to make it work! I have now started trying to get `odometry` working without an IMU, as you suggested. `robot_localization` improves the estimate; there is less drift than before.
Next, I attempt to integrate the imu. I have an updated launch file here:
[true, true, true,
true, true, true,
false, false, false,
false, false, false,
false, false, false]

[false, false, false,
true, true, true,
false, false, false,
true, true, true,
true, true, true]

[0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0,
0.0, 0.0, 0.0, 0.0, 0.00, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.015]
I connect *Pioneer3AT/base_link->/imu* with the following:
The tf_tree is here with `ekf` node running: 
One observation:
I notice that in RVIZ the wheels of the P3AT bounce up and down every few seconds. Is this likely because I am trying to set position/orientation with two broadcasters?
↧
Integrate a GPS sensor with robot_localization and use move_base node
Hello,
I’m trying to integrate a `GPS` sensor to my mobile robot with the `robot_localization_node`. I have an IMU sensor added to my `robot_localization_node` but now I have some theoretical questions about how to integrate the `GPS`.
I would like to use the output of the `robot_localization` node to feed the `move_base` node, in order to set some waypoints in GPS coordinates (they can be in UTM) and then have the robot follow them.
I’m a bit lost about how to proceed. This is what I think:
I launch `ekf_localization_node` with these inputs: `IMU` and `wheel_odom`, and I remap the output topic to `local_odometry`. It will be the local odometry in the odom frame. (I must use this topic as input to `move_base` if I want to navigate in the `odom` frame.)
Then I launch `navsat_transform_node` with `IMU`, `wheel_odom` and the `GPS NavSatFix`, and the output topic in the world frame is `/odometry/gps`. But this topic is only published when the GPS signal is received, so...
I launch another `ekf_localization_node` with `IMU`, `wheel_odom` and `odom1` (`/odometry/gps` from `navsat_transform_node`). The output of this node (`/odometry/filtered`) is in the world frame and is what I have to feed to `move_base`, right?
With this configuration I don’t need to use `gps_common`, right?
When I launch everything, do I have to call the `set_pose` service in order to give an initial estimate of the pose of /odom with respect to /world in UTM coordinates?
**UPDATE 1:**
Thanks Tom. I've taken into account all your comments and here's my `robot_localization` [launch file](https://www.dropbox.com/s/k7pzj248pjdxwma/robot_localiz.launch?dl=0) and my [tf tree](https://www.dropbox.com/s/1tcikqswf0db1pb/frames.pdf?dl=0).
1. I feed the `move_base` odometry with the `local_odometry` returned by my first instance of `ekf_localization_node`.
2. As my waypoints are in UTM coordinates, they should be in the `utm` frame. I can't see the utm frame tf being published, but in the source code I see it's `/utm`.
3. I think the `global_planner` of `move_base_node` needs its global frame set to the same frame as the waypoints. So it should be `/utm`, but when I launch this configuration, `move_base_node` says this:
> [ WARN] [1422876635.213009322]: Waiting on transform from summit_a/base_footprint to utm to become available before running costmap, tf error:
I understand that `move_base` can't find the tf between `utm` and `base_footprint` frames. I don't know if it can be because the tf isn't visible by `move_base_node` or I'm doing something wrong.
- (Just to test.) If I set the global frame of the `global_planner` of `move_base_node` to `summit_a/world`, it reports this:
> [ERROR] [1422873579.326459610]: The goal pose passed to this planner must be in the summit_a/world frame. It is instead in the summit_a/utm frame.
- (other test) Setting the global frame of `global_planner` to `summit_a/odom`:
> [ERROR] [1422876188.622192440]: The goal pose passed to this planner must be in the summit_a/odom frame. It is instead in the utm frame.
So, the `global_planner` frame of `move_base_node` needs to be consistent with, or the same as, the frame of the goals: `/utm` in this case (see the sketch after this list).
- (Another idea) Considering how `move_base_node` works, at first I thought the best way to integrate the `GPS` was for the world frame to represent real-world coordinates (UTM) and to be the parent of the `/odom` frame. That way `move_base_node` would receive waypoints in UTM with respect to the world frame. I understand that for easy viewing in rviz and to avoid continuity problems, `robot_localization` publishes the `utm` transform as a child of `/odom`, but I think this can be a small problem for navigation with `move_base_node`.
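For reference, here is a minimal sketch of the kind of goal relay I have in mind, assuming the `utm` frame is connected to the rest of the tf tree and that the planner's global frame is `summit_a/odom` (both assumptions; the easting/northing values are made up):

#!/usr/bin/env python
# Sketch: transform a UTM waypoint into the planner's global frame before
# sending it to move_base, so the goal frame matches the global costmap frame.
# Frame names, topic and coordinates are assumptions.
import rospy
import tf
from geometry_msgs.msg import PoseStamped

if __name__ == '__main__':
    rospy.init_node('utm_goal_relay')
    listener = tf.TransformListener()
    goal_pub = rospy.Publisher('/move_base_simple/goal', PoseStamped, queue_size=1)
    rospy.sleep(1.0)                           # give the publisher a moment to connect

    goal_utm = PoseStamped()
    goal_utm.header.frame_id = 'utm'           # waypoint given in UTM
    goal_utm.header.stamp = rospy.Time(0)      # use the latest available transform
    goal_utm.pose.position.x = 430000.0        # example easting (made up)
    goal_utm.pose.position.y = 4580000.0       # example northing (made up)
    goal_utm.pose.orientation.w = 1.0

    target_frame = 'summit_a/odom'             # whatever the global planner uses
    listener.waitForTransform(target_frame, 'utm', rospy.Time(0), rospy.Duration(10.0))
    goal_local = listener.transformPose(target_frame, goal_utm)
    goal_pub.publish(goal_local)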
**UPDATE 2:**
Thanks Tom.
1. Now I get the tf between `/world` and `/utm`. The issue was that `navsat_transform_node` doesn't publish the tf until the GPS signal is nonzero.
2. The topic `/odometry/gps` without a `child_frame_id` defined is meaningless for my purpose, right? Because it isn't published constantly if the GPS signal is lost.
3. Now my waypoints are relative to the `utm` frame.
4. Which topic should I feed to the `move_base` node? I think the best option is the `/odometry/filtered` topic from my second instance of `ekf_localization_node`, isn't it? It takes into account the output of `navsat_transform_node` and therefore the GPS position.
5. I don't really understand the problem with the heading of the `IMU`, I'm sorry. My `IMU` doesn't have a compass and doesn't provide any kind of orientation data, just angular velocity and linear acceleration. In the `imu0_config` param only the angular velocity in Z is set to `true`, and the estimation of the heading is good enough. Is that a problem if I want to integrate the `GPS`?
Any ideas, corrections or suggestions would be appreciated.
Best regards and thanks in advance.
↧
Fusing 2 sonars / pingers as continous sensors or 'GPS'
Picking up where I left [ekf_localization-node-not-responding](http://answers.ros.org/question/197404/ekf_localization-node-not-responding/), I am trying to fuse position in x and y from 2 sonars (Tritech Micron MK III) with (double integrated accelerometer) position from an IMU (xsens MTi 30).
This happens inside a rectangular pool, so each sonar
- will track a side wall and measure the absolute distance to it (one sonar looking at the short wall, the other at the long wall),
- we have a distance and a heading attached to it
- and report x and/or y position either to use as
- absolute position with x axis || long side of pool, y || short side of pool
- or relative position to the starting point (differential_integration == true)
(The point being trying to get a quicker update rate compared to doing machine vision on a full scan)
The problem now is: **How can I account for the sonar's offset from base_link** since **the measurements aren't really in the body frame?**
I could either
1. report the data with frame_id `odom` or `map`, but how could I account for the mounting offset, especially when the sub is not aligned with the side walls?
1. would that be possible with a single instance of robot_localization, or should I then treat them as 'GPS'?
2. report the data in a `Sonar1` and `Sonar2` frame and use the current heading of the sonar head (plus the fixed translation) to publish a transform from `Sonar1` to `base_link`
- This means that I will have to use the attitude solution to track the wall, but that's outside the scope of robot_localization.
I believe the latter is the better approach. Does this make sense or how could it be improved?
Thanks,
Rapha
----------
Wrap up on how to fuse the sonars
---------------------------------
I am still working through implementing some of these steps, but I will document the process here.
What are the frames and transforms needed for fusing the sonars:
* Frames:
* `odom`
* `pool`
* `sonar1` (will be fused for x only)
* `sonar2` (will be fused for y only)
* `base_link`
* `imu`
* Transforms:
* `odom->base_link` (generated by ekf_localization)
* `odom->pool` (static transform, accounting for rotational offset of pool walls/axes with ENU)
* `pool->sonar1` (broadcasts a translation of the sonar mounting position relative to `base_link`, rotated by the yaw angle of the `pool->base_link` transform; see the sketch after this list)
* `pool->sonar2`
* `base_link->imu`
* Data:
* /sonar/position/x in frame `sonar1`
* /sonar/position/y in frame `sonar2`
* /imu/data in frame `imu`
* /svs/depth in frame `svs`
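A minimal sketch of the `pool->sonar1` broadcast described in the transforms list above, assuming a made-up mounting offset of the sonar in `base_link` and reading the current yaw from the `pool->base_link` transform:

#!/usr/bin/env python
# Sketch of the pool->sonar1 broadcast: a pure translation whose value is the
# sonar mounting offset (in base_link) rotated by the current yaw of
# pool->base_link. The offset value is an assumption.
import math
import rospy
import tf

SONAR_OFFSET = (0.30, 0.10, 0.0)   # sonar1 position in base_link (assumed)

if __name__ == '__main__':
    rospy.init_node('pool_to_sonar1_broadcaster')
    listener = tf.TransformListener()
    broadcaster = tf.TransformBroadcaster()
    rate = rospy.Rate(50)
    while not rospy.is_shutdown():
        try:
            # Current attitude of the vehicle in the pool frame.
            (_, rot) = listener.lookupTransform('pool', 'base_link', rospy.Time(0))
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            rate.sleep()
            continue
        (_, _, yaw) = tf.transformations.euler_from_quaternion(rot)
        # Rotate the mounting offset by the current yaw so the transform is a
        # pure translation in the pool frame (no rotational component).
        tx = SONAR_OFFSET[0] * math.cos(yaw) - SONAR_OFFSET[1] * math.sin(yaw)
        ty = SONAR_OFFSET[0] * math.sin(yaw) + SONAR_OFFSET[1] * math.cos(yaw)
        broadcaster.sendTransform((tx, ty, SONAR_OFFSET[2]),
                                  (0.0, 0.0, 0.0, 1.0),
                                  rospy.Time.now(),
                                  'sonar1', 'pool')
        rate.sleep()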
WORK IN PROGRESS
----------
EDIT (regarding 1st edit in Answer):
I have set it up the way you recommended and I publish a transform from `odom-->sonar` every time I publish a new sonar reading (just before).
The rotational part of that transform uses the current attitude as published by the robot_localization node.
Robot_localization sees the following input at the moment:
- attitude from the IMU (imu/data)
- position from the sonars (sonar/position0 and sonar/position1)
However, after a few seconds the robot_localization node slows down to about 1 update every 5 seconds...
**EDIT:** By 'slows down' I mean the message broadcast rate slows down (rostopic hz /odometry/filtered)
I wasn't sure if using the odom attitude was the right thing to use, so I tried it with the direct attitude solution from the imu.
The robot_localization node still slows down.
My [Launchfile](https://github.com/Low-Cost-UW-Manipulation-iAUV/big_red_launch_configs/blob/thrusters_urdf/launch/ekf_localization.launch) is below and I've uploaded a rosbag [here](https://dl.dropboxusercontent.com/u/13645240/2015-01-19-15-00-43.bag). I tried to create a debug file, but no file was created anywhere.
Any ideas? Thank you so much,
Rapha
----
EDIT 2: I am running branch master 3c3da5a537
----
EDIT 3: I have investigated a little further. You said it might be to do with missing/broken transforms.
When initially posting this question I was publishing the transforms (odom->sonar) upon every execution of the sonar processing node (whenever a raw message from the sonar arrived), immediately before publishing the sonar position (in frame sonar) itself. I wasn't sure how ROS handles timestamps and whether robot_localization would receive and process the transform before the position update, so I decided to publish the transform whenever the IMU attitude gets published. This happens at 400 Hz and is thus much more often than the sonar position (25-30 Hz).
This improved the problem: instead of slowing down virtually instantly, it now runs for a while before slowing down and eventually stopping altogether.
Also, if I now stop the ekf_localization node and restart it, it will run again for about ~45 s.
Stopping the process with ctrl+c takes about 10 s; when the node is running normally, this is virtually instant.
Sometimes (1/5 tries perhaps) the ekf_localization complains:
Error: TF_NAN_INPUT: Ignoring transform for child_frame_id "base_link" from authority "unknown_publisher" because of a nan value in the transform (-nan -nan -nan) (-nan -nan -nan -nan)
at line 240 in /tmp/buildd/ros-indigo-tf2-0.5.6-0trusty-20141014-0329/src/buffer_core.cpp
I believe there might be a connection with sometimes erratic data from the sonars, but I can't quite pinpoint it. I'll try improving their accuracy in the meantime.
EDIT 4:
After changing the covariance matrix robot_localization does not crash any more. But...
I have looked at more data sources working together and I came across another problem. This very much relates to the initial question: [ekf_localization-node-not-responding](http://answers.ros.org/question/197404/ekf_localization-node-not-responding/).
I am fusing the absolute orientation from the IMU differentially. However, when I look at the attitude solution generated (and I've now looked at it in degrees for the first time), the value does not track the change in attitude.
I.e. a 90° change in yaw gets registered as roughly 3 to 9°, and it's quite sluggish.
I've tried this with only the IMU publishing data and robot_localization running and without robot_localization.
Would you mind looking at the attached bag files?
The files show a -90° rotation about yaw followed by a +90° rotation about yaw (ending up in the same orientation).
[IMU and robot_localization](https://dl.dropboxusercontent.com/u/13645240/2015-02-01-00-06-39.bag)
[IMU by itself](https://dl.dropboxusercontent.com/u/13645240/2015-02-01-00-28-07.bag)
- The [launchfile](https://github.com/Low-Cost-UW-Manipulation-iAUV/big_red_launch_configs/blob/thrusters_urdf/launch/ekf_localization.launch) below is still up to date
- the IMU publishes in the `xsens` frame and (as discussed in the other thread)
- there is a static transform `base_link->xsens` to account for the mounting.
Thank you very much for your help,
Rapha
----
**EDIT 5:**
Since fixing the covariance matrix robot_localization hasn't crashed anymore.
I am a little bit confused as to how the depth sensor (i.e. absolute z in odom) and the sonars (absolute x, y in odom) get fused. As per your suggestion I am publishing a transform with the offset of the sensor mounts to base_link and the current attitude solution from robot_localization as the rotation. (I hope I understood you correctly here.)
I have set `differential integration = true` for the sonars (x, y in odom), but I don't want to do this for the depth sensor (z in odom).
When only fusing the depth sensor (and only using z in the ekf) I also get small values in the fused x and y. To me this is odd, as I thought only the z measurement gets fused, but I can kind of make sense of it: the ekf fuses the untransformed z, then transforms it to base_link, which produces x and y distances.
*My question is:*
If I'm trying to fuse the x, y data differentially, won't the fused x, y position be locked to the value generated from fusing the z measurements?
I could fuse the depth differentially as well, which should solve this, but the depth sensor does make for a very nice absolute measurement.
----
**EDIT 5b:**
[2015-02-11-01-37-30.bag](https://www.dropbox.com/s/7wyfxequt858u9z/2015-02-11-01-37-30.bag?dl=0)
*I updated the ekf [launchfile](https://github.com/Low-Cost-UW-Manipulation-iAUV/big_red_launch_configs/blob/thrusters_urdf/launch/ekf_localization.launch) below, but the rosbag is without the ekf.*
I can currently only move along heave (z) (and due to issues with the buoyancy not that much):
- We started off slightly submerged,
- went down along negative heave / z (no change in attitude intended)
- and up to the surface.
**Z** aka`/svs/depth` represents it exactly.
**X** aka `sonar/positionBMT` and **Y** aka `sonar/positionUWE` should not have changed, but there is a jump in **X** when surfacing (from ~4m to ~2m).
Thank you very much for your help.
----
**@EDIT 6:**
Below is the diagram. For the time being the sub's orientation is fixed in relation to the pool walls, but that will change. The sonars are each tracking one of the pool walls.
I also realised 2 things I am doing wrong:
- I assumed my `odom` frame is aligned with ENU as well as with the pool. Since the pool isn't pointing north, this is not true.
- The sonars measure distance in relation to the long and short side of the pool.
- I therefore need a `pool` frame that is rotated from `odom`, but shares the same origin.
- The `odom->sonarBMT/sonarUWE/SVS` transforms I am publishing are, I believe, wrong:
- I think I misunderstood Tom's instructions in the initial answer. At the moment my transform looks like this:
- *Rotational component has the rotation of* `odom->base_link`
- *Translational component has the distance of `base_link` to `sonarBMT/sonarUWE/SVS` measured in `base_link`*
- I think it should be a pure translational transform, where the translation is computed from the current robot heading (i.e. the rotation in `pool->base_link`) and the mounting position offset in `base_link`.

----
**EDIT 6b:**
I have these new frames and transformations running and the `pool->SONAR_UWE` as well as `pool->SONAR_BMT` transforms look good.
The depth data seems to get fused correctly, but neither the X nor the Y position from the sonars does.
Might the problem be that my X and Y data coming from the sonars is fused separately? After being rotated, each individual measurement is not 'just' x or y. Once in `odom` it has a non-zero x or y component respectively and is then fused absolute... I might be totally misunderstanding part of the process here though.
**EDIT 6c:**
6b might have some truth to it. My `SONAR_BMT`, aka the x axis (pool frame) sonar, wasn't outputting data. While that is the case, `rosrun tf tf_echo pool base_link` returns a fitting solution and so does `rosrun tf tf_echo odom base_link`.
As soon as the sonar outputs data again, the ekf solution turns to rubbish again.
I will try to put the sonar measurements together in one message and try to fuse that.
----
[Launchfile on Github](https://github.com/Low-Cost-UW-Manipulation-iAUV/big_red_launch_configs/blob/thrusters_urdf/launch/ekf_localization.launch)
↧
↧
ekf_localization_node strange divergence
I experienced some behavior of the ekf_localization_node that I cannot explain.
To demonstrate the issue, I use an ekf_localization_node with only one odometry input.
The configuration is very simple:
frequency: 20
sensor_timeout: 0.5
two_d_mode: true
map_frame: map
world_frame: odom
odom_frame: odom
base_link_frame: base_footprint
odom0_config: [false, false, false,
               false, false, false,
               true, true, false,
               false, false, true,
               false, false, false]
odom0_differential: true
And the result looks like this (red: wheel odometry, orange: EKF odometry)

I expected to get identical tracks. What's wrong?
↧
simulate a GPS reading from a file
Good morning:
I'm working with robot_localization. I have an IMU that works well and a GPS. Right now I don't have the GPS hardware, but I have a file where I have stored GPS data. I want to test the system with the IMU and simulate the GPS readings from that file.
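Something like the following is what I have in mind, assuming the file is a CSV with `latitude,longitude,altitude` per line (the file format, topic name and rate are assumptions):

#!/usr/bin/env python
# Sketch for replaying logged GPS data as sensor_msgs/NavSatFix messages.
# Assumption: the log file is CSV with lines "latitude,longitude,altitude".
import csv
import rospy
from sensor_msgs.msg import NavSatFix

if __name__ == '__main__':
    rospy.init_node('gps_file_player')
    pub = rospy.Publisher('gps/fix', NavSatFix, queue_size=10)
    rate = rospy.Rate(1)  # roughly the original 1 Hz GPS rate (assumed)
    with open('gps_log.csv') as f:
        for row in csv.reader(f):
            if rospy.is_shutdown():
                break
            fix = NavSatFix()
            fix.header.stamp = rospy.Time.now()
            fix.header.frame_id = 'gps'        # assumed sensor frame
            fix.latitude = float(row[0])
            fix.longitude = float(row[1])
            fix.altitude = float(row[2])
            fix.status.status = 0              # NavSatStatus.STATUS_FIX
            pub.publish(fix)
            rate.sleep()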
Any ideas?
Thanks
↧
IMU ignored
Thanks again for your feedback, Tom.
I fine-tuned the rejection_threshold parameters and now the filter is responsive.
Also, enabling the 2D mode helped.
But the drift is still way too much. After a few seconds of execution the twist looks like this:
twist:
  twist:
    linear:
      x: 0.913357125234
      y: 0.430200660862
      z: -7.14791210599e-13
    angular:
      x: -2.81627680375e-24
      y: 5.93552733918e-24
      z: -0.00848949772664
The IMU must be calibrated and I have to figure out the covariance matrix for the measurements. Is there any ROS package that is particularly suitable for my needs?
thanks,
marcello
↧
map frame from robot_localization
I am trying to create the standard tf frames specified in REP 105 (map, odom, and base_link) using the robot_localization package. I only have GPS and IMU sensors available. So far I have been able to create the odom and base_link frames, but I am having trouble creating the map frame.
This is my launch file:
[false, false, false,
true, true, true,
true, true, true,
true, true, true,
true, true, true]

[true, true, true,
false, false, false,
false, false, false,
false, false, false,
false, false, false]
If I set the world_frame parameter to "odom", then things seem to work but I only have odom and base_link frames. If I set the world_frame to be "map", like above, then I get this error message: Could not obtain transform from odom->base_link.
Note that I am new to ROS, so my trouble may be conceptual. I would appreciate any help someone is willing to offer. Thank you!
↧
↧
robot_localization gravity removal
I am using the latest version of robot_localization.
I have a 3DM-GX3-5 OEM IMU sensor in my robot.
I want to remove the effect of gravity in the EKF.
I have set the corresponding parameter in the launch file to do that. However, my results are not good because (I suppose) the EKF thinks there is some acceleration on the Y axis.
I don't know exactly how it works, but I imagine the filter uses the orientation given by the IMU topic to remove the acceleration due to gravity on the three axes. If I am right, the problem might be in the orientation calibration.
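To illustrate what I imagine is happening (this is just my understanding, not the package's actual code): rotate gravity into the sensor frame using the reported orientation and subtract it, so any orientation error leaves a residual acceleration:

#!/usr/bin/env python
# Rough numerical check of gravity compensation as I understand it (not the
# actual robot_localization code): rotate gravity into the IMU frame using
# the reported orientation and subtract it from the raw acceleration. A bad
# orientation estimate leaves a residual that shows up as a phantom
# acceleration, e.g. on the Y axis.
import numpy as np
import tf.transformations as tft

def remove_gravity(accel_xyz, orientation_xyzw, g=9.80665):
    # Rotation matrix of the sensor frame relative to the world frame.
    R = tft.quaternion_matrix(orientation_xyzw)[0:3, 0:3]
    gravity_world = np.array([0.0, 0.0, g])
    # Gravity expressed in the sensor frame, then removed from the reading.
    gravity_body = R.T.dot(gravity_world)
    return np.array(accel_xyz) - gravity_body

# Example: a 2-degree roll error alone leaves ~0.34 m/s^2 of residual on the Y axis.
q_off = tft.quaternion_from_euler(np.radians(2.0), 0.0, 0.0)
print(remove_gravity([0.0, 0.0, 9.80665], q_off))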
Any idea of what can be happening?
↧
ekf_localization_node navsat_transform_node
Hi Tom! Remember my system:
-I have an IMU (MicroStrain 3DM-GX2)
-GPS
-I want to integrate them with the EKF
My configuration:
**ekf_localization_node**
-Inputs
IMU/DATA output
navsat_transform_node output
-Outputs
Odometry message
**navsat_transform_node**
-Inputs
IMU/DATA output
nmea serial GPS (NavSatFix)
Odometry message
-Outputs
odometry gps
With rqt_graph I can see:

Then I include a broadcast of the base_link -> IMU transform, and when I do this my rqt graph changes:

When I echo gps/odometry it is not correct; do I need to do another broadcast?
I uploaded my launch file and debug output to Dropbox:
https://www.dropbox.com/sh/ab23ou7fo7nw2cd/AAD722h-vmk0dCveJaWuOfZBa?dl=0
Thanks
EDIT 2: when I point the IMU towards magnetic north (positive X axis forward, with the serial cable on the opposite side), the IMU reports the following orientation:
X: 0
Y: 0
Z: 0.7
W: 0.7
Is it right?
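For a quick check, converting that quaternion to roll/pitch/yaw (sketch below) gives a yaw of roughly +90 degrees, which, if I read REP-103's ENU convention correctly, is what an X axis pointing north should report:

#!/usr/bin/env python
# Quick check of the reported orientation: convert the quaternion above to
# roll/pitch/yaw in degrees.
import math
import tf.transformations as tft

roll, pitch, yaw = tft.euler_from_quaternion([0.0, 0.0, 0.7, 0.7])
print([math.degrees(a) for a in (roll, pitch, yaw)])  # roughly [0, 0, 90]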
↧
Robot navigation using ultrasound and IR sensors
Hi all,
I have a mobile robot with multiple ultrasound and IR sensors on the front, sides and back. It also has rotational encoders to generate odometry data and an IMU/GPS for localization. I would like the robot to navigate around the building with a given map using these sensors only (I don't have a laser sensor). My question is how I can interpret the data from the ultrasound and IR sensors and use it for navigation. I would appreciate it if someone could point me in the right direction and tell me about existing ROS packages, relevant links and other resources which would be helpful for the above-mentioned problem.
I know this is a very general question and I am not looking for specific answers but just pointers to a good source from people who have worked on similar problems before.
Edit 1: One more thing: is it possible to improve localization using proximity sensors (something similar to amcl, which uses laser scans)? Maybe generate fake laser scan data using the proximity sensors and then use amcl to improve the localization of the robot, or maybe some other way. Does anyone have insights on this? A rough sketch of the fake-scan idea is below.
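The sketch merges a few `sensor_msgs/Range` readings from sonars mounted at known bearings into one sparse `sensor_msgs/LaserScan` (topic names, mounting angles and range limits below are assumptions):

#!/usr/bin/env python
# Sketch of the "fake laser scan" idea: merge a few sonar Range readings
# (mounted at known bearings) into one sparse sensor_msgs/LaserScan.
# Topic names, mounting angles and range limits are assumptions.
import math
import rospy
from sensor_msgs.msg import Range, LaserScan

SONAR_BEARINGS = {'sonar_left': math.radians(45),
                  'sonar_front': 0.0,
                  'sonar_right': math.radians(-45)}
latest = {}

def make_cb(name):
    def cb(msg):
        latest[name] = msg.range
    return cb

def publish_scan(event):
    scan = LaserScan()
    scan.header.stamp = rospy.Time.now()
    scan.header.frame_id = 'base_link'      # assumes sonars referenced to base_link
    scan.angle_min = math.radians(-45)
    scan.angle_max = math.radians(45)
    scan.angle_increment = math.radians(1.0)
    scan.range_min = 0.1
    scan.range_max = 5.0
    n = int(round((scan.angle_max - scan.angle_min) / scan.angle_increment)) + 1
    scan.ranges = [float('inf')] * n        # unmeasured beams are ignored
    for name, bearing in SONAR_BEARINGS.items():
        if name in latest:
            idx = int(round((bearing - scan.angle_min) / scan.angle_increment))
            scan.ranges[idx] = latest[name]
    scan_pub.publish(scan)

if __name__ == '__main__':
    rospy.init_node('range_to_scan')
    scan_pub = rospy.Publisher('fake_scan', LaserScan, queue_size=5)
    for name in SONAR_BEARINGS:
        rospy.Subscriber(name, Range, make_cb(name))
    rospy.Timer(rospy.Duration(0.1), publish_scan)
    rospy.spin()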
Thanks in advance.
Naman
↧
ekf_localization with gps and imu
Hi everyone:
I'm working with the robot_localization package for position estimation of a boat. My system consists of:
Hardware:
- IMU MicroStrain 3DM-GX2 (I am only interested in yaw)
- GPS Conceptronic Bluetooth (I am only interested in the 2D position (X, Y))
Nodes:
-Microstrain_3dmgx2_imu (IMU driver)
-nmea_serial_driver (GPS driver)
-ekf (Kalman filter)
-navsat_transform (with UTM transform odom->utm)
-tf_transform (broadcasts the base_link->imu link)
When I launch and connect everything I get:


After changing the microstrain node I got it working successfully; I connected only the IMU and the EKF and it works well.
My problem is when I integrate the whole system as explained above: the /odometry/filtered topic published by the filter does not behave well, because for no reason the position sometimes starts to increase up to hundreds of meters. I checked the position I get from the GPS and it is fine. I think the filter is not working well. Any idea?
I've put a rosbag with the system working on Dropbox:
https://www.dropbox.com/s/xqc7m47q28dciir/miercoles.bag?dl=0
Thanks!!
EDIT 1:
For now I'm using the laptop as the robot; the IMU will be mounted as shown in the picture. On the laptop, the serial cable is facing the screen, which is the direction of my movement. The fork I am using is https://github.com/cra-ros-pkg/microstrain_3dmgx2_imu .
The navsat information is correct; the problem is the position in the filtered odometry, which sometimes increases by hundreds of meters with no explanation.

And i ploted odometry/filtered and odometry/gps and the result is this:

↧
↧
Ultrasound and IR sensors vs Kinect for Robot navigation
Hi all,
I have a mobile robot and I would like it to navigate around the building. I already have a 2D map of the building. I have rotational encoders to get the odometry information and IMU/UWB for localization. I only have ultrasound sensors, IR sensors and a Kinect which I can use for navigation. I want to know which is better for navigation (ultrasound and IR sensors, or the Kinect), given that I am aiming for pretty good accuracy and it should not be very computationally expensive. In my opinion the Kinect will do a better job, but my concern is that it might be computationally very expensive compared to proximity sensors, given that I have to run it on the NVIDIA Jetson TK1 board (https://developer.nvidia.com/jetson-tk1). Then again, if I go with proximity sensors I have to use a bunch of them, and I don't know how effective and efficient that will be. Also, I am a little worried about the dead band of the Kinect, which is around 50 cm, far more than the dead band of the ultrasound sensors (~10-15 cm).
Any guidance regarding this will be appreciated.
Update 1: Can the Kinect sensor be used for mobile robot navigation when there is a glass wall? I think it cannot, but I am not sure.
Thanks in advance.
Naman
↧
Landmark updates in robot_localization
I am currently obtaining landmark updates for odometry which are not continuous position data and which produce discrete jumps in the position data. The `robot_localization` documentation mentions that `navsat_transform_node` essentially takes in the GPS data and converts it to an odometry message in the 'world frame'.
I have the two following questions with respect to this:
1. What is the `world frame` for the output odometry of this node?
2. Do the landmark updates for state estimation need to be with respect to the `odom frame` or the `map frame`? (They are discrete position updates.)
↧
Localizing using AR tags and wheel odometry
Hello everyone,
My team and I are working on a project where we need to localize our robots using AR tags (ar_track_alvar). Our robots will not always see tags, so we are using wheel odometry to track motion during these periods. We are trying to accomplish this through sensor fusion using the robot_pose_ekf or robot_localization package. We have tried both, but neither has given us the desired effect.
We have multiple tags in our environment which all have static transforms with respect to the map frame. The robot looks for tags and finds a transform from the tag to the robot. By extension, this yields a transform from the map to the robot. This map->robot transform is converted to an odometry message using only pose measurements. The covariance for the pose is set to:
|0.1 100 100 100 100 100 |
|100 0.1 100 100 100 100 |
|100 100 100 100 100 100 |
|100 100 100 100 100 100 |
|100 100 100 100 100 100 |
|100 100 100 100 100 0.1 |
The twist covariances are all set to 10000. The frame_id for this tag based odom message is map.
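For completeness, here is a sketch of how we build that tag-based odometry message: look up the map->robot transform and republish it as nav_msgs/Odometry with the covariance pattern above (frame and topic names are simplified):

#!/usr/bin/env python
# Sketch of the tag-based odometry message described above: look up the
# map->robot transform and republish it as nav_msgs/Odometry with small
# variances on x, y, yaw and large ones elsewhere. Frame names are assumptions.
import rospy
import tf
from nav_msgs.msg import Odometry

if __name__ == '__main__':
    rospy.init_node('tag_odom_publisher')
    listener = tf.TransformListener()
    pub = rospy.Publisher('ar_tag_odom', Odometry, queue_size=10)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        try:
            (trans, rot) = listener.lookupTransform('map', 'base_footprint', rospy.Time(0))
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            rate.sleep()
            continue
        odom = Odometry()
        odom.header.stamp = rospy.Time.now()
        odom.header.frame_id = 'map'
        odom.child_frame_id = 'base_footprint'
        odom.pose.pose.position.x, odom.pose.pose.position.y, odom.pose.pose.position.z = trans
        (odom.pose.pose.orientation.x, odom.pose.pose.orientation.y,
         odom.pose.pose.orientation.z, odom.pose.pose.orientation.w) = rot
        # 6x6 pose covariance: trust x, y, yaw; effectively ignore the rest.
        cov = [100.0] * 36
        cov[0] = cov[7] = cov[35] = 0.1     # x, y, yaw variances
        odom.pose.covariance = cov
        # Twist is unused; give it very large variances so it is ignored.
        twist_cov = [0.0] * 36
        for i in range(6):
            twist_cov[i * 7] = 10000.0
        odom.twist.covariance = twist_cov
        pub.publish(odom)
        rate.sleep()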
The wheel odometry is very standard, it is based off of the turtlebot odometry. One note is that when the first tag estimate is received, a transform is broadcasted which places the wheel odom frame at the initial location of the robot with respect to the map.
Using both of the aforementioned packages and setups, and keeping the robot motionless, it is observed that the filter estimates become increasingly erratic and tend to infinity in no particular direction. Removing the wheel odometry and keeping only the ar tag odometry yields the same effect. The ar tag odometry messages remain completely constant within three significant figures.
Thus we conclude that somehow our consistent tag odometry measurements are causing the kalman filters to behave erratically and tend to infinity. This happens even with the wheel odometry measurements being fused as well. Can anyone explain why this might be, and offer any suggestions to fix this? I would be happy to provide any extra information which I haven't provided.
Side note: As a naive test, we also set all of the covariances to large values and observed that no matter how we moved the robot the differences in the filter outputs were tiny (+/- 0.01), more likely drift than an actual reading.
------------------------------
EDIT:
Thank you for your advice Tom. We have already combed through all of the wiki, and we did see the part about inflating covariances being unhelpful. We were confused though, because we didn't know what to set them to if not an arbitrarily large number. We did use the config to turn them off. You can see this in the posted config file.
We do not have an IMU. We are using a very ragtag group of robots (its what our school had lying around) and we are lucky we even have wheel odometry!
Our AR tag readings give full 6DOF pose, but no velocities. We have already set the 2d param.
Below is our config file for robot_localization. As you can see, we have set world_frame = odom_frame that way we get an estimate of the robot's position with respect to the odom_combined frame. We then transform that into the map frame using a static_transform based on the first observation of a tag. In practice, this usually places the odom_combined frame exactly where we want it. We take only 2D velocity data from odometry and 2D position data from the AR tags.
[false, false, false,
false, false, false,
true, true, false,
false, false, true,
false, false, false]

[true, true, false,
false, false, true,
false, false, false,
false, false, false,
false, false, false]
Thanks for your help!
↧
robot_localization: yaw and position scaled down
We have a setup on a Kobuki where we use the *robot\_localization* package to fuse sensor data consisting of wheel odometry and landmark recognition, which is complemented by *amcl* to correct the odometry data while no landmarks are visible.
For now, the landmark system is not providing input, but the wheel odometry is fed to an *ekf\_localization\_node* which provides the odom -> base_footprint (analogous to base_link) transform. *amcl* receives the filtered odometry output produced by the *ekf\_localization\_node* and provides the map -> odom transform.
The *ekf\_localization\_node* however seems to "filter" the odometry in a weird way: the x and y position moves in the right direction, but only by half the distance it should, whereas the yaw movement is scaled to maybe a 100th of its original amount. What I would like to happen is that the odometry input leads to a somewhat similar output on the */filtered/odometry* topic.
Another issue which might be related is that the output on the */filtered/odometry* topic seems to very slowly drift in both the xy-position as well as the yaw.
### robot_localization launch file ###
[true, true, false,
false, false, true,
false, false, false,
false, false, false,
false, false, false]
[0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.03, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.025, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.04, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.02, 0.0, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01, 0.0,
0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.015]
[0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]
↧
↧
ekf_localization_node exhibits unexpected behavior
I have two instances of ekf_localization_node. The first, used for odometry, reads the x velocity (in the base_link frame) from the wheel encoder odometry topic and publishes the odom-to-base_link transform. The second node, used for absolute orientation, reads the x velocity in the same manner as the first node, but also reads the roll, pitch, and yaw angles from the solar compass topic (a PoseWithCovarianceStamped message). The absolute orientation node publishes the map-to-odom transform. Furthermore, the absolute orientation node does not receive poses from the solar compass when the compass is in shadow.
What I am doing:
1. Drive the robot forward until it is in shadow.
2. Make a 90-degree right turn.
3. Drive the robot forward until it is out of shadow.
I am using RViz to display odometry/filtered (published by the odometry ekf_localization node), with the fixed frame set to "map".
What I expect:
1. The odometry display draws arrows that correspond to the robot's path as it moves into shadow.
2. The 90-degree right turn adds no arrows to the odometry path. This is due to the fact that the robot is not translating or, from the absolute orientation node's point of view, rotating. Since the solar compass is in shadow, it does not publish any poses that would be received by the absolute orientation node.
3. When I resume driving the robot forward, arrows should continue to be added to the odometry path, as if the right turn never took place. When the robot moves out of shadow, arrows added after leaving the shadow should be oriented 90 degrees right, relative to the earlier arrows. The new arrows should form a straight line pointing right.
From above, the result should look like this:
X>>>>>>>>>>
^
^
^
R->->->->->->
^
^
E
^
^
^
E is the point where the robot enters the shadow, R is where the right turn is made, and X is the point where the robot leaves the shadow. The "->" symbols show the actual path of the robot after the right turn.
What actually happens:
X
^
^
^
R->->->->->->
^
^
E______Y>>>>>>>>>>
^
^
^
The underscores are supposed to be spaces, but I couldn't get non-breaking spaces to work.
When the robot exits the shadow at point X, it is then translated to point Y. It appears that the robot's path between E and X is rotated, instead of just the robot itself. I suspect that the sum of all the position changes caused by x velocity messages since the last pose (orientation) message are being rotated around the position that was current when the last pose message was received. Is this expected behavior? I am using the robot_localization package from the ROS Indigo binary release.
↧
gmapping/hectormapping map->scanmatcher_frame
I'm building a simple mapping and navigation robot using iRobot create as base and neato xv-11 lidar. The mapping packages I'm testing are gmapping and hector-mapping. Both SLAM packages appear to publish tf mapping from map to scanmatcher_frame. My iRobot create driver (from Brown university) publishes transform from odom to base_link. I have a robot state publisher that publishes tf from base_link to base_laser, and I setup XV-11 to publish on the base_laser frame.
AFAIK, typical laser mapping robot frames are setup to be map -> odom -> base_link -> base_laser, where SLAM or localization packages publishes tf from map to odom.
There are also packages that fuses IMU, odom, visual odometry into one odom.
If I run SLAM and have IMU/wheel-encoder-based odometry, and even visual odometry, how do I link them together? And, as titled, where does the scanmatcher_frame fit in?
↧
robot_localization with GPS + IMU
I'm trying to set up a localization system for my aerial vehicle; however, I have never used robot_localization or the TF package, so everything seems a little overwhelming, setup-wise. I have a couple of questions:
1) Is it possible to use robot_localization with only GPS and IMU? Would the output be any good?
2) I have a bottom camera that is used for ground target detection. Should I use it for some kind of visual odometry? I'm afraid that this would put the processor under too much strain (Odroid X2).
Can I get some tips on how to setup all of this? I find myself a little lost.
EDIT1:
Just for some context, my system is an aerial vehicle supported by a balloon that uses two rotors in order to adjust its position:
(1-balloon; 5-rotors; 4-bottom camera (webcam); 3-main "box" with GPS, IMU, Odroid X2, etc.)

I had already changed the IMU output to be as specified in REP-103.
Initially I was using the "utm_odometry_node" from gps_common, but switched to the "navsat_transform_node".
For the altitude I'm using the GPS, since the IMU I'm using doesn't have altitude information.
The bottom camera (simple webcam) is meant for ground target detection/tracking, however I could use it for some kind of visual odometry if the odroid is able to handle it.
I think I was able to set everything up correctly, since I'm able to "echo" the output on odometry/filtered. However, it seems to have a lot of noise. I still have to do some real tests, since I only tested this with my laptop and the sensors were stationary. (Note: are the velocities in Twist expressed in m/s?)
However, I got an error when trying to set the process_noise_covariance (type 7, if I recall correctly). Since I was using the covariance from the template launch file, I just deleted it in order to check whether everything else was working (whether I was able to get an output).
EDIT2: (For some reason I'm unable to comment your answer. Whenever I write a comment it disappears when I refresh the page)
One of the problems might be connected to the fact that all covariances on the IMU message are set to 0, since I wasn't able to find information about those parameters for the mpu9150. Besides that, the IMU might need some new calibration.
Sample of the IMU msg:
---
header:
  seq: 14983
  stamp:
    secs: 1429104187
    nsecs: 410562312
  frame_id: imu
orientation:
  x: 0.00824941202457
  y: 0.0174910227596
  z: 0.0350623021512
  w: 0.997508466348
orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
angular_velocity:
  x: -0.0017719756579
  y: 0.00648836186156
  z: 0.000426339596743
angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
linear_acceleration:
  x: -0.210282728076
  y: 0.174357444048
  z: 9.848320961
linear_acceleration_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
---
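One thing I might try is relaying the IMU topic and filling in rough diagonal covariances, since the driver leaves them at zero (the values below are placeholders, not measured for the MPU-9150):

#!/usr/bin/env python
# Sketch: relay /imu/data and fill in rough diagonal covariances, since the
# driver leaves them at zero. The numbers below are placeholders, not
# measured values.
import rospy
from sensor_msgs.msg import Imu

def imu_cb(msg):
    msg.orientation_covariance = [0.01, 0, 0,
                                  0, 0.01, 0,
                                  0, 0, 0.01]
    msg.angular_velocity_covariance = [0.001, 0, 0,
                                       0, 0.001, 0,
                                       0, 0, 0.001]
    msg.linear_acceleration_covariance = [0.1, 0, 0,
                                          0, 0.1, 0,
                                          0, 0, 0.1]
    pub.publish(msg)

if __name__ == '__main__':
    rospy.init_node('imu_covariance_relay')
    pub = rospy.Publisher('imu/data_cov', Imu, queue_size=10)
    rospy.Subscriber('imu/data', Imu, imu_cb)
    rospy.spin()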
I'm going to do some outdoor testing to understand the performance of the GPS + IMU.
EDIT3:
After an outdoor test I found that something must be wrong with my setup.
This is the plot from odometry/gps. The path was a rectangle of around 100 m by 40 m. However, as you can see, although the output is close to that, the angles between parts of the path are not 90 degrees.

And the plot from odometry/filtered is even worse. From what I can tell I think the IMU msg is being created as specified in REP-103. However the output is this:

FYI my launch file is:
[true, true, true,
false, false, false,
false, false, false,
false, false, false,
false, false, false]

[false, false, false,
true, true, true,
false, false, false,
true, true, true,
true, true, true]

[1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]
EDIT4:
I will post 2 bags with the test of the rectangular path of around 100 m by 40 m. The launch file used was the one posted above, but with 'two_d_mode' set to false:
[bagfiles](https://onedrive.live.com/redir?resid=fca88ffc14f386a6!537&authkey=!ABIfiYezntd4C-U&ithint=folder%2c)
↧