How does robot_localization use the IMU data to update the EKF/UKF in its sensor fusion algorithm? Robot pose can be obtained from IMU data using multiple techniques, and I am curious which one the robot_localization package uses.
↧
Pose from IMU measurements
↧
Navsat_transform in robot_localization gives slightly rotated GPS odometry
Hi, I have been having this problem for a week or so.
I am trying to fuse the GPS position with an IMU simulated in Gazebo (hector_sensors) using the robot_localization package (one EKF node and one navsat_transform node).
So far everything has been OK: I rotated everything as REP 103 states and the data seems to give good covariances.
However, the GPS (or rather the navsat node) has a steady drift of approximately 1~2º in the /odometry/gps and /odometry/filtered topics, and after traveling a long distance it becomes a problem.
The movement is along the Y axis, but after 100 meters or so it drifts a meter on the y axis.
It seems to be an extra rotation, and I cannot recover the proper coordinates (i.e. the global reference of the robot).
At first I thought it should be rotated with the utm->odom transform published by navsat_transform_node, but I can't figure out how.
Do you know what my problem is? I am almost certain it has to be something regarding the map-utm transformation, but I can't get it to work.
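To put numbers on it, here is a rough back-of-the-envelope check (plain Python, purely illustrative) of how a constant heading error turns into cross-track drift over distance:

    import math
    # cross-track offset ~= distance travelled * sin(heading error)
    for deg in (1.0, 2.0):
        print('%g deg over 100 m -> %.2f m offset' % (deg, 100.0 * math.sin(math.radians(deg))))

So a heading offset of only a degree or so is enough to explain the kind of drift I am seeing.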
**Launchfiles:**
[Navsat](https://pastebin.com/tGQe4kPZ), [EKF](https://pastebin.com/GwHhu3RQ)
**Yaml config parameters** [Navsat](https://pastebin.com/DPxR5hz8) , [EKF](https://pastebin.com/GtK3hZsf)
**Sensor model gazebo** [model.sdf](https://pastebin.com/tQPqT8Us)
[And a photo of the odometry results](http://imgur.com/a/YXd6b)
Thanks a lot; if you need anything else, let me know.
↧
robot_localization and angular velocity for better odometry
Dear all,
If I input angular velocity and odometry pose data into robot_localization, can it improve the odometry pose estimate thanks to the fused angular velocity from the IMU?
Based on my understanding of robot_localization's code, it will not improve the odometry pose estimate. Am I right? The reason for my assumption is that I cannot find any correction of the pose estimate based on fused angular velocity in robot_localization's code.
If we want to improve the odometry pose estimate, must we input orientation information from the IMU? Am I right?
Assuming that we can improve the odometry pose estimate based on orientation information, does anyone know the minimum orientation accuracy of the IMU needed to see a meaningful improvement? For example, if my IMU orientation has an accuracy of 10 or 20 degrees, is that acceptable for achieving a better odometry pose estimate?
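To convince myself why the angular-velocity source matters at all, I wrote a toy planar dead-reckoning script (purely illustrative, not how robot_localization is implemented internally): integrating the same forward speed with two different yaw-rate signals gives very different poses.

    import math

    def integrate(v, yaw_rates, dt=0.1):
        # simple planar dead reckoning: yaw from integrated yaw rate, position from speed + yaw
        x = y = yaw = 0.0
        for wz in yaw_rates:
            yaw += wz * dt
            x += v * math.cos(yaw) * dt
            y += v * math.sin(yaw) * dt
        return x, y

    n = 600                             # 60 s at 10 Hz
    print(integrate(0.5, [0.05] * n))   # yaw-rate source that catches a slow turn
    print(integrate(0.5, [0.0] * n))    # yaw-rate source that misses the turn entirely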
Thanks in advance for your reading and answer.
↧
Can navsat_transform be used to transform a pose from latlong to odom?
I'm using a GPS to localize a mobile robot, and `ekf_localization_node` together with a `navsat_transform_node` to transform the robot location into a local `odom` frame.
I would like to be able to send the robot GPS coordinates and use the functions from `robot_localization` to transform them into the `odom` frame. However, `navsat_transform` only publishes the `odom->utm` transform, and I don't think any `latlong->utm` conversion is exposed to the user.
What I'm trying right now is to use the [utm package](https://pypi.python.org/pypi/utm) in Python to convert lat/long coordinates into UTM and then use the `utm->odom` transform provided by `navsat_transform_node` to get the local coordinates:
    import rospy
    import tf2_ros
    import tf2_geometry_msgs
    import utm
    from geometry_msgs.msg import PoseStamped

    utm_coords = utm.from_latlon(latlong[0], latlong[1])
    # create PoseStamped message to set up for do_transform_pose
    utm_pose = PoseStamped()
    utm_pose.header.frame_id = 'utm'
    utm_pose.pose.position.x = utm_coords[0]
    utm_pose.pose.position.y = utm_coords[1]
    utm_pose.pose.orientation.w = 1.0
    # get the utm->odom transform using tf2_ros (give the listener time to fill its buffer)
    tfbuffer = tf2_ros.Buffer()
    tflistener = tf2_ros.TransformListener(tfbuffer)
    T = tfbuffer.lookup_transform('odom', 'utm', rospy.Time(), rospy.Duration(1.0))
    # apply the transform
    odom_pose = tf2_geometry_msgs.do_transform_pose(utm_pose, T)
It seems to work, but I would prefer to use the conversions implemented in robot_localization if possible, for consistency.
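I believe newer releases of robot_localization add a `fromLL` service to `navsat_transform_node` that does this conversion directly; something like the sketch below might work, though I haven't verified the service or field names against my installed version, and the result is in the filter's world frame rather than `odom`, so one more TF step may still be needed:

    import rospy
    from geographic_msgs.msg import GeoPoint
    from robot_localization.srv import FromLL   # check that this service exists in your release

    rospy.init_node('from_ll_example')
    rospy.wait_for_service('fromLL')
    from_ll = rospy.ServiceProxy('fromLL', FromLL)
    # latlong holds the received goal coordinates, as in the snippet above
    resp = from_ll(GeoPoint(latitude=latlong[0], longitude=latlong[1], altitude=0.0))
    rospy.loginfo("point in the filter's world frame: %s", resp.map_point)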
↧
robot_localization and bag file output
Dear all,
Has anyone tried to run the test1.bag, test2.bag and test3.bag files included in the robot_localization package? Do you have any screenshots of the correct output from these bag files and the corresponding launch files? I would like to try them and compare the results with mine.
If you have them, could you please share them with me?
If you have any other working samples, especially IMU + raw odometry, could you please share them with me?
Best Regards
Min Latt
↧
robot_localization test bagfiles
Hi!
Does anyone know the expected output of the 3 bag files included in the robot_localization package?
[test1.bag](https://github.com/cra-ros-pkg/robot_localization/blob/indigo-devel/test/test1.bag)
[test2.bag](https://github.com/cra-ros-pkg/robot_localization/blob/indigo-devel/test/test2.bag)
[test3.bag](https://github.com/cra-ros-pkg/robot_localization/blob/indigo-devel/test/test3.bag)
Something like a screen capture or a plot of the robot path?
This may help me check whether I am running robot_localization correctly.
Thanks!
↧
robot_localization output sample
Hi All,
I found the following paper in the robot_localization package folder:
robot_localization_ias13_revised.pdf
Does anyone have the bag file that produces the output shown on page 4 of robot_localization_ias13_revised.pdf?

Thanks in advance for your support.
Best Regards
Min Latt
↧
Is there an existing speed control node to connect the robot_localization state estimator with the diff_drive_controller?
I am using ROS Indigo on a differential drive robot for indoor use (no GPS). The path is pre-determined, and the velocity along the path is to be controlled. I believe the move_base node would be overkill and probably would not work in this case. Speed along the path needs to be held at a certain rate, and if an obstacle is detected the unit will stop and send an alarm. I think a 2-D PID speed controller (linear and angular velocity) and a path planner that reads the pre-planned path and provides the commanded 2-D speed is all that is needed. Is there an existing ROS node I could use or modify?
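In case nothing off-the-shelf exists, this is roughly the minimal node I had in mind (a sketch only: topic names, gains and the missing path/heading logic are placeholders, not an existing package):

    #!/usr/bin/env python
    import rospy
    from geometry_msgs.msg import Twist
    from nav_msgs.msg import Odometry

    class SpeedController(object):
        def __init__(self):
            self.target_vx = rospy.get_param('~target_speed', 0.5)  # desired speed along the path [m/s]
            self.kp = rospy.get_param('~kp', 1.0)
            self.ki = rospy.get_param('~ki', 0.1)
            self.integral = 0.0
            self.current_vx = 0.0
            self.cmd_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
            rospy.Subscriber('odometry/filtered', Odometry, self.odom_cb)

        def odom_cb(self, msg):
            self.current_vx = msg.twist.twist.linear.x

        def update(self, dt):
            # PI control on forward speed
            error = self.target_vx - self.current_vx
            self.integral += error * dt
            cmd = Twist()
            cmd.linear.x = self.kp * error + self.ki * self.integral
            # a separate heading controller (e.g. pure pursuit on the pre-planned path)
            # would set cmd.angular.z here, and an obstacle check would zero the command
            self.cmd_pub.publish(cmd)

    if __name__ == '__main__':
        rospy.init_node('speed_controller')
        ctrl = SpeedController()
        rate = rospy.Rate(20)
        while not rospy.is_shutdown():
            ctrl.update(0.05)
            rate.sleep()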
↧
robot_localization and gmapping - gps has offset
I am trying to use robot_localization with GPS to improve accuracy. Currently, gmapping is responsible for the map->odom transform, so I have tried the following configuration. We use base_footprint instead of base_link; otherwise we have tried to follow REP-105.
launch file (for robot_localization):
yaml:
ekf_all:
frequency: 30
sensor_timeout: 0.1
two_d_mode: false
transform_time_offset: 0.0
transform_timeout: 0.0
print_diagnostics: true
debug: false
publish_tf: true
map_frame: map
odom_frame: odom
base_link_frame: base_footprint
world_frame: odom
odom0: odom
odom0_config: [false, false, false,
false, false, false,
true, true, true,
false, false, true,
false, false, false]
odom0_queue_size: 10
odom0_nodelay: true
odom0_differential: false
odom0_relative: false
odom1: odometry/gps
odom1_config: [true, true, false,
false, false, false,
false, false, false,
false, false, false,
false, false, false]
odom1_queue_size: 10
odom1_nodelay: true
odom1_differential: false
odom1_relative: false
imu0: imu/data
imu0_config: [false, false, false,
true, true, true,
false, false, false,
true, true, true,
true, true, true]
imu0_nodelay: false
imu0_differential: false
imu0_relative: false
imu0_queue_size: 10
imu0_remove_gravitational_acceleration: true
use_control: false
process_noise_covariance: [1e-3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1e-3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.1, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.3]
initial_estimate_covariance: [1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.0]
navsat_transform:
frequency: 30
delay: 3.0
magnetic_declination_radians: 0.23003
yaw_offset: 0 #1.570796327 # IMU reads 0 facing magnetic north, not east
zero_altitude: false
broadcast_utm_transform: false
publish_filtered_gps: true
use_odometry_yaw: true
wait_for_datum: false
The result I see in RViz:
http://imgur.com/a/BP6IU
The yellow is the original odometry, the light blue is the GPS odometry, and the pink is the filtered odometry. The problem seems to be that the GPS odometry is offset by the odom->base_footprint transform. The filtered odometry then tries to fuse it all and moves further away, and the GPS offset grows as well. In this situation, the further I move, the more it drifts off.
Can it all be fused in one node? What have I missed in the configuration?
↧
robot localization initial position
Hi,
I am using robot_localization to fuse odometry, IMU, and RTK GPS data. Since the GPS data is RTK, it is published as a relative distance to a nearby base station, meaning I don't need to use the navsat_transform node. My GPS base station is the origin of my map frame, and my robot usually initializes around x = -180, y = -80.
I'm finding that when I initialize, the initial position is set to (0, 0) and it takes the Kalman filter around 40 seconds to drift over to (-180, -80), where the robot actually is. I logged my GPS data in a bag file to make sure it wasn't publishing (0, 0) initially, which it's not.
How do I tell robot_localization what my actual initial position is? I had the thought that I could increase the covariance on my initial position; here is what it is currently:
[1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1e-9 ,0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1e-9 ,0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9 ,0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]
But that would be a waste of valuable knowledge. I start the robot in the same place each time. I'd like to have very small covariances to represent that and then set the initial position somehow.
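If I read the parameter documentation correctly, the state estimation nodes accept an `initial_state` parameter (15 values ordered x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az); a sketch of what I had in mind is below, but please verify the parameter exists in your release before relying on it:

    # hedged sketch: seed the filter near the usual start pose and keep the
    # small initial_estimate_covariance from above
    initial_state: [-180.0, -80.0,  0.0,
                       0.0,   0.0,  0.0,
                       0.0,   0.0,  0.0,
                       0.0,   0.0,  0.0,
                       0.0,   0.0,  0.0]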
↧
robot_localization and gmapping - how the transform should be done?
We are trying to use `robot_localization` and `gmapping`. What we have done so far: we use gmapping to publish the transform between `map` -> `odom`. Then we tried to fuse GPS, IMU and odometry using an EKF from robot_localization; we use navsat_transform and one EKF node to fuse all the data. This node does the `odom` -> `base_link` transform. This setup does not work very well.
Here: https://roscon.ros.org/2015/presentations/robot_localization.pdf (page 4) there is a picture: 
This indicates that I should fuse the gmapping and navsat_transform results in one localization node. Should this be done? How should it be done? Is there an example of it?
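As far as I can tell, the setup in those slides uses a second, map-frame EKF instance rather than gmapping for the global estimate. A rough sketch of what I think that second instance would look like is below (topic names are assumptions from my setup, and only one node may publish map->odom, so this would conflict with gmapping unless one of them stops publishing that transform):

    ekf_map:
      frequency: 30
      map_frame: map
      odom_frame: odom
      base_link_frame: base_link
      world_frame: map              # fuse the globally referenced data here
      odom0: odom                   # wheel odometry, velocities only
      odom0_config: [false, false, false,
                     false, false, false,
                     true,  true,  false,
                     false, false, true,
                     false, false, false]
      odom1: odometry/gps           # navsat_transform output, absolute x/y
      odom1_config: [true,  true,  false,
                     false, false, false,
                     false, false, false,
                     false, false, false,
                     false, false, false]
      imu0: imu/data
      imu0_config: [false, false, false,
                    false, false, true,
                    false, false, false,
                    false, false, true,
                    false, false, false]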
↧
IMU convention for robot_localization
Hi all,
I am trying to use an IMU with robot_localization. I am confused about the IMU axis convention. The axis markings on my IMU are shown in the following figure.

Can I assume that the IMU is in ENU format if I make the IMU's X direction the forward direction of the robot?
If my robot is stationary on a flat surface, what should my linear acceleration value in the z direction be: -9.8 or +9.8?
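My understanding is that ROS IMU drivers report specific force, so a z-up IMU at rest should read roughly +9.8 in z, but to check my own sensor I was going to run a small listener like this (the topic name imu/data is an assumption):

    import rospy
    from sensor_msgs.msg import Imu

    def cb(msg):
        # print the stationary z acceleration so the sign convention can be confirmed
        rospy.loginfo("z acceleration at rest: %.2f m/s^2", msg.linear_acceleration.z)

    rospy.init_node('imu_sign_check')
    rospy.Subscriber('imu/data', Imu, cb)
    rospy.spin()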
Best Regards
Min Latt
↧
Robot_localization
I'm trying to get ekf_localization_node working on a Pioneer 2 using wheel odometry and a camera providing pose from ArUco markers, and I'm having problems. When the camera sees a marker and sends the pose to the EKF, the filter reports an angular velocity in z even though the robot stays in place.
My launch file is this:
and my ekf params file this:
# The frequency, in Hz, at which the filter will output a position estimate. Note that the filter will not begin
# computation until it receives at least one message from one of the inputs. It will then run continuously at the
# frequency specified here, regardless of whether it receives more measurements. Defaults to 30 if unspecified.
frequency: 30
# The period, in seconds, after which we consider a sensor to have timed out. In this event, we carry out a predict
# cycle on the EKF without correcting it. This parameter can be thought of as the minimum frequency with which the
# filter will generate new output. Defaults to 1 / frequency if not specified.
sensor_timeout: 0.1
# ekf_localization_node and ukf_localization_node both use a 3D omnidirectional motion model. If this parameter is
# set to true, no 3D information will be used in your state estimate. Use this if you are operating in a planar
# environment and want to ignore the effect of small variations in the ground plane that might otherwise be detected
# by, for example, an IMU. Defaults to false if unspecified.
two_d_mode: true
# Use this parameter to provide an offset to the transform generated by ekf_localization_node. This can be used for
# future dating the transform, which is required for interaction with some other packages. Defaults to 0.0 if
# unspecified.
transform_time_offset: 0.0
# Use this parameter to specify how long the tf listener should wait for a transform to become available.
# Defaults to 0.0 if unspecified.
transform_timeout: 0.0
# If you're having trouble, try setting this to true, and then echo the /diagnostics_agg topic to see if the node is
# unhappy with any settings or data.
print_diagnostics: true
# Debug settings. Not for the faint of heart. Outputs a ludicrous amount of information to the file specified by
# debug_out_file. I hope you like matrices! Please note that setting this to true will have strongly deleterious
# effects on the performance of the node. Defaults to false if unspecified.
debug: false
# Defaults to "robot_localization_debug.txt" if unspecified. Please specify the full path.
debug_out_file: /path/to/debug/file.txt
# Whether to broadcast the transformation over the /tf topic. Defaults to true if unspecified.
publish_tf: true
# Whether to publish the acceleration state. Defaults to false if unspecified.
publish_acceleration: false
# REP-105 (http://www.ros.org/reps/rep-0105.html) specifies four principal coordinate frames: base_link, odom, map, and
# earth. base_link is the coordinate frame that is affixed to the robot. Both odom and map are world-fixed frames.
# The robot's position in the odom frame will drift over time, but is accurate in the short term and should be
# continuous. The odom frame is therefore the best frame for executing local motion plans. The map frame, like the odom
# frame, is a world-fixed coordinate frame, and while it contains the most globally accurate position estimate for your
# robot, it is subject to discrete jumps, e.g., due to the fusion of GPS data or a correction from a map-based
# localization node. The earth frame is used to relate multiple map frames by giving them a common reference frame.
# ekf_localization_node and ukf_localization_node are not concerned with the earth frame.
# Here is how to use the following settings:
# 1. Set the map_frame, odom_frame, and base_link frames to the appropriate frame names for your system.
# 1a. If your system does not have a map_frame, just remove it, and make sure "world_frame" is set to the value of
# odom_frame.
# 2. If you are fusing continuous position data such as wheel encoder odometry, visual odometry, or IMU data, set
# "world_frame" to your odom_frame value. This is the default behavior for robot_localization's state estimation nodes.
# 3. If you are fusing global absolute position data that is subject to discrete jumps (e.g., GPS or position updates
# from landmark observations) then:
# 3a. Set your "world_frame" to your map_frame value
# 3b. MAKE SURE something else is generating the odom->base_link transform. Note that this can even be another state
# estimation node from robot_localization! However, that instance should *not* fuse the global data.
#map_frame: map # Defaults to "map" if unspecified
odom_frame: odom # Defaults to "odom" if unspecified
base_link_frame: base_link # Defaults to "base_link" if unspecified
world_frame: odom # Defaults to the value of odom_frame if unspecified
pose0: /pose
pose0_config: [true, true, true,
false, false, true,
false, false, false,
false, false, false]
pose0_differential: false
pose0_relative: false
pose0_queue_size: 5
pose0_rejection_threshold: 2 # Note the difference in parameter name
pose0_nodelay: false
odom0: /rosaria/pose
odom0_config: [false, false, false,
false, false, false,
true, true, false,
false, false, true]
odom0_differential: true
odom0_relative: false
odom0_queue_size: 5
odom0_rejection_threshold: 2
odom0_nodelay: false
# Whether or not we use the control input during prediction. Defaults to false.
use_control: false
# Whether the input (assumed to be cmd_vel) is a geometry_msgs/Twist or geometry_msgs/TwistStamped message. Defaults to
# false.
stamped_control: false
# The last issued control command will be used in prediction for this period. Defaults to 0.2.
control_timeout: 0.2
# Which velocities are being controlled. Order is vx, vy, vz, vroll, vpitch, vyaw.
control_config: [true, true, false, false, false, false]
# Places limits on how large the acceleration term will be. Should match your robot's kinematics.
acceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 3.4]
# Acceleration and deceleration limits are not always the same for robots.
deceleration_limits: [1.3, 0.0, 0.0, 0.0, 0.0, 4.5]
# If your robot cannot instantaneously reach its acceleration limit, the permitted change can be controlled with these
# gains
acceleration_gains: [0.8, 0.0, 0.0, 0.0, 0.0, 0.9]
# If your robot cannot instantaneously reach its deceleration limit, the permitted change can be controlled with these
# gains
deceleration_gains: [1.0, 0.0, 0.0, 0.0, 0.0, 1.0]
# [ADVANCED] The process noise covariance matrix can be difficult to tune, and can vary for each application, so it is
# exposed as a configuration parameter. This matrix represents the noise we add to the total error after each
# prediction step. The better the omnidirectional motion model matches your system, the smaller these values can be.
# However, if users find that a given variable is slow to converge, one approach is to increase the
# process_noise_covariance diagonal value for the variable in question, which will cause the filter's predicted error
# to be larger, which will cause the filter to trust the incoming measurement more during correction. The values are
# ordered as x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az. Defaults to the matrix below if
# unspecified.
process_noise_covariance: [0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0.05, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0.03, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0.06, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0.025, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0.04, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.02, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.01, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0.015]
# [ADVANCED] This represents the initial value for the state estimate error covariance matrix. Setting a diagonal
# value (variance) to a large value will result in rapid convergence for initial measurements of the variable in
# question. Users should take care not to use large values for variables that will not be measured directly. The values
# are ordered as x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az. Defaults to the matrix below
#if unspecified.
initial_estimate_covariance: [1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1e-9]
What am I doing wrong?
↧
robot_localization and IMU selection
Dear all,
I am thinking of buying an IMU to be used with the robot_localization package. I understand that the IMU must be in ENU mode. May I know which IMUs support ENU mode and output reliable orientation information? Is the Microstrain 3DM-GX5-25 a good choice? It reports in NED, though; can we use it as ENU via some kind of static transform?
http://www.microstrain.com/inertial/3DM-GX5-25
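For reference, this is the kind of conversion I had in mind (a sketch only; it assumes the sensor reports the attitude of an x-forward / y-right / z-down body frame relative to a NED world frame, so the driver's actual conventions would need to be checked, and some drivers can already output ENU directly):

    import numpy as np
    import tf.transformations as tft

    # world-frame change: NED axes -> ENU axes (swap x/y, flip z)
    R_ENU_NED = np.array([[0., 1., 0.],
                          [1., 0., 0.],
                          [0., 0., -1.]])
    # body-frame change: ROS body (x fwd, y left, z up) -> NED-style body (x fwd, y right, z down)
    R_NEDBODY_ROSBODY = np.diag([1., -1., -1.])

    def ned_orientation_to_enu(q_ned):
        """q_ned: quaternion [x, y, z, w] of the NED-style body frame in the NED world frame."""
        R_ned_body = tft.quaternion_matrix(q_ned)[:3, :3]
        R_enu_rosbody = R_ENU_NED.dot(R_ned_body).dot(R_NEDBODY_ROSBODY)
        M = np.eye(4)
        M[:3, :3] = R_enu_rosbody
        return tft.quaternion_from_matrix(M)

Angular velocity and acceleration vectors would also need the body-frame remap (negate y and z), if I am not mistaken.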
Best Regards
Min Latt
↧
robot_localization - gps and imu minimal example not working
I have some trouble getting `robot_localization` working. I have tried several configurations, but nothing gives me a good result, so I thought I would try to get a minimal example working. I have GPS data and IMU data; I also have wheel odometry. All of this is simulated, and all the values are currently almost ideal (no or very little noise). I use navsat_transform with the following conf:
navsat_transform:
frequency: 30
delay: 3.0
magnetic_declination_radians: 0 #
yaw_offset: 0
zero_altitude: true
broadcast_utm_transform: true
publish_filtered_gps: true
use_odometry_yaw: true
wait_for_datum: false
And then I use one ekf node with the following conf:
odom0: odom
odom0_config: [false, false, false,
false, false, false,
true, true, true,
false, false, false,
false, false, false]
odom0_queue_size: 10
odom0_nodelay: true
odom0_differential: false
odom0_relative: false
odom1: odometry/gps
odom1_config: [true, true, false,
false, false, false,
true, true, true,
false, false, false,
false, false, false]
odom1_queue_size: 10
odom1_nodelay: true
odom1_differential: false
odom1_relative: false
imu0: imu/data
imu0_config: [false, false, false,
true, true, true,
false, false, false,
true, true, false,
true, true, true]
imu0_nodelay: false
imu0_differential: false
imu0_relative: false
imu0_queue_size: 10
imu0_remove_gravitational_acceleration: true
The conf is pretty much taken from the examples.
I have a static transform for `map` -> `odom` (they are exactly the same). The EKF publishes the `odom` -> `base_link` transform. Latitude and longitude are exactly aligned with the y and x axes.
Wheel odometry is on the odom topic.
**The problem I am facing**: when there is no movement, the filtered odometry (and the transform from `odom` to `base_link`) starts drifting slowly (the robot itself is not moving). When the robot is moving, everything gets messy pretty quickly.
I have tried excluding the IMU and the odometry, changing covariances, and all the other parameters. What is my problem here? I would like to get a minimal example working that mainly uses GPS.
↧
Help with IMU orientation
Hi,
I am trying to integrate an IMU on the Pioneer robot. I am using the BNO055 IMU sensor. For visualization I created a child link of base_link. The IMU is oriented with its axes aligned to front (x axis), left (y axis) and top (z axis), per the [sensor's data sheet (p. 24)](https://ae-bst.resource.bosch.com/media/_tech/media/datasheets/BST_BNO055_DS000_14.pdf).
And my launch file for the robot is:
The imu_link properly shows the same orientation as base_link. However, when I launch the IMU node, the orientation of the IMU is incorrect: it is shown rotated 180 degrees. I am not sure what I am doing wrong.
My final goal is to use the robot_localization package to fuse the wheel odometry and IMU sensor messages, but I am stuck for now.
Thanks
↧
Multiple nodes publishing to same topic in robot_localization package
In the robot_localization package, the nodes publish to /odometry/filtered (hardcoded). If there are 2 nodes - one for the odom frame and one for the map frame - which node is writing to this topic?
↧
robot_localization: How are the sensor measurements being fused?
I was working with the robot_localization package for fusing encoder/IMU/GPS measurements. I wanted to know: does the filter fuse the measurements in time? Is it a loosely coupled fusion? It would be helpful if someone could explain the underlying algorithm for taking in the measurements during correction.
↧
robot_localization IMU TF
Dear all,
If I use robot_localization with the [9 Degrees of Freedom - Razor IMU](https://www.sparkfun.com/products/retired/10736), do I need to provide a TF between the base link and the IMU link? Is the TF between base_link and imu_link important for robot_localization's internal processing?
Since this IMU uses three different chips for the accelerometer, gyroscope and magnetometer, which sensor's location should determine the TF value?
Based on this [Q&A](https://answers.ros.org/question/9957/what-frame_id-to-put-in-a-sensor_msgsimu-message/), we may need to mount the IMU aligned with base_link. In that case, how should I align the IMU, given the three separate chip locations on the PCB?
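For context, once the mounting offset is known, I assume the base_link -> imu_link transform can be published with something like this (the translation and rotation values here are placeholders, not measured values):

    #!/usr/bin/env python
    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    rospy.init_node('imu_tf_broadcaster')
    br = tf2_ros.StaticTransformBroadcaster()
    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'base_link'
    t.child_frame_id = 'imu_link'
    t.transform.translation.x = 0.10   # example: IMU mounted 10 cm ahead of base_link
    t.transform.rotation.w = 1.0       # example: axes aligned with base_link
    br.sendTransform(t)
    rospy.spin()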
↧
Does the robot_localization package require inter-sensor extrinsic parameters to be known?
Hi
Just two short questions:
1) Does this package require the inter-sensor extrinsic parameters to be known?
2) Does it require synchronous sensors? My camera works at 20 fps but my IMU works at 100 Hz.
↧