Channel: ROS Answers: Open Source Q&A Forum - RSS feed
Viewing all 398 articles
Browse latest View live

AMCL converges to wrong pose and is unable to recover

Hi, I am working with AMCL together with laser_scan_matcher (as the odometry source) for my simulated, laser-only virtual robot. I have a prior map and want to do localization only, without updating the map. I always call the *global_localization* service and avoid providing an initial pose to AMCL. In most of my experiments, AMCL does manage to converge to the correct pose. However, during long runs it diverges at some point (probably due to bad odometry from laser_scan_matcher). Since by that point the particle cloud is already dense, AMCL persists in its wrong pose and won't recover. I played a little with the AMCL parameters but couldn't find anything that helped. I'm basically after a parameter which tells AMCL at what rate to expand its particle cloud when the sensor information doesn't correspond with the map. Thanks!
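A hedged note on the parameter being asked about: the stock amcl node exposes `recovery_alpha_slow` and `recovery_alpha_fast` (both 0.0, i.e. disabled, by default), which enable the augmented-MCL recovery behaviour of injecting random particles when the short-term average of the sensor likelihood drops below the long-term average. The values below are only the starting points suggested on the amcl wiki, not tuned numbers:

```python
# Starting values suggested on the amcl wiki; tune per robot.
recovery_params = {
    "recovery_alpha_slow": 0.001,  # decay rate for the long-term likelihood average
    "recovery_alpha_fast": 0.1,    # decay rate for the short-term likelihood average
}

def random_injection_prob(w_fast, w_slow):
    """Augmented-MCL rule: fraction of particles resampled uniformly
    when the recent sensor likelihood (w_fast) falls below the
    long-term average (w_slow)."""
    if w_slow <= 0.0:
        return 0.0
    return max(0.0, 1.0 - w_fast / w_slow)
```

When the scan stops matching the map, w_fast drops faster than w_slow and amcl starts scattering particles again, which is the "expand the cloud" behaviour being asked for.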

rplidar A2 amcl parameters

I am using an rplidar A2 for mapping and for localization. I have two different platforms, one is a Raspberry Pi 3B+ and the other is an Intel NUC. Both are running ROS Kinetic. I am wondering if someone has optimized the amcl parameters for use with the A2.

Confused with amcl map to odom transform

I am using [amcl](http://wiki.ros.org/amcl) for localizing my robot in Gazebo in a known map. My robot has an IMU and a laser on it. I was going through the amcl docs and I noticed this: Published Topics ... tf (tf/tfMessage) Publishes the transform from odom (which can be remapped via the ~odom_frame_id parameter) to map. According to [REP-105](http://www.ros.org/reps/rep-0105.html), both odom and map are world-fixed frames. So why does amcl publish a transform from map to odom? Shouldn't this transform be static? **Note**: Although it is not directly relevant to this question, if you are wondering: I am generating the odometry data using the robot_localization package, which takes the IMU data as input. amcl requires odometry and laser scan data to localize the robot.
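For intuition, amcl never measures map -> odom directly: it estimates map -> base_link with the particle filter, then publishes map -> odom as that estimate composed with the inverse of the current odom -> base_link, so the published transform changes exactly as much as the odometry drifts. A minimal 2D sketch of that bookkeeping (plain Python, hypothetical helper names):

```python
import math

def compose(a, b):
    """Compose 2D poses a then b, each (x, y, theta)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + math.cos(at) * bx - math.sin(at) * by,
            ay + math.sin(at) * bx + math.cos(at) * by,
            at + bt)

def invert(p):
    """Inverse of a 2D pose (x, y, theta)."""
    x, y, t = p
    return (-x * math.cos(t) - y * math.sin(t),
             x * math.sin(t) - y * math.cos(t),
            -t)

# amcl's estimate of the robot in the map, and the odometry's estimate:
map_to_base = (5.0, 2.0, math.pi / 2)
odom_to_base = (4.0, 2.0, math.pi / 2)

# What amcl publishes: map -> odom, the correction for odometry drift.
map_to_odom = compose(map_to_base, invert(odom_to_base))

# Sanity check: composing it with the odometry recovers map -> base.
recovered = compose(map_to_odom, odom_to_base)
```

If the odometry were perfect, map -> odom would indeed stay constant; in practice it moves slowly to cancel accumulated drift, which is why it is published continuously rather than statically.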

Indoor localization with 2D lidar and IMU

I have a 3-wheeled differential drive robot (2 wheels + 1 castor) with a 9-DOF IMU (3D acceleration, heading and angular velocities) and a 2D lidar (RPLidar). I have a pre-computed map. To localize the robot I am using the ROS package [amcl](http://wiki.ros.org/amcl). AMCL requires 3 inputs:

1. 2D laser scan data
2. The odom -> base_link transform (typically published by an odometry source)
3. The map

Odometry is estimated using the ROS package [robot_localization](http://wiki.ros.org/robot_localization), which uses Kalman filters to fuse data from the IMU's 3 sensors: accelerometer, gyroscope and magnetometer. The localization works reasonably well at slow speeds. However, sometimes the estimated position is way off the actual position. My questions:

1. Is this the right approach to perform localization using an IMU and 2D lidar?
2. Will the odometry estimated from the IMU data be good enough for amcl?
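On question 2, one caution worth illustrating: position from an IMU alone comes from double-integrating acceleration, so even a small constant accelerometer bias grows into position error quadratically with time. A toy sketch (the 0.05 m/s² bias is an arbitrary assumption for illustration, not a measured value):

```python
def drift_from_accel_bias(bias, dt, steps):
    """Double-integrate a constant accelerometer bias and return
    the accumulated position error after steps * dt seconds."""
    v = 0.0
    x = 0.0
    for _ in range(steps):
        v += bias * dt  # bias leaks into velocity
        x += v * dt     # and from there into position, ~0.5*b*t^2
    return x

# A 0.05 m/s^2 bias integrated for 60 s at 100 Hz: roughly 90 m of drift.
err = drift_from_accel_bias(0.05, 0.01, 6000)
```

This is why wheel encoders or laser scan matching are usually fused in for translation, with the IMU contributing mainly orientation and angular velocity.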

robot_localization with cartographer and amcl

I am using cartographer_ros along with amcl, an IMU and wheel odometry for localization. robot_localization seems ideal for this. I modified the sources for cartographer and amcl so that they publish poses and do not broadcast transforms. I used the following robot_localization setup and it seems to be working well, but I am not sure it is entirely correct. If someone would be kind enough to look it over, I would sure appreciate that. I am particularly interested to know whether I set the frame ID and child frames correctly for the inputs from amcl and cartographer (frame ID = map, child ID = base_footprint). I am also interested to know if the second instance should be fed any of the continuous data; right now it just gets data from amcl and cartographer.

robot_localization ukf instance 1: continuous inputs, 2D mode is true, neither of the inputs is set to differential.

- odom0 is from the wheel encoders, reporting vx and vYaw (vy = 0), 2D mode, frame ID = odom, child frame = base_footprint: [false, false, false, false, false, false, true, true, false, false, false, true, false, false, false]
- imu0 is from the IMU, reporting vYaw, frame ID = imu_link: [false, false, false, true, true, true, false, false, false, true, true, true, true, false, false]

robot_localization ukf instance 2 brings in amcl and cartographer (and eventually GPS):

- pose0 is from amcl, frame ID = map, set to differential: [true, true, false, true, true, true, false, false, false, false, false, false, false, false, false]
- odom0 is from cartographer, frame ID = map, child = base_footprint: [true, true, false, true, true, false, false, false, false, false, false, true, false, false, false]

AMCL for (x,y) pose only

Hi, I would like to apply AMCL to my virtual robot, which is a point mass. To the best of my understanding, the default implementation of AMCL has 3 state variables: x, y and yaw. Since my robot is a point mass and my scans cover 360 degrees, I am not interested in the yaw, only in x and y. Even if I ignore the yaw field in the amcl_pose topic, the oriented particles make the pose estimation diverge in undesired directions over time. Is there a way or a trick to make AMCL work with (x, y) only? Thanks!
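One possible workaround (a sketch, not a full answer): keep amcl as-is but compute a position-only estimate yourself from the /particlecloud topic, averaging x and y and discarding orientation entirely. Note that /particlecloud is a plain geometry_msgs/PoseArray without weights, so this can only be an unweighted mean:

```python
def position_only_estimate(particles):
    """Unweighted mean of particle positions, ignoring yaw.
    particles: iterable of (x, y, yaw) tuples."""
    xs, ys = [], []
    for x, y, _yaw in particles:
        xs.append(x)
        ys.append(y)
    n = len(xs)
    return (sum(xs) / n, sum(ys) / n)

# Three hypothetical particles with wildly different yaws:
cloud = [(1.0, 2.0, 0.3), (1.2, 2.2, 2.9), (0.8, 1.8, -1.4)]
x_est, y_est = position_only_estimate(cloud)
```

This sidesteps the yaw in the reported pose, but it does not stop yaw uncertainty from affecting the filter internally.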

AMCL warning: no laser scan received

Hello everyone. I'm having a problem. I acquired an RPLidar A2 recently and I'm trying to perform localization on a previously built map. I've built the map via hector_slam and saved it via map_server as a yaml file. I am now trying to use the AMCL node but I have some issues. After some research on the problem, I am running the following commands in separate windows:

    roslaunch rplidar_ros rplidar.launch
    rosrun map_server map_server Documents/Maps/mymap.yaml
    rosrun tf static_transform_publisher 0 0 0 0 0 0 map base_link 100
    rosrun tf static_transform_publisher 0 0 0 0 0 0 base_link laser 100
    rosrun amcl amcl
    rosrun rviz rviz

These are the frames I'm getting: https://ibb.co/hzDLLU

In the terminal window, I'm getting a problem in the AMCL launch. The warning message is the following:

> No laser scan received (and thus no pose updates have been published) for 1536824498.771014 seconds. Verify that data is being published on the /scan topic.

I have checked the /scan topic and there are messages flowing:

> Type: sensor_msgs/LaserScan
> Publishers: * /rplidarNode
> Subscribers: * /rviz_1536823488387470311 * /amcl

In the RViz window, I can see the map and the scan, and everything seems to be working, but for some reason I can't get the estimated position of the laser inside the map.

Any suggestion on what is causing the problem? I believe it must be something with the TF, but the TF tree seems OK to me. Thank you for your help.

PS: Ubuntu 16.04; ROS Kinetic

Laser scan doesn't match the map when using amcl?

When the robot rotates 180 degrees in place (for example, from 0 degrees to 180 degrees), in reality it only rotates about 160 degrees. However, according to the amcl pose, the angle is 180 degrees, and the scan doesn't match the map. Why does this happen? ![image description](/upfiles/15371100739953215.png)

amcl: No laser scan received (and thus no pose updates have been published) for xxxx seconds ?

Hello, I'm using ROS Kinetic on Ubuntu 16.04 on an HP laptop. I made my own real/physical 2WD robot, and now I want to use the ROS navigation stack to get it to move around autonomously. I'm currently struggling with the real robot. Typical behavior is that after the robot accepts a 2D Nav goal, it starts navigating to the goal, then hesitates, rotates in place, then continues rotating forever. A friend of mine suggested modeling my physical environment in Gazebo, using the ROS navigation stack there, tuning the navigation parameters until it works decently in simulation, then reusing those parameters for the real robot. So here I am, struggling to get the simulated robot to navigate. Firing up Gazebo works, and the world I created is OK. The 2WD plugin of Gazebo feels fine when I teleop it with a keyboard. The mapping process worked just fine. I publish the previously recorded map and it works. The published /odom and /scan values seem correct too; I can see them in rviz as expected, and `rostopic echo /scan` and `rostopic echo /odom` look fine as well. The problem is that when I launch amcl, I keep getting the following warning:
[ WARN] [1537204065.051924673, 3433.606000000]: No laser scan received (and thus no pose updates have been published) for 34139472.000000 seconds.  Verify that data is being published on the /scan topic.
[ WARN] [1537204080.975562709, 3448.606000000]: No laser scan received (and thus no pose updates have been published) for 34139472.000000 seconds.  Verify that data is being published on the /scan topic.
[ WARN] [1537204096.883037473, 3463.606000000]: No laser scan received (and thus no pose updates have been published) for 34139472.000000 seconds.  Verify that data is being published on the /scan topic.
[...]
Here is a typical /scan message (taken somewhat later):
user@ubuntu:~$ rostopic info /scan
Type: sensor_msgs/LaserScan

Publishers:
 * /gazebo (http://ubuntu:40085/)

Subscribers:
 * /amcl (http://ubuntu:38221/)

user@ubuntu:~$ rostopic echo -n 1 /scan
header:
  seq: 42106
  stamp:
    secs: 4001
    nsecs: 770000000
  frame_id: "sensor_laser"
angle_min: 0.0
angle_max: 6.27319002151
angle_increment: 0.0174740664661
time_increment: 0.0
scan_time: 0.0
range_min: 0.10000000149
range_max: 18.0
ranges: [0.14208605885505676, 0.11974052339792252, ..., 1.9066493511199951, 1.8749361038208008]
intensities: [-3.16707076146899e+24, ... , 0.0]
---
user@ubuntu:~$
When I look into the node, it seems amcl is indeed connected to /scan:
user@ubuntu:~$ rosnode info amcl
--------------------------------------------------------------------------------
Node [/amcl]
Publications:
 * /amcl/parameter_descriptions [dynamic_reconfigure/ConfigDescription]
 * /amcl/parameter_updates [dynamic_reconfigure/Config]
 * /amcl_pose [geometry_msgs/PoseWithCovarianceStamped]
 * /particlecloud [geometry_msgs/PoseArray]
 * /rosout [rosgraph_msgs/Log]
 * /tf [tf2_msgs/TFMessage]

Subscriptions:
 * /clock [rosgraph_msgs/Clock]
 * /initialpose [unknown type]
 * /map [nav_msgs/OccupancyGrid]
 * /scan [sensor_msgs/LaserScan]
 * /tf [tf2_msgs/TFMessage]
 * /tf_static [tf2_msgs/TFMessage]

Services:
 * /amcl/get_loggers
 * /amcl/set_logger_level
 * /amcl/set_parameters
 * /global_localization
 * /request_nomotion_update
 * /set_map


contacting node http://ubuntu:38221/ ...
Pid: 20339
Connections:
 * topic: /rosout
    * to: /rosout
    * direction: outbound
    * transport: TCPROS
 * topic: /tf
    * to: /amcl
    * direction: outbound
    * transport: INTRAPROCESS
 * topic: /clock
    * to: /gazebo (http://ubuntu:40085/)
    * direction: inbound
    * transport: TCPROS
 * topic: /tf
    * to: /amcl (http://ubuntu:38221/)
    * direction: inbound
    * transport: INTRAPROCESS
 * topic: /tf
    * to: /robot_state_publisher (http://ubuntu:33029/)
    * direction: inbound
    * transport: TCPROS
 * topic: /tf
    * to: /gazebo (http://ubuntu:40085/)
    * direction: inbound
    * transport: TCPROS
 * topic: /tf_static
    * to: /robot_state_publisher (http://ubuntu:33029/)
    * direction: inbound
    * transport: TCPROS
 * topic: /scan
    * to: /gazebo (http://ubuntu:40085/)
    * direction: inbound
    * transport: TCPROS
 * topic: /map
    * to: /map_server_1537203380314611704 (http://ubuntu:33768/)
    * direction: inbound
    * transport: TCPROS

user@ubuntu:~$
I tried outputting the /clock values as well, and I see nothing particularly wrong there (except maybe that time seems to start at 1000 seconds... go figure). And indeed, when I try to view the tf tree with `rosrun tf view_frames`, the link between map and odom is missing, whereas I expect amcl to publish it. ![image description](https://image.ibb.co/n95ojz/deleteme.png) I'm guessing that amcl is either not receiving /scan events or filtering them all out (no laser scan received for 34139472 seconds feels a lot like none were ever received), and therefore publishes no map->odom tf. While increasing the verbosity of amcl from the default info to debug (*), I see the debug messages below, but I have no clue what they mean:
[DEBUG] [1537204573.964568814, 3913.317000000]: MessageFilter [target=odom ]: Removed oldest message because buffer is full, count now 100 (frame_id=sensor_laser, stamp=3908.290000)
[DEBUG] [1537204573.964899783, 3913.317000000]: MessageFilter [target=odom ]: Added message in frame sensor_laser at time 3913.294, count now 100
[DEBUG] [1537204574.025497198, 3913.368000000]: MessageFilter [target=odom ]: Removed oldest message because buffer is full, count now 100 (frame_id=sensor_laser, stamp=3908.340000)
[DEBUG] [1537204574.025600379, 3913.368000000]: MessageFilter [target=odom ]: Added message in frame sensor_laser at time 3913.344, count now 100
[DEBUG] [1537204574.049464305, 3913.388000000]: Incoming queue was full for topic "/clock". Discarded oldest message (current queue size [0])
[DEBUG] [1537204574.054565708, 3913.392000000]: Incoming queue was full for topic "/clock". Discarded oldest message (current queue size [0])
[DEBUG] [1537204574.066972159, 3913.403000000]: Incoming queue was full for topic "/clock". Discarded oldest message (current queue size [0])
[DEBUG] [1537204574.086864841, 3913.420000000]: MessageFilter [target=odom ]: Removed oldest message because buffer is full, count now 100 (frame_id=sensor_laser, stamp=3908.390000)
[DEBUG] [1537204574.086959735, 3913.420000000]: MessageFilter [target=odom ]: Added message in frame sensor_laser at time 3913.393, count now 100
[DEBUG] [1537204574.133658368, 3913.453000000]: MessageFilter [target=odom ]: Removed oldest message because buffer is full, count now 100 (frame_id=sensor_laser, stamp=3908.440000)
[DEBUG] [1537204574.133732163, 3913.453000000]: MessageFilter [target=odom ]: Added message in frame sensor_laser at time 3913.442, count now 100
[DEBUG] [1537204574.182417016, 3913.491000000]: Incoming queue was full for topic "/clock". Discarded oldest message (current queue size [0])
(*) If someone knows how to default to debug logging so that I can see what's going on as the amcl node starts, please let me know. Some environment parameters that may apply:
user@ubuntu:~$ printenv | grep -i ros
ROS_ROOT=/opt/ros/kinetic/share/ros
ROS_PACKAGE_PATH=/home/user/catkin_ws/src:/opt/ros/kinetic/share
ROS_MASTER_URI=http://localhost:11311
ROS_VERSION=1
LD_LIBRARY_PATH=/home/user/catkin_ws/devel/lib:/opt/ros/kinetic/lib:/opt/ros/kinetic/lib/x86_64-linux-gnu
PATH=/home/user/bin:/home/user/.local/bin:/opt/ros/kinetic/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
ROSLISP_PACKAGE_DIRECTORIES=/home/user/catkin_ws/devel/share/common-lisp
ROS_DISTRO=kinetic
PYTHONPATH=/home/user/catkin_ws/devel/lib/python2.7/dist-packages:/opt/ros/kinetic/lib/python2.7/dist-packages
PKG_CONFIG_PATH=/home/user/catkin_ws/devel/lib/pkgconfig:/opt/ros/kinetic/lib/pkgconfig:/opt/ros/kinetic/lib/x86_64-linux-gnu/pkgconfig
CMAKE_PREFIX_PATH=/home/user/catkin_ws/devel:/opt/ros/kinetic
ROS_ETC_DIR=/opt/ros/kinetic/etc/ros
user@ubuntu:~$
How can I get rid of the "No laser scan received (and thus no pose updates have been published) for xxxx seconds" warning?

AMCL estimates poses on occupied map cells

Hi, I'm running AMCL and am finding that both amcl_pose and many of the particles are located on occupied cells in the map. Is there a way to tell AMCL not to do that? I think this would help the convergence of the pose estimation. Thanks.

navigate to NEAREST goal pos (if goal pos is blocked)

Hello all, I am working on a project where my robot needs to navigate to a goal position (that part is easy and working fine). But sometimes the goal position may be blocked, and there is a possibility that the goal planners fail, resulting in no movement at all. In this case I want my robot to move to the nearest reachable point to the goal position (this could be done by taking a radius around the goal position, assigning a new position and navigating again, but it might take a lot of time). Please help me make my algorithm more robust to implement these changes (using amcl). Below is my code for navigation to the goal position. Please share your thoughts:

    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
    from actionlib_msgs.msg import GoalStatus
    from geometry_msgs.msg import Pose, Point, Quaternion

    class GoToPose():
        def __init__(self):
            self.goal_sent = False
            # What to do if shut down (e.g. Ctrl-C or failure)
            rospy.on_shutdown(self.shutdown)
            # Tell the action client that we want to spin a thread by default
            self.move_base = actionlib.SimpleActionClient("move_base", MoveBaseAction)
            rospy.loginfo("Wait for the action server to come up")
            # Allow up to 5 seconds for the action server to come up
            self.move_base.wait_for_server(rospy.Duration(5))

        def goto(self, pos, quat):
            # Send a goal
            self.goal_sent = True
            goal = MoveBaseGoal()
            goal.target_pose.header.frame_id = 'map'
            goal.target_pose.header.stamp = rospy.Time.now()
            goal.target_pose.pose = Pose(Point(pos['x'], pos['y'], 0.000),
                                         Quaternion(quat['r1'], quat['r2'], quat['r3'], quat['r4']))
            # Start moving
            self.move_base.send_goal(goal)
            # Allow TurtleBot up to 60 seconds to complete the task
            success = self.move_base.wait_for_result(rospy.Duration(60))
            state = self.move_base.get_state()
            result = False
            if success and state == GoalStatus.SUCCEEDED:
                # We made it!
                result = True
            else:
                self.move_base.cancel_goal()
            self.goal_sent = False
            return result

        def shutdown(self):
            if self.goal_sent:
                self.move_base.cancel_goal()
            rospy.loginfo("Stop")
            rospy.sleep(1)

    if __name__ == '__main__':
        try:
            rospy.init_node('nav_test', anonymous=False)
            navigator = GoToPose()
            # Customize the following values so they are appropriate for your location
            position = {'x': 13.3, 'y': 3.9}
            quaternion = {'r1': 0.000, 'r2': 0.000, 'r3': 0.000, 'r4': 1.000}
            rospy.loginfo("Go to (%s, %s) pose", position['x'], position['y'])
            success = navigator.goto(position, quaternion)
            if success:
                rospy.loginfo("Hooray, reached the desired pose")
            else:
                rospy.loginfo("The base failed to reach the desired pose")
            # Sleep to give the last log messages time to be sent
            rospy.sleep(1)
        except rospy.ROSInterruptException:
            rospy.loginfo("Ctrl-C caught. Quitting")

    #### code by mark sulliman
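As a sketch of the "nearest reachable point" idea (all names are hypothetical; this operates on a plain occupancy grid, e.g. one read from the /map topic, and its result would feed into `goto` above): breadth-first search outward from the blocked goal cell until a free cell is found, then use that cell as the new goal.

```python
from collections import deque

def nearest_free_cell(grid, goal):
    """BFS outward from goal until a free cell is found.
    grid: 2D list, 0 = free, 1 = occupied; goal: (row, col).
    Returns the nearest free (row, col), or None if none exists."""
    rows, cols = len(grid), len(grid[0])
    seen = {goal}
    queue = deque([goal])
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == 0:
            return (r, c)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [0, 1, 1],
        [0, 1, 1]]
# Goal (2, 2) is occupied; BFS finds a nearby free cell instead.
new_goal = nearest_free_cell(grid, (2, 2))
```

Converting between grid indices and map coordinates would use the resolution and origin from the nav_msgs/OccupancyGrid metadata; because BFS expands in rings, the first free cell found is among the closest ones.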

Preventing large jumps in AMCL

When using AMCL, if some large obstacle (e.g. a trolley) passes close to my robot and interferes with the sensor data, AMCL re-localises the robot and places it in a vastly different location (often inside a wall). It also does this when first starting AMCL, before the robot's initial pose has been set, which makes specifying a pose difficult. Is there any way to tell AMCL that the robot's odometry is more-or-less accurate, and to only make small (less than 1 m) adjustments to the robot's position?
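One downstream workaround (a sketch, not an amcl feature; the class and method names are hypothetical) is to gate amcl's output yourself: accept a new pose only if it stays within some radius of the last accepted pose, and fall back to dead-reckoning from odometry while amcl is jumping around:

```python
import math

class PoseJumpGate:
    """Reject pose updates that jump farther than max_jump metres
    from the last accepted pose."""
    def __init__(self, max_jump=1.0):
        self.max_jump = max_jump
        self.last = None

    def accept(self, x, y):
        if self.last is None:
            self.last = (x, y)  # first fix is always accepted
            return True
        dist = math.hypot(x - self.last[0], y - self.last[1])
        if dist <= self.max_jump:
            self.last = (x, y)
            return True
        return False  # caller falls back to odometry for this cycle

gate = PoseJumpGate(max_jump=1.0)
gate.accept(0.0, 0.0)        # first pose: accepted
ok = gate.accept(0.2, 0.1)   # small move: accepted
bad = gate.accept(5.0, 5.0)  # large jump: rejected
```

Within amcl itself, lowering the `odom_alpha*` noise parameters tells the filter the odometry is trustworthy, which keeps the particle cloud tighter and makes large jumps less likely in the first place.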

hector_slam: how to load a saved map (and perform localization)

Hi, I read the tutorial about creating a new map with hector_slam and saving it. My question now is: how do I load a saved map and use it for localization (thus substituting for amcl)? Thanks

Random position in AMCL with robot_localization

Hello, I have a problem using the AMCL package: the output position is totally random. When feeding the /initial_pose topic with the current output of robot_localization, the car is in the right position on the map, as shown in the next pic: ![image description](/upfiles/15386688663847328.png) As soon as the car moves, the particles start to spread out vastly and jump randomly, and the car never localizes itself properly again, as shown in the video: [amcl](https://drive.google.com/file/d/11yNiU6Co4_duEnzuQzItKyZQxVFJwEgX/view?usp=sharing) I didn't investigate the amcl parameters much, but I suspect that I have a fundamental issue. Am I missing something? Here is my setup:

**robot_localization (1 local node):**

**Inputs:** imu, wheel_odometry. Sample msgs:

**IMU:**

    header:
      seq: 25
      stamp:
        secs: 1538665174
        nsecs: 779748916
      frame_id: "imu_link"
    orientation:
      x: -0.021728795831
      y: 0.0629217740198
      z: 0.934970575289
      w: 0.348423209804
    orientation_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01]
    angular_velocity:
      x: 0.0010685628315
      y: 0.00427425132599
      z: 0.00747993982048
    angular_velocity_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    linear_acceleration:
      x: 1.45088620605
      y: 1.59932670898
      z: 10.4099888184
    linear_acceleration_covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

**Wheel_Odom:**

    header:
      seq: 1
      stamp:
        secs: 1538665300
        nsecs: 997972965
      frame_id: "odom"
    child_frame_id: "base_footprint"
    pose:
      pose:
        position:
          x: 0.0
          y: 0.0
          z: 0.0
        orientation:
          x: 0.0
          y: 0.0
          z: 0.0
          w: 0.0
      covariance: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    twist:
      twist:
        linear:
          x: 0.0
          y: 0.0
          z: 0.0
        angular:
          x: 9.0
          y: 0.0
          z: 0.0
      covariance: [1e-05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1e-06, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.01]

**Launch file:** [false, false, false, false, false, false, true, true, false, false, false, true, false, false, false] [false, false, false, false, false, true, false, false, false, false, false, false, false, false, false]

**amcl:** inputs: map, laser_scan, initial_pose (from robot_localization, at first)

**tf:**

- static tf from imu_link and lidar_link to base_footprint
- robot_localization publishes the tf (base_footprint -> odom)
- amcl publishes the tf (odom -> map)

![image description](/upfiles/153866796060310.png) Thank you

Can the Turtlebot drive through an occupied cell in the given map which is free in the environment?

I wonder whether the Turtlebot can do this: I gave it a map which is opened with RViz; after running amcl and setting a 2D pose estimate, everything is precise. Now, if I control the robot to move as I want:

- What happens if it crosses some black (occupied) grid cell?
- Is there any invisible error in that moment that could cause the program to fail later?

How can I localize without any sensors on robot?

I have been successfully using the amcl package with laser sensors, and I've learnt that it listens to rviz's initialpose topic and publishes the position and orientation of the robot relative to the map. Now, I want to publish the position and orientation myself, without relying on sensors. The thing is, I have a separate localisation system, which can send geometry_msgs/PoseWithCovarianceStamped messages. Here is my publisher code. I will extend it to continuously get location messages in JSON from my server, convert them to geometry_msgs/PoseWithCovarianceStamped, and publish them to the initialpose topic, and I want amcl to localise based on those messages without reading any sensor messages:

    #!/usr/bin/env python
    import rospy
    from geometry_msgs.msg import PoseWithCovarianceStamped

    def initial_pos_pub():
        publisher = rospy.Publisher('initialpose', PoseWithCovarianceStamped, queue_size=10)
        rospy.init_node('initial_pos_pub', anonymous=True)
        # Creating the message with the type PoseWithCovarianceStamped
        rospy.loginfo("This node sets the turtlebot's position to the red cross on the floor. "
                      "It will shutdown after publishing to the topic /initialpose")
        start_pos = PoseWithCovarianceStamped()
        # Filling the header with relevant information
        start_pos.header.frame_id = "map"
        start_pos.header.stamp = rospy.Time.now()
        # Filling the payload with values gathered by subscribing to the
        # initialpose topic published by RVIZ (rostopic echo initialpose)
        start_pos.pose.pose.position.x = -0.846684932709
        start_pos.pose.pose.position.y = 0.333061099052
        start_pos.pose.pose.position.z = 0.0
        start_pos.pose.pose.orientation.x = 0.0
        start_pos.pose.pose.orientation.y = 0.0
        start_pos.pose.pose.orientation.z = -0.694837665627
        start_pos.pose.pose.orientation.w = 0.719166613815
        start_pos.pose.covariance[0] = 0.25
        start_pos.pose.covariance[7] = 0.25
        start_pos.pose.covariance[1:7] = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
        start_pos.pose.covariance[8:34] = [0.0] * 26
        start_pos.pose.covariance[35] = 0.06853891945200942
        rospy.loginfo(start_pos)
        publisher.publish(start_pos)

    if __name__ == '__main__':
        try:
            initial_pos_pub()
        except rospy.ROSInterruptException:
            pass

So the question is, how can I make the amcl package accept my own location messages and run without a laser sensor?
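As an aside on the covariance handling in the code above: the field is a flat, row-major 6x6 (36-element) array over (x, y, z, roll, pitch, yaw), so rather than poking individual indices it can be built with a small helper (plain Python sketch, hypothetical helper name):

```python
def flat_covariance(var_x, var_y, var_yaw):
    """Build the row-major 6x6 covariance for (x, y, z, roll, pitch, yaw)
    with only the x, y and yaw variances set, everything else zero."""
    cov = [0.0] * 36
    cov[0] = var_x     # row 0, col 0: x variance
    cov[7] = var_y     # row 1, col 1: y variance
    cov[35] = var_yaw  # row 5, col 5: yaw variance
    return cov

# The same values the publisher above sets by hand:
cov = flat_covariance(0.25, 0.25, 0.06853891945200942)
```

The returned list can be assigned directly to `start_pos.pose.covariance`.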

How to check if there is a problem in odometry?

Hi, I am moving a robot from point A to point B using the ROS navigation packages. I found a strange thing: after a few iterations there is a difference between the actual map and the laser map; the laser map gets tilted, so the robot starts moving in the wrong direction. I can't figure out where the exact problem is. The following figure may help you understand the issue. Is amcl not working properly, or is there a problem with the odometry, or something else? Could you help me? Thank you. ![C:\fakepath\Capture.JPG](/upfiles/15401889687876595.jpg)

How to verify odom --> base_link is accurate or not?

I'm using AMCL, and when I specify a goal to navigation, the pose of my robot drifts. Since AMCL needs the `odom --> base_link` tf and laser scan data, how can I verify whether `odom --> base_link` is accurate?

Fiducials in conjunction with AMCL

I'm having some trouble getting fiducials (using [aruco_detect](http://wiki.ros.org/aruco_detect)) to work alongside [amcl](http://wiki.ros.org/amcl). The goal here is to use normal amcl localization when a fiducial is not present, and to switch to localization using fiducials when the camera sees one. After having *a lot* of trouble trying to use [fiducial_slam](http://wiki.ros.org/fiducial_slam) for this, I decided to create my own node that subscribes to [/fiducial_transforms](http://docs.ros.org/kinetic/api/fiducial_msgs/html/msg/FiducialTransformArray.html) (published by aruco_detect) and publishes to [/initialpose](http://docs.ros.org/api/geometry_msgs/html/msg/PoseWithCovarianceStamped.html) (subscribed to by amcl). While the subscribing and publishing work well with both aruco and amcl, I have been running into problems with the trigonometry involved in obtaining the robot's position and orientation. Since the robot's position is relative to a 2D map (already created), I converted the rotation quaternions into Euler angles and only considered the yaw value (using [tf.transformations.euler_from_quaternion](http://docs.ros.org/jade/api/tf/html/python/transformations.html#tf.transformations.euler_from_quaternion)). To reconstruct the quaternion for publishing, I use [tf.transformations.quaternion_from_euler](http://docs.ros.org/jade/api/tf/html/python/transformations.html#tf.transformations.quaternion_from_euler), passing in (0, 0, new_yaw). After playing around with my calculation of new_yaw (including passing in -new_yaw, pi-new_yaw, pi/2-new_yaw, etc.), I was unable to un-invert the robot's rotation (a rotation of 30 degrees of the robot always corresponds to a rotation of negative 30 degrees in the quaternion).
My current assessment of the problem is that there is some disconnect between the axes of the 2D map and the response from aruco_detect, which for some reason may cause the rotation to always happen in a certain direction (though this does not explain why publishing the quaternions for (0, 0, new_yaw) and (0, 0, -new_yaw) have the same outcome, so I am still unsure). In addition, even though we are assuming for now that the fiducial is at the origin (facing in the quaternion direction (0, 0, 1, 0)), we would like to incorporate known values for the fiducial's position and rotation to obtain the robot's true pose.
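For debugging the sign issue, it may help to check the conversions in isolation. For a pure z-axis rotation, `tf.transformations.quaternion_from_euler(0, 0, yaw)` reduces to (0, 0, sin(yaw/2), cos(yaw/2)) in (x, y, z, w) order, and the yaw can be recovered as 2*atan2(qz, qw). A self-contained sketch:

```python
import math

def quat_from_yaw(yaw):
    """z-axis-only quaternion (x, y, z, w), matching what
    tf.transformations.quaternion_from_euler(0, 0, yaw) returns."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def yaw_from_quat(q):
    """Recover yaw from a z-axis-only quaternion (x, y, z, w)."""
    _x, _y, z, w = q
    return 2.0 * math.atan2(z, w)

yaw = math.radians(30.0)
q = quat_from_yaw(yaw)
recovered = yaw_from_quat(q)                   # round-trips to +30 degrees
flipped = yaw_from_quat((0.0, 0.0, -q[2], q[3]))  # negated qz gives -30 degrees
```

If this round-trip is consistent but a physical +30-degree turn still comes back as -30, the negation is likely happening upstream of the math: for example the camera optical frame's axis convention versus the robot base frame, or using the tag-in-camera transform where its inverse (camera-in-tag) is needed.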

[noobie] Need help with custom mobile robot navigation (multiple questions)

Good evening guys (questions at the bottom). I'm a robotics student working on my senior design and REALLY NEED SOME HELP. The idea of the project is that we will build our own environment, then build a 2D map using hector_slam or gmapping, then send this map to the MR (mobile robot) so it can navigate it. The MR will always start from the same spot, and we will give it a goal and pose using rviz. (The ultimate goal is to have the goals and poses of specific spots on the map predefined. In other words, I want to send it a number (1, 2 or 3) selecting a predefined goal and pose, and have it drive from point A, the predefined start point, to goal 1, 2 or 3. I want to send the number from another computer over the network.) I'm VERY new to ROS and trying my best to learn and finish this project so I can graduate. My team and I finished building the 4-wheel-drive robot using:

- Mobile-robot chassis: http://www.lynxmotion.com/p-603-aluminum-4wd1-rover-kit.aspx
- Raspberry Pi 3 Model B running Ubuntu MATE 16.04 & ROS Kinetic
- Adafruit motor driver: https://learn.adafruit.com/adafruit-dc-and-stepper-motor-hat-for-raspberry-pi/overview
- Optical encoders on 2 of the 4 wheels
- Lidar sensor: https://www.slamtec.com/en/Lidar/A3Spec

My current achievements:

- Installed ROS, the dependencies, and the packages
- Using `roslaunch rplidar_ros rplidar.launch` to get the data and the /scan topic
- Using `roslaunch hector_slam tutorial.launch` to build a map
- Using `rosrun map_server map_saver -f mymap` to save a map
- Using `rosrun map_server map_server mymap.yaml` to publish the saved map
- Using the motor-hat-node package & its teleop (W-A-S-D) to manually move the mobile robot around to build the map
- Built the environment with dimensions in Gazebo
- Built the mobile robot with dimensions in Gazebo

Stuck on:

- Publishing odometry (is that the encoder data?)
- Navigation (which package to use? I have the navigation stack installed but no idea how to use it)
- Localization (is it done with the navigation stack?)
- How to add encoders for the wheels in Gazebo?
- How to add the rplidar that I have in real life on top of the robot in Gazebo?
- How to establish communication between any of the navigation packages and my robot, in simulation or in real life?

Packages that I looked at but have no idea how to use for navigation:

- navigation stack
- mapping
- hector_slam
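On the "publishing odometry" question: yes, wheel odometry is typically computed from the encoder ticks by integrating a differential-drive model; a node then publishes the result as nav_msgs/Odometry and broadcasts the odom -> base_link transform that amcl and the navigation stack expect. A minimal, ROS-free sketch of the integration step (the wheel radius, track width and ticks-per-revolution below are made-up values for illustration):

```python
import math

# Made-up robot constants for illustration; measure these on the real robot.
TICKS_PER_REV = 360
WHEEL_RADIUS = 0.05  # metres
TRACK_WIDTH = 0.30   # distance between the wheels, metres

def update_odometry(pose, left_ticks, right_ticks):
    """Integrate one pair of encoder deltas into the (x, y, theta) pose."""
    per_tick = 2.0 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = left_ticks * per_tick
    d_right = right_ticks * per_tick
    d_center = (d_left + d_right) / 2.0        # distance travelled by the base
    d_theta = (d_right - d_left) / TRACK_WIDTH  # change in heading
    x, y, theta = pose
    # Midpoint integration keeps the arc approximation reasonable
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

pose = (0.0, 0.0, 0.0)
pose = update_odometry(pose, 360, 360)  # both wheels one full rev: straight ahead
```

In a real node these updates would run at a fixed rate, be stamped and published as nav_msgs/Odometry, and be broadcast as the odom -> base_link tf.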