Exercise 2 - State Estimation

Published: Wednesday Nov. 10.

Exercise 2: Kalman Filter

Setup

Follow the same setup steps as in Exercise 1 for setting up your robot and developing through the dts exercises interface.

Additionally, run the following:

$ docker -H ![your_robot].local pull duckietown/dt-duckiebot-interface:daffy-arm32v7

Then ssh into your robot and run:

$ dt-autoboot

Finally, make sure your shell is up to date. The first time you run things through the dts exercises interface, add the --pull flag, once locally and once on the robot:

$ dts exercises test -b ![ROBOT_NAME] --pull 
$ dts exercises test --sim --pull

You may choose to copy over the other perceptual modules that we tuned for Exercise 1, such as the anti_instagram, image_processing, or line_detector packages. Just copy them into your exercise_ws/src folder and then build.

For the lane_controller it is up to you which version to use. You may use your work from Exercise 1, you may use the built-in PID-based lane controller, or you may borrow a lane_controller from someone else in the class. If you borrow the work of someone else, you must acknowledge them, for which they will receive a small bonus.

Your Task

We will be working in the state_estimation exercises folder in dt-exercises.

You will need to edit the lane_filter package in state_estimation/exercise_ws/src.

You probably only need to touch 3 files (but you could do things any way you like):

  • config/lane_filter_node/default.yaml contains the parameters that will be loaded by your ROS node.
  • src/lane_filter_node.py is the ROS node that will actually be run. We have tried to provide everything you need for the ROS interface already, so you may not have to change this file at all.
  • include/lane_filter/lane_filter.py contains the Python class where we suggest implementing most of the logic.
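For a sense of what the parameter file could end up holding, here is an illustrative sketch. The parameter names below are assumptions, not names the template defines, and the encoder/wheel values are only typical Duckiebot figures; use whatever your node actually reads and measure your own robot where possible.

```yaml
# Hypothetical entries for config/lane_filter_node/default.yaml.
# All names and values are illustrative assumptions.
process_noise_d: 0.001      # process noise on the lateral offset d
process_noise_phi: 0.002    # process noise on the heading phi
encoder_resolution: 135     # encoder ticks per wheel revolution (typical Duckiebot value)
wheel_radius: 0.0318        # wheel radius in meters (typical Duckiebot value)
wheel_baseline: 0.1         # distance between the wheels in meters (typical Duckiebot value)
```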

In class, we have discussed the pros and cons of different filtering approaches. Here, we propose a “best of both worlds” hybrid approach. We will use the histogram approach for generating the measurement likelihood (since this has nice robustness properties, as we discussed), but then we will parameterize the output as a Gaussian and integrate it with a motion model that uses the Duckiebot's wheel encoders. The code is prepared for you such that you only need to fill in a few TODOs. They are as follows:

  • Implement the predict function in include/lane_filter/lane_filter.py. This function takes the encoder “deltas” from the two wheels and should propagate the belief forward. You can use the Extended Kalman filter propagation equations for this.
  • Implement the update function in include/lane_filter/lane_filter.py. This function already builds the measurement likelihood for you. You will need to reparameterize the measurement likelihood as a Gaussian with mean equal to the maximum likelihood value in the histogram, and covariance determined somehow by looking at the rest of the histogram. Finally, you will need to fuse this measurement with your current belief.
  • Define and tune any new parameters that you might need.
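As a rough sketch of the structure these TODOs suggest (this is not the template's actual code: the function signatures, the 2-D state [d, phi], and all noise values here are assumptions), the predict step, the histogram-to-Gaussian reparameterization, and the fusion step could look like this:

```python
# EKF-style predict/update sketch for a 2-D lane state [d, phi]
# (lateral offset and heading). Illustrative only -- adapt the signatures
# to whatever lane_filter.py actually provides.
import numpy as np

WHEEL_BASELINE = 0.1  # distance between the wheels [m] -- assumed value


def predict(mu, Sigma, dist_left, dist_right, Q, baseline=WHEEL_BASELINE):
    """EKF prediction from the arc lengths travelled by each wheel."""
    d, phi = mu
    delta_dist = (dist_left + dist_right) / 2.0      # forward displacement
    delta_phi = (dist_right - dist_left) / baseline  # heading change
    # Nonlinear motion model: the lateral offset grows with sin(phi)
    mu_new = np.array([d + delta_dist * np.sin(phi), phi + delta_phi])
    # Jacobian of the motion model with respect to [d, phi]
    F = np.array([[1.0, delta_dist * np.cos(phi)],
                  [0.0, 1.0]])
    Sigma_new = F @ Sigma @ F.T + Q
    return mu_new, Sigma_new


def gaussian_from_histogram(grid_d, grid_phi, likelihood):
    """Reparameterize a histogram likelihood as a Gaussian: mean at the
    maximum-likelihood cell, covariance from the histogram's spread."""
    i, j = np.unravel_index(np.argmax(likelihood), likelihood.shape)
    z = np.array([grid_d[i], grid_phi[j]])
    w = likelihood / likelihood.sum()
    D, P = np.meshgrid(grid_d, grid_phi, indexing="ij")
    var_d = np.sum(w * (D - z[0]) ** 2)
    var_phi = np.sum(w * (P - z[1]) ** 2)
    # Floor the variances so the measurement covariance stays invertible
    return z, np.diag([max(var_d, 1e-6), max(var_phi, 1e-6)])


def update(mu, Sigma, z, R):
    """Standard Kalman update; the measurement observes the state directly."""
    K = Sigma @ np.linalg.inv(Sigma + R)
    mu_new = mu + K @ (z - mu)
    Sigma_new = (np.eye(2) - K) @ Sigma
    return mu_new, Sigma_new
```

Note that with an identity measurement model the update is a plain Kalman update; the "extended" part only shows up in the prediction, where the sin(phi) term makes the motion model nonlinear.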

Deliverables

To submit your assignment,

  1. You should make two different submissions to the aido5-LF-sim-validation challenge. One of them should be optimized to run in the simulator, and the other on the real robot. You should change your submission’s label in the file submission.yaml to user-label: sim-exercise-2 before submitting for the simulation, and to user-label: real-exercise-2 for the real robot. The output that you get on the challenge server for the real-exercise-2 submission does not matter; we will run that submission on our own robots for the evaluation.

Note that you can use the same code for both submissions, but having two different submissions will allow you to tune parameters for the real robot and the simulator separately.

  2. You should also submit a video of your code running on your robot with dts exercises test -b ![YOUR_ROBOT]. This will be useful in case a problem happens when we try to run your code on our robot. You can submit the video here.

  3. Please send Liam and Anthony a link to your GitHub repo through a private message on Slack. Also please mention whether you borrowed the lane_controller from someone else.

Grading

This assignment is worth 15% of your final grade. Compared with Exercise 1, more weight will be given to the intermediate output of the lane filter (the lane pose estimate), and less to the end-to-end performance of the agent.

  • 4% performance of your lane filter evaluated in simulation
  • 4% performance of your lane filter evaluated on the Duckiebot
  • 3% performance of your end-to-end system evaluated in simulation
  • 3% performance of your end-to-end system evaluated on the Duckiebot
  • 1% for submitting the video

Please report problems quickly: on Discord, in class, on Slack, as GitHub issues, etc. If you are not sure whether something is working properly, ask. Don’t spend hours trying to fix it first.

Other Pointers, Helpers, and Things of Interest

The deadline is set for Wednesday Nov. 25.