Localizing with AprilTags

Our method of indoor localization uses a module with a ceiling-facing camera that recognizes glyph markers on the ceiling. Each glyph marker has a unique ID corresponding to a position in the global map of the area the module is localizing in. To make this possible, we needed a practical glyph recognition system. We chose AprilTags because of its robust and accurate tag recognition: the AprilTags system provides fast, scale-invariant, and rotation-invariant recognition of its tags, which makes it well suited to our indoor localization project. AprilTags was developed at the University of Michigan by Professor Edwin Olson. Check out the AprilTags wiki here.


Chosen AprilTags Family

AprilTags has several tag families. We originally tested with the 36h11 tag family, later also considered using the 16h5 tag family instead, and in the end decided on 36h11. The naming convention for tag families, for example "36h11", gives the number of data bits in a member tag of the family (36 in this case) followed by the minimum Hamming distance between two tags of the same family (11 in this case).

(Image: TagFams.jpg) Four member tags from each of the two AprilTags families pictured.

Hamming Distance

A high Hamming distance between members of the chosen tag family is desirable because Hamming distance, by definition, is the number of positions at which two symbols differ. A higher minimum Hamming distance therefore means less chance of recognizing one tag as a different tag. This is one reason the 36h11 tag family is more desirable to use than the 16h5 tag family.
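To make the definition concrete, here is a small illustrative Python snippet (the bit patterns are made up for illustration, not actual AprilTags codewords). It counts the differing bit positions between two codewords, which is exactly the quantity the "h11" in "36h11" guarantees a minimum of:

    def hamming_distance(a, b):
        # Number of bit positions at which two equal-length codewords differ
        return bin(a ^ b).count("1")

    # Hypothetical 6-bit codewords, for illustration only
    print(hamming_distance(0b101101, 0b100001))  # -> 2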

Family Size

Another reason we chose the 36h11 tag family over the 16h5 tag family is family size: the 16h5 family has only 30 member tags, while the 36h11 family has 586. We must cover the ceilings of two floors of the engineering building, so we need a lot of glyphs. Because our strategy uses pairs of tags from a given family, a family with N members can mark N^2 spots. Even with the tag-pair strategy, the 16h5 family can cover only 900 spots, while the 36h11 family has the potential to cover 343396. This was the deciding factor for choosing the 36h11 family: not only does it provide more accurate tag recognition, it also gives us the ability to localize more area than we will ever need.
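As a quick check of the pair-coverage arithmetic above (a throwaway snippet using the member counts quoted in this section):

    for family, members in (("16h5", 30), ("36h11", 586)):
        print(family, members ** 2)  # 900 and 343396 markable spots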

Complexity

One downside of choosing the 36h11 tag family over the 16h5 tag family is that 36h11 tags have more data bits and are therefore more complex. Because we are making the tags by hand with stencils and spray paint, each stencil for a 36h11 tag must be carved to a higher complexity than one for a 16h5 tag. However, the pros of using the 36h11 tag family still outweigh the cons.

How the Module Performs Localization

(diagram here)


Step 1: Find an ordered tag pair

An ordered tag pair is two side-by-side tags, with the top of each tag facing the positive direction of the y-axis of the global reference frame, as shown in the following picture:


(picture of global frame and tag pair here)

We determine whether we have an ordered tag pair by first getting the local (camera) frame x,y positions of the recognized tags (at most 5). We only need to recognize 5 tags to find a pair because at most 4 tag pairs fit in the camera FOV at once. This means the fifth tag recognized will be the other tag in a pair with a previously recognized tag.

(picture of four tag pairs in local frame) Caption about how the fifth tag is the other tag in one of the already recognized pairs.

We determine that two recognized tags are a pair if their centers are within a specified distance of each other. It is therefore important that the centers of the two tags in a tag pair are no farther apart than this specified distance, and equally important that the centers of tags from different tag pairs are farther apart than this distance (or else they will be mistaken for a pair).
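The following is a minimal sketch of this pairing check, not the module's actual code. It assumes each detection is given as a (tag_id, x, y) tuple in the local frame, and PAIR_DIST is a hypothetical threshold chosen so that only tags belonging to the same pair fall within it:

    import math

    PAIR_DIST = 0.5  # hypothetical threshold, in local-frame units

    def find_tag_pairs(detections):
        # detections: list of (tag_id, x, y) tuples in the local (camera) frame
        pairs = []
        for idx, (id_a, xa, ya) in enumerate(detections):
            for id_b, xb, yb in detections[idx + 1:]:
                if math.hypot(xa - xb, ya - yb) <= PAIR_DIST:
                    # ordering (which tag leads) is decided in the next step
                    pairs.append((id_a, id_b))
        return pairs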

Next we must determine which tag is the leading tag in the recognized pair. That is, if we recognize a pair in which one tag has ID 1 and the other has ID 2, we must know whether we are looking at the tag pair (1,2) or (2,1), as these two ordered tag pairs represent different positions in the global frame.

(picture that shows the difference between (1,2) and (2,1))

We use a conditional structure, based on the rules shown here, to determine which tag is the leading tag in the ordered tag pair:

(picture that shows the rules for which tag is leading)

Step 2: Index into LUT to get global position

Now that we have an ordered tag pair, we index into an array of global positions. For a tag pair (i,j), we look at item (N*i + j) in the array, where N is the number of tag IDs used for localization (for example, if we use tag IDs 0 to 50, then N is 51 and the LUT array holds 51^2 = 2601 items). So if we have the tag pair (2,1) and N = 51, we look at item (51*2 + 1) = 103 in the array. Retrieving this item gives a string in the format "x,y,z", where x and y are signed floats representing the global frame position of the center of the leading tag in the pair, and z represents the floor that the tag pair is on. This information, together with the local frame position of the leading tag and the orientation of the tag pair, is sufficient to calculate the global position of the module's camera center.
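A minimal sketch of this lookup (illustrative names and values; the real LUT contents come from the building map):

    N = 51  # tag IDs 0 through 50 are used for localization

    def lookup_global_position(lut, i, j):
        # lut is a list of N*N strings in the format "x,y,z"
        x, y, z = lut[N * i + j].split(",")
        return float(x), float(y), int(float(z))  # (x, y) of leading tag, z = floor

    # Example: the ordered pair (2, 1) indexes item 51*2 + 1 = 103
    lut = [""] * (N * N)
    lut[103] = "12.5,-3.0,2"   # made-up entry for illustration
    print(lookup_global_position(lut, 2, 1))   # (12.5, -3.0, 2)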

(picture of the center of the leading tag)

Step 3: Calculate global position of module's camera center

Let the global frame glyph position obtained from the LUT be xgg,ygg. Let the local frame glyph position obtained from the AprilTags recognition software be xgl,ygl. Let the orientation of the tag pair in the local frame be gamma. Note that this tag orientation is the negative of the tilt of the local frame with respect to the global frame (that is, the tilt is -gamma), since the glyphs at 0 degrees have their tops pointing along a line parallel to the y-axis of the global map.

We shift the origin from the camera center to the leading tag and obtain the camera center's coordinates in this new local frame. The position of the camera center in the local frame with the leading tag taken as the origin is the negative of the local frame position of the leading tag when the camera center is taken as the origin:

       local frame camera center x for leading tag as origin = xcl = -xgl
       local frame camera center y for leading tag as origin = ycl = -ygl

Now we calculate beta, which is the angle between the vector from the leading tag (the new origin) to the camera center and the x-axis of the local frame. We will see the use of this angle shortly.

    beta = math.atan(ycl/xcl)
    if (xcl < 0): beta = beta + math.pi

Now we get the distance l1 from the origin of the global frame to the global frame position of the leading tag:

    l1 = math.sqrt((xgg*xgg) + (ygg*ygg))

We also get the distance l2 from the global frame position of the leading tag to the global frame position of the camera center. This distance is equivalent to the distance between the local frame position of the leading tag and the local frame position of the camera center, since the local frame and the global frame have axes with the same scaling and units. Therefore:

    l2 = math.sqrt((xcl*xcl) + (ycl*ycl))

Now we get theta1, which is the angle between l1 and the global x-axis:

    theta1 = math.atan(ygg/xgg)
    if (xgg < 0): theta1 = theta1 + math.pi

We also want theta2, which is the angle between l2 and the global x-axis, minus theta1. First we get thetaee, the angle between l2 and the global x-axis:

    thetaee = beta - gamma

Why is thetaee = beta - gamma? Recall that gamma is the orientation of the tag pair; it is also the orientation of the leading tag, since each tag in the pair has the same orientation. This orientation is measured with the tag facing the positive direction of the global frame's y-axis. If the tag is viewed at an orientation of zero, the tag faces the positive y-axis of the local frame, so the local frame y-axis and global frame y-axis are parallel with no angle between them. If the orientation is non-zero, however, the local frame is rotated with respect to the global frame by the same magnitude but with opposite sign. The reason this angle is -gamma is the same reason that, if you tilt your camera clockwise while observing an arrow on an image plane parallel to the camera lens, the arrow appears to tilt counterclockwise by the same number of degrees. So -gamma is the angle between the local frame and the global frame, with counterclockwise rotation taken as positive. Adding -gamma to beta gives thetaee, because beta is the angle between l2 (the line between the leading tag's center point and the camera center point) and the local frame's x-axis. Since scaling and units are the same for the local and global frames, we simply add beta to -gamma to get the angle between l2 and the global frame's x-axis.

Now we can get theta2:

    theta2 = thetaee - theta1

Finally, we can find xcg,ycg (the coordinates of the camera center in the global frame) using forward kinematics trigonometry:

    xcg = (l1*math.cos(theta1)) + (l2*math.cos(theta1+theta2))
    ycg = (l1*math.sin(theta1)) + (l2*math.sin(theta1+theta2))

As for the camera's z position value (that is, what floor the camera is on), it is simply the z value found by indexing into the LUT when obtaining the global position of the leading tag in the recognized tag pair.
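Putting Step 3 together, here is a minimal Python sketch of the calculation above. It uses only the quantities already defined (xgg, ygg from the LUT; xgl, ygl and gamma from the AprilTags detection), and it substitutes math.atan2 for the atan-plus-pi correction used in the text, which yields the same angles. It is an illustration, not the module's actual code.

    import math

    def camera_center_global(xgg, ygg, xgl, ygl, gamma):
        # Camera center in the local frame with the leading tag as origin
        xcl, ycl = -xgl, -ygl
        # beta: angle of the leading-tag-to-camera-center vector in the local frame
        beta = math.atan2(ycl, xcl)
        # Distances from the global origin to the tag (l1) and tag to camera (l2)
        l1 = math.hypot(xgg, ygg)
        l2 = math.hypot(xcl, ycl)
        theta1 = math.atan2(ygg, xgg)
        thetaee = beta - gamma        # the same vector's angle in the global frame
        theta2 = thetaee - theta1
        xcg = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
        ycg = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
        return xcg, ycg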

Step 4: Calculate orientation of module's camera

If we want the orientation of the module's camera with respect to true north, we need the following:

gamma = orientation of a spotted glyph with respect to the x-axis of the local frame given that the top of the glyph faces the positive y direction of the global frame.

offset = the angular difference between the y-axis of the global frame and true north.

Then we find THETAcg, the orientation of the module's camera with respect to true north, as:

THETAcg = offset - gamma

Here counterclockwise rotation is taken as positive.
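As a small sketch of this last step (the wrap into [0, 2*pi) is an added convenience, not part of the original description):

    import math

    def camera_heading(gamma, offset):
        # Orientation of the camera with respect to true north,
        # counterclockwise rotation taken as positive
        return (offset - gamma) % (2 * math.pi)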

Example of Localization

(Real World Robot Demo Video with One Glyph)

(Short Explanation)

(Demo Code)

TODO

  • Explanation of how we localize
  • Demo video of real world robot demonstrating this behavior