Friday, April 3, 2015

Determining Orientation

To start this post, I'd just like to add a little disclaimer: the description below is according to my own understanding. There are probably flaws, as I'm still working on this project. But if there's something I'm off on, feel free to let me know! I'm sure it could only help!


In pretty much every quadcopter I've ever seen, a few sensors are used to determine orientation: a gyroscope, an accelerometer, and, if you're interested in determining your heading, a magnetometer. For now, I'm not really interested in finding my heading with a magnetometer; I'm more worried about pitch and roll. But what are these sensors, and what do we get from them?

An accelerometer is a sensor that measures acceleration. As we know, gravity is an acceleration. In the MPU6050, which I am using for this project, the accelerometer has 3 axes: x, y, and z. The z axis is the vertical axis. When the sensor is level and unmoving, z should read at or around 1 (that is, 1 g). If I suddenly lift the quadcopter, then while it's traveling up, z should read something higher than 1. Think of it as a ball on a spring (or a Slinky toy). The ball is heavy enough that the spring is stretched a little bit, but not stretched to its limit. As it rests there, it is in a state of equilibrium. If you move the hand that is holding the spring sharply upwards, the distance between the ball and your hand will increase. This increase or decrease in distance (how much the spring is stretched) can be used to measure the force.

For the other 2 axes (x and y), I thought about the ball and spring where the ball is sitting on a platform. (See the picture I made with my extensive Photoshop skills....MS paint... below)


As the platform tilts, the ball rolls farther from or closer to the hand holding the spring. We also make sure the hand holding the spring stays on the same horizontal plane as the ball: as the ball rolls down the slope, we move our hand to stay level with the ball, without moving closer to or farther from it. The increase or decrease in distance as the ball rolls away or closer, until an equilibrium is found, is the value of the acceleration on the x or y axis. Two things are happening to the ball in the example above: a force is pulling it down, and a force is pulling it outward. Since this is a horizontal axis (x or y), we are only interested in the value of the force pulling it outward or inward.

Now, I know that's not a perfect example. In fact, don't quote me. But it's how I have it laid out in my mind, and it helped me to get it. Ultimately, it doesn't matter how the sensor gets these values. We just know that we're getting them, and now we can use them.

Since you now have numbers for each of these three axes representing how much acceleration there is in each direction, when the sensor is still you can use these values to determine the angles: the orientation. In the case of the quadcopter, you can get the x and y rotation angles, which, in this case, means pitch and roll.
The following is an example of the code used in this flight controller program.

import math

def dist(a, b):
    # Magnitude of the vector (a, b)
    return math.sqrt((a * a) + (b * b))

def get_y_rotation(x, y, z):
    # Angle between the x axis and the y-z plane
    radians = math.atan2(x, dist(y, z))
    return math.degrees(radians)

def get_x_rotation(x, y, z):
    # Angle between the y axis and the x-z plane
    radians = math.atan2(y, dist(x, z))
    return math.degrees(radians)

These x, y, and z values are read in and then divided by a scaling factor that we get from the datasheet. In the case of the MPU6050 accelerometer, we divide the numbers by 16384. After we have scaled all three numbers, we input them into the functions above, and we get the x and y rotations. Remember that we are not getting angle data from the accelerometer. We are getting acceleration data for three different axes, and we are using some math magic to determine our angles.
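To make that scaling step concrete, here's a small self-contained sketch. The raw values are made up for illustration; on the real quadcopter they'd come from the sensor's registers over I2C:

```python
import math

ACCEL_SCALE = 16384.0  # LSB per g for the MPU6050's default +/-2g range

def dist(a, b):
    return math.sqrt((a * a) + (b * b))

def get_y_rotation(x, y, z):
    return math.degrees(math.atan2(x, dist(y, z)))

def get_x_rotation(x, y, z):
    return math.degrees(math.atan2(y, dist(x, z)))

# Made-up raw readings for a level, unmoving sensor:
# nothing on x or y, and exactly 1 g on z.
raw_x, raw_y, raw_z = 0, 0, 16384

# Scale to g before doing any trigonometry
accel_x = raw_x / ACCEL_SCALE
accel_y = raw_y / ACCEL_SCALE
accel_z = raw_z / ACCEL_SCALE

print(get_x_rotation(accel_x, accel_y, accel_z))  # 0.0 for a level sensor
print(get_y_rotation(accel_x, accel_y, accel_z))  # 0.0
```

If you instead tipped the sensor so all of gravity showed up on the x axis, get_y_rotation would report 90 degrees.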

This is all fine and dandy, but there is actually a pretty important feature/weakness that accelerometers have. Because they measure accelerations, accelerometers are very sensitive to vibrations. (Like spinning quadcopter blades?) As the sensor vibrates, the numbers fluctuate, and that interferes with the math we did to determine our angles. This fluctuation is what I like to call "jitter," because I've heard many other people call it jitter and it fits.
But from what I understand about accelerometers, this isn't necessarily a bug; it's just how they work. Let's say you want to measure how hard you punched a wall. (Why are you punching walls?) That's an acceleration. Or maybe you put the sensor in a ball and you want to measure how hard the ball hits the ground every time it bounces. And since you get values back even while standing still, thanks to gravity, it just so happens that you can also use it to determine your x and y angles. But unless I'm wrong, you need data from all three axes in order to determine just one angle. (See the code above.)

I actually made a quick and VERY dirty flight controller program using only the accelerometer. I printed out the angle data for pitch and roll. What I noticed was that right after calibration, the values that came out were very close to zero, just like they should be, because the quadcopter hadn't moved yet. I then increased the throttle slowly without propellers on. At this point the quadcopter still wasn't moving AND there weren't extra vibrations from the props, but I still noticed my pitch and roll angles going up to almost 10 degrees just from having the motors on. Since this version of the flight control program was set up to adjust the motors to correct the angles, you could hear the motors freaking out and changing speeds... but the quadcopter still hadn't moved...
So in the case of a quadcopter, I would end up losing orientation accuracy as I increased the speed of the motors. I don't know if people have managed to get a quadcopter flying using only an accelerometer but I highly doubt that will be enough for this quadcopter.

At this point you're probably thinking "Well, okay Arik. If an Accelerometer measures accelerations and the method to determine your orientation when using an accelerometer is overly sensitive to vibrations which are GOING to happen in a quadcopter, why don't you just use another sensor? What's this gyroscope you mentioned?" Well that, my friend, is the next topic.

I'll start the explanation of gyroscopes with a definition found here:
Gyro sensors, also known as angular rate sensors or angular velocity sensors, are devices that sense angular velocity. In simple terms, angular velocity is the change in rotational angle per unit of time. Angular velocity is generally expressed in deg/s (degrees per second).

This sensor is actually very easy to confuse with the spinning-wheel device you've probably seen before. (In case you don't want to Google Image Search "Gyroscope": my older sister had a toy gyroscope when we were kids.) As the wheel spins, you can change the angle of the base, but the spinning disk keeps the same orientation. In the case of the sensor, though, the number we get is angular velocity in degrees per second.
"But wait, Arik! You're supposed to be able to determine orientation from a gyroscope! Just like the toy gyroscope!"
Yes. You're right. Let's say you start your program with your sensor perfectly level and unmoving. The sensor then proceeds to tell you angular velocity. While it's unmoving, the x, y, and z values should all be zero (because the sensor isn't moving or rotating). Now, looking down at the sensor, let's rotate it clockwise. We should see the z axis number increase and then, when we stop, return to zero. We can use this stream of numbers to determine our orientation pretty well: if we accumulate the degrees-per-second readings, multiplying each one by the time elapsed since the previous reading, and we take frequent readings, we can keep track of our orientation. I would include a code example for how to use a gyro to determine orientation like I did with the accelerometer, but I don't actually have that yet. This is why my quadcopter is still grounded. (We'll get to that.)
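That said, here's a rough, purely illustrative sketch of the accumulate-over-time idea. This is NOT my actual flight controller code: the raw rate, the loop timing, and the 131 LSB-per-deg/s scale factor (the MPU6050's figure for its default +/-250 deg/s range) are all just example numbers:

```python
GYRO_SCALE = 131.0  # LSB per deg/s for the MPU6050's default +/-250 deg/s range

def integrate_gyro(angle, raw_rate, dt):
    """Advance the estimated angle by one reading:
    angle (deg) += rate (deg/s) * elapsed time (s)."""
    return angle + (raw_rate / GYRO_SCALE) * dt

# Pretend we sample every 10 ms and the sensor reports a constant
# raw rate of 1310 (i.e. 10 deg/s) for one full second:
angle = 0.0
for _ in range(100):
    angle = integrate_gyro(angle, 1310, 0.01)

print(angle)  # roughly 10 degrees of rotation accumulated
```

The real version would read the raw rate from the sensor each pass through the loop and measure dt with a clock instead of hard-coding it.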

The issue with gyroscope sensors (or angular rate sensors) is that they tend not to return exactly to zero when they are unmoving. If you're seeing this in your gyroscopes, don't worry! They're not defective! It's just the way it goes. But let's say you stop rotating your sensor and the values don't return to zero. Your virtual model will continue to rotate around the axes that aren't zero, which causes your virtual model to drift. In a quadcopter, this also wouldn't be the best: if you can get drift, you will have to re-calibrate often to fix it, and that isn't good for sustained flight.

So we have two sensors: gyroscope and accelerometer. These sensors' weaknesses are "drift" and "jitter," respectively. The really awesome thing about a gyro and accel combination is exactly that: they don't have the same weaknesses! The thing I'm researching right now is the use of filters and algorithms to achieve something with a super awesome name: "Sensor Fusion!" (Did somebody say fusion??) Sensor fusion, in this case, means using both our gyroscope and accelerometer to determine our orientation. Imagine this: as vibrations increase, our accelerometer values start to jitter. But these jittering values are compared to our gyroscope values, which are not affected by vibration. Together, we can cancel out the jitter.
"But wait! What about gyroscope drift?" Well, the same thing applies but in reverse! We have a steadily changing roll angle thanks to gyroscope drift. But we're actually able to compare that to the accelerometer data and cancel it out.

"Arik, this isn't actually very descriptive..." Yes. As I said, this sensor fusion is what I'm researching right now. While I was researching and testing over spring break last week, I came across two methods to achieve sensor fusion. The first I found is called a complementary filter. The other method I found is using something called a Kalman filter. I'm not entirely sure which one to use yet. But once I can get reliable readings for my pitch and roll angles (x and y), it should be a relatively simple task to create the part of the program called a PID. The PID is what runs over and over as long as the program is running. In the case of roll, your roll PID will take in the current roll angle and figure out how far away it is from the target. It then decides how to fix it. In my quadcopter, I'll be starting with two PID's. One for pitch and one for roll. Yaw can come later. I can just bypass the PID for yaw if I have to. It won't be the best but it should give me SOME control.

So now, with a fresh SD card image for my quadcopter and just a few days to go, here goes sensor fusion!

This is a link to one of the most helpful tutorials that helped me to understand how to use the MPU6050:
http://blog.bitify.co.uk/2013/11/interfacing-raspberry-pi-and-mpu-6050.html
(This link is only the first one. Follow the links to the newer posts and you'll see his full explanation and tutorial)
