
  • Platforms: NXP
  • License: GNU General Public License, version 3 or later (GPL3+)
About the project

Autonomous cars are starting to rule the world, so let's try to build our own and take it racing!

Items used in this project

Hardware components:

  • FRDM-K64F
  • Gear Stepper Motor with Driver
  • Micro Servo
  • Alamak Car Model
  • Custom PCB
  • Laser proximity sensor
  • A lot of different SMD and non-SMD components (you can find the list for download at the bottom of the article)
  • 4N25 Optocoupler

Software apps and online services:

  • MCUXpresso (MCUXpresso-IDE)
  • Processing

Hand tools and fabrication machines:

  • 3D Printer
  • Screwdriver
  • Soldering tools (preferably hot-air soldering tools too)

Story

My friends and I decided to enter the NXP Cup again this year. We already had some experience from the previous one, so we decided to attempt something more ambitious: a new main board, a camera on a motorized mount, Bluetooth communication, and much more.

Camera

The camera is the most important part of the whole vehicle. It is used to steer the car between the two edge lines of the track, and the rules even state that it should be the main sensor used for navigation. We are using a monochromatic line scan camera, which means it sees just a single line of black, grey and white pixels, i.e. a resolution of 128x1. This low resolution is compensated by the high achievable frame rates and by good light sensitivity.

While racing the previous year we found out two things. First, light reaches the sensor not only from the front but also from the back, because the camera chip sits on a thin PCB that lets light through. To fix this we 3D printed a black camera case, and it looks cooler now too! The second problem was that we kept moving the camera up and down, trying to find the right height where it captures the turns correctly without capturing the car itself, and we found that the best position also differs between track types. On top of that, whenever we accidentally bumped the camera we had to find the right position all over again. So we added a stepper motor with 1:2 gearing, which makes the camera move faster. As you can see in the image, we also added an end switch, because a stepper motor by itself provides no information about its initial position. Whenever the car boots up, we move the camera down until it touches the end switch; from that moment on we know exactly where the camera is looking. The 3D render shows all the 3D printed parts (only the stick is not 3D printed and was added to complete the picture).

Now let's look at the hardware and software side. Our goal is to capture as many images per second as possible while keeping good contrast. Without good contrast the image is either too dark to see anything on it, or so overexposed that everything is white and equally unusable. At the same time we cannot make the exposure long, because it limits the frame rate:

FrameRate = 1 / Exposure

It is partially possible to fix a saturated image by normalizing the values. Say black has the value 0 and white has the value 1, and our oversaturated image has a lowest value of 0.8 and a highest value of 1. That gives us the value range <0.8, 1>, which we map onto <0, 1>: 0.8 becomes 0, 0.9 becomes 0.5, and 1 stays 1.
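Here is a minimal sketch of that normalization, assuming the 128 pixels have already been converted to floats in the 0..1 range (the sensor actually delivers ADC counts; that conversion is left out):

#include <algorithm>

// Stretch the captured range [lowest, highest] onto the full <0, 1> range.
void normalize(float pixels[128]) {
    float lowest  = *std::min_element(pixels, pixels + 128);
    float highest = *std::max_element(pixels, pixels + 128);
    float range = highest - lowest;
    if (range <= 0.0f) return;  // completely flat image, nothing to recover
    for (int i = 0; i < 128; i++) {
        pixels[i] = (pixels[i] - lowest) / range;  // 0.8 -> 0, 0.9 -> 0.5, 1 -> 1
    }
}

On the MCU this would typically run on raw ADC counts with integer math; floats just keep the example close to the 0..1 notation used in the text.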
This really simple process gives us a normal-looking image without saturation, but it also amplifies the noise in the image, which makes searching for lines a lot harder. And if we added 25% more light to the scene, the captured image would collapse into the <1, 1> range and we would be unable to extract any data at all, not even noisy data.

So let's get back to the exposure and how to set it correctly. Our goal is to achieve the biggest possible contrast, which is calculated by the following equation:

Contrast = LightestColor - DarkestColor

Contrast is biggest when the lightest color is pure white and the darkest color is pure black. Our camera is a linear sensor, so if we increase the exposure by 50%, both the darkest and the lightest value should increase by 50%. Our goal is therefore to bring the average color to 0.5. Consider the following examples: for <0, 0.1> the average (0.05) is ten times too low, so we need to divide the exposure by 0.1; for <0, 1> the average is exactly 0.5, so we change nothing; for <1, 1> the average is twice too high, so we divide the exposure by 2. Based on this we can write the following equation for adjusting the exposure:

NewExposure = OldExposure / (LightestColor + DarkestColor)

And that's it, everything works. Well, almost. You can take your car anywhere and the colors look just fine, at least until you enter a room that is lit not by the sun but by a fluorescent lamp or something similar. You suddenly see heavy flickering, even though to your eye the lights shine as steadily as ever. Why does that happen? The lights are powered by AC mains, whose voltage follows a 50 Hz sine wave (the x axis of the plot is in seconds). The voltage really does run at 50 Hz, but its peaks occur at 100 Hz, because each cycle has both a positive and a negative peak. And what actually matters for the light is not the voltage but the power, where the 100 Hz shows up really clearly.

So how do we fix this problem? We need to synchronize our sampling frequency with the power frequency, so the only frequencies that make sense are 100, 50, 25, ... Hz. We want to stay at the highest possible frequency, because the lines are really thin and we can easily miss them if we don't take pictures often enough. A lower frequency like 50 Hz is still useful in low-light scenarios, where the required exposure exceeds 10 ms.

So let's look at how to implement this. We need to take a frame every 10 ms with our chosen exposure, but here we run into a problem: the start and the end of frame capture are linked together into a single event. As you can see on the timing diagram, we send an SI pulse and read the pixel data one by one by sending CLK pulses. However, 18 pixels into the readout a new integration (image capture) begins, while we want it to start later: the integration of the previous image took 6 ms, and to keep our 10 ms sync we need to wait 4 ms more. We can fix this by clearing the CMOS sensor with a dummy read, where we ignore the content and perform the read as fast as possible. The final result looks like this: we integrate the image for 6 ms, then read and process it; a new integration begins immediately, so after 4 ms we read the sensor again but throw the data away; and only then does the real 6 ms integration of the next image begin.
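A small sketch of this exposure update combined with the 10 ms synchronization, assuming exposure is tracked in milliseconds; the clamping limits are placeholder values of mine, not from the original code:

// NewExposure = OldExposure / (LightestColor + DarkestColor),
// aiming for an average brightness of 0.5 (colors scaled to 0..1).
float updateExposure(float exposureMs, float darkest, float lightest) {
    float sum = darkest + lightest;           // twice the average brightness
    if (sum > 0.0f)
        exposureMs /= sum;
    if (exposureMs < 0.1f)  exposureMs = 0.1f;   // placeholder limits
    if (exposureMs > 20.0f) exposureMs = 20.0f;
    return exposureMs;
}

// The dummy read has to pad the frame to a multiple of 10 ms (100 Hz flicker).
float dummyReadDelayMs(float exposureMs) {
    const float period = 10.0f;               // one 100 Hz flicker period
    float total = period;
    while (total < exposureMs)                // round the frame time up to a
        total += period;                      // multiple of 10 ms
    return total - exposureMs;                // e.g. 10 - 6 = 4 ms
}

With a 6 ms exposure this yields the 4 ms dummy-read delay from the example above; an exposure over 10 ms automatically drops the frame rate to 50 Hz, matching the low-light case described earlier.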
Of course, the track may be lit by better lights. Some lights have a capacitor that reduces the flickering, or they store part of the energy as heat and keep emitting light for a while, in which case the light power looks much flatter. Or the track may simply be under sunlight, which gives a perfectly flat line. With a light output this stable we can remove the anti-flicker code and try to get more than 100 fps. Still, we should always keep that piece of code around, because the competition takes place in an environment we don't know in advance.

Let's look at how to program this. The K64F has four PIT channels available. PIT stands for Periodic Interrupt Timer; it is a really simple peripheral in which you set the period after which you want an interrupt, and it triggers it. An interrupt handler is usually a small function that is not called by other code but is executed by the interrupt controller, a device that can suspend the currently running code, save its state, and run the handler when some event happens, such as a PIT reaching the period we configured. We need two PIT channels for our camera: one for the 10 ms clear interval and one for setting the integration interval. Here is a sample interrupt handler:

extern "C" {
void PIT_CHANNEL_0_IRQHANDLER(void) {
    // run only if this interrupt was really raised by PIT channel 0
    if (PIT->CHANNEL[0].TFLG & PIT_TFLG_TIF_MASK) {
        PIT->CHANNEL[0].TFLG = PIT_TFLG_TIF_MASK; // clear the interrupt flag
        // interrupt code goes here
    }
}
}

The first thing we need to write is extern "C". C++ mangles function names, which would prevent the startup code from matching our handler to its entry in the interrupt vector table; extern "C" switches the function to plain C linkage, so everything works fine. It is followed by the function declaration, which always has to return void and take void parameters, so the function neither returns nor accepts any value. The channel index selects which PIT channel is used, in this case PIT0. The first thing the function does is check whether it really was executed because of its interrupt; if it wasn't, it exits. If it was, it clears the interrupt flag. This flag is what makes the interrupt controller trigger the handler, and if it were not cleared we would end up in an endless interrupt loop. After that comes our real interrupt code.

Now we need to look at how to detect lines in the captured image. We are looking for a black line with a white area next to it. My first thought was thresholding and searching for areas, but that didn't work very well. The best approach I found is using the derivative: it produces peaks wherever there is a rapid change from black to white or vice versa. It works great, but to achieve the best detection quality we need averaging. The first idea that comes to mind is to average adjacent derivatives, but that won't work. Let me show you why:

Derivation = (Der1 + Der2 + Der3)/3

This equation equals:

Derivation = ((P1-P2) + (P2-P3) + (P3-P4))/3

and when we simplify it, the middle terms cancel out:

Derivation = (P1-P4)/3

This does not get rid of any noise. So I came up with a different approach: I average a first group of pixels, then average the adjacent second group, and perform the derivation on the averages of those groups:

Derivation = (P1+P2+P3)/3 - (P4+P5+P6)/3

If the result is close to 0, there is no line. If the result is far from zero, there is a line somewhere, and the sign of the derivative tells you whether it is a change from white to black or from black to white.
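A sketch of this grouped-average detection over one normalized camera line; the group size of 3 matches the equation above, while the threshold is a placeholder to be tuned on real track images:

const int   GROUP     = 3;      // pixels averaged on each side
const float THRESHOLD = 0.3f;   // placeholder, tune on the real track

// Derivative of the averages of two adjacent pixel groups starting at i:
// (P[i]+P[i+1]+P[i+2])/3 - (P[i+3]+P[i+4]+P[i+5])/3
float groupedDerivative(const float pixels[128], int i) {
    float left = 0.0f, right = 0.0f;
    for (int k = 0; k < GROUP; k++) {
        left  += pixels[i + k];
        right += pixels[i + GROUP + k];
    }
    return (left - right) / GROUP;
}

void findEdges(const float pixels[128]) {
    for (int i = 0; i + 2 * GROUP <= 128; i++) {
        float d = groupedDerivative(pixels, i);
        if (d > THRESHOLD) {
            // white-to-black edge at i: left group is brighter than the right
        } else if (d < -THRESHOLD) {
            // black-to-white edge at i
        }
        // |d| close to 0 means there is no line here
    }
}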
Stepper

The stepper motor is controlled with four signals, called A, B, C and D, each of which drives one coil. I am using the 28BYJ-48 motor, which works best in half-step mode, shown in the picture below. There is also a full-step mode, which I tried, but I felt it led to worse performance. Half-step mode consists of 8 different combinations of ABCD (you can see on the image that everything repeats after 8 divisions), while full-step mode uses just 4 of them. The motor moves forward by sending the pulses from left to right, and in the opposite direction by sending them from right to left.

When the car boots up, the motor's position depends on where it was before the car was shut down. But it isn't safe to determine the position this way, because someone might apply enough force by hand to move the motor to a different position, and saving the position at every change would also slowly wear out the controller's flash memory. So the best way is to send pulses at every boot in the direction that moves the camera down, checking after each step whether the camera has hit the end switch (a small switch mounted under the camera that defines its lowest position; the angle at which the camera hits the switch must be known, so we can do all the calculations we need). When the camera hits the end switch, the switch closes and sends a signal to our microcontroller; this stops the stepper and tells us we are now at position 0, i.e. the start position. A sketch of the half-step sequence and this homing loop follows at the end of the article.

Bumper

We learned from the previous year how important the bumper is. We collided with walls and various objects many times and almost completely broke our car. So this time we added a bumper made from a flexible piece of tube, mounted on a massive 3D printed block that we also use as the holder for our laser. The tube absorbs part of the collision and the rest is transferred through the massive block to the car, so the impact is not taken by the edge of the car alone. Here is a 3D render of the bumper; as you can see, it is really massive and also includes the holder for the laser.

Laser

This part is needed for the obstacle avoidance challenge, whose goal is to dodge a white cube placed on the track. Our plan is to find this obstacle with the laser, which will be swept around by a servo motor....
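Finally, here is the promised sketch of the half-step drive and the homing loop. The helpers setCoils, endSwitchPressed and delayMs are hypothetical placeholders for the board's real GPIO and delay code, and the bit-to-coil mapping is an assumption:

#include <stdint.h>

// Half-step sequence for the 28BYJ-48: 8 coil combinations, repeating.
// Bit 3 drives coil A, bit 2 coil B, bit 1 coil C, bit 0 coil D (assumed).
const uint8_t HALF_STEP[8] = {
    0b1000, 0b1100, 0b0100, 0b0110,
    0b0010, 0b0011, 0b0001, 0b1001
};

// Hypothetical helpers, to be backed by the real pin/delay functions.
void setCoils(uint8_t pattern);
bool endSwitchPressed(void);
void delayMs(int ms);

static int stepIndex = 0;

// One half-step in the direction that moves the camera down.
void stepDown(void) {
    stepIndex = (stepIndex + 1) % 8;
    setCoils(HALF_STEP[stepIndex]);
    delayMs(2); // give the motor time to actually make the step
}

// At boot: step down until the end switch closes, then call that position 0.
int homeCamera(void) {
    while (!endSwitchPressed())
        stepDown();
    return 0; // known start position; the camera angle is computed from here
}

Stepping through the sequence in the opposite order would move the camera up again, which is how the camera height is adjusted after homing.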
