
Saturday, December 16, 2017

Machine Learning Automatic License Plate Recognition

I'm starting to study deep learning, mostly for fun and curiosity, but following tutorials and reading articles is only a first step.

Though I know and have programmed in multiple languages, deep learning is somehow inseparable from Python, and as someone who likes C-like languages I always disliked it; the whole concept of using whitespace to delimit program blocks looked ridiculous to me. But what the hell, let's try to learn it. It makes things a lot less complicated than compiling TensorFlow, Caffe, or OpenCV from source and then trying to get them to talk to each other, since in Python these issues have already been solved.

Learning neural networks has been on my mind for quite a while; I've even read a few neurology books to understand the origins of these ideas. But only when I attended GTC Israel 2017 and had the chance for a hands-on guided Nvidia DIGITS session did I start to take an active interest, though I didn't really achieve anything new for a while.

10 points if you can locate me in this clip

So I thought about a cool project, though I'm not sure how useful it will turn out to be: how about recognizing and registering all the vehicle license plates around one's car?

Algorithmic Approach

At first I tried OpenALPR's approach: finding a large rectangle with multiple rectangles inside it. It works if the license plate is a major object in the image, but not when there are multiple vehicles, let alone an unstructured scene like driving on the highway or footage from some kind of mobile camera, though I might not have implemented it correctly in my code.

Image Segmentation

So the second approach I considered was image segmentation. I've been reading a lot about ENet, SegNet, and ICNet lately and was eager to try them, so I began looking for a Keras model to get started. But then I realized I don't necessarily need the localized polygon of the license plate; a bounding box should be more than enough. I can then pass the cropped image to Tesseract and get a license plate number.
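To make that idea concrete, here is a minimal sketch (not the actual project code) of the crop-then-OCR step: cut the detector's box out of the frame and hand the crop to Tesseract. The image and box coordinates below are placeholders I made up.

```python
import numpy as np

def crop_box(image, box):
    """Crop an (x_min, y_min, x_max, y_max) box out of an HxWxC image array."""
    x1, y1, x2, y2 = box
    return image[y1:y2, x1:x2]

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a video frame
plate = crop_box(frame, (600, 400, 760, 440))     # made-up detector output
print(plate.shape)  # (40, 160, 3)

# The crop would then go to Tesseract, e.g. via pytesseract:
# text = pytesseract.image_to_string(plate, config="--psm 7")
```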

Object Detection

So I looked at a few object detection models, such as SSD, YOLO, Faster R-CNN, R-FCN, and RetinaNet, and more are being designed as we speak. I decided to go with YOLO, being biased toward it after seeing a demo I liked.

But to train any kind of machine learning model, you need data, and lots of it. I started looking for a license plate dataset but couldn't find anything that has both the images and the polygons... but then I remembered that the Cityscapes Dataset has an unmarked license plate class, so theoretically all I needed to do was generate the right masks/polygons for training.
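That generation step boils down to reading the Cityscapes polygon annotations and turning each "license plate" polygon into a box. A minimal sketch, assuming the gtFine JSON layout (an "objects" list with "label" and "polygon" entries); the sample annotation is made up:

```python
import json

# Made-up one-plate sample in the Cityscapes polygon annotation format
sample = json.loads("""
{"objects": [{"label": "license plate",
              "polygon": [[100, 200], [160, 200], [160, 220], [100, 220]]}]}
""")

def plate_boxes(annotation):
    """Turn 'license plate' polygons into (xmin, ymin, xmax, ymax) boxes."""
    boxes = []
    for obj in annotation["objects"]:
        if obj["label"] != "license plate":
            continue
        xs = [p[0] for p in obj["polygon"]]
        ys = [p[1] for p in obj["polygon"]]
        boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

print(plate_boxes(sample))  # [(100, 200, 160, 220)]
```

Each box would then be written out in whatever format the detector's training code expects.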

I cloned the basic-yolo-keras repo by Huynh Ngoc Anh, updated it to work with Python 3, and ran a training session on the dataset.

Having only a laptop, training on it is a bit of a problem: it's not always on, I need to take it with me, and so on. So I looked for an online solution and eventually ended up using an Azure NC6 machine at $0.90/hour. It has an Nvidia K80 with 12GB of RAM, so I could increase the batch size to make things run a bit faster. In the end, training took less than 24 hours on ~2400 images, some with more than one sample.

(On my 1050 Ti, this video was created at about 9fps.)

As you can see, the license plate needs to be readable, otherwise it doesn't really get detected. I didn't plan this, so I'm guessing either YOLO training is really good or it's a side effect of the Cityscapes Dataset's quality.


My next task was OCRing the license plates so I could get data to list and log. I had some experience with Tesseract in the past, so I chose to try it this time as well.

Well... this didn't go as smoothly as I wanted. While many license plates are readable by a human, the noise is just too high for Tesseract to recognize them reliably.

The following video was shot with a 4K camera (SJCAM M20) at a high shutter speed and high bitrate, but the recognition quality increased only marginally.

(Creating this video was even slower, about 1.5fps; the GPU didn't work as hard, but Tesseract did a lot of work on the CPU.)

I've had my fun with this project, but I think the next step could be another deep learning object detection pass, only this time detecting the license plate characters: digits in the case of Israel and, for many other countries, letters as well.

If I may guess further, the reason this project was not a complete success is the OCR process: the camera is an action camera with a very wide lens, which means very low resolution for each license plate.

I'm pretty sure further pre-processing effort might raise Tesseract's recognition quality; the plates do look readable. I did discover that Israeli license plates are just too tall for Tesseract's English detection, which is somewhat amusing.
If pre-processing doesn't work as desired, this little project has taught me that machine learning can probably do this task as well, and probably with high precision.

Source Code

I'm still not ready to publish any Python code; I'll need to become more familiar with the language before doing so.
In any case, there is nothing new there: the code for building the Cityscapes dataset extract basically just parses the JSON files and produces VOC-format XML, the YOLO code is basic-yolo-keras with some adjustments, and the cleanup code for the license plates is just simple auto-levels-like code on the V channel in HSV.
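For illustration, a rough reconstruction of that auto-levels idea (not the published code): linearly stretch a channel to the full 0-255 range. In the project this would be applied to the V channel of the HSV image; here it's shown on a tiny made-up array so the stretch is visible.

```python
import numpy as np

def auto_levels(channel):
    """Linearly stretch a uint8 channel so its min maps to 0 and max to 255."""
    lo, hi = int(channel.min()), int(channel.max())
    if hi == lo:
        return channel.copy()  # flat channel, nothing to stretch
    stretched = (channel.astype(np.float32) - lo) * (255.0 / (hi - lo))
    return stretched.clip(0, 255).astype(np.uint8)

v = np.array([[60, 80], [100, 120]], dtype=np.uint8)  # made-up V channel
print(auto_levels(v))  # stretched to [[0, 85], [170, 255]]
```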


This was a fun project, and I'm sure that with further research it could become pretty cool and reliable software. Using YOLO for license plate detection seemed to work pretty well. Perhaps cleaning up the dataset and further optimizing the training and inference processes would make it even better; perhaps a machine-learning-based digit/letter recognizer would make reading the plates more reliable; perhaps it could all be coded with an algorithm rather than a model... Maybe the next thing should be recognizing car maker and color?

Further Reading:

Speed/accuracy trade-offs for modern convolutional object detectors by Felix Lau
Cityscapes Dataset
ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation
SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
ICNet for Real-Time Semantic Segmentation on High-Resolution Images
SSD: Single Shot MultiBox Detector
You Only Look Once: Unified, Real-Time Object Detection
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
R-FCN: Object Detection via Region-based Fully Convolutional Networks
RetinaNet: Focal Loss for Dense Object Detection


Tuesday, December 5, 2017

Flashing ATtiny85 with USBasp and Making a PWM Generator

I ordered a few Digispark clones (originally made by Digistump) from AliExpress for use in low-pin-count, low-power, tiny projects that don't require much code, as they use the ATtiny85. These devices looked very cool and suitable for my needs, and the fact that they can be programmed without cumbersome ISP programmers made them even more appealing. (Spoiler: the clones did not work properly.)

At first I expected them to communicate over USB and expose a COM port, which would make programming them even easier, but they don't work that way.


The firmware bootloader is based on micronucleus and the USB interface is V-USB, a firmware/software-only implementation of a low-speed USB device, so any functionality you wish to have needs to be implemented in software, even virtual COM ports; for examples you can look here.

Something to note about the micronucleus firmware is the 5-second startup delay. If you need the device to start up immediately, you'll have to use a different approach, where shorting pin 5 to ground enables programming and otherwise the user application starts immediately. There is a solution for this, but I did not test it.

So I hooked up the Digispark, started the Arduino IDE, loaded Blink, and...

\micronucleus\2.0a4/launcher -cdigispark --timeout 60 -Uflash:w:Blink.ino.hex:i 
Running Digispark Uploader...
Plug in device now... (will timeout in 60 seconds)
> Please plug in the device ... 
> Press CTRL+C to terminate the program.
> Device is found!
connecting: 16% complete
connecting: 22% complete
connecting: 28% complete
connecting: 33% complete
> Device has firmware version 1.6
> Available space for user applications: 6012 bytes
> Suggested sleep time between sending pages: 8ms
> Whole page count: 94  page size: 64
> Erase function sleep duration: 752ms
parsing: 50% complete
> Erasing the memory ...
erasing: 55% complete
erasing: 60% complete
erasing: 65% complete
> Starting to upload ...
writing: 70% complete
writing: 75% complete
writing: 80% complete
> Starting the user app ...
running: 100% complete
>> Micronucleus done. Thank you!

no dice.

Apparently the clones were flashed with micronucleus, but with either an old version or the wrong fuses.


Fixing Digispark Clones with USBasp

For a different purpose, I ordered a cheap USBasp 2.0 programmer from AliExpress. Although it comes with firmware already flashed, it's an old USBasp firmware, so its vendor ID and product ID are not compatible with the current avrdude used by the Arduino IDE.

If you insist on updating the USBasp firmware, you can follow Darell Tan's post, which is based on work by Uwe Zimmermann.

Otherwise, you can use PROGISP v1.72, which does a great job.

I then connected the USBasp to the Digispark. First, connect the ISP pins according to this (the standard ATtiny85 ISP wiring):

MOSI <> PB0
MISO <> PB1
SCK <> PB2
RST <> PB5
VCC <> 5V
GND <> GND

Next, I downloaded the latest micronucleus firmware for the ATtiny85 and flashed the bootloader:

- Select the chip as ATtiny85
- Click RD to verify it can communicate properly
- Load the appropriate micronucleus firmware
- Select the fuses according to the documentation
- Click Auto to flash the bootloader

Note: mind the RSTDISBL fuse. Unless it is set, you can't use PB5 as a regular I/O pin, since it serves as the external reset pin; setting it, however, prevents further ISP programming.

We then retried flashing from the Arduino IDE over the USB cable, and it worked!

Running Digispark Uploader...
Plug in device now... (will timeout in 60 seconds)
> Please plug in the device ... 
> Press CTRL+C to terminate the program.
> Device is found!
The upload process has finished.
connecting: 16% complete
connecting: 22% complete
connecting: 28% complete
connecting: 33% complete
> Device has firmware version 2.1
> Device signature: 0x1e930b 
> Available space for user applications: 6522 bytes
> Suggested sleep time between sending pages: 7ms
> Whole page count: 102  page size: 64
> Erase function sleep duration: 714ms
parsing: 50% complete
> Erasing the memory ...
erasing: 55% complete
erasing: 60% complete
erasing: 65% complete
> Starting to upload ...
writing: 70% complete
writing: 75% complete
writing: 80% complete
> Starting the user app ...
running: 100% complete
>> Micronucleus done. Thank you!

ATtiny85 as a PWM generator

I wanted a decent PWM generator that could display the pulse width for diagnosing problems with brushless ESCs and servo motors, so the cheap ones would not do. I decided to build my own, as always, for education and fun.

We'll start with the basics, the schematics:

As mentioned previously, the ATtiny85 does not have a dedicated USB port, nor a dedicated serial port, so how does it communicate over USB? It does so with V-USB, which is a software emulation of a USB port/device.

But the USB connection comes at a cost: you'll most likely run into problems if pins 4-5 (PB3-PB4) have any contact with other components. So either flash and test the chip on a breadboard, flash once and forget you did, or just don't use these pins.

So let's start with our PWM generator.

I'm using:
- Digispark ATtiny85 clone - this chip is good enough for the purpose; the only drawback is its limited timers.
- 10k potentiometer - an analog input for the 10-bit ADC, which is then mapped to 400-2400us.
- TM1637 4-digit 7-segment display - to show the selected pulse width.
- 1k resistor and 1N4004 diode - for output protection; I just had to replace the ATtiny85 due to feedback from a servo, so this is my attempt to protect it.
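The ADC-to-pulse-width mapping in that list is just a linear map; a quick sketch of the arithmetic (the same integer math as Arduino's map()), using the 400-2400us range from above:

```python
def adc_to_us(adc, lo=400, hi=2400):
    """Map a 0-1023 ADC reading to a pulse width in microseconds."""
    return lo + (adc * (hi - lo)) // 1023

print(adc_to_us(0))     # 400  (minimum pulse)
print(adc_to_us(1023))  # 2400 (maximum pulse)
print(adc_to_us(512))   # 1400 (roughly mid-travel)
```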

The source code is very simple; it's actually the example from Adafruit SoftServo plus the display code.

Timer Limitation

The ATtiny85 has limited timers, which can't be used to generate a high-resolution PWM signal directly, but Adafruit SoftServo is a good solution to this problem: rather than using the timer to control the PWM directly, it uses the timer to periodically call a function that simulates a PWM signal by writing the pin directly, with a delayMicroseconds in between.

ADC Noise

At first I tried to read the potentiometer directly and push each update to the servo, but I got so much noise from the ADC that the servo shook a lot. I think most if not all ADCs have noise problems; usually a capacitor and a low-pass filter can eliminate some of it, and while the ATtiny85 has an ADC Noise Reduction Mode, I resorted to a simpler solution, averaging many samples:

long avg = 0;
for (int i = 0; i < 100; i++) {
  avg += analogRead(POT_PIN);  // accumulate 100 samples
}
val = avg / 100;  // the average smooths out the ADC noise

Of course, it wouldn't be complete without a printed enclosure :-)

Further References
- I had to find out which vendor/device IDs the USBasp used before I could use it, as the device didn't come with any information, so I used NirSoft's USBLogView to see which device was being plugged in and out.
- I looked into having a virtual COM port with these devices; Osamu Tamura @ Recursion Co. started AVR-CDC, which emulates a virtual COM port, but I didn't get around to testing it. Two more source code libraries can be found here and here.
- The official USBasp firmware is written by Thomas Fischl.