1. The document discusses improving a Raspberry Pi Zero security camera system by addressing issues with false positives from the initial motion detection software.
2. An initial version used Motion software for activity detection and sending alerts but had problems with false alarms from things like moving trees.
3. A second version analyzes images using Amazon Rekognition to identify if a human is present before sending alerts, reducing false positives. It also implements the system using AWS Lambda functions and Step Functions for improved scalability and manageability.
4. Motivation and Requirements
Input from Stakeholder
@markawest
Motivation and
Requirements
Pi Zero Camera
Version 1
Pi Zero Camera
Version 2
Conclusion
6. Project Requirements
Functional
• Monitor activity in the garden.
• Send warning when activity detected.
• Live video stream.
Non-functional
• In place as soon as possible.
• Low cost.
• Portable.
7. Event Flow to Implement
1. Activity in garden
2. Camera detects movement
3. Camera sends alert email with snapshot
8. Pi Zero Camera Version 1
Hardware and Activity Detection
9. Hardware Shopping List
PiZero Starter Kit: 300kr
Camera Module (NoIR): 250kr
Camera Adapter: 50kr
ZeroView: 80kr
Total Cost: 680kr
11. Event Flow
1. Activity in garden
2. Camera detects movement
3. Camera sends alert email with snapshot
12. Motion (https://motion-project.github.io)
• Open Source motion detection software.
• Excellent performance on the Raspberry Pi Zero.
• Built-in Web Server for streaming video.
• Detected activity or ‘motion’ triggers events.
• Works “out of the box”. No need for additional programming.
15. Project Requirements : Evaluation
Functional
• Monitor activity in the garden.
• Send warning when activity detected.
• Live video stream.
Non-functional
• In place as soon as possible.
• Low cost.
• Portable.
18. The Motion software focuses on the number of changed pixels, and not on the cause of the changed pixels!
19. Pi Zero Camera Version 2
Addressing false positives
20. Project Requirements Reloaded
Functional
• Monitor activity in our garden.
• Send warning when activity detected.
• Live video stream.
Non-functional
• In place as soon as possible.
• Low cost.
• Portable.
21. Project Requirements Reloaded
Functional
• Monitor activity in our garden.
• Send warning when human activity detected.
• Live video stream.
Non-functional
• In place as soon as possible.
• Low cost.
• Portable.
22. Finding an Image Analysis Solution
OpenCV
• Use Face Detection to find out if a human was in the snapshot.
• Problem: What if the subject was facing away from the camera, or wearing a mask?
TensorFlow
• Train and use a Neural Network to find humans in the snapshot.
• Rejected by stakeholder as this solution “would take too long to implement”.
24. AWS Rekognition
• Officially launched in November 2016.
• Part of the Amazon Web Services (AWS) suite of products.
• Image Analysis as a Service, offering a range of APIs.
• Built on Deep Neural Network models.
• Competitors: Google Vision, Microsoft Computer Vision and Clarifai.
26. Limitations of AWS Rekognition
• Black box:
• Not always intuitive.
• Which labels are we looking for?
• 85% hit rate:
• Noisy or blurred images give worse results.
• Possible solution: send multiple snapshots for each single motion detected event.
27. Updated Event Flow
Amazon Web Services (AWS):
1. Analyse snapshot from Motion software.
2. Trigger Email if snapshot contains a person.
Tweaked the Motion configuration file to provide multiple snapshots for each detected activity.
29. Amazon Web Service Flow Overview
1. Image uploaded to AWS s3 (storage).
2. s3 Upload Trigger fires.
3. The trigger calls the AWS Step Function (workflow).
4. The Step Function uses AWS Rekognition.
5. The Step Function uses the AWS Simple Email Service.
6. All components use AWS IAM.
30. AWS Software Developer Kit (SDK)
• SDKs exist for a range of platforms, including Python, Ruby, .NET and Java.
• Provides an API for consuming AWS Services.
• For this project I used the Node.js AWS SDK.
• Upload from the Pi Zero Camera to AWS s3 is handled by the AWS Node.js SDK.
31. AWS Lambda Functions
Code Building Blocks or Microservices
(Diagram: the s3 Upload Trigger calls the AWS Step Function (workflow), whose Lambda Functions use AWS Rekognition, the AWS Simple Email Service, AWS s3 (storage) and AWS IAM.)
32. AWS Lambda Functions
• Units of code, based on Java, C#, Python or Node.js.
• Serverless.
• Stateless, allowing for rapid scaling as needed.
• High availability.
• “Pay as you go” model, with a generous free tier.
• AWS SDK natively available.
34. AWS Step Functions
Orchestration of Lambda Functions (Microservices)
(Diagram: the s3 Upload Trigger calls the AWS Step Function (workflow), whose Lambda Functions use AWS Rekognition, the AWS Simple Email Service, AWS s3 (storage) and AWS IAM.)
35. AWS Step Functions
• Launched in December 2016.
• Coordinate and orchestrate Lambda Functions into a Workflow or State Machine.
• Defined via JSON files, displayed as visual workflows.
• Provide the same benefits as AWS Lambda (High Availability, Scalable, “Pay as you go” pricing model).
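Such JSON definitions use the Amazon States Language. As a rough illustration (not the project's actual definition; the function names and the REGION/ACCOUNT_ID parts of the ARNs are placeholders), a minimal state machine chaining two Lambda Functions might look like this:

```json
{
  "Comment": "Minimal sketch: analyse a snapshot, then send an email",
  "StartAt": "AnalyseSnapshot",
  "States": {
    "AnalyseSnapshot": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:analyse-snapshot",
      "Next": "SendAlertEmail"
    },
    "SendAlertEmail": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:REGION:ACCOUNT_ID:function:send-alert-email",
      "End": true
    }
  }
}
```

The AWS console renders this JSON as the visual workflow shown on the next slides.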
36. Pi Zero Camera v2 : Step Function
1. Sends snapshot to AWS Rekognition.
2. Evaluate AWS Rekognition response.
3. Send Alert Email?
4. Send Alert Email via AWS SES.
5. Archive image in AWS s3.
6. Error Handler (sends Error Email via AWS SES).
38. Estimated AWS Cost
First 12 months: 0 kr *
After first 12 months: Less than 50 kr a month *
* Estimated cost based on access to AWS Free Tier for first 12 months, with 50 processed images per day.
39. Project Requirements : Evaluation
Functional
• Monitor activity in the garden.
• Send warning when human activity detected.
• Live video stream.
Non-functional
• In place as soon as possible.
• Low cost.
• Portable.
40. Lessons Learned
• Hobby projects are still projects.
• Iterative development allows for changing requirements.
• Find creative solutions to cost and time constraints.
• Use The Cloud to push past hardware constraints.
• AWS provides a cheap and easy way to experiment with Cloud
Computing, but beware vendor lock-in!
41. Next Steps
Night vision
Automated deployment
• Round trip for development and test is clunky, especially for Zip Files.
• Look into easing this process.
But before I go any further I’d like to tell you a couple of things about me. In my day job I am a manager in Bouvet’s Oslo offices, where I work with a range of technologies based upon Java and JavaScript. My current focus is upon AI, Cloud and the Internet of Things.
When I’m not at work I can often be found working on a range of hobby projects – often using the Raspberry Pi and Arduino platforms.
In addition I am also involved in the organisation of the JavaZone conference, where I primarily work with the program committee.
I’m on twitter under the handle @markawest, so feel free to contact me if there is anything you’d like to talk about!
Ok, so let’s look at the outline of today’s talk.
We’ll start in a second with a run through of the motivation for building this camera, along with the requirements. These should always be clear and defined for any project, whether it is a multi-million kroner project for NAV, or a hobby project undertaken by an Englishman living in Oslo.
Then we’ll look at the first version of the camera, evaluate how it worked and discuss its weaknesses.
The second version of the camera is where we introduce Amazon Web Services to the mix and will take up the majority of the presentation time.
Finally I’ll evaluate the project and talk about what I learned.
Ok, so let’s get started by looking at why I did this project!
The motivation for creating my camera was simple enough. A large number of break-ins had recently taken place in the area where I live.
The thieves targeted houses with secluded gardens, as the chance of being spotted by passers-by was lower.
My garden is not visible from the road, making it a prime target for the thieves.
When undertaking any project you need to identify the stakeholders, or those who will be affected by the project outcome.
These stakeholders will have their expectations of the project, and your requirements should always take the stakeholders’ expectations into account.
For this project my wife was my primary stakeholder, so the requirements reflected her wishes.
(Go through requirements)
Once I’d gathered the requirements I realized that I needed to implement the above, in a cost effective and speedy manner.
Ok so now we’ll look at how I implemented the first version of the camera, from both a hardware and software perspective.
We’ll begin by looking at the hardware I used for creating the Camera and how much this costs.
The Pi Zero is a cut down version of the Raspberry Pi. They cost about 50kr, but you’ll also need to buy an SD card, power supply and HDMI and USB adapters. Therefore I recommend a starter kit such as the one sold by the Pi Hut. This costs 300 kr, and gives you all you need to get started with the Pi Zero.
In addition to the Pi Zero board I needed a camera. I chose the Raspberry Pi Camera module, at around 250 kr. To fit this to the Pi Zero I also had to buy a Pi Zero Camera adapter, which cost an additional 50 kr. The camera I chose was the Pi NoIR, which works better in low-light conditions. To truly provide night vision it requires an IR light source, which I didn’t buy for this project.
Finally I needed a mount for the Camera. I chose the ZeroView at 80 kr.
The total cost for all equipment (if bought new) would be therefore approximately 680kr. If this seems a lot of money, remember that you can easily reuse the components for other projects at a later point!
Here you can see the fully assembled camera, from the front and the back. The white cable is a USB Wi-Fi adapter and the black cable is for power.
Now I had the camera set up I needed to implement this event flow. I also needed something to monitor the camera stream and send me emails once activity was detected.
After some consideration, I decided to use the Linux Motion software.
Motion is open source, and is purpose built for motion detection.
It has excellent performance on the Raspberry Pi Zero, plus a built in web server for streaming video.
Detected activity triggers an “event” in Motion, which can be used to send email alerts.
Motion works out of the box and is highly configurable. With no need for additional programming it would take no time at all to get up and running.
So how does Motion work? Well it basically monitors the video stream from the camera.
Each frame is compared to the previous, in order to find out how many pixels (if any) differ.
If the total number of changed pixels is greater than a given threshold, a motion alarm is then triggered.
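The pixel-counting idea above can be sketched in a few lines of Node.js. This is purely illustrative (it is not Motion’s actual code, and the helper names, noise level and threshold are my own): it compares two greyscale frames, represented as arrays of 0–255 brightness values, and flags motion when enough pixels have changed.

```javascript
// Illustrative sketch of Motion-style detection (hypothetical helpers,
// not Motion's real implementation).

// Count pixels that differ between two greyscale frames, ignoring
// small per-pixel differences caused by sensor noise.
function countChangedPixels(prevFrame, currFrame, noiseLevel) {
  let changed = 0;
  for (let i = 0; i < currFrame.length; i++) {
    if (Math.abs(currFrame[i] - prevFrame[i]) > noiseLevel) changed++;
  }
  return changed;
}

// Motion is detected when the changed-pixel count exceeds a threshold.
function motionDetected(prevFrame, currFrame, { noiseLevel = 10, threshold = 1500 } = {}) {
  return countChangedPixels(prevFrame, currFrame, noiseLevel) > threshold;
}

module.exports = { countChangedPixels, motionDetected };
```

Tuning the threshold is exactly the trade-off discussed later: too low and shadows or rain trigger alarms, too high and real intruders are missed.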
Ok, let us try a quick demo of the Pi Camera Version 1.
Explain that the email is sent using standard Linux tools mpack and ssmtp; they basically piggyback on top of my GMail account.
Ok, let’s revisit my project requirements. My camera:
Would monitor activity in the garden.
It would be able to send an email when activity was detected.
It included a live video stream.
It was quick to build and program.
It had a one-off cost of 680kr.
And it was portable.
All this led to a happy stakeholder! But was the project completely successful?
I was initially very happy with the first version of the Pi Zero Camera. The camera monitored my garden for the whole of July, whilst I was in England with my family.
The results from the camera were quite boring, and mostly showed our neighbors watering our flowers, plus me testing out the camera before we left.
Unfortunately the Camera also produced a large amount of false positives.
False positives were email alerts triggered by non human activity.
In the first picture you see the neighbor's cat paying our garden a visit.
In the second picture the email alert was triggered by fast moving shadows on a cloudy and windy day.
Other false positives were triggered by rain running down the window in front of the camera lens.
Ok, so now is where things start to get interesting. Let’s see how I addressed the false positive issue!
You’ll remember these project requirements from earlier on in this presentation.
Let’s just revisit them all! They are all still just as relevant. But by adding one word to one of the functional requirements we can address the issue of false positives.
Any guesses?
Yep. The missing word is “Human”! We only want to send warnings when human activity is detected.
At this point I realised that I had to find an Image Analysis solution.
My first idea was to use OpenCV to perform Face Detection on the snapshot images.
<click>
This type of use of OpenCV is well proven and well documented, and wouldn’t take long to implement.
<click>
However, it would only work if the subject was facing the camera, and wasn’t wearing a mask!
<click>
My next idea was to use Google’s TensorFlow. This would allow me to create and train a neural network to analyse my images.
<click>
The downside here is that it would take a considerable investment of time for me to get up and running. This idea was therefore vetoed by my primary stakeholder, who wanted a solution as quickly as possible.
Luckily the solution presented itself to me through a random tweet in my twitter feed.
<click>
It turned out that Arun was talking about Amazon Rekognition, a new Image Analysis solution.
So what is Amazon Rekognition?
Perhaps the best way of understanding what Amazon Rekognition is, is to run a quick demo.
DetectLabels use Burglar picture from NTNU folder on my Desktop.
My next challenge was to find out how I would trigger the image processing from my Pi Zero Camera.
After some thought I decided to move the whole image processing and email handling to The Cloud, and specifically Amazon Web Services.
My planned solution basically looked as follows:
The PiZero Camera monitors the garden.
On detecting movement, a snapshot is taken and pushed to Amazon Web Services.
Amazon Web Services handles the image analysis and the eventual sending of alert emails.
This solution had some benefits:
1. The Pi Zero could now focus on running the motion software.
2. The Camera and Image Processing would be separated, making it easy to replace either at a later date.
I also tweaked the Motion configuration file to provide multiple snapshots for each detected activity. More pictures would increase the chance of correctly identifying any person in the scene.
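The relevant part of the Motion configuration might look something like the fragment below. This is a sketch only: option names vary between Motion versions, so check the documentation for your installed version before copying anything.

```
# Illustrative motion.conf fragment (option names differ between
# Motion versions -- verify against your local documentation).
threshold 1500        # changed pixels required before an event fires
event_gap 60          # seconds of quiet before an event is considered over
output_pictures on    # save a picture per motion frame, not just one snapshot
```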
Ok, let us try a quick demo of the Pi Camera Version 2.
Show happy case and unhappy case.
If camera NOT working, try upload from PC :
cd '/Users/mark.west/Documents/GitHub Repositories/smart-security-camera/s3-upload/'
s3-upload mark.west$ node s3-upload-image-file.js smart-security-camera-upload /Users/mark.west/Desktop/ntnu/burglar.png
Ok, so how did the AWS processing work?
Here’s a simplified run through.
Firstly an image is pushed from the PiZero Camera to Amazon’s s3 storage.
A small unit of code or Lambda Function is triggered by the upload. It in turn triggers a Step Function.
The Step Function orchestrates further Lambda Functions into a workflow.
The first Lambda Function makes a call to Rekognition to evaluate the picture.
The second Lambda Function uses the Simple Email Service to send the alert email.
Finally, all components in the workflow use Identity and Access Management (IAM) to make sure that they have access to the components they need to use. For example, the Lambda Function that sends an email needs access to both the Simple Email Service and to s3 in order to attach the image file to the email.
In order to consume the AWS services one uses the AWS SDK.
On upload to s3, a Lambda Function is triggered, which in turn triggers a Step Function, which contains more Lambda Functions. Let’s look at AWS Lambda Functions and their role in our solution.
A Lambda Function is essentially a unit of code. As of today it is possible to implement Lambda Functions using Java, C#, Python and Node.js.
Lambda Functions are serverless. This doesn’t mean that they run without servers, but that the business or person running the code doesn’t have to worry about provisioning or managing the servers. In other words, Amazon has responsibility for making sure that the servers are up and running.
Lambda Functions are also stateless. By not storing state they are more easily scaled up and down. This scalability is an important part of the Cloud paradigm, as it allows one to scale Lambda Functions rapidly up and down according to demand.
The AWS Lambda platform is engineered to be highly available, making it suitable for running applications that need to be up all of the time.
AWS Lambda utilises a Pay as you go model. This means that you pay only for the processing time you use. An application with low traffic will therefore be relatively cheap compared to one with lots of traffic. AWS Lambda also has a free tier for those who want to get started with AWS. This provides free usage up to a given threshold.
AWS Lambda also has the SDK natively available, making it simple to call other AWS services and resources.
View Lambda Functions - all I have
Select one - Rekognition processing
Three types
Inline code
zip files - used for code with third party dependencies
S3 objects – find out more about these!
Lambda Functions can pass information in and out via objects
Event - input
Callback - allows one to specify either successful output or error
Show configuration options - especially memory - the more memory consumed, the higher the cost.
Show how you can test the Lambda Function online:
Happy Case
Unhappy Case
Cloudwatch logs.
cd Desktop/NTNU
node s3-upload-image-file.js smart-security-camera-upload
Here is the actual Step Function for my Camera, rendered by the AWS Step Function interface. It’s not necessarily easy to see the flow here, so let me try to shed some light on how this works.
The first step submits the snapshot to AWS Rekognition.
The second step then evaluates the response from AWS Rekognition. It looks for labels such as “Person” and ”Human” and sets a flag to indicate whether a person has been found.
The third step uses this flag to decide whether or not to send an email. Unlike all the other steps this is not calling a Lambda Function.
The fourth step sends the alert email. This will be skipped if the alert flag is set to false.
The fifth step archives the snapshot – either in an alert folder or a false positive folder.
The final step is an all purpose error handler. Any errors during processing will result in an email being sent to me.
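The label check in the second step can be sketched as a small pure function. This is a hypothetical helper rather than the project's actual code; it scans the `Labels` array returned by Rekognition's DetectLabels API for a "Person" or "Human" label above a confidence cut-off (the 85% default mirrors the hit rate mentioned earlier):

```javascript
// Decide whether a Rekognition DetectLabels response indicates a person.
// `labels` is the Labels array from the response, e.g.
//   [{ Name: 'Person', Confidence: 98.1 }, { Name: 'Garden', Confidence: 80.2 }]
// Hypothetical helper for illustration, not the project's real code.
function containsPerson(labels, minConfidence = 85) {
  return labels.some(label =>
    (label.Name === 'Person' || label.Name === 'Human') &&
    label.Confidence >= minConfidence);
}

module.exports = { containsPerson };
```

The resulting boolean is the "alert flag" that the choice state reads to decide whether the email step runs.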
Go to Dashboard.
1. Click create new State Machine: Show blueprints.
2. Show all the previous executions for my Step Function.
a. Switch between JSON definition and visual representation.
3. Look at the execution details tab – show how this allows you to see the input and output to each step.
4. Testing is possible by clicking New execution and specifying the JSON input.
{"bucket":"smart-security-camera-upload", "key":"upload/PiZeroCamera.jpg"}
So how much will running all this software cost me?
As I mentioned before my camera is pointing at my back garden. It’s pretty quiet out there and I generally only switch the camera on when we are not home. As such it is very rare that my camera generates more than 30-50 images a day.
With this in mind I took a look at Amazon’s prices and tried to estimate how much this would cost me.
As a new AWS user I have access to their free tier. Under this I get free access to all the AWS services I need, up to a given threshold. This includes up to 5000 Rekognition requests a month, 4000 Step Function requests and a million Lambda requests, not to mention 5GB of free storage on s3. As I regularly clean out my s3 bucket I don’t envisage having to pay anything for the first year.
After the free tier expires there will be some costs, mainly for Rekognition and s3. Lambda Functions and Step Functions will continue to be free as I am way under the threshold for charging. Therefore I estimate the costs at less than $5 a month.
It is important to remember that these figures are based on my experience with running the camera and Amazon’s current pricing strategy. Your own experience may vary.
Ok, let’s revisit my project requirements. My camera:
Would monitor activity in the garden.
It would be able to send an email when human activity was detected.
It included a live video stream.
It was quick to build and program.
It had a one-off cost of 680kr. And a yearly cost of 600kr after the first year.
And it was portable.
All this led to a happy stakeholder! But was the project completely successful?
Hobby Projects are a great way of learning!
Creativity can be inspired when facing finite cost and time boundaries.
Before I finish, I wanted to tell you about Bouvet Labs. Bouvet Labs provides a platform for hackers and makers to get hands on experience with exciting new technologies, from the Internet of Things to Artificial Intelligence.
Bouvet Labs also provides consultancy skills to various customers. Most recently we helped the Norwegian Consumer Council to uncover security holes in popular children's toys. The results were discussed around the world, with articles on CNN, the BBC and in the Wall Street Journal.
We also won awards for an IoT based motivation machine we built for Viasat’s customer care department.
Thanks very much for attending my talk!
You can find more information about this project at the Bouvet blog. And you’ll find all the code on my GitHub.
If anyone wants to have a chat I’ll be on the Bouvet stand for the rest of the day and will be joining some of you for food this evening!