
OpenCV AI Kit

Created by OpenCV

Open Source Spatial AI From The Biggest Name in Computer Vision.

Latest Updates from Our Project:

Community Friday #10: Intel DevCloud for OAK Backers, OAK T1 production samples approved (production has begun!), DepthAI API updates
Sat, Nov 14, 2020 at 09:54:53 PM

Happy Friday, community! We’re back with some more project updates.

Intel DevCloud for OAK Backers

Interested in running your applications virtually while waiting to receive your OAK board? The Intel DevCloud for the Edge is a virtual development sandbox with access to Myriad X processors. Backers can access the environment for free, allowing you to test models before receiving the kit, and to test, refine, and optimize them after it arrives.

How it works:

  • Register for Intel DevCloud for the Edge here 
  • Sign in to access the DevCloud Jupyter Notebook environment here 
  • Upload your model and source files (how-to guide)
  • You can also access a library of pre-built models here 
  • Submit your job to an edge compute node (how-to guide)
  • Select a Myriad X edge node and associated queue label (how-to guide)
  • Create a job script and execute the qsub command (how-to guide)
  • View the model output and performance data (how-to guide)

The DevCloud comes pre-loaded with the latest Intel hardware and software, including the OpenVINO toolkit for model conversion and optimization. You can also access a variety of development resources, including sample applications, tutorials, and training videos. Visit the DevCloud website to learn more.
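
To give a feel for what a DevCloud job typically runs once the OpenVINO environment is loaded on the node, here is a minimal, illustrative Python sketch using the OpenVINO Inference Engine API to load a converted model onto a Myriad X device. The model paths and the dummy input are placeholders, not values from the how-to guides:

    import numpy as np
    from openvino.inference_engine import IECore

    # Load a model converted to OpenVINO IR format (placeholder paths)
    ie = IECore()
    net = ie.read_network(model='model.xml', weights='model.bin')
    input_name = next(iter(net.input_info))

    # 'MYRIAD' targets the Myriad X VPU available on the selected edge node
    exec_net = ie.load_network(network=net, device_name='MYRIAD')

    # Run a single inference on a zero-filled input of the expected shape
    shape = net.input_info[input_name].input_data.shape
    result = exec_net.infer(inputs={input_name: np.zeros(shape, dtype=np.float32)})
    print({name: out.shape for name, out in result.items()})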

OAK T1 production samples approved; production has begun!

We’re sure many of you have been following along for this moment. This week we approved the T1 samples and we’ve entered production! This is a very exciting time for us! We can’t wait to get these into your hands! Here are some pictures of the T1 samples.

OAK T1 samples

DepthAI Updates

We’ve had another busy week and we wanted to highlight some new additions. We’ve created some of the first demos using the Gen2 pipeline and the several new depth modes we’ve added.

We have a repo on GitHub dedicated to some experiments we’ve created, which you can find here.

Initial experiments using the Gen2 pipeline

Gaze estimation can be found here on GitHub. This is our first working demo using the Gen2 pipeline. This example demonstrates how to run 3-stage (3-series, 2-parallel) inference on DepthAI. You can start the demo by issuing the following commands:

  • cd gaze-estimation
  • python3 -m pip install -r requirements.txt
  • python3 main.py -cam 
Gaze estimation using the Gen2 pipeline

We’ve added new depth modes: subpixel, LR-check, and extended disparity (see here).

  • Subpixel increases the maximum measurable distance and improves depth accuracy within a given range.
  • Left/right check runs disparity in both directions and cross-checks the results, which eliminates the shadowing artifacts produced when disparity is run in only one direction.
  • Extended disparity halves the minimum depth (compared to non-extended disparity), adding support for closer-in objects.
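
As a rough illustration of how these modes are enabled from Python, here is a minimal sketch using the Gen2 pipeline API. Note this reflects the Gen2 DepthAI API as it later stabilized, so the exact node and method names may differ slightly in the early Gen2 preview used by these demos:

    import depthai as dai

    pipeline = dai.Pipeline()

    # Left and right mono cameras feed the stereo depth node
    left = pipeline.createMonoCamera()
    left.setBoardSocket(dai.CameraBoardSocket.LEFT)
    right = pipeline.createMonoCamera()
    right.setBoardSocket(dai.CameraBoardSocket.RIGHT)

    stereo = pipeline.createStereoDepth()
    stereo.setLeftRightCheck(True)      # run disparity both ways and cross-check
    stereo.setSubpixel(True)            # fractional disparity: longer range, better accuracy
    stereo.setExtendedDisparity(False)  # set True instead to halve the minimum depth

    left.out.link(stereo.left)
    right.out.link(stereo.right)

    # Stream the depth output back to the host
    xout = pipeline.createXLinkOut()
    xout.setStreamName('depth')
    stereo.depth.link(xout.input)

    with dai.Device(pipeline) as device:
        q = device.getOutputQueue('depth', maxSize=4, blocking=False)
        depth_frame = q.get().getFrame()  # numpy array of depth values in millimeters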

We’ve created a demo of these depth modes using the Gen2 Pipeline, which can be found in our depthai-experiments repo. You just need to make sure to switch to the ‘gen2_stereo’ branch and install the requirements (as they differ from other branches/repos). That can be done by issuing these commands from the 'depthai-experiments' working directory:

  • git checkout gen2_stereo
  • cd gen2-camera-demo
  • python3 -m pip install -r requirements.txt
  • python3 main.py

Here’s a screenshot with LR-check and subpixel enabled (default operational mode of the demo):

Example showing LR-check with subpixel

Note that as of now there is no median filtering when LR-check, subpixel, or extended disparity is enabled, which is why the output looks a bit grainy. There is also some error in the point cloud projection where a number of valid depth points (maybe half of them?) don’t get projected.

This past week we released a new version of DepthAI, which is in release candidate status.

DepthAI 0.4.0.0-rc Release Notes

Changes since tag 0.3.0.0:

  • Add Python 3.9 support
  • Add Windows support back to Point Cloud Projection (currently requires Python <3.9)
  • Drop Python 3.5 support due to being EOL
  • Fix crash on second device object delete (reported HERE)
  • Improve robustness of model downloader
  • Migrate to new blob converter
  • Update test scripts

The changes from this release were merged into the ‘main’ branch this week, including a bug fix for point cloud support that was introduced with 0.4.0.0-rc. We will be releasing updated wheels for 0.4.0.0 on PyPI next week.

As a reminder, the API roadmap can be found by visiting the Projects section on GitHub. I’ve highlighted the December delivery project, as that’s what we’re targeting to have ready for when you receive OAK hardware.

New Docs Site Ready

We’re constantly working on expanding and improving our documentation. We recently decided to switch platforms, and we’ve completed the data migration. We’re going to be using ReadTheDocs. We believe that it is better formatted and easier to use, which should improve the overall experience and usability of our documentation.

This also brings new options to the table. The main remaining task is to actually replace the existing site at docs.luxonis.com. We’re planning to do that in the near future. For now the updated docs site can be found here.

Please take a look if you have time. We’re open to any feedback and also are interested in knowing if you have any feature requests. If so, please do let us know! You can send us an email ([email protected]), message us via KickStarter, or join us on our Slack Community.

OAK Pledge Management Status

Not much to add on the campaign front. We’re down to roughly 700 incomplete surveys at this time. We will continue to periodically send out reminder emails to those with incomplete BackerKit surveys. If you aren't familiar with BackerKit, it's what we're using as a pledge management system for the OAK Campaign. All backers will need to complete their BackerKit survey.

Incomplete BackerKit surveys will be kept open, meaning those orders will not be fulfilled until the survey is completed. If you didn’t receive your BackerKit survey, or you need a new link, then please don’t hesitate to try their recovery form (here), or you can contact us either via direct message on KickStarter or email ([email protected]).

--

That's all for this week. We'll be back next Friday with more updates for you. Hope you have a great weekend!

Community Friday #9: OAK Campaign Updates, Tripod, OAK Renders, Calibration, A Brief Introduction to the DepthAI API
Sat, Nov 07, 2020 at 12:12:17 AM

Happy Friday, community! We’re back with some more project updates.

OAK Campaign Updates

We're still on target for OAK-1 and OAK-D to start shipping in December. We will start to include more details on this as we get closer, but we wanted to make mention of this first.

Last week we hit the target of 6,000 completed BackerKit surveys. Thanks to all who have completed their surveys so far! We have a list and we’ll be sending out the redemption code for the free 50 hours of Microsoft Azure compute time along with the OAK crash course. We will make sure to announce this in an update here on KickStarter so you know when you can actually expect it.

We will continue to periodically send out reminder emails to those with incomplete BackerKit surveys. If you aren't familiar with BackerKit, it's what we're using as a pledge management system for the OAK Campaign. All backers will need to complete their BackerKit survey.

Incomplete BackerKit surveys will be kept open, meaning those orders will not be fulfilled until the survey is completed. If you didn’t receive your BackerKit survey, or you need a new link, then please don’t hesitate to try their recovery form (here), or you can contact us either via direct message on KickStarter or email ([email protected]).

Tripod Design

We’ve decided on a tripod design for the OAK campaign. It’s fairly versatile and works very well with the case as it allows you to pivot the unit. Here are a couple photos of an early production sample.

Tripod close-up
OAK-D mounted to the tripod

OAK Renders

The OpenCV graphics design team has come up with some really nice renders and we wanted to share them with you. We're only going to highlight a couple below, otherwise this week's update would get very image heavy. Feel free to grab the zip to review the other renders of OAK-1 and OAK-D. For now we've uploaded a copy to Google Drive, which you can find HERE. Soon we'll move these to our GitHub too.

OAK-1 Exploded View
OAK-D Exploded View

OAK Calibration

We've recently improved our automated factory calibration process. It has been producing better depth maps in DepthAI than we've previously observed. We open-sourced it for folks who want to build DepthAI into their product. All you need to do is buy the same arm that we did, and then you can calibrate your own designs easily, quickly, and automatically. The source can be found on GitHub in this repo.
 

OAK-D running through the automated calibration process

A Brief Introduction to the DepthAI API

We've noticed that there have been some questions about how to work with OAK hardware, so we thought we'd take a moment to introduce the API. It's written in Python, can be installed as a PyPI package or built from source, and is supported on all major operating systems. It's open source and MIT licensed. If you wish to read more about it then please visit our docs site (here), or take a look at the code in the DepthAI repo on GitHub.
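
To give a quick feel for the flow, here is a rough sketch of the current (gen1) Python API. The config shown is trimmed down and the blob paths are placeholders; see the docs for the full set of options:

    import depthai  # python3 -m pip install depthai

    # Connect to the first available device ('' = any device, False = don't force USB2 mode)
    device = depthai.Device('', False)

    # Describe what we want: which streams to receive and which NN blob to run
    pipeline = device.create_pipeline(config={
        'streams': ['previewout', 'metaout'],
        'ai': {
            'blob_file': '/path/to/model.blob',                # placeholder
            'blob_file_config': '/path/to/model_config.json',  # placeholder
        },
    })

    while True:
        # Poll for neural network results and camera/depth packets
        nnet_packets, data_packets = pipeline.get_available_nnet_and_data_packets()
        for packet in data_packets:
            if packet.stream_name == 'previewout':
                frame = packet.getData()  # raw frame data as a numpy array
        # ... process nnet_packets, draw results, break when done

    del pipeline
    del device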

Our roadmap can be found by visiting the Projects section on GitHub. I've highlighted the December delivery project, which will give an idea of the direction and of what will be available when these units start shipping.

Last week we released version 0.3.0.0. There were some fairly big changes, and we've included the highlights below. We're preparing 0.4.0.0, which should be released this upcoming week. We'll release it first as a release candidate (RC), then a week later we'll merge it to main. The upcoming version has some important fixes for the model downloader, and also adds support for Python 3.9. We'll have a detailed changelog ready when it's released, but if you wish to view it now then you can switch to the develop head in the DepthAI repo.

DepthAI 0.3.0.0 release highlights

  • Added color, rectified_left and rectified_right streams (what is rectification?), together with the ability to run a NN on those streams
  • Added median_kernel_size and lr_check options to depth pipeline config (you can use -med and -lrc flags in demo script to test them out)
  • Added verbose output (-vv) to demo script which will print packets' metadata
  • Added a Point Cloud Visualization (use -s rectified_right depth -pcl flags in demo script to preview)
  • Added model downloader and removed all NN blobs from the DepthAI repository - they are now downloaded when requested. Blobs can be compiled in the cloud, or locally (if OpenVINO toolkit is on the host). 
  • Fixed missing metadata for disparity, disparity_color and depth streams
  • Fixed sequence number sync issue by adding a sync_sequence_numbers config option (you can test it by using the -seq flag with the demo script)
  • Improved calibration system (ability to use two homographies)
  • Improved format of NN config file

If you're already using DepthAI then please take a look at the tag on GitHub for important migration information when upgrading from previous releases.

--

That's all for this week, folks. Stay tuned, we'll be back with more campaign updates in the near future. Hope everyone has a great weekend!

Community Friday #8: Good News on the IMU Fix, First Photo-Shoot of the OAK-1 and OAK-D Aluminum Enclosures, And Campaign Updates on Survey Completion and Azure Credits
Tue, Oct 20, 2020 at 01:11:49 PM

Happy Friday!

We've been very busy these past few weeks but we're back with some important updates for the OAK Campaign. We hope you appreciate the news we have for you. 

Campaign Updates on Survey Completion and Azure Credits

At the time of writing this update, a total of 86% of surveys have been completed. Thank you! We greatly appreciate that so many of you have completed your surveys. We've locked a majority of orders for manufacturing purposes, and will continue to lock completed orders every few days. We have been processing payments as well, and will continue to process payments weekly for newly locked orders.

So where does 86% leave us with the surveys? Well, that's exactly 5,940 surveys completed. That means there is room for 60 more backers to get the 50 free hours of Microsoft Azure compute time that we announced in Update #43. So if you haven't yet completed your survey, there is still a chance that you can make the list and get the free GPU time.

We will be sending out the redemption code for the free Azure credits to backers who made the list along with the free OAK Crash course in December. We think this approach makes the most sense. 

Haven't completed your survey yet? Don't fret, you can still finish your survey even though it's past the deadline. Please contact us either via a direct message on KickStarter, or via e-mail ([email protected]), if you have questions or concerns that have delayed you from completing it, as we're happy to help address those. If you didn't receive the survey you can either try the recovery form at THIS URL, or send us an email and we'll gladly help you out.

Good News on the IMU Fix

So I imagine many of you are aware of the bad news we posted back in Update #42 about the IMU I2C issue. Well, good news: our proposed fix of switching to the SPI connection is working as expected, meaning our at-risk order is safe and we're still on target for shipping in December!

Below are the test boards we rush-ordered through MacroFab (they're red because they're prototypes), which confirmed the new SPI interface for the IMU (BNO085) works as planned and at full rate (3MHz):

OAK-D SPI IMU Works Properly at Full (Maximum) 3MHz Clock Rate

First Photo-Shoot of the OAK-1 and OAK-D Aluminum Enclosures

This week we took some time to capture proper images of OAK-1 and OAK-D in the final design of their enclosures. Not only do they look great, but as we mentioned they are designed for serviceability. We are very happy with how they turned out, and we think that you'll appreciate the design too! Please see below as the pictures speak for themselves. 

OAK-D and OAK-1 size comparison

OAK-1

OAK-1 from the front
OAK-1 from behind, and compared to a United States quarter
A view of the port on OAK-1, and other objects for size reference

OAK-D

OAK-D from the front
OAK-D from behind
A view of the ports on OAK-D

Comparison of PMMA (acrylic glass) versus Corning Gorilla Glass

Just in case you needed reassurance on our choice to go with Gorilla Glass, we decided to perform a test on an OAK-1 enclosure with both options. Gorilla Glass is clearly the more robust option; the results speak for themselves. Don't just take our word for it, check out the results:

Gorilla Glass on the left, and PMMA on the right

And for the tech history buffs among us (we definitely count ourselves in that camp), this might be reminiscent of Apple's (last-minute) switch from plastic to Gorilla Glass on the original iPhone (more on that here). It was for this exact reason: plastic is just too easy to accidentally scratch.

--

That's all for this week. Stay tuned, we'll be back with more campaign updates next week. Hope everyone has a great weekend!

Community Friday #7: Aluminum Case Progress Update ($1 million Stretch Goal Reward Update)
Sat, Sep 26, 2020 at 04:19:15 PM

Happy Friday!

This update is all about the enclosures for OAK-1 and OAK-D.  

Enclosure Prototypes are In!

As many of you know (as you actively helped us crush this stretch goal), every backer of OAK-1 and OAK-D (and OAK-1-POE, OAK-D-POE, because we're cool like that) will be receiving their unit with an aluminum enclosure!

Aluminum Case (Enclosure) Stretch Goal That You Helped Us Scream Past!

So this has been our biggest schedule risk (besides the unexpected IMU I2C issue, see Update #42).


But we think we have now tamed this schedule-risk beast and are still on track to ship OAK-1 and OAK-D with enclosures in December as planned. In fact, tooling was opened a couple of weeks ago (at risk, actually - more on that later), and we recently got back the first prototypes, which allowed us to verify that we don't have a fundamental design error. Since OAK-1-POE and OAK-D-POE ship in March, their enclosures have not been started yet, so only OAK-1 and OAK-D enclosures are shown below. We will provide updates on the OAK-1-POE and OAK-D-POE enclosure designs when we start on them - we are focused 100% on the December delivery for now.

So we're happy to share some photos of initial, machined prototypes below: 

OAK-1 Enclosure Prototypes

Initial Machined Prototype of the OAK-1 Enclosure. Note the die-cast version will look different in color, finish, etc., but this is representative of the shape/size.
This painted, machined prototype is more representative of what the aluminum enclosure for OAK-1 will look like.
OAK-1 disassembly, showing the Thermal Interface Material (TIM) used to sink heat from the electronics/camera into the enclosure.
Both OAK-1 and OAK-D have a 1/4-20 threaded hole (a 'Tripod Mount') for easy/quick mounting.

OAK-D Enclosure Prototypes

OAK-D Piece-Parts Prior to Test Assembly
OAK-D Test Assembly in Progress.
Assembled OAK-D Prototype. Note that the RGB camera (the center camera) is slightly misaligned here (slightly too low). This has since been corrected.

Note that we intentionally designed both OAK-1 and OAK-D enclosures for serviceability.  So the four screws are easily accessible to remove the front cover, and the whole device is still mounted internally even when the front cover is removed (mounted with 4 screws from the back).

Just like the OAK-1, the OAK-D enclosure has a 1/4-20 'Tripod Mount' for easy mounting.

Machining vs. Die-Casting and Tooling

Above we mentioned that these prototypes are 'machined'. What does 'machined' mean? Well, it means that what is effectively a fancy drill bit was controlled by a computer to 'sculpt' ('machine') these out of a block of aluminum. This is expensive, but much faster (no mold tooling needs to be made first), so it is used for prototypes (like these) and for low-volume production.

For production we will be using die-casting. What is die-casting? It's like injection molding, but for aluminum. In both injection molding and die-casting (speaking as an Electrical Engineer here, so a proper Mechanical Engineer will probably correct this), a mold is made, and molten material (plastic in the case of injection molding, aluminum or some other metal in the case of die-casting) is injected into the mold to make the part.

The catch for both is that a mold needs to be made, and then tooling/fixturing needs to be made to facilitate injecting the molten plastic or aluminum into it. This tooling is expensive, so it does not make sense to do die-casting unless there is significant volume to amortize the cost across. Given your fantastic support, we have enough volume to do die-casting and injection molding. OAK-D uses both die-casting (for the aluminum structure) and injection molding (for the front plastic cover). And OAK-1 uses die-casting for both the front and back portions of its enclosure.

And both OAK-1 and OAK-D use Corning Gorilla Glass with double-sided Anti-Reflective coatings to preserve image quality behind the glass. 

Risk-Order of Tooling

Going from the initial idea of an enclosure to ~10,000-unit production in 4 months is a tight schedule, as building the tooling for the molds alone takes around 6 weeks, and producing that many units takes about another 6 weeks. So that's about ~3 months of schedule right there! And then the units need to be assembled with the electronics/cameras, and tested (and calibrated, in the case of OAK-D).

So, we relied on our mechanical engineers' experience (thanks, Yan and Andy!) to risk-order the tooling (i.e. put in the order for the tooling prior to having received our first prototypes) so that we could start the clock on this ~3-month countdown.

--

Don't forget to complete the BackerKit survey; the first 6,000 backers to do that will receive 50 hours of GPU time on Microsoft Azure (see Update #43). That's all for this week, folks. Next week we'll have our community spotlight from the poll in Community Update #6. Have a great weekend, community!

URGENT : Free 50 hours of GPU time on Microsoft Azure for OAK backers
Thu, Sep 24, 2020 at 12:30:27 AM

As promised during our Kickstarter campaign, our backers will get access to high quality AI models that they can run on OAK with ease. 

What if you want to train your own custom model? Our free crash course for backers will give you instructions on how to train your own model and deploy it on OAK. 

But wait, it gets even better! 

Training a custom model requires several hours of computational time on a GPU. We have partnered with Microsoft to provide 50 free hours of GPU time on Azure. That's worth $45! We are so thankful to Microsoft for this generous gift. The crash course will cover how to use an Azure NC6 instance to train a model.

But there is a catch! 

We have included the exact Microsoft offer in the section below. The not-so-fine print says, "Microsoft makes this Offer available to the first 6000 participants identified and verified by OpenCV as backers for the OpenCV AI Kit." 

And we have more than 6500 backers! 

The first 6000 backers who complete the BackerKit Survey (sent via email) will be eligible for free Azure credits as long as they meet the criteria laid out in the section below. Approximately 5200 backers have filled out the survey so far (and so will be receiving the 50 hours of GPU time).

If you have not done it, please fill it out immediately - to make sure you receive your free 50 hours of Azure NC6 GPU time!

And if you have not received your BackerKit Survey (maybe it went to SPAM?), please either visit THIS link or email [email protected] with your backer number and we will provide you a link to your survey.

Microsoft Azure Credit Offer for OpenCV AI Kit (2020)

This offer for a limited amount of Azure credits is available by e-mail invitation only for a limited number of backers who funded in OpenCV AI Kit [Kickstarter Campaign].

This Offer gets the recipient started with 50 free* GPU hours of NC6 SKU in Azure. If you are entitled to this offer, you will receive an e-mail invitation to accept this offer for your Account ID. If you do not have an existing Account ID, you will be prompted to create an Azure Account.

This Offer is available only to backers who meet the following requirements. You are of age 18 years or older and have provided your consent to be contacted by Microsoft to receive the Offer redemption code.

This Offer is limited to the NC6 SKU and will remain valid for 180 days after the Azure redemption code has been issued to you after which any unused credits will expire. Detailed instructions to ensure you are applying the credits to the NC6 SKU will be directly provided within the Course. Upon redemption, the Azure credits will be applied to your account in accordance with current Azure Pricing. The Offer is only available to customers located in the following countries found here. This Offer will terminate, and your subscription will be converted automatically to the Pay-As-You-Go offer upon the earlier occurrence of (1) 180 days of redemption of this Offer or (2) exhaustion of the Azure credits. You will be responsible for all usage charges after this Offer expires or terminates. Azure pricing is subject to change.

* Microsoft makes this Offer available to the first 6000 participants identified and verified by OpenCV as backers for the OpenCV AI Kit who meet the requirements above. Offer good only to the first party recipient receiving Offer invitation and is non-transferable. Limit one Offer per eligible participant. This Offer is not redeemable for cash. Microsoft reserves the right to cancel, change, or suspend the Microsoft Azure Sponsorship Offer at any time without notice.