project-image

OpenCV AI Kit

Created by OpenCV

Open Source Spatial AI From The Biggest Name in Computer Vision.

Latest Updates from Our Project:

Community Friday #13: More OAKs en route to the fulfillment center!
over 3 years ago – Fri, Dec 18, 2020 at 11:43:41 PM

Hi Backers,

As we shared with you last week, 920 OAK units were delivered to floship on December 11th.  We now have 5,502 OAK units on their way to floship; they will arrive at floship's warehouse on Monday, December 21st.

OAK-1s in a carton

But we have yet to get word on when floship will start shipping.  

We suspect the reason we have not heard back is that floship does not yet know internally.  We just asked them again, and let them know that the world is waiting on this and that we're all counting on them. 

A snippet of our latest question to floship is below:

"We're counting on you guys.  And so are our backers and the companies who are relying on these to build blind-assistance devices, to build devices that save lives, and devices that help prevent the spread of COVID, and to autonomously kill COVID.

What can we do to get these shipping?"

We're of course giving floship grace (while persistently asking what can be done to get these shipping); as we mentioned in the last shipping update, there has probably been no harder time to ship things in volume than right now, at least in modern history.

We are hoping next week we will be able to update you here with floship's shipment schedule.  

We'll keep you all posted. 

Thanks again, and stay safe out there,

The OpenCV AI Kit Team

Pallets of OAKs

Note: we received these images after we made the initial post, but before the 30-minute editing window expired, so we thought we'd update it by adding a couple.

Community Friday #12: Social Distancing Tutorial and Upcoming OpenCV Spatial AI Competition
over 3 years ago – Fri, Dec 11, 2020 at 11:47:53 PM

Happy Friday, community! We shared an important update about our campaign yesterday, but we have some other content to cover which we wanted to share with you. On another note, it’s December 11th. Is everyone ready for the upcoming holidays?

If you missed our update yesterday, you really should take a moment to read it (see HERE). TL;DR: Our first pallet shipped, carrying 920 OAK units to floship, our fulfillment partner for the OAK campaign.

Social Distancing Detection System by Augmented Startups

Ritesh over at Augmented Startups has posted another DIY tutorial for your viewing pleasure. This time you will learn how to create a real-time social distancing monitoring system with an alarm, using OAK-D and a Raspberry Pi. Check out the link below for the video, example code, and more! 
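The heart of such a system is simple once the camera gives you 3D positions: measure pairwise distances between detected people and raise the alarm when any pair is too close. Below is a minimal sketch of that check; the detector integration is omitted, the positions are assumed inputs (in meters), and the 2-meter threshold is a common guideline, not something from the tutorial itself.

```python
import math

# Core social-distancing check: given 3D positions (meters) that a
# spatial detector such as OAK-D can report per person, flag any pair
# of people closer than a threshold.

DISTANCE_THRESHOLD_M = 2.0  # common social-distancing guideline

def too_close(p1, p2, threshold=DISTANCE_THRESHOLD_M):
    """Return True if two 3D points (x, y, z) are within the threshold."""
    return math.dist(p1, p2) < threshold

def violations(people):
    """Return index pairs of people violating the distance threshold."""
    pairs = []
    for i in range(len(people)):
        for j in range(i + 1, len(people)):
            if too_close(people[i], people[j]):
                pairs.append((i, j))
    return pairs

# Example: three people; the first two are ~1.5 m apart.
people = [(0.0, 0.0, 2.0), (1.5, 0.0, 2.0), (4.0, 0.0, 3.0)]
print(violations(people))  # [(0, 1)]
```

In a real system the `violations` list would trigger the alarm and, ideally, be debounced over a few frames to avoid flicker from detection noise.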

Upcoming Spatial AI Competition

We're excited to announce that there will be another competition starting January 12th. That's right, we’re back with another spatial AI competition, except this time everything is bigger: more contestants, more winners, bigger prizes! In fact, this will be the largest spatial AI competition in the world! 

We’ve partnered with Microsoft and Intel, and the total prizes are worth over $400K. Unfortunately we can’t say a lot at this time, but you can sign up HERE to be one of the first to know! You even have a chance to win a free OAK just by signing up for the newsletter!

We can’t wait to see what you can do with OAK.

--

That’s all for this week folks. Short and sweet. We’ll be back with more updates next week. Stay safe, everybody.

Thanks,

The OpenCV AI Kit Team

Logistics, COVID-19, and the Holiday Season... and the First Pallet
over 3 years ago – Fri, Dec 11, 2020 at 05:58:42 PM

Logistics, COVID-19, and the Holiday Season

We spent a TON of time down-selecting our fulfillment partner for this.  We investigated easyship, fulfillmentcrowd, ChinaDivision, ORQA FPV, floship, and many others, trading off speed, cost, perceived reliability, and the probability of customs issues.


Ultimately, we down-selected floship because they seemed to offer the highest probability of reliability, along with the capability to ship DDP (delivered duties paid) at scale, which in theory reduces the chance of a customs issue that you, as a backer, would have to deal with, and makes it so the package simply shows up at your address.  


They were actually one of the more expensive options, but they offered some of the best shipping terms for guaranteed delivery, speed, and customs (DDP).  


Despite the research, and our confidence in floship, any and every shipper, logistics company, etc. has a tall task right now.  COVID-19 has resulted in logistics being significantly harder (and more expensive) globally, with supply chains disrupted while simultaneously facing the biggest volume ever as a result of so many industries having to move to remote work. 


So there is significant risk that, even though we are on schedule with production, the shipments themselves could hit delays because of all of this.  We’ll keep everyone posted as we learn along the way.  So far so good…  we just want to give the forewarning that we’re shipping at volume at probably one of the worst possible times in modern history (during a global pandemic and during the peak of holiday shipments).


First Pallet of 920 OpenCV AI Kit

With that warning out of the way, we are excited to share that the first pallet of OpenCV AI Kit shipped to floship yesterday!  


And we’re producing around 700 OpenCV AI Kits a day now.  So it will not be long until all of the OpenCV AI Kits are in floship’s hands, ready to ship to you all!

343.5 Kilograms of OpenCV AI Kit on a full pallet

What’s on the pallet?

  • 320 OAK-1
  • 600 OAK-D

These are our first full-production versions of OAK-D and OAK-1 with their aluminum enclosures.  (As a reminder, every OAK-1 and OAK-D comes with an aluminum enclosure as a result of blowing past our stretch goals; thank you!)


The "Gift Box"

See below for the final units in their “gift boxes” (as the terminology goes for the box that holds a product directly):

OAK-D Gift-box - containing OAK-D, Power Supply, USB3C Cable (1 meter), MicroFiber cloth (a bonus!), the getting started card, and the OpenCV AI Kit Sticker
OAK-1 Gift Box - containing OAK-1, USB3C Cable (1 meter), MicroFiber cloth (a bonus!), the getting started card, and the OpenCV AI Kit Sticker

Cartons of OAK

It’s quite enjoyable for the team to see these being produced at volume.  Even watching them being packed into cartons is fun.  See below for the units being packaged into the cartons, which are then stacked on the pallet above.  (In retrospect, we should have made the sticker on the box clear, as even the tiniest misalignment of the sticker is painfully obvious when it is white like this.)

OpenCV AI Kit being placed into their cartons, prior to being packed on the pallet to be delivered to floship

We're super excited to get these out to everyone.  We'll keep you posted on how all the shipping/logistics goes.  


Thanks,

The OpenCV AI Kit Team

Community Friday #11: OAK Campaign Update, RPi Smart Security Camera Tutorial, DepthAI Update, OpenCV Spatial AI Competition Winners Announced!
over 3 years ago – Fri, Dec 04, 2020 at 09:27:28 PM

Happy Friday, community! We have some updates for the OAK Campaign! It’s a busy month ahead, and we’re very excited.

OAK Campaign Updates

Earlier this week we sent out the 48 hour notification for locking addresses to backers with devices that start shipping this month. That 48 hour window has passed and we’ve locked those addresses. If you have devices which ship in December and you missed this email then please reach out to us so we can update your address as needed.

When will I receive my goods?

This question has been coming in more frequently as of late. We can practically feel your anticipation and excitement, which is great! We can’t wait to get these to you. We're happy to say that we’re still on track to start shipping this month. A brief reminder on shipping for those who backed our Kickstarter campaign:

OAK-1 and OAK-D start shipping in December, unless you chose combined shipping with units that start shipping in March. OAK-1-POE, OAK-D-POE, and OAK-D-WIFI start shipping March 2021.

If you chose split shipping, then part of your order will ship this month, and the rest will start shipping in March. Any accessories will ship with the main part of your OAK order. T-shirt orders will likely ship separately directly from the T-shirt manufacturer.

Raspberry Pi Smart Security Camera DIY Tutorial

Augmented Startups have created a wonderful video showing how to build your own surveillance system using a Raspberry Pi and OAK. It includes code examples and shows the entire process, including training the models. Please check out the video below.

DepthAI Update 0.4.0.0 has been released

This past week we released 0.4.0.0 of DepthAI. Below is a changelog showing all changes, including what was previously mentioned in 0.4.0.0-rc.

Changes since 0.3.0.0

  • Add ability to fetch usb speed and MyriadX serial number.
  • Add ability to manually control focus and exposure of the RGB camera using keyboard shortcuts.
  • Add Python 3.9 support, and drop Python 3.5 support due to being EOL.
  • Add Windows support back to Point Cloud Projection (currently requires Python <3.9).
  • Fix crash on second device object delete (reported HERE).
  • Fix potential crash with RGB 12MP (-rgbr 3040) + depth.
  • Fix the cropping area for 4K (-rgbr 2160) - make it centered, it was top-left.
  • Fix watchdog recreation after device object deletion.
  • Fixes in device creation.
  • Improve robustness of model downloader.
  • Update test scripts.
  • Update Python wheels for 0.4.0.0 on PyPI (prebuilt for Python 3.9).

We had a feature focus update which we dropped earlier this week. You can check it out HERE if you missed that update.

OpenCV Spatial AI Competition Winners Announced! 

We are thrilled to announce the results of the OpenCV Spatial AI Competition, sponsored by Intel. We believe the people who won the competition are not just talented AI engineers but also trailblazers who are leading the way in making the world a better place. Congrats to all of them!

It was not easy to select the best of the best, so we decided on two winners each in the Gold, Silver, and Bronze categories.

We were overwhelmed to see numerous submissions from artificial intelligence and machine learning experts. We are glad to introduce the finest projects submitted by the participants below:

Vision System For Visually Impaired

Team: Jagadish Mahendran

Mission

The focus here is to provide a reliable smart perception system that assists blind and visually impaired people in moving safely through a variety of indoor and outdoor environments, using an OAK-D sensor.

Solution

In this project, the team has developed a comprehensive vision system for visually impaired people for indoor and outdoor navigation, along with scene understanding. The developed system is simple, fashionable and is not noticeable as an assistive device. Common challenges such as traffic signs, hanging obstacles, crosswalk detection, moving obstacles, and elevation changes (e.g., staircase detection, curb entry/exits), and localization are addressed using advanced perception tasks that can be run on low computing power. A convenient, user-friendly voice interface allows users to control and interact with the system. After conducting hours of testing in Monrovia, California, downtown and neighboring areas, we are confident that this project addresses common challenges faced by visually impaired people.

"I would like to thank Daniel T. Barry for his help, support and advice throughout the project, Breean Cox for her continuous valuable inputs, labelling and educating me on challenges faced by visually impaired people, my wife Anita Nivedha for her encouragement and walking tirelessly with me for collecting the dataset and helping with testing."

Notes from Author

Motivation for the project

Back in 2013, as I started my Master’s in Artificial Intelligence, the idea of developing a visual assistance system first occurred to me. I had even shared a proposal with a professor. But the infeasibility of the smart sensors available then, combined with deep learning techniques and edge AI not yet being mainstream in computer vision, made it difficult to make any progress on this project. I have been an AI engineer for the past 5 years. Earlier this year, when I met my visually impaired friend Breean Cox, I was struck by the irony that while I have been teaching robots to see, there are many people who cannot see and need help. This further motivated me to build the visual assistance system. The timing of the OpenCV Spatial AI competition could not have been better; it was the perfect channel for me to build this system and bring the idea to life.

Note: We are working on an updated research paper that will give more insight into our mission and the solution we created; we will publish it on the blog very soon.

Universal Hand Control

Team: Pierre Mangeot

Mission

In the coming years we will no doubt have lots of different things to work on. We want everyone to have more time for productive tasks by reducing the time spent on day-to-day basics, such as turning the TV, lights, or machines on and off, with more ease and efficiency than today's technology provides.

Solution

Today's technologies can estimate hand pose (neural networks) or measure the position of a hand in space (depth cameras). A device like the OAK-D even offers the possibility of combining the two techniques in one device. Using a hand to control connected devices (smart speakers, smart plugs, TVs, … an ever-growing market), or to interact with applications running on a computer without touching a keyboard or mouse, becomes possible. More than that, with Universal Hand Control, it becomes straightforward.

Universal Hand Control is a framework (dedicated hardware + Python library) that makes it easy to integrate "hand controlling" capability into programs.

Currently, the OAK-D is used as an RGBD sensor, but the models run on the host. Once the Gen2 pipeline is available, the models will be able to run on the device itself. Details and status of the Gen2 pipeline can be followed from the GitHub link below.
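Combining the two techniques boils down to taking a 2D hand keypoint from the pose network and the depth reading at that pixel, then back-projecting through the pinhole camera model to get a 3D position. The sketch below shows that step only; the intrinsics (fx, fy, cx, cy) are placeholder values, not the OAK-D's actual calibration, and the keypoint/depth values are assumed inputs.

```python
# Back-project a 2D keypoint (u, v) plus its depth reading into a
# camera-space 3D point, using the pinhole model:
#   X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth

def back_project(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (meters) to camera-space XYZ."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: a keypoint at the image center maps onto the optical axis.
fx = fy = 860.0         # assumed focal length in pixels
cx, cy = 640.0, 360.0   # assumed principal point for a 1280x720 image
print(back_project(640, 360, 0.5, fx, fy, cx, cy))  # (0.0, 0.0, 0.5)
```

With each fingertip lifted into 3D this way, gestures can be interpreted relative to real-world positions (e.g. "hand raised above shoulder") rather than raw pixels.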

View Source
Report

We are grateful to PINTO's Model Zoo repository, which helped us tune the results of our trained models efficiently while developing this project.

Link to Model Zoo : https://github.com/PINTO0309/PINTO_model_zoo

Parcel Classification & Dimensioning

Team: Abhijeet Bhatikar, Daphne Tsatsoulis, Nils Heyman-Dewitte, William Diggin

Mission

In the broader global cargo industry, there is a massive capacity crunch due to companies' inability to measure their cargo accurately.

1. Many cargo companies do not measure their cargo accurately at scale using the newest technology. At best, they describe their cargo by width, height, and depth; in many cases, a manual measuring tape is used to make these guesstimates.

2.  Even when they do measure their cargo at a more granular level, they almost always fit cuboids over irregularly shaped objects. For example, a cylindrical barrel’s dimensions become those of the smallest 3D box that fits the barrel (width x length x height).
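The cuboid approximation in point 2 wastes a fixed, computable fraction of capacity. A quick geometric check (pure arithmetic, no measurement hardware assumed) shows how much air a barrel's enclosing box contains:

```python
import math

# How much capacity is overstated when a cylindrical barrel is recorded
# as its enclosing box (width x length x height)?  Pure geometry:
# the tightest axis-aligned box around a cylinder of radius r is 2r x 2r x h.

def cylinder_volume(radius, height):
    return math.pi * radius**2 * height

def enclosing_box_volume(radius, height):
    return (2 * radius) ** 2 * height

r, h = 0.3, 0.9  # a barrel: 60 cm diameter, 90 cm tall
cyl = cylinder_volume(r, h)
box = enclosing_box_volume(r, h)
print(f"fill ratio: {cyl / box:.3f}")             # pi/4 ~ 0.785
print(f"overstated by: {(box - cyl) / box:.1%}")  # ~21.5% of the box is air
```

The ratio is pi/4 regardless of the barrel's size, so roughly a fifth of the booked volume for every barrel is empty space, which is exactly the kind of loss accurate 3D shape measurement recovers.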

Solution

The team has built an end-to-end proof of concept for measuring and loading packages into cargo containers. The solution leverages the DepthAI USB3 (OAK-D) camera to accurately determine the shape, then calculates the width, length, and height of packages for cargo shipment. All of this can be managed by a software tool (3D CONTAINER STACKING SOFTWARE) that finds the optimal arrangement of a batch of packages given the container’s size.
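Finding "the optimal arrangement of a batch of packages" is a bin-packing problem. The team's actual stacker works in 3D; the sketch below is only a simplified, volume-based first-fit-decreasing heuristic to illustrate the idea, and is not their algorithm.

```python
# Simplified bin packing: sort packages by volume (largest first), place
# each into the first container with room, opening a new container when
# none fits.  A real 3D stacker must also respect shapes and positions.

def first_fit_decreasing(package_volumes, container_capacity):
    """Assign package volumes to containers; return (assignment, count)."""
    containers = []  # remaining capacity per container
    assignment = {}  # package index -> container index
    order = sorted(range(len(package_volumes)),
                   key=lambda i: package_volumes[i], reverse=True)
    for i in order:
        vol = package_volumes[i]
        for c, remaining in enumerate(containers):
            if vol <= remaining:
                containers[c] -= vol
                assignment[i] = c
                break
        else:
            containers.append(container_capacity - vol)
            assignment[i] = len(containers) - 1
    return assignment, len(containers)

volumes = [0.6, 0.5, 0.5, 0.4, 0.3, 0.2]  # cubic meters
assignment, n = first_fit_decreasing(volumes, container_capacity=1.0)
print(n)  # 3 containers suffice for these volumes
```

Even this crude heuristic shows why accurate volumes matter: overstated measurements (like the cuboid-fitted barrels above) directly inflate the container count.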

Report

Real Time Perception For Autonomous Forklifts

Team: Kunal Tyagi, Kota Mogami, Francesco Savarese, Bhuvanchandra DV

Mission

A disadvantage of modern autonomous forklifts is that parts of the technology can fail due to the lack of a clear view while loading or unloading cargo. Often, a technology breakdown in an autonomous forklift involves a long waiting period.

Solution

The team has come up with a solution to this: Real-Time Perception for Autonomous Forklifts, which enhances the functionality of advanced sensors as well as vision and geo-guidance technology. Many people outside the industry don’t know that autonomous vehicles have already taken on a significant part of the logistics work process: autonomous forklifts load, unload, and transport goods within the warehouse area, connecting to one another to form flexible conveyor belts. It is therefore imperative to have a solution that enhances the overall performance of autonomous forklifts, avoiding frequent breakdowns and letting them run at full potential.

"Special thanks to Ayush Gaud, Luong Pham, Kousuke Yunoki, and Yu Okamoto
  for helping with the project."

Notes from Author

At Home Workout Assistant

Team: Daniel Rodrigues Perazzo, Gustavo Camargo Rocha Lima, Natalia Souza Soares, Victor Gouveia de Menezes Lyra

Mission

We observed that with the closure of gyms and public spaces this year, doing physical activities while maintaining social distancing became a challenge. However, exercising alone can also be complicated and even dangerous, due to the risk of muscle injuries.

Solution

The team has developed a Motion Analysis for Home Gym System (MAHGS) to assist users during at-home workouts. The system estimates the person’s 3D human pose and analyzes the skeleton’s movements, returning appropriate feedback to help the user perform each exercise correctly.

The system is composed of two main modules:

1. 3D Human Pose Estimation – developed using the neural inference capabilities of the OAK-D and the DepthAI library. The human pose estimation module runs on a PC or Raspberry Pi 4 connected to the OAK-D, since the OAK-D does not yet have WiFi or Bluetooth.

2. Motion Analysis – implemented using a technique called Ikapp. The movement analysis module runs on a smartphone, using the TCP/IP protocol with the ZeroMQ framework for communication between the devices.
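The link between the two modules is a message exchange over TCP/IP: the pose side sends skeleton data, the analysis side replies with feedback. The team uses ZeroMQ for this; the sketch below illustrates the same request/reply pattern with only Python's standard library, and the JSON fields (`exercise`, `knee_angle`) are hypothetical, not from the project.

```python
import json
import socket
import threading

# One-shot TCP request/reply: a "pose" client sends a JSON line, a
# "feedback" server replies with a JSON verdict.  Stands in for the
# ZeroMQ link between the PC/Raspberry Pi module and the smartphone.

def serve_once(server_sock):
    """Accept one connection, read one JSON line, reply with feedback."""
    conn, _ = server_sock.accept()
    with conn:
        pose = json.loads(conn.makefile("r").readline())
        feedback = {"exercise": pose["exercise"],
                    "ok": pose["knee_angle"] > 90}  # toy form check
        conn.sendall((json.dumps(feedback) + "\n").encode())

server = socket.socket()
server.bind(("127.0.0.1", 0))  # ephemeral port
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
client.sendall((json.dumps({"exercise": "squat",
                            "knee_angle": 110}) + "\n").encode())
reply = json.loads(client.makefile("r").readline())
client.close()
print(reply)  # {'exercise': 'squat', 'ok': True}
```

ZeroMQ's REQ/REP sockets remove the manual accept/connect bookkeeping and handle reconnection, which is why a framework like it is the practical choice for a phone-to-PC link.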

Automatic Lawn Mower Navigation

Team: Jan Lukas Augustin

Mission

Lawn mowing has always been a time-consuming and tiring household routine. Many robotic lawn mowers are available, but even high-end products still require the expensive installation of boundary wires to ensure that the robot stays on the lawn. However, even with such wires limiting the lawn area, numerous problems can occur, including the bot:

  • Killing small animals such as hedgehogs.
  • Hurting children, cats, dogs, etc.
  • Driving into molehills and crashing into “unwired” obstacles such as trees.

Solution

Proper obstacle detection for mowing bots can save lives, money and the bot itself. The aim is to prove that this can be achieved using the OpenCV AI Kit with Depth (OAK-D).  Spatial AI allows for multimodal solutions. OAK-D makes Spatial AI and Embedded AI available for everyone. I tried to fully leverage the power and functionality of OAK-D using all sensors and functionalities simultaneously:

  • Neural inference for object detection on the Intel Movidius Myriad X, using the 4K RGB camera.
  • Point cloud classification based on both mono cameras' disparity/depth streams.
  • Disparity image classification based on the disparity and rectified-right streams.
  • Motion estimation using the rectified-right stream.

OAKMower uses three classifiers for limit and obstacle detection:

  • Point Cloud (Elliptic Envelope for Outlier Detection)
  • Disparity (Support Vector Machine)
  • Objects (Mobilenet-SSD for Object Detection)
View Source
Report
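The point-cloud classifier's elliptic envelope works by fitting a Gaussian to "normal lawn" features and flagging samples whose Mahalanobis distance from that distribution is too large. OAKMower uses scikit-learn's `EllipticEnvelope` for a robust fit; the 2D sketch below shows only the underlying idea, with made-up feature values.

```python
import math

# Elliptic-envelope-style outlier detection in 2D: fit mean/covariance
# to "normal" samples, then flag points whose Mahalanobis distance
# exceeds a threshold (the "envelope").

def fit_gaussian_2d(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), ((sxx, sxy), (sxy, syy))

def mahalanobis_2d(p, mean, cov):
    (mx, my), ((a, b), (_, d)) = mean, cov
    dx, dy = p[0] - mx, p[1] - my
    det = a * d - b * b
    # Inverse of the 2x2 covariance, folded into x^T S^-1 x.
    return math.sqrt((d * dx * dx - 2 * b * dx * dy + a * dy * dy) / det)

# Hypothetical "normal lawn" features (e.g. plane-fit residual, height).
normal = [(0.0, 0.0), (0.1, 0.05), (-0.1, -0.05), (0.05, -0.1), (-0.05, 0.1)]
mean, cov = fit_gaussian_2d(normal)
print(mahalanobis_2d((2.0, 2.0), mean, cov) > 3.0)  # obstacle-like: True
```

scikit-learn's version additionally uses a robust (minimum covariance determinant) estimate so that a few outliers in the training data don't inflate the envelope.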

Contest Details

We started this competition back in July this year, and it unexpectedly became so popular that we received more than 235 submissions in just a few days. We initially shortlisted 32 winners in the Phase I results, and we also had to extend our deadline after seeing so many high-quality submissions; it became very challenging to declare the best of the best.

Finally, we congratulate all the winners, as well as everyone who participated. We also have a much bigger contest launching in January 2021 with 10X the rewards. To learn more about the forthcoming contests, please sign up for our mailing list HERE.

---

That's all for this week, folks. Stay tuned, we'll be back with more campaign updates in the near future. Hope everyone has a great weekend!

Thanks,

The OpenCV AI Kit Team

Subpixel, Extended Disparity, and RGB-Depth Alignment and OAK-D Holography!
over 3 years ago – Wed, Dec 02, 2020 at 10:34:01 PM

Greetings OAK Backers!

We would like to update you on some very exciting work that has come together over the past couple weeks.  We have been so heads-down that we've fallen a bit behind on updating all of you. 


Holograms of OAK-D Spatial Data

  • Some folks in the OpenCV community have been pushing the limits of what the OAK-D can do in areas it wasn't primarily designed for, such as spatial mapping. And even though the OAK-D uses a stereo camera, it stacks up surprisingly well against cameras optimized for that sort of thing, like the Azure Kinect DK, as you can see from doctoral student Ibai Gorordo's post here.
  • Our longtime friends at Looking Glass Factory just announced their second-generation holographic light field display, the Looking Glass Portrait, today. Last weekend we sent them some RGBD and point cloud test shots from the OAK-D to convert into holographic light field frames for their display and -- it works!!  And it's super cool.  On our team alone, we have independently ordered 3x Looking Glass Portraits to play with alongside OAK-D.  
OAK-D Spatial Data Capture of Sachin (our 3D specialist) Visualized as a Hologram
  • We think there's a lot of potential here for OAK-D x Looking Glass Portrait -- depth photography, gesture capture to control a character, maybe even holographic video recording. And of course, our own retrofuture holographic music video remakes of Radiohead's House of Cards point cloud video from 2008.
  • So we are taking the unusual step to let you know about their launch (and that we've already backed it 3x), since today they are making their new system available for the wild price of $199 (you can find out more about it here). We aren't making anything off of this, we just got super excited when we saw OAK-D content as holograms and thought some of you might want to get a system to experiment with yourselves.  That, and, holograms!  2020 may have taken a lot from us, but at least we get holograms on our desk, like we're in the future.

RGB-Grayscale, RGB-Depth Alignment

  • We got RGB-depth alignment working well.  For those who have been using OAK-D/DepthAI already, this has been a popularly-requested feature.  We have it now!  It's not fully pushed outside of a branch, but it seems to work well (see below).
RGB view of semantic segmentation being aligned between RGB and grayscale
Grayscale (right) view of semantic segmentation being aligned between RGB and grayscale (right camera)

We ran the semantic segmentation network so you could see the alignment between the grayscale and color cameras at various locations in the scene.  If you look closely, you can see that this matches well.  There is a little "shadow" around the edge of the chair (can you see it?) as a result of the cameras seeing the scene from slightly different angles.  

Manual Control of Focus, Exposure, and Sensitivity of RGB Camera

Another one of the popular requests from early-adopters of OAK-1 and OAK-D is manual control of focus, exposure, and sensitivity.  This is now implemented through the DepthAI API (which is what you use to interact with OAK).  

And we have a built-in example which allows control from keys on the keyboard:

Control:      key[dec/inc]  min..max
    exposure time:     i   o    1..33333 [us]
    sensitivity iso:   k   l    100..1600
    focus:             ,   .    0..255 [far..near]

Manual focus, exposure, and sensitivity control example.
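The built-in example is essentially a small state machine: each key pair steps one parameter down or up, clamped to the ranges in the table above. The sketch below shows just that bookkeeping; the calls that actually push values to the camera go through the DepthAI API and are omitted, and the step sizes are illustrative rather than those of the real example.

```python
# Key-to-parameter bookkeeping for the control table above:
#   exposure time: i/o in 1..33333 us, sensitivity iso: k/l in 100..1600,
#   focus: ,/. in 0..255 (far..near).  Values are clamped to their range.

CONTROLS = {
    # name: (dec_key, inc_key, min, max, step)
    "exposure_us": ("i", "o", 1, 33333, 500),
    "iso":         ("k", "l", 100, 1600, 50),
    "focus":       (",", ".", 0, 255, 3),
}

state = {"exposure_us": 20000, "iso": 800, "focus": 128}

def handle_key(key):
    """Update the parameter bound to `key`; return (name, value) or None."""
    for name, (dec, inc, lo, hi, step) in CONTROLS.items():
        if key in (dec, inc):
            delta = step if key == inc else -step
            state[name] = max(lo, min(hi, state[name] + delta))
            return name, state[name]
    return None

handle_key("o")              # exposure up one step
print(state["exposure_us"])  # 20500
for _ in range(100):
    handle_key(".")          # focus gets pinned at its maximum
print(state["focus"])        # 255
```

In the real example the key events come from OpenCV's `cv2.waitKey` loop, and each updated value is sent to the device rather than just stored.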

It's actually quite remarkable how well the neural model can still pick me up as a person even when the lighting is set so low that I (a full-blown person) can hardly tell there's a person in the image:

Object Detector performing quite well even in very low-light conditions

Improved Sub-Pixel and Gen2 Pipeline Builder Progress

The improvement here since our last update is support for exporting (and running inference on) the rectified-left and rectified-right streams, which in turn allows aligned point-cloud projection on the host.

Example of subpixel support on OAK-D
My floor with subpixel.
My face with subpixel

With this improvement, running depth estimation from the host is now possible.  The images below are from https://vision.middlebury.edu/stereo/data/scenes2014/, with depth estimation run on OAK-D.

Host-supplied disparity-depth
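Whether computed on device or on the host, a disparity map converts to depth through depth = focal_px x baseline / disparity, which is also why subpixel matters: near the low-disparity end, a whole pixel of disparity spans a large depth interval, and fractional disparities subdivide it. The focal length below is an assumed value, not the OAK-D's calibration; the baseline is the OAK-D's 7.5 cm stereo baseline, and the 1/8-pixel step is just an illustrative subpixel increment.

```python
# Disparity-to-depth for a stereo pair:  depth = f_px * B / d

FOCAL_PX = 880.0    # assumed focal length in pixels
BASELINE_M = 0.075  # OAK-D stereo baseline (7.5 cm)

def disparity_to_depth(disparity_px):
    return FOCAL_PX * BASELINE_M / disparity_px

# Integer disparities of 4 and 3 px bracket a wide depth interval...
print(disparity_to_depth(4.0), disparity_to_depth(3.0))  # 16.5 m .. 22.0 m
# ...while subpixel steps (here 1/8 px) subdivide it finely.
print(disparity_to_depth(3.875))  # ~17.0 m
```

The inverse relationship means subpixel buys the most precision exactly where stereo is weakest: at long range.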

That's all for now.  We'll provide an update on Friday with the manufacturing and shipping schedule (TL;DR: everything is proceeding as planned so far).


Thanks,

The OpenCV AI Kit Team