This article applies to these versions of LandingLens:
- LandingLens
- LandingLens on Snowflake
LandingAI offers code samples in its Python and JavaScript libraries to help you learn how to effectively deploy computer vision models you've built in LandingLens. This tutorial explains how to use the Detect Suits in Poker Cards example in the Python library to run an application that detects suits in playing cards.

In this tutorial, you will use a web camera to take images of playing cards. An Object Detection model developed in LandingLens (and hosted in the LandingLens cloud) will then run inference on your images.

This example runs in a Jupyter Notebook, but you can use any application that is compatible with Jupyter Notebooks.
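The Python library itself is published on PyPI as the landingai package. If you want to experiment with the code samples outside of this tutorial, you can typically install it with pip:

pip install landingai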
Clone the Python repository from GitHub to your computer. This will allow you to later open the webcam-collab-notebook Jupyter Notebook, which is in the repository. To clone the repository, run the following command:
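For example, assuming the repository lives under the landing-ai organization on GitHub:

git clone https://github.com/landing-ai/landingai-python.git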
Open Jupyter Notebook by running this command in your terminal:
jupyter notebook
In Jupyter Notebook, open the landingai-python repository you cloned. Navigate to and open this file: landingai-python/examples/webcam-collab-notebook/webcam-collab-notebook.ipynb. The notebook opens in a new tab or window.
The notebook consists of a series of code cells. Run each code cell, one at a time. An asterisk (*) displays in the pair of brackets next to a code cell while that code executes. When the code has run, the asterisk disappears.
Not too familiar with Jupyter Notebook? To run a code cell, click the Run button in the toolbar, or press Shift+Enter. You need to do this for each cell!
When you run the Acquire Image from Camera code cell, the application turns on your webcam. Hold a playing card up to the webcam and press Spacebar to take a photo. Your webcam turns off, and the image displays below the code cell.
Don’t like the photo? You can replace it by running the Acquire Image from Camera code cell again.
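For reference, the capture step boils down to logic like the following sketch. This is a simplified illustration using OpenCV rather than the notebook's exact code; the cv2 calls, the window handling, and the card.jpg file name are assumptions made for this example.

```python
import cv2

# Open the default webcam (device 0).
cap = cv2.VideoCapture(0)

frame = None
while True:
    ok, current = cap.read()  # Grab the next frame from the webcam.
    if not ok:
        break
    cv2.imshow("Press Spacebar to capture", current)
    if cv2.waitKey(1) & 0xFF == ord(" "):  # Spacebar keeps this frame as the photo.
        frame = current
        break

cap.release()
cv2.destroyAllWindows()

if frame is not None:
    cv2.imwrite("card.jpg", frame)  # Save the captured image for the next cells.
```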
Continue running the code cells. Running the Run the Object Detection Model on LandingLens Cloud code cell sends your image to LandingLens, where the model runs inference on it.

Then, when you run the Visualize Results code cell, an image displays below the cell. This is your original image with the predictions from the LandingLens model overlaid on top. Each prediction includes the bounding box, the name (Class) of the detected object, and the Confidence Score of the prediction. For example, in the screenshot below, the model identified that the card has several Diamonds.
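Conceptually, these two cells map onto the landingai library's Predictor and overlay_predictions helpers. A condensed sketch of that flow is shown below; the endpoint ID, API key, and card.jpg file name are placeholders, and the exact call signatures may differ slightly between library versions.

```python
from PIL import Image
from landingai.predict import Predictor
from landingai.visualize import overlay_predictions

# Placeholders: replace with your own LandingLens endpoint ID and API key.
ENDPOINT_ID = "your-endpoint-id"
API_KEY = "your-api-key"

# Load the image captured from the webcam.
image = Image.open("card.jpg")

# Run inference against the Object Detection model hosted in LandingLens.
predictor = Predictor(ENDPOINT_ID, api_key=API_KEY)
predictions = predictor.predict(image)

# Overlay bounding boxes, class names, and confidence scores on the original image.
annotated = overlay_predictions(predictions, image)
annotated.show()
```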
Run the next code cell, which is Process Results to Count the Number of Suits. When you run this code, the application counts how many detected objects had a confidence score higher than 50%. The count displays below the code cell. For example, in the screenshot below, the application counted four Diamonds that met the threshold criteria.
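The counting logic can be reproduced in a few lines of Python. A minimal sketch, assuming each prediction exposes label_name and score attributes (as the landingai object-detection predictions do) and reusing the predictions list from the previous sketch:

```python
from collections import Counter

# Keep only predictions with a confidence score above 50%.
confident = [p for p in predictions if p.score > 0.5]

# Count detections per class (e.g. Diamond, Heart, Spade, Club).
counts = Counter(p.label_name for p in confident)

for label, count in counts.items():
    print(f"{label}: {count}")
```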