Run Inference with Web APIs
This article applies to these versions of LandingLens:
- LandingLens
- LandingLens on Snowflake
There are several ways to send images for inference in LandingEdge. This article explains how to use web APIs to upload images to your model for inference.
With this method, you upload images and receive results programmatically, so it requires proficiency with a modern programming language. We’ve provided Python examples in this article.
Set Up Inspection Points for Images Sent via API
To set up Inspection Points to run inference on images sent via API, follow the instructions below:
- Create an Inspection Point.
- Select Web API from the Image Source drop-down menu.
- In the Port field, enter the port that you will use to send images to LandingEdge. LandingEdge will monitor this port to receive images from your API call. The supported port number range is 7000 to 8000. If you set up multiple Inspection Points, use a different port for each one.
- If you want other devices (different IP addresses) on your network to be able to send images to the web API endpoint, select the Allow External Access checkbox.
- Verify that Self is selected from the Inspection Start drop-down menu. This option means that sending an image via web API will trigger the inspection process to start. (This is the only option when using the web API method.)
- Set up the Cloud Connection and Model settings.
- Skip the Communication section.
- (Optional) Set up Image Saving settings.
- (Optional) Set up Other Settings.
- (Optional) Set up Custom Processing.
Web API Documentation
To access the LandingEdge web APIs in Swagger, first ensure that the Inspection Point is running. Then go to http://localhost:[port]/docs, where [port] is the number you entered in the Port field when setting up the Inspection Point. For example, if you entered 7054 as your port number, you would go to http://localhost:7054/docs.
Web API Endpoints
The Web API provides the following endpoints. Both endpoints return the predictions as JSON.
- /images
- /RGB24mmf
Example: /images Endpoint
The following script shows how to use the /images endpoint to run inference on images that are already saved on your device.
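The script below is a minimal sketch rather than an official sample: it assumes the Inspection Point is listening on port 7054 and that /images accepts a multipart file upload under a form field named file. Confirm the exact request schema on the Swagger page at http://localhost:[port]/docs.
```python
# Minimal sketch: POST a saved image file to the /images endpoint.
# Assumptions: the Inspection Point listens on port 7054 and the endpoint
# accepts a multipart upload under the form field "file". Confirm the exact
# schema on the Swagger page at http://localhost:7054/docs.
import requests

URL = "http://localhost:7054/images"  # replace 7054 with your Port value

with open("sample.png", "rb") as f:
    files = {"file": ("sample.png", f, "image/png")}
    response = requests.post(URL, files=files)

response.raise_for_status()
print(response.json())  # the predictions are returned as JSON
```
If you selected the Allow External Access checkbox, you can run this script from another device by replacing localhost with the IP address of the machine running LandingEdge.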
Example: /RGB24mmf Endpoint
The /RGB24mmf API requires the images to be on the same system from which you are calling the API. If the images are on a separate system, you cannot call this API (even if the Allow External Access setting is enabled). Use this API if your images are already in the device’s memory.
The following script shows how to use the /RGB24mmf endpoint to run inference on images stored in your device’s memory.
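The sketch below assumes that the endpoint reads raw 24-bit RGB pixels from a named memory-mapped file (which would explain why the caller must be on the same system as LandingEdge). The map name and the file_name, width, and height parameters are illustrative guesses, so check the Swagger page for the actual schema. Note that named memory-mapped files (the tagname argument) are a Windows-only feature of Python’s mmap module.
```python
# Minimal sketch: share raw RGB24 pixel data with LandingEdge through a named
# memory-mapped file, then call the /RGB24mmf endpoint. The parameter names
# (file_name, width, height) and the map name are assumptions for
# illustration; confirm the real schema at http://localhost:7054/docs.
import mmap
import requests
from PIL import Image

URL = "http://localhost:7054/RGB24mmf"  # replace 7054 with your Port value
MMF_NAME = "landingedge_frame"          # hypothetical map name

# Load an image and get its raw 24-bit RGB bytes.
image = Image.open("sample.png").convert("RGB")
width, height = image.size
pixel_bytes = image.tobytes()

# Write the pixels into a named memory-mapped file (Windows tagname feature)
# that LandingEdge can open by name.
mm = mmap.mmap(-1, len(pixel_bytes), tagname=MMF_NAME)
mm.write(pixel_bytes)

# Tell LandingEdge where to find the frame and its dimensions.
response = requests.post(
    URL,
    params={"file_name": MMF_NAME, "width": width, "height": height},
)
response.raise_for_status()
print(response.json())  # the predictions are returned as JSON

mm.close()
```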
Example: Use the LandingLens Python Library
The following script shows how to use the LandingLens Python library to run inference on images already on your device. For more information, see the LandingLens Python library.
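The sketch below uses the EdgePredictor class from the landingai package, which sends images to a locally hosted endpoint; verify the class name and signature against the library documentation for your installed version. Install the package with pip install landingai.
```python
# Minimal sketch using the landingai package to run inference against a
# local endpoint. The host/port values are placeholders; use the port you
# entered for the Inspection Point. Assumes: pip install landingai
from PIL import Image
from landingai.predict import EdgePredictor

# EdgePredictor targets a locally hosted model endpoint (assumed here to be
# the Inspection Point's web API port).
predictor = EdgePredictor(host="localhost", port=7054)

image = Image.open("sample.png")
predictions = predictor.predict(image)
for prediction in predictions:
    print(prediction)
```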
Example: cURL Request
The following code snippet shows how to use the web API to send an image for inference from the command line with a cURL command.
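This is a minimal sketch, again assuming port 7054 and a multipart form field named file on the /images endpoint; adjust both to match your setup and the schema shown on the Swagger page.
```bash
# POST a local image to the /images endpoint; the response body is the
# predictions as JSON. Replace 7054 and sample.png with your own values.
curl -X POST "http://localhost:7054/images" \
  -H "accept: application/json" \
  -F "file=@sample.png;type=image/png"
```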