
Analyze Model Performance
Watch the following video to learn how to use the model report metrics and confusion matrix to analyze and improve model performance.
Model List Overview
Here’s a quick introduction to the elements of the Model List:
# | Item | Description |
---|---|---|
1 | Model List | Click the Model List button to show/hide the model tiles. |
2 | Name | The model name. |
3 | Performance scores for splits | The performance score for each split. Object Detection, Classification, and Anomaly Detection projects show the F1 score. Segmentation projects show the Intersection over Union (IoU) score. |
4 | More Actions | Click the Actions icon (…) to access these tools or shortcuts: Download CSV, View on Models Page, Go to Snapshot Page. |
5 | Predictions | The number of times the model made each of these predictions: False Positive, False Negative, Misclassified, and Correct. (Some predictions aren’t applicable to certain project types.) For Segmentation projects, the counts are in pixels. |
6 | View Confusion Matrix | Click View Confusion Matrix to see the model performance metrics and confusion matrix. The data is based on the dataset that the model was trained on. |
7 | Try Model | Click Try Model to see how the model performs on new images. |
8 | Collapse and expand tile | Click to show/hide the predictions. |
9 | Load more models | Click the Load button to show more model tiles. |
View Confusion Matrix
Click View Confusion Matrix to see the model performance metrics and confusion matrix.
Performance

- Object Detection, Classification, Anomaly Detection: F1 Score
- Segmentation: Intersection Over Union (IoU)
Object Detection, Classification, Anomaly Detection: F1 Score
The Performance section for Object Detection, Classification, and Anomaly Detection projects shows the F1 score for each split.
Segmentation: Intersection Over Union (IoU)
The Performance section for Segmentation projects shows the Intersection over Union (IoU) score for each split.
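For reference, both scores follow the standard definitions below; this is a sketch of the math, not a statement about how LandingLens aggregates the scores across classes or splits.

```latex
% F1 score: the harmonic mean of Precision and Recall
F_1 = 2 \cdot \frac{\mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}

% IoU: overlap between the predicted region P and the ground-truth region G
\mathrm{IoU} = \frac{|P \cap G|}{|P \cup G|}
```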
Precision
Select Precision from the drop-down in the Performance section to see the Precision scores for each split.
Recall
Select Recall from the drop-down in the Performance section to see the Recall scores for each split.
Confusion Matrix
Click View Confusion Matrix on a model tile to see the confusion matrix.

Precision Score for Class
The Precision score for each class is listed along the x-axis. Precision answers the question, “When the model predicts Class A, how often is it correct?” The Precision score for a class is the percentage of the model’s predictions of that class that are correct, and is calculated using this equation: Precision = True Positives / (True Positives + False Positives). For example, let’s calculate the Precision score for the Wheat class in the image below. The model predicts Wheat 7 times. Of those predictions, 6 are correct (True Positives) and 1 is incorrect (a False Positive). When we plug those numbers into the Precision equation, we see that the Precision for this class is 85.7%.
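Working through the Wheat arithmetic explicitly:

```latex
\mathrm{Precision}_{\mathrm{Wheat}} = \frac{TP}{TP + FP} = \frac{6}{6 + 1} = \frac{6}{7} \approx 85.7\%
```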
Recall Score for Class
The Recall score for each class is listed along the y-axis. Recall answers the question, “Of all the instances of Class A in the dataset, what percentage does the model find?” The Recall score for a class is the percentage of actual instances of that class that the model correctly predicts, and is calculated using this equation: Recall = True Positives / (True Positives + False Negatives). For example, let’s calculate the Recall score for the Wheat class in the image below. The dataset has 8 instances of Wheat. The model correctly predicts 6 of them (True Positives) and misses 2 (False Negatives). When we plug those numbers into the Recall equation, we see that the Recall for this class is 75.0%.
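Working through the Wheat arithmetic explicitly:

```latex
\mathrm{Recall}_{\mathrm{Wheat}} = \frac{TP}{TP + FN} = \frac{6}{6 + 2} = \frac{6}{8} = 75.0\%
```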
Use Colors to Help Interpret Performance
Each cell is colored to help you quickly identify correct classifications and errors: darker colors indicate a higher count, and lighter colors indicate a lower count. For example, if the model correctly predicts all instances, only the cells on the diagonal are blue and have non-zero values. See the following image as an example.
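To make these mechanics concrete, here is a minimal sketch in plain Python (not LandingLens code) that builds a confusion matrix from ground-truth/prediction pairs and derives per-class Precision and Recall; the class names and data are invented for illustration.

```python
from collections import Counter

# Invented ground-truth labels and model predictions for eight images.
ground_truth = ["Wheat", "Wheat", "Wheat", "Corn", "Corn", "Wheat", "Corn", "Wheat"]
predictions  = ["Wheat", "Wheat", "Corn",  "Corn", "Wheat", "Wheat", "Corn", "Wheat"]

# Confusion matrix: counts of (actual, predicted) pairs.
matrix = Counter(zip(ground_truth, predictions))
classes = sorted(set(ground_truth) | set(predictions))

for cls in classes:
    tp = matrix[(cls, cls)]                                  # predicted cls, actually cls
    fp = sum(matrix[(a, cls)] for a in classes if a != cls)  # predicted cls, actually another class
    fn = sum(matrix[(cls, p)] for p in classes if p != cls)  # actually cls, predicted another class
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    print(f"{cls}: precision={precision:.1%}, recall={recall:.1%}")
```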

Try Model
After you train a model, you can test its performance by using the Try Model tool. Using Try Model is a good way to “spot-check” a model’s performance: when you click Try Model, you can upload a few images to see how the model performs on them. Ideally, upload images that aren’t already in the dataset and that match your real-world use case. If the model performs well on the new images, you can deploy it. If the model doesn’t perform well on the images, try uploading and labeling more images in your project, then run Try Model again. The Try Model tool runs inference on each image, so using this tool costs 1 credit per image. (The credit cost is not applicable when using LandingLens on Snowflake.) To use Try Model:
- Open a project to the Build tab.
- Click Model List to view all models in the project.
- Click Try Model on the model you want to use. (You can also click a model tile to open the model, and then click Try Model.)
- Upload images.
- LandingLens runs the model and shows you the results. Adjust the Confidence Threshold slider (for Object Detection and Segmentation) or Anomaly Threshold slider (for Anomaly Detection) to see how the model performs with different thresholds. Typically, a lower threshold means that you will see more predictions, while a higher threshold means you will see fewer.
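The threshold behaves like a simple filter over the model’s raw predictions. The following sketch uses invented scores to show why lowering the threshold surfaces more predictions; it is an illustration, not LandingLens internals.

```python
# Invented raw predictions: (class, confidence score) pairs.
raw_predictions = [
    ("Screw", 0.95),
    ("Screw", 0.62),
    ("Scratch", 0.41),
    ("Screw", 0.18),
]

def visible_predictions(predictions, threshold):
    """Keep only predictions at or above the confidence threshold."""
    return [(cls, score) for cls, score in predictions if score >= threshold]

print(len(visible_predictions(raw_predictions, 0.2)))  # 3 predictions shown
print(len(visible_predictions(raw_predictions, 0.7)))  # 1 prediction shown
```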
Download CSV of Model Predictions
For Object Detection and Classification projects, you can download a CSV that shows the ground truth labels and model predictions for images. You can download the CSV in two ways, as described below.
Download CSV: Model Predictions for Images in a Model Dataset
You can download a CSV of model predictions for the dataset of images that a model was trained on. This is available for Object Detection and Classification projects. The prediction data in the CSV will be based on the selected model and its default confidence threshold. To download the CSV for images in a model’s dataset:
- Open a project to the Build tab.
- Click Model List to view all models in the project.
- Click the Actions icon (…) on the model tile and click Download CSV. (You can also click a model tile to open the model, and then click Download CSV).
- The file is downloaded to your computer. For a description of all data in the file, go to CSV Data.
Download CSV: Model Predictions for Select Images
You can download a CSV of model predictions for select images in your Object Detection or Classification dataset. The prediction data in the CSV will be based on the selected model and confidence threshold (if you manually change the threshold, that threshold is used in the CSV). If a model hasn’t been created in the project yet, the prediction fields in the CSV will be blank. To download the CSV for select images in a dataset:
- Open a project to the Build tab.
- Select the model you want to see the predictions for from the Prediction/Model drop-down menu.
- Select the images you want to download the CSV for.
- Click Options in the action bar near the bottom of the screen and select Download CSV.
- Click Download on the pop-up window that opens.
- The file is downloaded to your computer. For a description of all data in the file, go to CSV Data.
CSV Data
Item | Description | Example |
---|---|---|
Project Name | Name of the LandingLens project. | Defect Detection |
Project Type | Project type (“bounding_box” is Object Detection). | classification |
Image Name | The file name of the image uploaded to LandingLens. | sample_003.jpg |
Image ID | Unique ID assigned to the image. | 29786892 |
Split | The split assigned to the image. | train |
Upload Time | The time the image was uploaded to LandingLens. All times are in Coordinated Universal Time (UTC). | Mon Jun 26 2023 16:37:10 GMT+0000 (Coordinated Universal Time) |
Image Width | The width (in pixels) of the image when it was uploaded to LandingLens. | 4771 |
Image Height | The height (in pixels) of the image when it was uploaded to LandingLens. | 2684 |
Model Name | The name of the model in LandingLens. | 100% Precision and Recall |
Metadata | Any metadata assigned to the image. If the image doesn’t have any metadata, the value is "". | {"Author":"Eric Smith","Organization":"QA"} |
GT_Class | The Classes you assigned to the image (ground truth or “GT”). For Object Detection, this also includes the number of objects you labeled. | {"Screw":3} |
PRED_Class | The Classes the model predicted. For Object Detection, this also includes the number of objects predicted. If the model didn’t predict any objects, the value is {"null":1} . | {"Screw":2} |
Model_Correct | If the model’s prediction matched the original label (ground truth or “GT”), the value is true. If it didn’t match, the value is false. Only for Classification projects. | true |
PRED_Class_Confidence / PRED_Confidence | The model’s Confidence Score for each object predicted. If the model didn’t predict any objects, the value is . | [{"Screw":0.94796216},{"Screw":0.9787127}] |
Class_TotalArea | The total area (in pixels) of the model’s predicted area. If the model didn’t predict any objects, the value is . Only for Object Detection projects. | {"Screw":76060} |
GT-PRED JSON | The JSON output comparing the original labels (ground truth or “GT”) to the model’s predictions. See the JSON Output link for details. | {"gtDefectName":"No Fire","predDefectName":"No Fire","predConfidence":0.9684047} |
THRESHOLD | The confidence threshold for the model applied to the dataset. This column is only included when downloading the CSV for select images. | 0.09 |
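As an example of working with the downloaded file, here is a minimal sketch that reads the CSV and summarizes predictions for a Classification project. The file name is hypothetical; the column names come from the table above (Model_Correct exists only for Classification projects).

```python
import csv
import json

# Hypothetical path to a CSV downloaded from LandingLens.
with open("model_predictions.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Model_Correct is "true" or "false" for Classification projects.
correct = sum(1 for row in rows if row["Model_Correct"] == "true")
print(f"Correct predictions: {correct} of {len(rows)} images")

# GT_Class and PRED_Class hold JSON objects, e.g. {"Screw":3}.
for row in rows[:5]:
    gt = json.loads(row["GT_Class"])
    pred = json.loads(row["PRED_Class"])
    print(row["Image Name"], "ground truth:", gt, "prediction:", pred)
```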
View on Models Page
To adjust the threshold, view visual predictions, or compare the model to other models in the same project, open the model in the Models tab. The Model List has a few shortcuts to the Models tab:
- Click the Actions icon (…) on the model tile and click View on Models Page.
- Click View Confusion Matrix on a model tile and click View Full Report.
- Click View Confusion Matrix on a model tile, click the Actions icon (…) and click View on Models Page.
Go to Snapshot Page
The Model List has a few shortcuts to the Snapshot page:
- Click the Actions icon (…) on the model tile and click Go to Snapshot Page.
- Click a model tile, click the Actions icon (…) and click Go to Snapshot Page.