A Quick Overview of Segment Anything

Satwik Gawand
4 min read · Apr 14, 2023


My high-level understanding of the Segment Anything paper published by Meta AI Research.


✨ A Quick Overview

Segment Anything is a project by Meta AI Research with the goal of building a foundation model for image segmentation. It is a promptable model, pre-trained on a vast dataset with a task designed to enable zero-shot generalization.

The project rests on three interconnected components: task, model, and data. The paper addresses three research questions, one for each component, on the way to a general segmentation model:

  1. What task will enable zero-shot generalization?
  2. What is the corresponding model architecture?
  3. What data can power this task and model?

The researchers start by defining a promptable segmentation task that is general enough to provide a robust pre-training objective and to enable a wide range of downstream applications. This task requires a model that can adapt to flexible prompts and output segmentation masks in real time for interactive use. Training such a robust model, in turn, requires a vast and diverse dataset.

✨ The Task

Inspired by recent advances in prompting techniques, the paper proposes the promptable segmentation task, whose goal is to return a valid segmentation mask for any given segmentation prompt.

A prompt simply specifies what to segment in the provided image. It can carry spatial information, such as points, boxes, or rough masks, or textual information describing an object.

The requirement for a valid output mask is that, even when a prompt is ambiguous and could refer to multiple objects, the output should be a reasonable mask for at least one of those objects.
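
To make the task concrete, here is a minimal sketch of prompting the released model with a single foreground point, assuming the public segment-anything Python package; the checkpoint path, image file, and point coordinates are placeholders, and the exact API may differ slightly in the version you install.

```python
# Minimal promptable-segmentation sketch using the released
# segment-anything package (pip install segment-anything).
# Checkpoint path, image path, and point coordinates are placeholders.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_b"](checkpoint="path/to/sam_vit_b.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground point (label 1) is the prompt; multimask_output=True
# returns several candidate masks when the prompt is ambiguous.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)  # (3, H, W) masks, each with a confidence score
```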

✨ The Model

The promptable segmentation task and the ultimate goal of real-world, interactive use impose constraints on the model: it has to support flexible prompts, compute masks in real time, and be aware of ambiguity.

The researchers concluded that a simple design satisfies all three constraints:

  • a powerful image encoder computes an image embedding
  • a prompt encoder embeds the prompt
  • these two information sources are combined in a lightweight mask decoder that predicts the segmentation masks

This design is referred to as the Segment Anything Model (SAM). Separating SAM into these components allows the same image embedding to be reused with different prompts, so only the lightweight parts run for each new prompt. To make SAM ambiguity-aware, the researchers designed it to predict multiple masks for a single prompt, letting it handle ambiguous prompts naturally.
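
To illustrate this split, below is a toy PyTorch sketch, not Meta's actual code: the layers and shapes are made up for illustration, and the only point is that the expensive image embedding is computed once and reused for every prompt, while the decoder outputs several candidate masks.

```python
import torch
import torch.nn as nn

class ToySAM(nn.Module):
    """Toy sketch of SAM's structure: a heavy image encoder plus a
    lightweight prompt encoder and mask decoder. Illustrative only."""

    def __init__(self, embed_dim=256):
        super().__init__()
        # The real model uses a large ViT here; a conv stub stands in.
        self.image_encoder = nn.Conv2d(3, embed_dim, kernel_size=16, stride=16)
        # Maps an (x, y) point prompt into the same embedding space.
        self.prompt_encoder = nn.Linear(2, embed_dim)
        # Lightweight decoder predicting 3 candidate masks (ambiguity-aware).
        self.mask_decoder = nn.Conv2d(embed_dim, 3, kernel_size=1)

    def encode_image(self, image):
        # Run once per image; the result is reused for every prompt.
        return self.image_encoder(image)

    def predict(self, image_embedding, point):
        prompt = self.prompt_encoder(point)                  # (B, C)
        fused = image_embedding + prompt[:, :, None, None]   # broadcast add
        return self.mask_decoder(fused)                      # (B, 3, H', W')

model = ToySAM()
img = torch.randn(1, 3, 1024, 1024)
embedding = model.encode_image(img)                          # expensive, done once
masks_a = model.predict(embedding, torch.tensor([[0.2, 0.7]]))
masks_b = model.predict(embedding, torch.tensor([[0.8, 0.3]]))  # cheap per prompt
```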

✨ The Data Engine

To achieve generalization, it was crucial to train SAM on a large and diverse dataset. Unlike the data behind typical foundation models, segmentation masks are not naturally abundant online, so the researchers could not simply scrape them from the web.

Their approach was to build a ‘data engine’ in which the model was co-developed with model-in-the-loop dataset annotation. The data engine has three stages: assisted-manual, semi-automatic, and fully automatic.

In the first stage, SAM assists annotators in drawing masks, similar to a classic interactive segmentation setup. In the second stage, SAM automatically generates masks for a subset of objects by being prompted with likely object locations, while annotators focus on the remaining objects. Finally, in the third stage, the researchers prompt SAM with a regular grid of foreground points, yielding on average roughly 100 high-quality masks per image.
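
The fully automatic stage roughly corresponds to the automatic mask generator shipped with the public package; the sketch below assumes that API, with the checkpoint and image paths as placeholders.

```python
# Fully automatic mask generation, roughly mirroring the third stage:
# the image is prompted with a regular grid of points and the resulting
# masks are filtered and de-duplicated. Paths are placeholders.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

sam = sam_model_registry["vit_b"](checkpoint="path/to/sam_vit_b.pth")
generator = SamAutomaticMaskGenerator(sam, points_per_side=32)  # 32x32 point grid

image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
masks = generator.generate(image)
print(len(masks))        # often on the order of ~100 masks per image
print(masks[0].keys())   # segmentation, area, bbox, predicted_iou, ...
```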

✨ The Dataset

The final dataset, called ‘SA-1B’, includes over 1.1 billion masks (billion, with a b) from around 11 million images; that works out to roughly 100 masks per image, consistent with the yield of the automatic stage. The dataset was collected entirely using the final, fully automatic stage of the data engine and has more than 400 times as many masks as any existing segmentation dataset, a huge leap to say the least.
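
If you download SA-1B, each image ships with a JSON annotation file; the sketch below assumes the per-image layout with COCO RLE-encoded masks described on the dataset page, so the field names and file names here are assumptions and may differ in the actual release.

```python
# Minimal sketch of reading one SA-1B annotation file (assumed layout:
# per-image JSON with COCO RLE-encoded masks). File name is a placeholder.
import json
from pycocotools import mask as mask_utils  # pip install pycocotools

with open("sa_000000/sa_1.json") as f:
    record = json.load(f)

print(record["image"]["width"], record["image"]["height"])
for ann in record["annotations"]:
    binary_mask = mask_utils.decode(ann["segmentation"])  # RLE -> numpy array
    print(ann["area"], binary_mask.shape)
```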

✨ Future Work

It’s great that the Meta AI Research team has made this model open source and available to the public. Although the model performs quite well in general, it is not a perfect solution. There are models that perform better, but they are either restricted to a narrow niche or are computationally intensive, or both, whereas SAM handles diverse use cases efficiently. SAM may not be a fully fledged foundation model yet, but with the community now on board we are likely to see major advances built on SAM and SA-1B, and SAM may well become one of the foundation models for image segmentation.

✨ Source

Segment Anything (by Meta AI Research) [segment-anything.com]

✨ Footnote

Hey there, hope you liked the blog post. This was just an overview of SAM; the paper discusses each of the components and the development methodology in much more detail, so give it a read for a deeper understanding.

Consider following me on Medium, Twitter and other platforms to read more about Productivity, Design and Code.

Twitter | Medium | LinkedIn | Bio Link
