You can use Imagen on Vertex AI to generate new images from a text prompt you provide in the Google Cloud console or send in a request to the Vertex AI API.
For more information about writing text prompts for image generation and editing, see the prompt guide.
Locations
A location is a region you can specify in a request to control where data is stored at rest. For a list of available regions, see Generative AI on Vertex AI locations.
Safety filtering
Both input data and output content are checked for offensive material when you send an image generation request to Imagen. This means a text prompt input that's offensive can be blocked. Similarly, offensive output images might also be blocked, affecting the number of generated images you get in a response.
For more information about safety filtering and blocked content handling, see Responsible AI and usage guidelines for Imagen.
Performance and limitations
The following limits apply when you use an Imagen model for image generation:
| Limits | Value (Imagen 3) |
|---|---|
| Maximum number of API requests per minute per project | Imagen 3: 20; Imagen 3 Fast: 200 |
| Maximum number of images returned per request (text-to-image generation) | 4 |
| Maximum image size uploaded or sent in a request | 10 MB |
| Supported returned image resolution (pixels) | Varies by aspect ratio; see the aspect ratio table later on this page |
| Maximum number of input tokens (text-to-image generation prompt text) | 480 tokens |
Model versions
There are various versions of the image generation model you can use. For general information on Imagen model versioning, see Imagen models and lifecycle.
The following models and their associated features are available for image generation:
| Model | Model resource name and version | Launch stage | Billing |
|---|---|---|---|
| Imagen 3 | Imagen 3: imagen-3.0-generate-001; Imagen 3 Fast: imagen-3.0-fast-generate-001 (a low-latency model variant you can use for prototyping or low-latency use cases); Imagen 3 Customization and Editing: imagen-3.0-capability-001 | General Availability | Yes, pricing applies for generation. The pricing for the Imagen 3 models is at a new SKU, so pricing differs from other models. |

To view all features, supported aspect ratios, supported languages, and launch stages, see the Imagen overview.
Before you begin
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
- Make sure that billing is enabled for your Google Cloud project.
- Enable the Vertex AI API.
- Set up authentication for your environment.
Select the tab for how you plan to use the samples on this page:
Console
When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.
Java
To use the Java samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:
  gcloud init
- If you're using a local shell, then create local authentication credentials for your user account:
  gcloud auth application-default login
  You don't need to do this if you're using Cloud Shell.
For more information, see Set up ADC for a local development environment in the Google Cloud authentication documentation.
Node.js
To use the Node.js samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:
  gcloud init
- If you're using a local shell, then create local authentication credentials for your user account:
  gcloud auth application-default login
  You don't need to do this if you're using Cloud Shell.
For more information, see Set up ADC for a local development environment in the Google Cloud authentication documentation.
Python
To use the Python samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.
- Install the Google Cloud CLI.
- To initialize the gcloud CLI, run the following command:
  gcloud init
- If you're using a local shell, then create local authentication credentials for your user account:
  gcloud auth application-default login
  You don't need to do this if you're using Cloud Shell.
For more information, see Set up ADC for a local development environment in the Google Cloud authentication documentation.
REST
To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.
Install the Google Cloud CLI, then initialize it by running the following command:
gcloud init
For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Generate images with text
You can generate novel images using only descriptive text as an input. The following samples show you basic instructions to generate images, but you can also use additional parameters depending on your use case.
Console
- In the Google Cloud console, open the Vertex AI Studio > Media tab in the Vertex AI dashboard.
  Go to the Vertex AI Studio tab
- In the Write your prompt field, enter a description for the images you want to generate. For details about writing effective prompts, see the prompt guide.
  For example: small boat on water in the morning watercolor illustration
- Optional. In the Model options box in the Parameters panel, select the model version to use. For more information, see model versions.
- Optional. Change standard and advanced parameters.
- To generate images, click Generate.
REST
For more information about imagegeneration model requests, see the imagegeneration model API reference.
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your Google Cloud project ID.
- MODEL_VERSION: The imagegeneration model version to use. Available values:
  - Imagen 3: imagen-3.0-generate-001
  - Imagen 3 Fast: imagen-3.0-fast-generate-001 (low-latency model version)
  - Default model version: imagegeneration (uses the default model version v.006). As a best practice, you should always specify a model version, especially in production environments.
  For more information about model versions and features, see model versions.
- LOCATION: Your project's region. For example, us-central1, europe-west2, or asia-northeast3. For a list of available regions, see Generative AI on Vertex AI locations.
- TEXT_PROMPT: The text prompt guides what images the model generates. This field is required for both generation and editing.
- IMAGE_COUNT: The number of generated images. Accepted integer values: 1-8 (v.002), 1-4 (all other model versions). Default value: 4.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_VERSION:predict
Request JSON body:
{ "instances": [ { "prompt": "TEXT_PROMPT" } ], "parameters": { "sampleCount": IMAGE_COUNT } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_VERSION:predict"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_VERSION:predict" | Select-Object -Expand Content
"sampleCount": 2
. The response returns two prediction objects, with
the generated image bytes base64-encoded.
{ "predictions": [ { "bytesBase64Encoded": "BASE64_IMG_BYTES", "mimeType": "image/png" }, { "mimeType": "image/png", "bytesBase64Encoded": "BASE64_IMG_BYTES" } ] }
Python
Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
In this sample, you call the generate_images method on the ImageGenerationModel and save the generated images locally. You can then optionally use the show() method in a notebook to display the generated images. For more information on model versions and features, see model versions.
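The code sample itself isn't reproduced on this page. The following is a minimal sketch of the calls described above, assuming the vertexai.preview.vision_models interface; the project ID, location, prompt, and output file name are placeholders.

```python
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

# Placeholder project and location; replace with your own values.
vertexai.init(project="your-project-id", location="us-central1")

model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-001")

# Generate an image from a text prompt and save it locally.
images = model.generate_images(
    prompt="small boat on water in the morning watercolor illustration",
    number_of_images=1,
)
images[0].save(location="output-image.png")

# In a notebook, you can optionally display the result:
# images[0].show()
```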
Java
Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
In this sample, you specify the imagen-3.0-generate-001 model as part of an EndpointName. The EndpointName is passed to the predict method, which is called on a PredictionServiceClient. The service generates images, which are then saved locally. For more information on model versions and features, see model versions.
Node.js
Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.
To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
In this sample, you call the predict method on a PredictionServiceClient. The service generates images, which are then saved locally. For more information on model versions and features, see model versions.
Use parameters to generate images
When you generate images, there are several standard and advanced parameters you can set depending on your use case.
Add or verify an image watermark
By default, a digital watermark is added to any images generated by a model version that supports watermark generation. This feature adds a non-visible digital watermark, called a SynthID, to images. You can then verify whether an image contains a digital watermark.
Generate watermarked images
Use the following samples to generate images with a digital watermark.
Console
- In the Google Cloud console, open the Vertex AI Studio > Media tab in the Vertex AI dashboard.
  Go to the Vertex AI Studio tab
- In the Write your prompt field, enter a description for the images you want to generate. For details about writing effective prompts, see the prompt guide.
  For example: small boat on water in the morning watercolor illustration
- Optional. In the Model options box in the Parameters panel, select the model version to use. For more information, see model versions.
- Optional. Change standard and advanced parameters.
- To generate images, click Generate.
- Model version 006 and greater: A digital watermark is automatically added to generated images. You can't disable digital watermarking for image generation using the Google Cloud console.
  You can select an image to go to the Image detail window. Watermarked images contain a Digital watermark badge. You can also explicitly verify an image watermark.
REST
For more information about imagegeneration model requests, see the imagegeneration model API reference.
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your Google Cloud project ID.
- MODEL_VERSION: The imagegeneration model version to use. Available values:
  - imagen-3.0-generate-001
  - imagen-3.0-fast-generate-001 (low-latency model version)
  - imagegeneration@006
  For more information about model versions and features, see model versions.
- LOCATION: Your project's region. For example, us-central1, europe-west2, or asia-northeast3. For a list of available regions, see Generative AI on Vertex AI locations.
- TEXT_PROMPT: The text prompt guides what images the model generates. This field is required for both generation and editing.
- IMAGE_COUNT: The number of generated images. Accepted integer values: 1-8 (v.002), 1-4 (all other model versions). Default value: 4.
- addWatermark: A boolean to enable a watermark for generated images. Any image generated when the field is set to true contains a digital SynthID that you can use to verify a watermarked image. If you omit this field, the default value of true is used; you must set the value to false to disable this feature. You can use the seed field to get deterministic output only when this field is set to false.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_VERSION:predict
Request JSON body:
{ "instances": [ { "prompt": "TEXT_PROMPT" } ], "parameters": { "sampleCount": IMAGE_COUNT, "addWatermark": true } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_VERSION:predict"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_VERSION:predict" | Select-Object -Expand Content
"sampleCount": 2
. The response returns two prediction objects, with
the generated image bytes base64-encoded. The digital watermark is automatically added to images,
so the response is the same as a non-watermarked response.
{ "predictions": [ { "mimeType": "image/png", "bytesBase64Encoded": "BASE64_IMG_BYTES" }, { "bytesBase64Encoded": "BASE64_IMG_BYTES", "mimeType": "image/png" } ] }
Vertex AI SDK for Python
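The SDK code sample isn't reproduced on this page. The following is a minimal sketch, assuming the add_watermark argument of generate_images in vertexai.preview.vision_models corresponds to the addWatermark API field shown above; project values and file names are placeholders.

```python
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="your-project-id", location="us-central1")
model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-001")

# add_watermark=True (the default) embeds a SynthID digital watermark in the output.
images = model.generate_images(
    prompt="small boat on water in the morning watercolor illustration",
    number_of_images=2,
    add_watermark=True,
)
for i, image in enumerate(images):
    image.save(location=f"watermarked-{i}.png")
```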
Node.js
Verify a watermarked image
Use the following samples to verify that an image has a watermark.
Console
- In the Google Cloud console, open the Vertex AI Studio > Media tab in the Vertex AI dashboard.
- In the lower panel, click Verify.
- Click Upload image.
- Select a locally saved generated image.
REST
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your Google Cloud project ID.
- LOCATION: Your project's region. For example, us-central1, europe-west2, or asia-northeast3. For a list of available regions, see Generative AI on Vertex AI locations.
- B64_IMAGE: The image to verify for a digital watermark. The image must be specified as a base64-encoded byte string. Size limit: 10 MB.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imageverification@001:predict
Request JSON body:
{ "instances": [ { "image": { "bytesBase64Encoded": "B64_IMAGE" } } ], "parameters": { "watermarkVerification": true } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imageverification@001:predict"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imageverification@001:predict" | Select-Object -Expand Content
The response returns a prediction object with a decision value of either ACCEPT or REJECT.
{ "predictions": [ { "decision": "ACCEPT" } ] }
Vertex AI SDK for Python
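The SDK code sample isn't reproduced on this page. As a stand-in, the following minimal Python sketch calls the same imageverification@001 REST endpoint shown above using Application Default Credentials; the project ID, location, and image file name are placeholders.

```python
import base64

import google.auth
import requests
from google.auth.transport.requests import Request

PROJECT_ID = "your-project-id"  # placeholder
LOCATION = "us-central1"        # placeholder

# Obtain an access token from Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(Request())

# Base64-encode the locally saved image to verify.
with open("generated-image.png", "rb") as f:
    b64_image = base64.b64encode(f.read()).decode("utf-8")

url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/{LOCATION}/publishers/google/models/imageverification@001:predict"
)
body = {
    "instances": [{"image": {"bytesBase64Encoded": b64_image}}],
    "parameters": {"watermarkVerification": True},
}
response = requests.post(
    url, json=body, headers={"Authorization": f"Bearer {credentials.token}"}
)
# The decision is either ACCEPT or REJECT.
print(response.json()["predictions"][0]["decision"])
```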
Node.js
Configure Responsible AI (RAI) safety settings
There are several Responsible AI (RAI) filtering parameters you can use with an image generation model. For example, you can let the model report RAI filter codes for blocked content, disable people or face generation using RAI filters, set the level of content filtering, or return rounded RAI scores for a list of safety attributes for input and output.
For more detailed information about Responsible AI (RAI), its associated parameters, and their sample output, see Understand and configure Responsible AI for Imagen.
The following samples show you how to set available RAI parameters for image generation.
Console
- In the Google Cloud console, open the Vertex AI Studio > Media tab in the Vertex AI dashboard.
- Add your text prompt and choose your input parameters.
- If not expanded, click Advanced options.
- Click Safety settings.
- Choose your safety settings:
  - Person/face generation: Choose a setting:
    - Allow (All ages)
    - Allow (Adults only)
    - Don't allow
  - Safety filter threshold: Choose a setting:
    - Block low and above
    - Block medium and above
    - Block only high
- Click Save.
- Click Generate.
REST
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your Google Cloud project ID.
- LOCATION: Your project's region. For example, us-central1, europe-west2, or asia-northeast3. For a list of available regions, see Generative AI on Vertex AI locations.
- TEXT_PROMPT: The text prompt guides what images the model generates. This field is required for both generation and editing.
- IMAGE_COUNT: The number of generated images. Accepted integer values: 1-8 (v.002), 1-4 (all other model versions). Default value: 4.
- SAFETY_SETTING: Optional. A setting that controls safety filter thresholds for generated images. Available values:
  - block_low_and_above: The highest safety threshold, resulting in the largest amount of generated images that are filtered. Previous value: block_most.
  - block_medium_and_above (default): A medium safety threshold that balances filtering for potentially harmful and safe content. Previous value: block_some.
  - block_only_high: A safety threshold that reduces the number of requests blocked due to safety filters. This setting might increase objectionable content generated by Imagen. Previous value: block_few.
- PERSON_SETTING: Optional. The safety setting that controls the type of people or face generation the model allows. Available values:
  - allow_all: Allow generation of people of all ages. This option is only available for Offline customers.
  - allow_adult (default): Allow generation of adults only, except for celebrity generation. Celebrity generation is not allowed for any setting.
  - dont_allow: Disable the inclusion of people or faces in generated images.
- INCLUDE_RAI_REASON: Optional. A boolean that specifies whether to enable the Responsible AI filtered reason code in responses with blocked input or output. Default value: false.
- INCLUDE_SAFETY_ATTRIBUTES: Optional. Whether to enable rounded Responsible AI scores for a list of safety attributes in responses for unfiltered input and output. Safety attribute categories: "Death, Harm & Tragedy", "Firearms & Weapons", "Hate", "Health", "Illicit Drugs", "Politics", "Porn", "Religion & Belief", "Toxic", "Violence", "Vulgarity", "War & Conflict". Default value: false.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@006:predict
Request JSON body:
{ "instances": [ { "prompt": "TEXT_PROMPT" } ], "parameters": { "sampleCount": IMAGE_COUNT, "safetySetting": "SAFETY_SETTING", "personGeneration": "PERSON_SETTING", "includeRaiReason": INCLUDE_RAI_REASON, "includeSafetyAttributes": INCLUDE_SAFETY_ATTRIBUTES } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@006:predict"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@006:predict" | Select-Object -Expand Content
Text prompt language
This optional parameter lets you set the language of the input text for image generation or image editing.
Before you begin
Complete the following additional steps before you use this feature:
- Use the following command to create a service identity for Vertex AI to use in your project:
  gcloud beta services identity create --service=aiplatform.googleapis.com --project=PROJECT_ID
- Request feature access. To request access, send an email to the Google Cloud Trusted Testers Access: GenApp Builder group. Reference Multi-Lingual Prompts in your message, and include your project number. The approval process usually takes several hours.
Set text prompt language
The following input text prompt language values are supported:
- Chinese (simplified) (zh/zh-CN)
- Chinese (traditional) (zh-TW)
- English (en, default value)
- Hindi (hi)
- Japanese (ja)
- Korean (ko)
- Portuguese (pt)
- Spanish (es)
Console
If your prompt is in one of the supported languages, Imagen automatically detects and translates your text and returns your generated or edited images.
If your prompt is in an unsupported language, Imagen uses the text verbatim for the request. This might result in unexpected output.
REST
For more information about imagegeneration model requests, see the imagegeneration model API reference.
Before using any of the request data, make the following replacements:
- PROJECT_ID: Your Google Cloud project ID.
- TEXT_PROMPT: The text prompt guides what images the model generates. This field is required for both generation and editing.
- PROMPT_LANGUAGE: The language code that corresponds to your text prompt language. In this example, it would be hi. Available values:
  - auto - Automatic detection. If Imagen detects a supported language, the prompt (and optionally, a negative prompt) is translated to English. If the detected language isn't supported, Imagen uses the input text verbatim, which might result in unexpected output. No error code is returned.
  - en - English (default value if omitted)
  - es - Spanish
  - hi - Hindi
  - ja - Japanese
  - ko - Korean
  - pt - Portuguese
  - zh-TW - Chinese (traditional)
  - zh or zh-CN - Chinese (simplified)
HTTP method and URL:
POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/imagegeneration@005:predict
Request JSON body:
{ "instances": [ { "prompt": "सूर्यास्त के समय एक समुद्र तट। उड़ते पक्षी, हवा में लहराते नारियल के पेड़। लोग समुद्र तट पर सैर का आनंद ले रहे हैं।" } ], "parameters": { "language": "PROMPT_LANGUAGE" } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/imagegeneration@005:predict"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/imagegeneration@005:predict" | Select-Object -Expand Content
"sampleCount": 2
. The response returns two prediction objects, with
the generated image bytes base64-encoded.
{ "predictions": [ { "bytesBase64Encoded": "BASE64_IMG_BYTES", "mimeType": "image/png" }, { "mimeType": "image/png", "bytesBase64Encoded": "BASE64_IMG_BYTES" } ] }
Aspect ratio
Depending on how you plan to use your generated images, some aspect ratios may work better than others. Choose the aspect ratio that best suits your use case.
Supported aspect ratios and their intended use:
| Aspect ratio | Intended use | Image resolution (pixels) |
|---|---|---|
| 1:1 | default, square, general use | 1024x1024 (Imagen v.002), 1536x1536 (Imagen 2 v.005, v.006), 1024x1024 (Imagen 3) |
| 3:4 | TV, media, film | 1344x1792 (Imagen 2 v.006), 896x1280 (Imagen 3) |
| 4:3 | TV, media, film | 1792x1344 (Imagen 2 v.006), 1280x896 (Imagen 3) |
| 9:16 | portrait, tall objects, mobile devices | 1134x2016 (Imagen 2 v.005, v.006), 768x1408 (Imagen 3) |
| 16:9 | landscape | 2016x1134 (Imagen 2 v.006), 1408x768 (Imagen 3) |
Console
- Follow the generate image with text instructions to open the Vertex AI Studio and enter your text prompt.
- In the Parameters panel, select an aspect ratio from the Aspect ratio menu.
- Click Generate.
REST
Aspect ratio is an optional field in the parameters object of a JSON request body.
Follow the generate image with text instructions to replace other request body variables.
Replace the following:
- ASPECT_RATIO: Optional. A generation mode parameter that controls aspect ratio. Supported ratio values and their intended use:
  - 1:1 (default, square)
  - 3:4 (Ads, social media)
  - 4:3 (TV, photography)
  - 16:9 (landscape)
  - 9:16 (portrait)
{ "instances": [ ... ], "parameters": { "sampleCount": IMAGE_COUNT, "aspectRatio": "ASPECT_RATIO" } }
Follow the generate image with text instructions to send your REST request.
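For reference, here's a minimal Vertex AI SDK for Python sketch, assuming the aspect_ratio argument of generate_images; the project values and output file name are placeholders.

```python
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="your-project-id", location="us-central1")
model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-001")

# Request a landscape (16:9) image instead of the default 1:1 square.
images = model.generate_images(
    prompt="small boat on water in the morning watercolor illustration",
    number_of_images=1,
    aspect_ratio="16:9",
)
images[0].save(location="boat-16x9.png")
```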
Number of results
Use the number of results parameter to limit the number of images returned for each request (generate or edit) that you send.
Console
- Follow the generate image with text instructions to open the Vertex AI Studio and enter your text prompt.
- In the Parameters panel, select a valid integer value in the Number of results field.
- Click Generate.
REST
For more information about imagegeneration model requests, see the imagegeneration model API reference.
Number of results is a field in the parameters object of a JSON request body.
Follow the generate image with text instructions to replace other request body variables.
Replace the following:
- IMAGE_COUNT: The number of generated images. Accepted integer values: 1-8 (v.002), 1-4 (all other model versions). Default value: 4.
{ "instances": [ ... ], "parameters": { "sampleCount": IMAGE_COUNT } }
Follow the generate image with text instructions to send your REST request.
Negative prompt
A negative prompt is a description of what you want to omit in generated images. For example, consider the prompt "a rainy city street at night with no people". The model may interpret "people" as a directive of what to include instead of what to omit. To generate better results, you could use the prompt "a rainy city street at night" with a negative prompt of "people".
Imagen generates these images with and without a negative prompt:
Text prompt only
- Text prompt: "a pizza"
Text prompt and negative prompt
- Text prompt: "a pizza"
- Negative prompt: "pepperoni"
Console
- Follow the generate image with text instructions to open the Vertex AI Studio and enter your text prompt.
- In the Parameters panel, enter a negative prompt in the Negative prompt field.
- Click Generate.
REST
For more information about imagegeneration model requests, see the imagegeneration model API reference.
Negative prompt is an optional field in the parameters object of a JSON request body.
Follow the generate image with text instructions to replace other request body variables.
Replace the following:
- NEGATIVE_PROMPT: A negative prompt to help generate the images. For example: "animals" (removes animals), "blurry" (makes the image clearer), "text" (removes text), or "cropped" (removes cropped images).
{ "instances": [ ... ], "parameters": { "sampleCount": IMAGE_COUNT, "negativePrompt": "NEGATIVE_PROMPT" } }
Follow the generate image with text instructions to send your REST request.
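For reference, here's a minimal Vertex AI SDK for Python sketch, assuming the negative_prompt argument of generate_images; the project values and output file name are placeholders.

```python
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="your-project-id", location="us-central1")
model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-001")

# The negative prompt describes what to leave out of the generated image.
images = model.generate_images(
    prompt="a pizza",
    negative_prompt="pepperoni",
    number_of_images=1,
)
images[0].save(location="pizza.png")
```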
Seed number
A seed number is a number you add to a request to make generated images deterministic. Adding a seed number to your request is a way to ensure you get the same generated images each time. For example, you can provide a prompt, set the number of results to 1, and use a seed number to get the same image each time you use all of those same input values. If you send the same request with the number of results set to 8, you get the same eight images. However, the images aren't necessarily returned in the same order.
Console
- Follow the generate image with text instructions to open the Vertex AI Studio and enter your text prompt.
- In the Parameters panel, click the Advanced options expandable section.
- In the Seed field, enter a seed number.
- Click Generate.
REST
For more information about imagegeneration model requests, see the imagegeneration model API reference.
Seed number is an optional field in the parameters object of a JSON request body.
Follow the generate image with text instructions to replace other request body variables.
Replace the following:
- SEED_NUMBER: Any non-negative integer you provide to make output images deterministic. Providing the same seed number always results in the same output images. Accepted integer values: 1 - 2147483647.
{ "instances": [ ... ], "parameters": { "sampleCount": IMAGE_COUNT, "seed": SEED_NUMBER, // required for model version 006 and greater only when using a seed number "addWatermark": false } }
Follow the generate image with text instructions to send your REST request.
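For reference, here's a minimal Vertex AI SDK for Python sketch of the same request, assuming the seed and add_watermark arguments of generate_images; the project values and output file name are placeholders. As noted above, the watermark must be disabled for the seed to produce deterministic output on model version 006 and greater.

```python
import vertexai
from vertexai.preview.vision_models import ImageGenerationModel

vertexai.init(project="your-project-id", location="us-central1")
model = ImageGenerationModel.from_pretrained("imagen-3.0-generate-001")

# A fixed seed makes the output deterministic; the watermark must be disabled
# for the seed to take effect on model version 006 and greater.
images = model.generate_images(
    prompt="small boat on water in the morning watercolor illustration",
    number_of_images=1,
    seed=42,
    add_watermark=False,
)
images[0].save(location="boat-seed-42.png")
```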
Predefined style
Use the predefined style parameter to specify the style of image you want to generate. You can use this feature to create images in popular styles such as digital art, watercolor, or cyberpunk.
Console
- Follow the generate image with text instructions to open the Vertex AI Studio and enter your text prompt.
- In the Style section of the Parameters panel, choose a style from the menu.
- Click Generate.
REST
For more information about imagegeneration model requests, see the imagegeneration model API reference.
Predefined style is an optional field in the parameters object of a JSON request body.
Follow the generate image with text instructions to replace other request body variables.
Replace the following:
- IMAGE_STYLE: One of the available predefined styles:
  - photograph
  - digital_art
  - landscape
  - sketch
  - watercolor
  - cyberpunk
  - pop_art
{ "instances": [ ... ], "parameters": { "sampleCount": IMAGE_COUNT, "sampleImageStyle": "IMAGE_STYLE" } }
Follow the generate image with text instructions to send your REST request.
Upscale an image
Use upscaling to increase the size of existing, generated, or edited images without losing quality.
Console
- Follow the generate image with text instructions to generate images.
- Select the image to upscale.
- Click Export.
- Select Upscale images.
- Choose a value for the Scale factor.
- Click Export to save the upscaled image.
REST
For more information about imagegeneration model requests, see the imagegeneration model API reference.
Upscaling mode is an optional field in the parameters object of a JSON request body. When you upscale an image using the API, specify "mode": "upscale" and upscaleConfig.
Before using any of the request data, make the following replacements:
- LOCATION: Your project's region. For example, us-central1, europe-west2, or asia-northeast3. For a list of available regions, see Generative AI on Vertex AI locations.
- PROJECT_ID: Your Google Cloud project ID.
- B64_BASE_IMAGE: The base image to edit or upscale. The image must be specified as a base64-encoded byte string. Size limit: 10 MB.
- IMAGE_SOURCE: The Cloud Storage location of the image you want to edit or upscale. For example: gs://output-bucket/source-photos/photo.png.
- UPSCALE_FACTOR: Optional. The factor to which the image will be upscaled. If not specified, the upscale factor is determined from the longer side of the input image and sampleImageSize. Available values: x2 or x4.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@002:predict
Request JSON body:
{ "instances": [ { "prompt": "", "image": { // use one of the following to specify the image to upscale "bytesBase64Encoded": "B64_BASE_IMAGE" "gcsUri": "IMAGE_SOURCE" // end of base image input options }, } ], "parameters": { "sampleCount": 1, "mode": "upscale", "upscaleConfig": { "upscaleFactor": "UPSCALE_FACTOR" } } }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@002:predict"
PowerShell
Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@002:predict" | Select-Object -Expand Content
You should receive a JSON response similar to the following:
{ "predictions": [ { "mimeType": "image/png", "bytesBase64Encoded": "iVBOR..[base64-encoded-upscaled-image]...YII=" } ] }
What's next
Read articles about Imagen and other Generative AI on Vertex AI products:
- A developer's guide to getting started with Imagen 3 on Vertex AI
- New generative media models and tools, built with and for creators
- New in Gemini: Custom Gems and improved image generation with Imagen 3
- Google DeepMind: Imagen 3 - Our highest quality text-to-image model