An Actions project packages all of your Actions into a single container. You
publish this project to Actions on Google so Google Assistant knows how to discover
and invoke your conversational experiences.
Figure 1. Actions project structure
You use the following low-level components to build your Actions project:
Settings and resources define project metadata
and resources like project icons. Google uses this information to publish
your Actions to the Assistant directory, so that users can discover and invoke
them.
Intents represent a task to be carried out, such as user input or a system
event that needs processing. The most common type of intent you'll use is the
user intent. These intents let you declare training phrases, which the NLU
(natural language understanding) engine naturally expands to include many more
similar phrases. The NLU uses the aggregation of
these phrases to train a language model that the Assistant uses to match user
input. During a conversation, if some user input matches the intent's language
model, the Assistant runtime sends the intent to your Action, so that it can
process it and respond to the user.
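For example, the following is a minimal sketch of a fulfillment handler (Node.js, using the @assistant/conversation library) that responds when an intent matches. The intent, its training phrases, and the handler wiring in your scenes are illustrative assumptions:

```javascript
const { conversation } = require('@assistant/conversation');

const app = conversation();

// Assumes a scene routes the (hypothetical) "yes" user intent to a
// webhook handler named "handleYes" in your project configuration.
app.handle('handleYes', (conv) => {
  // conv.intent.name is the intent that the NLU matched this turn.
  conv.add(`You matched the ${conv.intent.name} intent.`);
});
```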
Types let you extract structured data from user input. By
annotating training phrases with types, the NLU can extract relevant, structured
data for you, so you don't have to parse open-ended input.
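As a hedged sketch, assuming an intent whose training phrases annotate a hypothetical size parameter with a custom type, the extracted slot arrives in conv.intent.params:

```javascript
const { conversation } = require('@assistant/conversation');

const app = conversation();

// Assumes an intent with a training phrase like "order a {size} pizza",
// where "size" is annotated with a custom type.
app.handle('handleOrder', (conv) => {
  const size = conv.intent.params.size.resolved;   // normalized type value
  const spoken = conv.intent.params.size.original; // raw text the user said
  conv.add(`One ${size} pizza coming up (you said "${spoken}").`);
});
```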
Scenes process intents and are the main logic executors for
your Actions. They can do slot-filling, evaluate conditional logic, return
prompts to the user, and even call on external web services to carry out
business logic. In combination with intents, scenes give you a powerful way to
detect specific user input or system events and to carry out corresponding
logic.
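Scenes themselves are configured in your project, but your webhook can steer transitions between them. A minimal sketch, assuming hypothetical ConfirmOrder and CollectOrder scenes and a routeOrder handler wired into a scene:

```javascript
const { conversation } = require('@assistant/conversation');

const app = conversation();

// Assumes this handler is invoked from a scene, and that scenes named
// "ConfirmOrder" and "CollectOrder" exist in the project.
app.handle('routeOrder', (conv) => {
  if (conv.session.params.size) {
    conv.scene.next.name = 'ConfirmOrder'; // transition to the next scene
  } else {
    conv.scene.next.name = 'CollectOrder'; // re-collect the missing slot
  }
});
```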
Prompts define static or dynamic responses that you use to respond to users.
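For example, a webhook handler can return a prompt with separate speech and display text using the library's Simple prompt type; the greeting below is illustrative:

```javascript
const { conversation, Simple } = require('@assistant/conversation');

const app = conversation();

app.handle('greetUser', (conv) => {
  conv.add(new Simple({
    speech: 'Welcome back! Ready to order?',        // spoken on voice surfaces
    text: 'Welcome back! Ready to place an order?', // shown on screens
  }));
});
```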
Webhooks let you delegate extra work to web services
(fulfillment), such as validating data or generating prompts. Your Actions
communicate with your fulfillment through a JSON-based webhook protocol.
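Putting these pieces together, a fulfillment entry point deployed as a Cloud Function for Firebase might look like the following sketch; the validateZip handler and its validation rule are hypothetical:

```javascript
const { conversation } = require('@assistant/conversation');
const functions = require('firebase-functions');

const app = conversation();

// "validateZip" must match a webhook handler name configured in a scene;
// both the name and the validation rule are hypothetical.
app.handle('validateZip', (conv) => {
  const param = conv.intent.params.zip; // assumes a "zip" slot exists
  const zip = param ? param.resolved : '';
  conv.session.params.zipIsValid = /^\d{5}$/.test(zip);
  conv.add(conv.session.params.zipIsValid
    ? 'Thanks, that ZIP code works.'
    : "Sorry, I don't recognize that ZIP code.");
});

// The library parses the incoming JSON webhook request and serializes
// your handler's response back to the Assistant runtime.
exports.ActionsOnGoogleFulfillment = functions.https.onRequest(app);
```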
Interactive Canvas lets you create rich
and immersive experiences with web apps that utilize HTML, CSS, and JavaScript.
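As a sketch, a handler can attach a Canvas response that loads your web app and passes it data; the URL and command payload below are placeholders:

```javascript
const { conversation, Canvas } = require('@assistant/conversation');

const app = conversation();

app.handle('startGame', (conv) => {
  conv.add('Loading the game...');
  conv.add(new Canvas({
    url: 'https://example.com/canvas-app', // placeholder for your hosted web app
    data: { command: 'START_GAME' },       // arbitrary state passed to the web app
  }));
});
```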
Create a project
You must create a project in the Actions console before you can develop for
Google Assistant. To create a project:
1. Go to the Actions console.
2. Click New project.
3. Enter a name for your project and click Create Project.
4. In the What kind of Action do you want to build? screen, select a category
that best represents your project and click Next.
5. In the How do you want to build it? screen, select a way to build and click
Start building. For example, you can start with an empty project or with a
sample.

Key Point: If you are building for Interactive Canvas, follow these additional
steps:

1. If you did not select the Game card on the What type of Action do you want
to build? screen, click Deploy in the top navigation. Under Additional
Information, select the Games & fun category. Click Save.
2. Click Develop in the top navigation of the Actions console.
3. Click Interactive Canvas in the left navigation.
4. Under Does your Action use Interactive Canvas?, select Yes.
5. Optional: Enter your web app URL into the Set your default web app URL
field. This adds a default Canvas response with the URL field to your Main
invocation.
6. Click Save.
Define project information
Note: See the Directory information documentation for more details on how to
manage project information.
Your project's settings and resources define information about your project like
feature and surface support, supported locales, display name, description,
logos, and more. The following list describes the main settings and resources
you provide. Actions on Google uses this information to deploy and publish
your project to the Assistant directory.

Directory information: Provides information so that Actions on Google can
publish your project to the Assistant directory, including metadata and
descriptions about your project and image resources for logos and banner
images.
Location targeting: Configures the locales that your Actions are available in.
Surface capabilities: Configures the surfaces that your Actions are available
on.
Company details: Specifies contact information for your company.
Brand verification: Connects a website or Android app that you own to gain
extra benefits, such as reserved invocation names and website linking within
your Actions.
Release: Configures separate releases of your Action for testing and
production.
Assistant links: Let users invoke your Actions from your web properties.
You define this information in the Actions console, primarily on the Deploy
and Release tabs.
Test projects in the simulator
Note: See the Actions simulator documentation for complete information about
testing your projects.
The Actions console provides a simulator for previewing your Actions. The
simulator lets you see debug information, set device capabilities, simulate
locale, and more.
Figure 3. The main areas of the simulator: (1) user input,
(2) device view, (3) options and settings, and (4) conversation log.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-09-18 UTC."],[[["\u003cp\u003eAn Actions project enables the creation of conversational experiences for Google Assistant by packaging all actions into a single container for publishing.\u003c/p\u003e\n"],["\u003cp\u003eBuilding an Actions project involves utilizing components such as intents, types, scenes, prompts, webhooks and more for defining user interactions and logic.\u003c/p\u003e\n"],["\u003cp\u003eDefining project information in the Actions console, including directory information, locales and surfaces, is crucial for publishing and deploying to the Assistant directory.\u003c/p\u003e\n"],["\u003cp\u003eThe Actions console simulator provides a comprehensive environment for testing projects with features such as debugging, device settings, and locale simulation.\u003c/p\u003e\n"]]],[],null,["Actions Builder Actions SDK \n\nAn Actions project packages all of your Actions into a single container. You\npublish this project to Actions on Google so Google Assistant knows how to discover\nand invoke your conversational experiences.\n**Figure 1**. Actions project structure\n\nYou use the following low-level components to build your Actions project:\n\n- [**Settings and resources**](#define_project_information) define project metadata\n and resources like project icons. Google uses this information to publish\n your Actions to the Assistant directory, so that users can discover and invoke\n them.\n\n- [**Intents**](../intents) represent a task to be carried out, such as some\n user input or a system event that needs processing. The most common type of\n intent you'll use are user intents. These intents let you declare training\n phrases that are naturally expanded by the NLU (natural language understanding)\n engine to include many more, similar phrases. The NLU uses the aggregation of\n these phrases to train a language model that the Assistant uses to match user\n input. During a conversation, if some user input matches the intent's language\n model, the Assistant runtime sends the intent to your Action, so that it can\n process it and respond to the user.\n\n- [**Types**](../types) let you extract structured data from user input. By\n annotating training phrases with types, the NLU can extract relevant, structured\n data for you, so you don't have to parse open-ended input.\n\n- [**Scenes**](../scenes) process intents and are the main logic executors for\n your Actions. They can do slot-filling, evaluate conditional logic, return\n prompts to the user, and even call on external web services to carry out\n business logic. In combination with intents, scenes give you a powerful way to\n detect specific user input or system events and to carry out corresponding\n logic.\n\n- [**Prompts**](../prompts) define static or dynamic responses that you use to\n respond back to users.\n\n- [**Webhooks**](../webhooks) let you delegate extra work to web services\n (fulfillment), such as validating data or generating prompts. 
Your Actions\n communicate with your fulfillment through a JSON-based, webhook protocol.\n\n- [**Interactive Canvas**](/assistant/interactivecanvas) lets you create rich\n and immersive experiences with web apps that utilize HTML, CSS, and JavaScript.\n\nCreate a project\n\nYou must create a project in the Actions console before you can develop for\nGoogle Assistant. To create a project:\n\n1. Go to the [Actions console](//console.actions.google.com/).\n2. Click **New project**.\n3. Enter a name for your project and click **Create Project** .\n4. In the **What kind of Action do you want to build?** screen, select a category that best represents your project and click **Next**.\n5. In the **How do you want to build it** screen, select a way to build and click **Start building**. For example, you can start with an empty project or with a sample.\n\n| **Key Point:** If you are building for Interactive Canvas, follow these additional steps:\n|\n| 1. If you did not select the **Game** card on the **What type of Action\n| do you want to build?** screen, click **Deploy** in the top navigation. Under **Additional Information** , select the **Games \\& fun** category. Click **Save**.\n| 2. Click **Develop** in the top navigation of the Actions console.\n| 3. Click **Interactive Canvas** in the left navigation.\n| 4. Under **Does your Action use Interactive Canvas?** , select **Yes**.\n| 5. **Optional** : Enter your web app URL into the **Set your default web app URL** field. This action adds a default `Canvas` response with the URL field to your Main invocation.\n| 6. Click **Save**.\n\n\u003cbr /\u003e\n\nDefine project information **Note:** See the [Directory information](/assistant/console/directory-information) documentation for more details on how to manage project information.\n\nYour project's settings and resources define information about your project like\nfeature and surface support, supported locales, display name, description,\nlogos, and more. The following table describes the main settings and resources\nyou provide. Actions on Google uses this\ninformation to deploy and publish your project to the [Assistant\ndirectory](//assistant.google.com/explore).\n\n| Name | Description |\n|-----------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| Directory information | Provides information so that Actions on Google can publish your project to the Assistant directory. Includes metadata and desecriptions about your project and image resources for logos and banner images. |\n| Location targeting | Configures the locales that your Actions are available in. |\n| Surface capabilities | Configures the surfaces that your Actions are available on. |\n| Company details | Specifies contact information for your company. |\n| Brand verification | Connect a website or Android app that you own to gain extra benefits such as reserved invocation names and website linking within your Actions. |\n| Release | Configures different testing and production releases for your Action for testing and production. |\n| Assistant links | Let users invoke your Actions from your web properties. 
|\n\nTo define project information:\n\nTest projects in the simulator **Note:** See the [Actions simulator](/assistant/console/simulator) documentation for complete information about testing your projects.\n\nThe Actions console provides a simulator to preview your Actions in. The\nsimulator lets you see debug information, set device capabilities, simulate\nlocale, and more.\n**Figure 3.** The main areas of the simulator: (1) user input, (2) device view, (3) options and settings, and (4) conversation log.\n\nTo test a project:"]]