Build AI-powered mobile and web apps and features with the Gemini and Imagen models using Firebase AI Logic
Firebase AI Logic gives you access to the latest generative AI models from Google: the Gemini models and the Imagen models.
If you need to call the Gemini API or Imagen API directly
from your mobile or web app, rather than server-side, you can use the
Firebase AI Logic client SDKs. These client SDKs are built
specifically for use with mobile and web apps, offering security options against
unauthorized clients as well as integrations with other Firebase services.
These client SDKs are available in
Swift for Apple platforms, Kotlin & Java for Android, JavaScript for web,
Dart for Flutter, and Unity.
Firebase AI Logic and its client SDKs were formerly called "Vertex AI in Firebase". In May 2025, we renamed and repackaged our services into Firebase AI Logic to better reflect our expanded services and features; for example, we now support the Gemini Developer API.
With these client SDKs, you can add AI personalization to apps, build AI chat experiences, create AI-powered optimizations and automation, and much more.
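As a minimal sketch of what using these SDKs looks like on the web: the module and class names below come from the `firebase/ai` entry point of the Firebase JS SDK, but the config object and model name are placeholders to verify against the current SDK reference.

```javascript
import { initializeApp } from "firebase/app";
import { getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// Placeholder config: use the values from your Firebase project settings.
const firebaseApp = initializeApp({ /* apiKey, authDomain, projectId, appId */ });

// Back the AI service with the Gemini Developer API.
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Create a model instance (the model name is an example).
const model = getGenerativeModel(ai, { model: "gemini-2.5-flash" });

// Send a natural-language prompt and print the text response.
const result = await model.generateContent("Suggest a name for a travel app.");
console.log(result.response.text());
```

The same flow applies on the other platforms: initialize Firebase, create the AI service, create a model instance, then send prompts.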
Perlu fleksibilitas atau integrasi sisi server yang lebih besar? Genkit adalah framework open source Firebase untuk pengembangan AI sisi server yang canggih dengan akses luas ke model dari Google, OpenAI, Anthropic, dan lainnya. Fitur ini mencakup fitur AI yang lebih canggih dan alat lokal khusus.
Key capabilities
Multimodal and natural language input
The Gemini models are
multimodal, so prompts sent to the Gemini API can include text,
images, PDFs, video, and audio. Some Gemini models can also
generate multimodal output.
Both the Gemini and Imagen models can be prompted with
natural language input.

Growing suite of capabilities
With the SDKs, you can call the Gemini API or Imagen API directly from your mobile or web app to build AI chat experiences, generate images, use tools (like function calling and grounding with Google Search), stream multimodal input and output (including audio), and more.

Security and abuse prevention for production apps
Use Firebase App Check to help protect the APIs that access the Gemini and Imagen models from abuse by unauthorized clients. Firebase AI Logic also has rate limits per user by default, and these per-user rate limits are fully configurable.

Robust infrastructure
Take advantage of scalable infrastructure that's built for use with mobile and web apps, like managing files with Cloud Storage for Firebase, managing structured data with Firebase database offerings (like Cloud Firestore), and dynamically setting run-time configurations with Firebase Remote Config.
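A multimodal request like the one described under "Multimodal and natural language input" can be sketched as follows for the Web SDK. It assumes a `model` instance has already been created with `getGenerativeModel`, and `imageBase64` is a placeholder for your own base64-encoded image data; verify the exact request shape against the current SDK reference for your platform.

```javascript
// Sketch of a text + image prompt. "model" is an initialized model instance
// and "imageBase64" is placeholder base64-encoded image data, not real values.
const result = await model.generateContent([
  { text: "Describe what is happening in this photo." },
  { inlineData: { mimeType: "image/jpeg", data: imageBase64 } },
]);
console.log(result.response.text());
```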
How does it work?
Firebase AI Logic provides client SDKs, a proxy service, and other features
that let you access Google's generative AI models to build AI features in
your mobile and web apps.
Support for Google models and "Gemini API" providers
We support all the latest Gemini models and Imagen 3 models, and you can choose your preferred "Gemini API" provider to access these models.
We support both the Gemini Developer API and the
Vertex AI Gemini API. Learn about the
differences between using the two API providers.
And if you choose to use the Gemini Developer API, you can take advantage of its "free tier" to get up and running fast.
Mobile & web client SDKs
You send requests to the models directly from your mobile or web app using our Firebase AI Logic client SDKs, available in Swift for Apple platforms, Kotlin & Java for Android, JavaScript for web, Dart for Flutter, and Unity.
If you have both Gemini API providers set up in your Firebase project, you can switch between them just by enabling the other API and changing a few lines of initialization code.
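For the Web SDK, that provider switch can look like the following sketch. The backend class names come from the `firebase/ai` module, and the region string is an example value; treat both as assumptions to check against the current SDK reference.

```javascript
import { getAI, GoogleAIBackend, VertexAIBackend } from "firebase/ai";

// Using the Gemini Developer API as the provider ("firebaseApp" is your
// already-initialized Firebase app instance):
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Switching to the Vertex AI Gemini API is essentially a one-line change
// (the location string below is an example):
// const ai = getAI(firebaseApp, { backend: new VertexAIBackend("us-central1") });
```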
Additionally, our client SDK for Web offers experimental access to
hybrid and on-device inference for web apps
running in Chrome on desktop. This configuration lets your app use the on-device model when it's available, but fall back seamlessly to the cloud-hosted model when needed.
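The fallback behavior can be pictured as a small decision step: use the on-device model only when it is actually ready, otherwise route the request to the cloud. The helper below is purely illustrative; its names and the availability states are simplified placeholders, not the SDK's actual API, which handles this internally when you opt into the experimental hybrid mode.

```javascript
// Illustrative sketch of the fallback decision made in hybrid mode.
// "availability" mimics the states an on-device model can be in;
// these names are placeholders, not the SDK's real API surface.
function chooseBackend(availability) {
  // Use the on-device model only when it is fully downloaded and ready;
  // anything else (still downloading, unsupported) falls back to the cloud.
  return availability === "available" ? "on-device" : "cloud-hosted";
}

console.log(chooseBackend("available"));   // → "on-device"
console.log(chooseBackend("downloading")); // → "cloud-hosted"
```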
Proxy service
Our proxy service acts as a gateway between the client and your chosen
Gemini API provider (and Google's models). It provides services and integrations that are important for mobile and web apps. For example, you can
set up Firebase App Check to help protect your
chosen API provider and your backend resources from abuse by unauthorized clients.
This is particularly critical if you choose to use the
Gemini Developer API, because our proxy service and this App Check integration ensure that your Gemini API key stays on the server and
is not embedded in your app's codebase.
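For a web app, enabling App Check is a short initialization step. The sketch below uses the standard `firebase/app-check` module with a reCAPTCHA v3 attestation provider; the site key is a placeholder you would obtain from the reCAPTCHA console, and `firebaseApp` is assumed to be your initialized Firebase app.

```javascript
import { initializeAppCheck, ReCaptchaV3Provider } from "firebase/app-check";

// "your-recaptcha-v3-site-key" is a placeholder, not a real key.
initializeAppCheck(firebaseApp, {
  provider: new ReCaptchaV3Provider("your-recaptcha-v3-site-key"),
  isTokenAutoRefreshEnabled: true, // keep tokens fresh for long-lived sessions
});
```

Once initialized, App Check tokens accompany your requests, so the proxy service can reject traffic from unverified clients.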
Implementation path
Set up your Firebase project and connect your app to Firebase
Use the guided workflow on the Firebase AI Logic page of the
Firebase console to set up your project (including enabling the
required APIs for your chosen Gemini API provider), register your app
with your Firebase project, and then add your Firebase configuration to
your app.
Install the SDK and initialize
Install the Firebase AI Logic SDK that's specific to your app's
platform, then initialize the service and create a model
instance in your app.
Send prompt requests to the Gemini and Imagen models
Use the SDKs to send text-only or multimodal prompts to a Gemini model to
generate text and code, structured output (like JSON), and images. You can
also prompt an Imagen model to generate images. Build richer experiences
with multi-turn chat, bidirectional streaming of text and audio, and
function calling.
Prepare for production
Implement important integrations for mobile and web apps, such as
protecting the API from abuse with
Firebase App Check
and using
Firebase Remote Config
to remotely update parameters in your code (such as the model name).
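As an illustration of the Remote Config pattern above, the following Web sketch fetches a model name remotely and falls back to a local default when no remote value is available. The parameter key `model_name` and the default value are assumptions for this example, not fixed names, and `firebaseApp` and `ai` are assumed to come from your initialization code.

```javascript
import { getRemoteConfig, fetchAndActivate, getValue } from "firebase/remote-config";
import { getGenerativeModel } from "firebase/ai";

const remoteConfig = getRemoteConfig(firebaseApp);
// Local fallback used until (or unless) a remote value is fetched.
remoteConfig.defaultConfig = { model_name: "gemini-2.5-flash" };

await fetchAndActivate(remoteConfig);
const modelName = getValue(remoteConfig, "model_name").asString();

// Create the model with the remotely controlled name.
const model = getGenerativeModel(ai, { model: modelName });
```

This lets you roll out a new model name to shipped apps without publishing an app update.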
Next steps
Get started with accessing a model from your mobile or web app: see the Getting Started guide.
Learn more about the supported models, including the models available for various use cases and their quotas and pricing.
Last updated 2025-08-19 UTC.