- **Grow with Google**: Quickly access Gemini models through the API and Google AI Studio for production use cases. If you're looking for a Google Cloud-based platform, Vertex AI can provide additional supporting infrastructure.
To support academic research and drive cutting-edge work, Google provides access to Gemini API credits for scientists and academic researchers through the [Gemini Academic Program](/gemini-api/docs/gemini-for-research#gemini-academic-program).
Get started with Gemini
The Gemini API and Google AI Studio help you start working with Google's latest models and turn your ideas into applications that scale.
Python
    from google import genai

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="How large is the universe?",
    )

    print(response.text)
JavaScript
    import { GoogleGenAI } from "@google/genai";

    const ai = new GoogleGenAI({});

    async function main() {
      const response = await ai.models.generateContent({
        model: "gemini-2.0-flash",
        contents: "How large is the universe?",
      });
      console.log(response.text);
    }

    await main();
REST
curl"https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent"\
-H"x-goog-api-key: $GEMINI_API_KEY"\
-H'Content-Type: application/json'\
-XPOST\
-d'{ "contents": [{ "parts":[{"text": "How large is the universe?"}] }] }'
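Each example above assumes an API key from Google AI Studio: the REST call passes it in the `x-goog-api-key` header, while the Python and JavaScript clients are constructed with no arguments and pick the key up from the environment. As a minimal sketch, assuming the same `google-genai` Python SDK and a `GEMINI_API_KEY` environment variable, the key can also be supplied to the client explicitly:

    import os

    from google import genai

    # With no arguments, genai.Client() typically reads GEMINI_API_KEY from the
    # environment; passing api_key explicitly makes the dependency visible.
    client = genai.Client(api_key=os.environ["GEMINI_API_KEY"])

    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="How large is the universe?",
    )
    print(response.text)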
[[["เข้าใจง่าย","easyToUnderstand","thumb-up"],["แก้ปัญหาของฉันได้","solvedMyProblem","thumb-up"],["อื่นๆ","otherUp","thumb-up"]],[["ไม่มีข้อมูลที่ฉันต้องการ","missingTheInformationINeed","thumb-down"],["ซับซ้อนเกินไป/มีหลายขั้นตอนมากเกินไป","tooComplicatedTooManySteps","thumb-down"],["ล้าสมัย","outOfDate","thumb-down"],["ปัญหาเกี่ยวกับการแปล","translationIssue","thumb-down"],["ตัวอย่าง/ปัญหาเกี่ยวกับโค้ด","samplesCodeIssue","thumb-down"],["อื่นๆ","otherDown","thumb-down"]],["อัปเดตล่าสุด 2025-08-22 UTC"],[],[],null,["Accelerate discovery with Gemini for Research \n[Get a Gemini API Key](https://aistudio.google.com/apikey)\n\nGemini models can be used to advance foundational research across disciplines.\nHere are ways that you can explore Gemini for your research:\n\n- **Fine-tuning** : You can fine-tune Gemini models for a variety of modalities to advance your research. [Learn more](/gemini-api/docs/model-tuning/tutorial).\n- **Analyze and control model outputs** : For further analysis, you can examine a response candidate generated by the model using tools like `Logprobs` and `CitationMetadata`. You can also configure options for model generation and outputs, such as `responseSchema`, `topP`, and `topK`. [Learn more](/api/generate-content).\n- **Multimodal inputs** : Gemini can process images, audio, and videos, enabling a multitude of exciting research directions. [Learn more](/gemini-api/docs/vision).\n- **Long-context capabilities** : Gemini 1.5 Flash comes with a 1-million-token context window, and Gemini 1.5 Pro comes with a 2-million-token context window. [Learn more](/gemini-api/docs/long-context).\n- **Grow with Google**: Quickly access Gemini models through the API and Google AI Studio for production use cases. If you're looking for a Google Cloud-based platform, Vertex AI can provide additional supporting infrastructure.\n\nTo support academic research and drive cutting-edge research, Google provides\naccess to Gemini API credits for scientists and academic researchers through the\n[Gemini Academic Program](/gemini-api/docs/gemini-for-research#gemini-academic-program).\n\nGet started with Gemini\n\nThe Gemini API and Google AI Studio help you start working with Google's latest\nmodels and turn your ideas into applications that scale. \n\nPython \n\n from google import genai\n\n client = genai.Client()\n response = client.models.generate_content(\n model=\"gemini-2.0-flash\",\n contents=\"How large is the universe?\",\n )\n\n print(response.text)\n\nJavaScript \n\n import { GoogleGenAI } from \"@google/genai\";\n\n const ai = new GoogleGenAI({});\n\n async function main() {\n const response = await ai.models.generateContent({\n model: \"gemini-2.0-flash\",\n contents: \"How large is the universe?\",\n });\n console.log(response.text);\n }\n\n await main();\n\nREST \n\n curl \"https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent\" \\\n -H \"x-goog-api-key: $GEMINI_API_KEY\" \\\n -H 'Content-Type: application/json' \\\n -X POST \\\n -d '{\n \"contents\": [{\n \"parts\":[{\"text\": \"How large is the universe?\"}]\n }]\n }'\n\nFeatured academics \n\"Our research investigates Gemini as a visual language model (VLM) and its agentic behaviors in diverse environments from robustness and safety perspectives. 
So far, we have evaluated Gemini's robustness against distractions such as pop-up windows when VLM agents perform computer tasks, and have leveraged Gemini to analyze social interaction, temporal events as well as risk factors based on video input.\"\n[](https://cs.stanford.edu/~diyiy/) \n\"Gemini Pro and Flash, with their long context window, have been helping us in OK-Robot, our open-vocabulary mobile manipulation project. Gemini enables complex natural language queries and commands over the robot's \"memory\": in this case, previous observations made by the robot over a long operation duration. Mahi Shafiullah and I are also using Gemini to decompose tasks into code that the robot can execute in the real world.\"\n[](https://www.lerrelpinto.com/)\n\nGemini Academic Program\n\nQualified academic researchers (such as faculty, staff, and PhD students) in [supported\ncountries](/gemini-api/docs/available-regions) can apply to receive Gemini API\ncredits and higher rate limits for research projects. This support enables\nhigher throughput for scientific experiments and advances research.\n\nWe are particularly interested in the research areas in the following section,\nbut we welcome applications from diverse scientific disciplines:\n\n- **Evaluations and benchmarks**: Community-endorsed evaluation methods that\n can provide a strong performance signal in areas such as factuality, safety,\n instruction following, reasoning, and planning.\n\n- **Accelerating scientific discovery to benefit humanity**: Potential\n applications of AI in interdisciplinary scientific research, including areas\n such as rare and neglected diseases, experimental biology, materials science,\n and sustainability.\n\n- **Embodiment and interactions**: Utilizing large language models to\n investigate novel interactions within the fields of embodied AI, ambient\n interactions, robotics, and human-computer interaction.\n\n- **Emergent capabilities**: Exploring new agentic capabilities required to\n enhance reasoning and planning, and how capabilities can be expanded during\n inference (e.g., by utilizing Gemini Flash).\n\n- **Multimodal interaction and understanding**: Identifying gaps and\n opportunities for multimodal foundational models for analysis, reasoning,\n and planning across a variety of tasks.\n\nEligibility: Only individuals (faculty members, researchers or equivalent)\naffiliated with a valid academic institution, or academic research organization\ncan apply. Note that API access and credits will be granted and removed\nat Google's discretion. We review applications on a monthly basis. \n\nStart researching with the Gemini API [Apply now](https://forms.gle/HMviQstU8PxC5iCt5)"]]