From the course: Build Your Own AI Lab
Introducing Amazon Bedrock
- [Instructor] Let's go over Amazon Bedrock, AWS's generative AI platform. Bedrock is a fully managed service from AWS that lets you deploy different types of AI models and features to accelerate your business workflows and experiment with different AI implementations.

One of the key features of Amazon Bedrock is that it offers a curated catalog of foundation models from top AI organizations, including Anthropic, Cohere, Stability AI, Meta, and many others, as well as Amazon's own models. These models are pre-trained on extensive datasets, so you can use them for different tasks and combine them with other implementations in an on-premises lab to experiment and become familiar with these technologies.

If you come to the custom models and fine-tuning section here, you can fine-tune different models and create your own domain-specific model. You can also generate synthetic data from a large foundation model, essentially a teacher-student setup: the first model, which I'll label M1 on the screen, generates synthetic data that is then used to train a new model, which I'll call X. That's what the distillation functionality in Bedrock provides, and other cloud platforms have similar features. You can, of course, do this on-premises; however, the compute for some of these workloads can require a significant investment, so playing with these cloud platforms is a bit more budget-friendly than keeping a bank of GPUs in your own environment.

Another cool thing Bedrock has is a playground, just like the playgrounds we covered earlier with other implementations, including RAGFlow. Here it's built into the environment: you can submit a single prompt to generate a single response, or hold a conversation, submitting prompts and replies to see the model's responses so you can evaluate your custom application.

Bedrock also allows you to create agents. If you navigate to the Agents section, you can create an AI agent very quickly. I'm just going to name it omar-test-agent1, and then you can add a description and even enable multi-agent collaboration, so if you have more than one agent, they can collaborate and interact with each other. Once you click Create, it takes you to a screen where you can select a specific model. By default, it shows Claude 3.5 Sonnet because I had already deployed it here. You can use a specific service role, create a new one, or select one from the pull-down menu; I don't have any created right now. Then you can give the agent specific instructions, basically system prompts and personas, and associate it with other functionality, including a code interpreter, whether the agent can prompt the user for additional information, and how your data is encrypted, with customizable encryption settings. There is a lot of enterprise-grade functionality here because, at the end of the day, Bedrock is tailored for large organizations that want to take advantage of agentic implementations.
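If you want to try the same thing from code instead of the playground, here is a minimal sketch using the AWS SDK for Python (boto3). It assumes you have boto3 installed and AWS credentials with Bedrock model access enabled in your account; the region and the Claude 3.5 Sonnet model ID are placeholders you should swap for whatever is enabled in your own environment.

```python
import boto3

REGION = "us-east-1"  # assumption: replace with the region where you enabled Bedrock

# List the foundation models available to your account (the curated catalog
# you see in the Bedrock console).
bedrock = boto3.client("bedrock", region_name=REGION)
for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["providerName"], "-", model["modelId"])

# Send a single prompt to a model, similar to the single-prompt playground.
runtime = boto3.client("bedrock-runtime", region_name=REGION)
response = runtime.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # placeholder model ID
    messages=[
        {"role": "user", "content": [{"text": "In two sentences, what is Amazon Bedrock?"}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)
print(response["output"]["message"]["content"][0]["text"])
```

The Converse API uses the same request shape across providers, which is handy in a lab: you can switch the modelId and compare responses without rewriting your prompt-handling code.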
You can attach knowledge bases you already have, or add a new one just as easily: create a new knowledge base and attach your proprietary content so the AI agent can draw on additional context that is relevant to the domain-specific task you want to give it. Another feature is Flows. Flows is basically the ability to create a sequence of steps that invoke actions in Bedrock, attach knowledge bases, invoke a new or existing AWS Lambda function, and call an Amazon Lex bot so you can interact with these applications and take your deployment to the next level.

So one of my recommendations, if you're setting up an experimentation lab: start with a simple model deployment here. See how you can incorporate your knowledge bases, even if it's something you create by copying and pasting a few documents, at least to get familiar with the retrieval methods and the different types of RAG (retrieval-augmented generation) implementations. Then take it to the next level by building agents, creating creative system prompts, working on your prompt engineering, and so on, before moving on to the more advanced sections. Amazon is evolving this almost on a weekly basis; you'll see new features appear here all the time. Some of these will cost money, especially if you start deploying Lambda functions or working with datasets that require storage. But for quick labs and for learning, the cost is very minimal, and you can enhance your career and your skills just by playing with these feature sets and understanding the end-to-end generative AI workflows that exist within these cloud platforms.
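Once you have a knowledge base created in the console, you can exercise the RAG side of it from code as well. This is a minimal sketch, assuming a knowledge base already exists; the knowledgeBaseId, the question, the region, and the model ARN are all placeholders for whatever you set up in your own account.

```python
import boto3

REGION = "us-east-1"  # assumption: use the region where your knowledge base lives

# The bedrock-agent-runtime client handles retrieval against knowledge bases.
agent_runtime = boto3.client("bedrock-agent-runtime", region_name=REGION)

response = agent_runtime.retrieve_and_generate(
    input={"text": "What does our lab documentation say about rotating API keys?"},  # hypothetical question
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KB123EXAMPLE",  # placeholder: your knowledge base ID
            "modelArn": (
                "arn:aws:bedrock:us-east-1::foundation-model/"
                "anthropic.claude-3-5-sonnet-20240620-v1:0"  # placeholder model ARN
            ),
        },
    },
)

# The generated answer is grounded in the documents retrieved from the knowledge base.
print(response["output"]["text"])
```

This is the same retrieve-then-generate pattern you would build by hand in an on-premises RAG lab, so experimenting with it here is a cheap way to understand the workflow before investing in your own vector store and embedding pipeline.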
Contents
- Learning objectives (38s)
- Pros and cons of cloud-based AI labs and sandboxes (8m 18s)
- Introducing Amazon Bedrock (6m 59s)
- Surveying Amazon SageMaker (12m 56s)
- Exploring Google Vertex AI (14m 13s)
- Using Microsoft Azure AI Foundry (10m 11s)
- Discussing cost management and security (6m 45s)
- Deploying Ollama in the cloud with Terraform (3m 28s)